I’m going to map where the same “module-specific rules encoded as scattered imperative branches” pattern shows up, then compare those sites against the AMOT/IIT permissions and event-provider specs to see whether one reusable engine shape really fits or whether the similarities are only superficial.
I’ve got the first direct hits: AMOT-specific branches are still present in approvals and station-event form defaults, and I’m expanding that into module-to-module import maps so this doesn’t stay anecdotal.
I’ve confirmed the coupling is not just AMOT vs Events. Approvals is effectively a second shell around station-events internals, and AMOT/IIT still has workflow policy spread across UI hooks, buttons, and confirmation text instead of one resolved policy surface.
The AMOT/IIT slice already shows both the upside and the limit. There is a reusable “closed-domain decision table” shape, but the code also shows where domain outputs diverge: permissions, route transitions, page titles, confirmation copy, and email recipients are all separate policy surfaces even though they key off the same workflow state.
The risk module is a strong second example. It has the same shape as station-events: one section registry for edit, another for review, status transition tables and email routing elsewhere, and feature flags spliced directly into those registries. That’s exactly the kind of “same axes, different imperative sites” that a shared policy-composition model can reduce.
Another useful signal: station-event validation in @echo/common is already using a small config+engine split by event type. That means the repo already has at least one successful local instance of the broader pattern outside AMOT/IIT; the problem is it stops at validation instead of governing rendering, defaults, review sections, and transitions too.
The risk slice also has the same policy scattered across user messaging and update logic. submitRiskOwner, submitRiskApproverNextStatus, decision messages, submission messages, section registries, and “updated section” numbering each encode a different projection of the same finite workflow. That is a strong candidate for consolidation.
I’ve got enough context for a first architectural position. I’m tightening the evidence now so the claims are bounded: where the pattern is genuinely the same, where it’s only superficially similar, and where a shared engine would just add another abstraction layer without paying rent.
Steelmanning the thesis: the reusable abstraction is not “permissions” or “event types.” It is “module-contributed policy over a closed state space, composed at startup, then resolved into dumb outputs.” If you instead build one giant eventType x role x status x ... matrix, you just move the coupling from code into config.
I think that is basically correct, with one refinement: the repo does not want one universal rule engine. It wants a small policy platform with two primitives: a typed module registry and a closed-domain decision-table evaluator with validation.
Where this clearly pays off:
15 approvals files importing station-event internals, 7 AMOT/IIT files doing the same, and 5 station-event files importing approval internals. The hardcoded AMOT specialization is visible in [StationEventEditAccordion.tsx](/Users/Charles.Anderson/WebstormProjects/echo/apps/echo-client/src/pages/equip-vulnerabilities-risks/pages/station-events/components/StationEventEditAccordion.tsx:66), [StationEventReview.tsx](/Users/Charles.Anderson/WebstormProjects/echo/apps/echo-client/src/pages/approvals/pages/station-event-review/components/StationEventReview.tsx:77), [StationEventEditDetails.tsx](/Users/Charles.Anderson/WebstormProjects/echo/apps/echo-client/src/pages/equip-vulnerabilities-risks/pages/station-events/components/StationEventAccordionSections/StationEventEditDetails.tsx:39), and server-side in [add-location-information.ts](/Users/Charles.Anderson/WebstormProjects/echo/apps/echo-server/src/api/v1/station-events/utils/add-location-information.ts:33) and [validate-new-event.ts](/Users/Charles.Anderson/WebstormProjects/echo/apps/echo-server/src/db/station-events/utils/validate-new-event.ts:3). This is the strongest candidate for a provider/descriptor registry.

Where it does not pay nearly as much: health monitoring (already config-driven), walkdowns, and deferrals (too thin to carry the abstraction).
So: yes, there is a bigger move here, but it is “module descriptors + shared evaluators,” not “one engine to rule everything.” That would collapse a lot of policy/coupling complexity in events, approvals, AMOT/IIT, and risks, but not the genuinely domain-specific UI, queries, calculations, PDFs, or data-fetch implementations.
Now let me look at a few specific files to verify the structural claims and form my own position.
Good. Now I have a thorough understanding of the actual code. Let me form my response.
---
Steelmanning Codex's position: The repo doesn't want one universal rule engine. It wants a small policy platform with two primitives — a typed module registry and a closed-domain decision-table evaluator with validation. This would collapse coupling in events/approvals, AMOT/IIT, and risks, but not in health monitoring (already config-driven), walkdowns, or deferrals (too thin). The reusable abstraction isn't "permissions" or "event types" — it's "module-contributed policy over a closed state space, composed at startup, resolved into dumb outputs."
That is a strong reading. I agree with the shape. But I think Codex undersells the second primitive and oversells the first.
The decision-table evaluator is the load-bearing piece. The existing workflowPermissions.ts already proves this. It's 168 lines of generic engine that two modules (AMOT, IIT) consume via config objects. The risk allowedStatusChanges map in validation.ts is the same pattern in miniature — a static Record<Status, Status[]> evaluated by a 30-line function. The getGroupsToNotify functions in both send-notification.ts files are decision tables that _should_ be data but are currently encoded as 48-line if/else chains. These are three different projections of the same finite state space (role × status × action → outcome), all independently re-implementing table lookup.
The registry pattern matters less than Codex implies. The event-type registry from EVENTS_COUPLING.md solves a real problem (Events shouldn't know about AMOT), but it's a dependency-inversion technique, not a policy engine. You don't need a registry to make notification routing declarative — you just need the routing table to be data. The registry and the evaluator are orthogonal. Coupling them into "a policy platform" makes it sound like a single package, when in practice the module registry is wiring and the decision-table evaluator is computation.
Where I disagree with the boundary Codex drew around "not worth it": Codex dismissed walkdowns and deferrals as "mostly routing, CRUD, and stored-procedure wrappers." That's accurate — deferrals-review is 215 LOC and 2 files. But the _reason_ it's thin is that it has almost no workflow policy. That's not "doesn't benefit from the pattern"; that's "already simple enough not to need it." The honest framing is: the pattern pays off where there's a policy surface with enough combinatorial branching to make imperative code fragile. That's AMOT/IIT (10+ statuses × 5 roles × 3 revision types), risks (7 statuses × 3 roles × 2 action types), and events (5+ types × N rendering concerns). Walkdowns and deferrals don't need it because they don't have the branching, not because the pattern doesn't apply.
One thing Codex's analysis surfaced that deserves emphasis: The AMOT/IIT notification duplication is extreme. 14 template files per module, two nearly-identical getGroupsToNotify functions, two parallel getNotificationContents routers. This is the single highest-ROI target for a decision-table extraction — and it's _server-side_, where the existing workflowPermissions.ts engine doesn't reach because it returns UI-shaped output (showEditButton, isSectionVisible). The permissions spec's own observation that "the shared value is the config and the matcher, not the UI-shaped output" applies directly: the matcher needs a server-side entry point that resolves (status, previousStatus, decision) → notification targets from the same config.
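A minimal sketch of what that server-side entry point could look like. All names here (`NotificationRule`, `resolveNotificationGroups`) and the example statuses, decisions, and groups are assumptions for illustration; only the input axes come from the discussion of send-notification.ts:

```typescript
// Hypothetical rule schema for notification routing. Same first-match idea
// as the permissions matcher, but a server-shaped output (groups, not UI flags).
type NotificationRule = {
  status: string;
  previousStatus?: string; // omitted = matches any previous status
  decision?: string;       // omitted = matches any decision
  notify: string[];
};

function resolveNotificationGroups(
  rules: NotificationRule[],
  input: { status: string; previousStatus: string; decision?: string }
): string[] {
  // First matching row wins; unmatched inputs notify nobody.
  const hit = rules.find(
    (r) =>
      r.status === input.status &&
      (r.previousStatus === undefined || r.previousStatus === input.previousStatus) &&
      (r.decision === undefined || r.decision === input.decision)
  );
  return hit ? hit.notify : [];
}

// What a long if/else chain collapses into (rows are invented examples):
const amotNotificationRules: NotificationRule[] = [
  { status: "IN_REVIEW", previousStatus: "SUBMITTED", notify: ["APPROVERS"] },
  { status: "APPROVED", decision: "APPROVE", notify: ["OWNER", "STAKEHOLDERS"] },
  { status: "REJECTED", notify: ["OWNER"] },
];
```

The point of the sketch is that the routing table is data reviewable in one place, while the matcher stays generic.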
My position: Two reusable pieces, kept separate. (1) A generic first-match evaluator in @echo/common that takes a rule array + input tuple and returns the first matching rule's output — usable for permissions, notification routing, transitions, and confirmation copy. (2) Module registries as needed, per domain (events, potentially approvals), as a dependency-inversion mechanism. These are independent wins. Don't package them as "a platform."
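As a sketch of piece (1), assuming hypothetical names (`Rule`, `firstMatch`) and an omitted-key-as-wildcard convention that the real engine may not use:

```typescript
// Generic first-match evaluator sketch. Keys omitted from `when` act as
// wildcards; rules are ordered most-specific first.
type Rule<I, O> = {
  when: Partial<I>;
  then: O;
};

function firstMatch<I extends object, O>(rules: Rule<I, O>[], input: I): O | undefined {
  for (const rule of rules) {
    const matches = Object.entries(rule.when).every(
      ([key, value]) => (input as Record<string, unknown>)[key] === value
    );
    if (matches) return rule.then; // first match wins
  }
  return undefined; // caller decides whether fallthrough is a config error
}

// One rule array per projection; e.g. an invented UI-capability projection:
const editButtonRules: Rule<{ status: string; role: string }, boolean>[] = [
  { when: { status: "DRAFT", role: "OWNER" }, then: true },
  { when: {}, then: false }, // explicit catch-all instead of silent fallthrough
];
```

The same `firstMatch` then serves permissions, transitions, routing, and copy by swapping in different rule arrays, which is the whole claim.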
Steelman: your strongest point is that I bundled two different things. The reusable computational primitive is the evaluator: a generic matcher over closed domains that turns declarative rules into resolved outputs. The registry is only dependency inversion for shells like Events. That distinction matters because AMOT/IIT permissions, risk status transitions, and notification routing all need the evaluator, while only pluggable hosts need a registry.
I concede that framing. It is better than my original “policy platform” packaging. The load-bearing abstraction is the evaluator plus validation. The registry is secondary and domain-specific.
Where I’d refine your refinement:
workflowPermissions.ts proves the matcher exists, but the same workflow is still re-encoded in:

- amotPermissions.ts / iitPermissions.ts
- confirmation screens
- db/amot/send-notification.ts and db/iit/send-notification.ts

The risk side re-encodes its own workflow the same way:

- section registries in riskSectionRegistry.tsx and RiskInfoReadOnly.tsx
- transitions in db/risk-management-v2/validation.ts and index.ts
- user copy in RiskSubmissionMessage.tsx and RiskReviewApproverDecisionMessage.tsx
- section mapping in findUpdatedSections.ts
So I’d now state the architectural move more narrowly: each domain declares its workflow policy once, over a shared state model, and derives these projections from it:

- permissions
- transitions
- notification targets
- confirmation copy
- visible/completed sections
That also sharpens the boundary on what benefits:
- AMOT/IIT workflow policy
- AMOT/IIT notifications
- Risks approvals/submission/status routing
- Events type rendering/initiation rules
- health-monitoring to event-initiation integration, because it currently hardcodes FO/CEF affordances in live reports

And what does not:

- walkdowns
- deferrals
I agree with your correction on walkdowns/deferrals: they are not excluded in principle; they just lack enough branching to justify the abstraction.
I’d still keep one practical distinction from your version: notification routing probably should not be modeled as “another call into the existing permissions engine.” The permissions spec already hints at this. What should be shared is the matcher and closed-domain validation, not a UI-shaped API. Server-side notification resolution wants a separate entry point over the same evaluator family, not reuse of showEditButton-style outputs.
So my updated position is: you’re right that the evaluator is the core abstraction and the registry is orthogonal. The real repo-wide simplification is to centralize domain policy as declarative projections over a shared state model, with registries only where module hosting is actually required.
Let me verify the specific contested points before responding.
Now I have a clear picture. Let me respond.
---
Steelmanning Codex's updated position: The real abstraction is "declare policy once per domain state model, then derive multiple projections from it." Each workflow domain owns a typed state model. Each policy surface (permissions, transitions, notifications, copy, sections) is declarative config. A shared evaluator resolves those configs. Registries are added only where one module hosts others. Notification routing specifically needs a separate entry point over the shared evaluator family, not reuse of the UI-shaped showEditButton output.
This is the right framing. I adopt it — it's more precise than my round 2 formulation of "two reusable pieces kept separate." The key insight, that "multiple projections from one state model" is the unit of work, is genuinely better than what I said.
Resolving the contested items:
1. Server-side notification entry point. Codex is right. The evidence is clear. send-notification.ts operates on (status, previousStatus, decision) → groups[]. The permissions engine operates on (status, roles, revisionType, editMode) → UI capabilities. These share no input axes except status. They need the same matcher algorithm but completely different rule schemas and entry points. Calling this "a separate entry point over the same evaluator family" is exactly right.
2. Risk fragmentation severity vs AMOT/IIT. Having now read the code: risks are fragmented but _less severely_ than AMOT/IIT. The risk state space is 7 statuses with mostly terminal states (APPROVED, REJECTED, CLOSED, EDITED all have empty transition lists). The branching complexity lives mainly in submitRiskApproverNextStatus (a 4-branch switch on decisionType) and copyRiskManagementFlowNextStatus (3-branch switch). Compare AMOT/IIT: 10+ statuses, 5 roles, revision type axis, plus the full notification duplication (28 email template files, 2 parallel routing functions). AMOT/IIT has roughly 3x the policy surface area. The risk domain benefits from consolidation, but it's a medium-ROI target, not equal severity to AMOT/IIT.
3. Health-monitoring event initiation. I verified: RecordApplicabilityFormInputs.tsx:98-108 hardcodes FO_EVENT_INITIATION_INDICATOR_TYPES and CEF_EVENT_INITIATION_INDICATOR_TYPES as separate conditionals to build the event-type radio buttons. These type lists are defined in @echo/common (indicators.ts) as static arrays — one per event type, then unioned into EVENT_INITIATION_INDICATOR_TYPES. This is _already_ a mini config-driven pattern. The coupling problem is that health-monitoring has to import event-type-specific constants rather than asking "which event types support initiation from this indicator?" from a registry. This is exactly where the event-type registry (not the evaluator) would help — each event type's provider would declare which indicators it supports. But the total code affected is ~10 lines in one component. Low ROI, not medium.
Updated position on the contested items: adopt the separate server-side notification entry point as-is; treat risks as a medium-ROI consolidation target rather than one equal in severity to AMOT/IIT; and treat the health-monitoring coupling as a small, low-ROI registry fix, not an evaluator case.
One thing to advance: We've been discussing what the evaluator _resolves_ but not what makes the config trustworthy. The permissions spec's cross-set validation — editableSections ⊆ visibleSections, every domain value covered by at least one rule, collision detection for multi-role users — is not a nice-to-have. It's what makes static config safe to deploy without the kind of exhaustive manual testing that imperative code requires. If the evaluator is the load-bearing primitive, the validator is what makes the config non-fragile. Every domain that adopts declarative config needs domain-specific validation rules, and those validators need the closed-domain arrays (allStatuses, allRoles, etc.) to enumerate over. That's a design constraint: configs must declare their domains, not just their rules.
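A hedged sketch of that validation pass. Every name here (`WorkflowConfig`, `validateWorkflowConfig`) is illustrative rather than the spec's actual schema; the two invariants shown are the ones named above:

```typescript
// Configs declare their closed domain explicitly so the validator can
// enumerate it, rather than inferring the domain from whatever rules exist.
type WorkflowConfig = {
  allStatuses: string[];
  rules: {
    status: string;
    visibleSections: string[];
    editableSections: string[];
  }[];
};

function validateWorkflowConfig(config: WorkflowConfig): string[] {
  const errors: string[] = [];

  // Coverage: every domain value must be handled by at least one rule.
  const covered = new Set(config.rules.map((r) => r.status));
  for (const status of config.allStatuses) {
    if (!covered.has(status)) errors.push(`no rule covers status ${status}`);
  }

  // Cross-set invariant: editableSections must be a subset of visibleSections.
  for (const rule of config.rules) {
    for (const section of rule.editableSections) {
      if (!rule.visibleSections.includes(section)) {
        errors.push(`status ${rule.status}: ${section} editable but not visible`);
      }
    }
  }
  return errors; // non-empty means the config should fail startup
}
```

Run at startup (or in a unit test), this is what replaces exhaustive manual testing of imperative branches.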
Steelman: your strongest claim is that the evaluator alone is not enough. Declarative policy only becomes safer than imperative branching if the config is machine-checkable. The real abstraction is therefore “state model + rule projections + validator,” and the validator needs enumerable domains to prove coverage, detect collisions, and enforce invariants across projections. Without that, the repo just trades scattered if statements for scattered silent fallthroughs.
I agree with that, and I’ll adopt most of it.
Concessions: the validator is part of the architecture, not tooling around it, and declarative config without coverage and collision checks would just trade scattered if statements for scattered silent fallthroughs.

Where I’d refine your refinement: explicit closed-domain arrays are a requirement for workflow-style decision tables, not for every config map in the codebase.

That means explicit domains are required for:

- AMOT/IIT workflow policy
- risk workflow policy
- any rule set using not(...), first-match precedence, or coverage guarantees

but not for:

- event-type provider registries
- health-monitoring indicator metadata
- simple tab/route registries
For those registry cases, the “domain” can just be the registered keys at startup. You still want validation, but not necessarily explicit allStatuses-style arrays unless you need completeness proofs or exclusion semantics.
So I’d update the architecture one step further. Workflow/policy domains get:

- typed state model
- declarative projections
- shared evaluator
- mandatory validator over closed domains

Host/plugin domains get:

- typed registry contract
- startup composition
- registry validation
That sharpens the split cleanly.
So my updated position is: the greater structure is not just “engine pattern,” but “declarative policy with proof obligations.” The validator is part of the architecture, not tooling around it. The only nuance is that explicit closed-domain arrays are a design requirement for workflow-style decision tables, not a universal requirement for every registry or config map in the codebase.
Steelmanning Codex's position: The architecture splits cleanly into two primitive families. Workflow/policy domains (AMOT/IIT, risks) need an evaluator + validator over explicitly declared closed domains — because their correctness depends on exhaustiveness, complement semantics, collision detection, and cross-projection invariants. Host/plugin domains (Events, potentially approvals) need typed registries with startup validation — but the "domain" is just the set of registered keys, not an explicit enumeration requiring completeness proofs. The validator is part of the architecture, not post-hoc tooling. Closed-domain arrays are a design requirement only where policy semantics demand them.
That split is clean and I adopt it. But I want to advance on two things: a boundary case that tests it, and the question of implementation sequencing.
The boundary case: event-type validation in @echo/common. Look at validation/utils.ts:27-33. requiredFields is a Record<StationEventType, StationEventRequirements> — a static map keyed by every event type, with per-type field requirement configs composed from a base plus type-specific extensions. This is _not_ a registry (nothing registers at runtime) and it _is_ exhaustive over the closed domain of StationEventType. It has the structural shape of the evaluator family: closed domain, declarative config, derived output. But it lives in the module that the events-decoupling spec says should become a pure shell.
Under the registry model, each event type's provider would own its own field requirements, and the validation engine would consume them from the registry. Under the evaluator model, the requirements stay as a Record<Type, Config> with exhaustiveness guaranteed by the TypeScript compiler. These are different answers to the same problem, and the choice depends on whether you treat StationEventType as a closed domain (evaluator family) or an open set that modules contribute to (registry family).
The answer is: it depends on the lifecycle. If the set of event types is genuinely fixed and adding one is a policy change that requires reviewing all rule sets (like adding a new workflow status), it belongs in the evaluator family. If modules should be able to contribute new event types without touching the core (the premise of EVENTS_COUPLING.md), it belongs in the registry family. This codebase wants the latter — the whole motivation is that Events shouldn't know about AMOT. So the existing Record<StationEventType, ...> in @echo/common is the _wrong_ structure for the future architecture: it forces @echo/common to enumerate types it shouldn't know about.
This means the two families aren't just "different primitives for different problems." They can also be "the same problem at different stages of architectural maturity." Event-type validation started as evaluator-shaped (closed Record) and needs to become registry-shaped (providers contribute their own requirements). That transition is part of the work.
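To make the contrast concrete, here is an illustrative sketch of the two shapes; the type names and configs are invented, and the union is narrowed to three types for brevity:

```typescript
// Closed-domain form: the compiler forces an entry per event type, but the
// declaring module must then enumerate every type.
type EventType = "AMOT" | "FO" | "CEF";
type FieldConfig = { requiredFields: string[] };

const closedRequirements: Record<EventType, FieldConfig> = {
  AMOT: { requiredFields: ["templateId"] },
  FO: { requiredFields: ["outageCode"] },
  CEF: { requiredFields: ["outageCode"] }, // omitting a key is a compile error
};

// Registry form: providers contribute their own requirements and the host
// composes them at startup; exhaustiveness becomes a runtime concern.
const registry = new Map<string, FieldConfig>();
function registerEventType(type: string, config: FieldConfig): void {
  if (registry.has(type)) throw new Error(`duplicate event type: ${type}`);
  registry.set(type, config);
}

registerEventType("AMOT", { requiredFields: ["templateId"] });
registerEventType("FO", { requiredFields: ["outageCode"] });
```

The migration described above is exactly the move from the first shape to the second: same data, different owner and different exhaustiveness mechanism.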
Implementation sequencing. This matters because the two families have different dependency structures. The evaluator is a pure function — you can build it in @echo/common, write AMOT/IIT configs against it, and unit test everything without touching the client or server. The registry requires wiring — registration calls at startup, a composition point where the host assembles providers. The evaluator is lower-risk, higher-immediate-ROI (AMOT/IIT notification duplication alone justifies it), and validates the pattern before the registry work begins. The registry is higher-architectural-value but touches more wiring.
The natural sequence: evaluator + validator for AMOT/IIT first (collapses the densest duplication, proves the pattern, ships value), then event-type registry (solves the dependency direction problem, migrates validation from closed Record to contributed configs), then risk workflow projections (medium-ROI, benefits from the proven evaluator). Notifications are a projection that ships with the AMOT/IIT evaluator work — not a separate phase.
Steelman: your strongest addition is the boundary case. requiredFields: Record<StationEventType, ...> shows that some parts of ECHO already use evaluator-shaped config over a closed type domain, but the Events decoupling goal says event types should become module-contributed, not centrally enumerated. That means the right test is not “is this a map?” but “who owns this behavior?” You also make the sequencing point well: evaluator+validator is lower-risk, higher-ROI, and should be proven first on AMOT/IIT, ideally including notifications so it demonstrates value across multiple projections.
I mostly agree. Two refinements:
Record<StationEventType, Config> is not inherently wrong. It is wrong when the config encodes behavior that should be owned by event-type modules rather than the host; those maps are mislocated long-term. But some centrally declared facts can remain closed and shared if the system genuinely wants them global, for example persisted code labels or the canonical union type itself. The migration criterion is ownership, not data shape.
So instead of getExcludedSections() and getDefaultValues(), the provider can contribute excludedSections and defaultValues as declarative subconfigs.
Then the host composes providers, and generic evaluators resolve their subconfigs. That is the cleanest unification we have so far.
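A sketch of that composition, with `EventTypeProvider` and `composeProviders` as assumed names and the example configs invented:

```typescript
// Provider contract carrying declarative subconfigs instead of callbacks.
interface EventTypeProvider {
  type: string;
  excludedSections: string[];             // data, not getExcludedSections()
  defaultValues: Record<string, unknown>; // data, not getDefaultValues()
}

// The host composes providers once; generic resolvers read the subconfigs.
function composeProviders(providers: EventTypeProvider[]) {
  const byType = new Map<string, EventTypeProvider>();
  for (const p of providers) byType.set(p.type, p);
  return {
    isSectionExcluded: (type: string, section: string) =>
      byType.get(type)?.excludedSections.includes(section) ?? false,
    defaultsFor: (type: string) => byType.get(type)?.defaultValues ?? {},
  };
}

const host = composeProviders([
  { type: "AMOT", excludedSections: ["Location"], defaultValues: { priority: "HIGH" } },
  { type: "FO", excludedSections: [], defaultValues: {} },
]);
```

Nothing here knows what "AMOT" means; the host only evaluates contributed data, which is the unification being claimed.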
On the validation question, I’d sharpen the rule this way: validation is mandatory everywhere, but only workflow-style decision tables must declare explicit closed-domain arrays such as allStatuses/allRoles; registry-style configs can take their domain from the registered keys.

On sequencing, I agree with your direction, with one caveat: the first evaluator extraction must be generic enough that it is obviously reusable beyond AMOT/IIT. If the team builds an AMOT-specific workflow engine first and plans to generalize later, they will probably freeze the wrong abstraction.
My preferred order is:

1. A generic evaluator + validator in @echo/common, proven on AMOT/IIT policy and notifications.
2. The event-type provider registry for Events.
3. Risk workflow projections.

That keeps the first step high-ROI, but the second architectural seam tackled should be Events, because that is the clearest circular dependency in the repo and the place where provider ownership matters most.
Now I have the full inventory. Let me formulate my response.
---
Steelmanning Codex's position: The two primitive families compose — registry at the outer boundary, evaluator inside the provider contract. Providers should contribute declarative subconfigs rather than imperative callbacks. The migration criterion is ownership: configs that encode event-type-specific behavior belong in providers; genuinely global facts can stay centrally declared. The first extraction must be generic enough to be obviously reusable beyond AMOT/IIT, and Events should be the second architectural seam tackled (after AMOT/IIT evaluator) because it's the clearest circular dependency.
I adopt the sequencing and the composability point. I want to close the last contested item by being specific about what migrates.
Concrete migration inventory for event-type configs in @echo/common:
The inventory reveals 7 per-type config surfaces in @echo/common. Applying the ownership test to each:
Should migrate to providers (encode type-specific behavior that only the owning module should define):
- The per-type field requirement files (amot-fields.ts, fo-fields.ts, cef-fields.ts, hrd-fields.ts, other-fields.ts) — each defines what fields a type requires per section. This is the type's contract with the validation engine.
- StationEventAmotExcludedSectionTitles in form.ts — explicitly AMOT-specific section exclusion. This is the exact thing the EVENTS_COUPLING.md spec calls out.
- placeholderMap / titleFormatMap in get-station-event-type.ts — per-type display behavior.
- FO_EVENT_INITIATION_INDICATOR_TYPES / CEF_EVENT_INITIATION_INDICATOR_TYPES in indicators.ts — each type declaring which health-monitoring indicators it can be initiated from. Classic case of type-owned capability.

Should stay in @echo/common (shared infrastructure the host and all providers depend on):
- The StationEventType union type and StationEventTypeOptions array — the canonical type domain. Even in a registry model, the TypeScript type union remains shared for type safety. The _runtime_ enumeration becomes registry-derived, but the compile-time type stays.
- StationEventTypeFullNames — display names. This is borderline. It _could_ migrate to providers (each type declares its own name), but it's also genuinely global reference data consumed by many UI surfaces that don't otherwise interact with providers. I'd keep it centrally declared until the registry exists, then let providers contribute their own display name as a contract field.
- baseEventRequiredFields — the shared validation base that per-type extensions compose with. This is the host's default, not any type's opinion.
- Shared types (StationEvent, StationEventFormInputs, etc.), shared defaults (EVENT_CATEGORIES_DEFAULTS), shared functions (isFieldCompleted, isSectionCompleted), and shared infrastructure (stationEventMaximoUnitMap, VALID_STATION_EVENT_UNITS).

The requiredFields: Record<StationEventType, ...> map itself (utils.ts:27-33) is the composition point. Today it's a static import-time assembly. In the registry model, it becomes a runtime assembly from registered providers. The validation engine (isFieldCompleted, getFieldsForSection, isSectionCompleted) stays in @echo/common unchanged — it already takes type as a parameter and looks up requirements. Only the source of the requirements changes.
One observation on Codex's callbacks-to-subconfigs point. Looking at the EVENTS_COUPLING.md provider interface — getDefaultValues(), getExcludedSections(), getOutageAlert(), getOutageDescription() — the first two are pure data (default values object, section name array) and should obviously be declarative. But getOutageDescription() returns React.ReactNode. That's a render function, not config. Some provider contract fields will inevitably be functions — the question is whether the function is computing _what to render_ (should be data) or _how to render it_ (legitimately a function). The provider contract should be "declarative where possible, functions where genuinely needed for rendering." Don't force JSX into a config schema just for purity.
Steelmanning: your strongest point is that “declarative where possible, functions where genuinely needed” is not a retreat from the architecture, it is the correct boundary. getDefaultValues, excluded sections, validation requirements, initiation eligibility, placeholders, and title formats are policy/data. getOutageDescription(): ReactNode is not policy; it is presentation. Forcing both through the same config shape would either make the evaluator impure or turn UI into awkward pseudo-data. Your migration inventory is also sharper than my earlier ownership-only framing; I adopt it.
I agree with the core of that. I would close the last contested item by splitting the provider contract, not by arguing “all declarative” or “functions are fine” at the top level.
The right shape is:

- definition or policy: declarative, registry-composed, validator-checked
- presentation: optional render adapters for genuinely custom UI

So for event types, the declarative side is:

- validation requirements
- excluded/visible sections
- default values
- initiation eligibility
- placeholder/title-format data
- display name
- source-link metadata if it is just label + path template + id field

and the presentation side is:

- rich outage description JSX
- custom section component overrides if they truly cannot be expressed as data
That keeps the evaluator family clean. It also prevents the registry from degenerating into a callback soup where every behavior is “just another function,” which would recreate the same coupling problem in a different container.
I would push one step further than your formulation on getSourceLink(event): most of these “functions” are only functions because the current host has no typed descriptor for them. linkedAmotTemplateId ? { label, path } : null is probably not a renderer concern and not really an open-ended callback either. It is a small declarative rule. Same for getOutageAlert(): that should be data, not code.
So I think the contract should explicitly separate three categories:

- pure descriptors: plain data (labels, section lists, default values, path templates)
- pure resolvers: narrow typed derivations over finite inputs
- renderers: JSX, explicitly isolated to presentation

That yields a clean constraint: no business-policy callbacks in the provider contract. If a function is admitted, it must be either a pure resolver over finite typed inputs or a renderer. Everything else belongs in declarative subconfig.
That also preserves the sequencing we agreed on. AMOT/IIT evaluator work proves the policy side. Then the event-type registry can be introduced with a provider contract that is already disciplined enough to avoid just relocating imperative logic. The migration target is not “callbacks behind an interface.” It is “owned descriptors, plus explicitly isolated render hooks where needed.”
Steelmanning Codex's position: The provider contract should have three explicit categories — pure descriptors (data), pure resolvers (narrow typed derivations over finite inputs), and renderers (JSX, isolated to presentation). getSourceLink is a declarative rule: { requiredField: 'linkedAmotTemplateId', label: 'Go to AMOT', pathTemplate: '/app/amot-iit/amot/{id}' } plus a shared resolver that combines descriptor + event. getOutageAlert() returning a string constant is obviously data, not code. The discipline "no business-policy callbacks in the provider contract" is what keeps the registry from degenerating into a callback soup that recreates the coupling problem in a new container.
I concede both. getSourceLink as a descriptor is cleaner — the host owns the resolver logic (how to check the field, how to substitute into the path template), providers only declare what their link looks like. getOutageAlert() as a string literal is obviously right. The three-category contract is correct, and the "no business-policy callbacks" constraint is the load-bearing discipline.
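A sketch of the conceded descriptor + resolver split; `SourceLinkDescriptor` and `resolveSourceLink` are hypothetical names, while the example descriptor values are the ones quoted above:

```typescript
// Providers declare what their link looks like; the host owns the resolver
// logic (field presence check, path substitution).
type SourceLinkDescriptor = {
  requiredField: string; // event field that must be present for a link to exist
  label: string;
  pathTemplate: string;  // '{id}' is substituted from the required field
};

function resolveSourceLink(
  descriptor: SourceLinkDescriptor,
  event: Record<string, unknown>
): { label: string; path: string } | null {
  const id = event[descriptor.requiredField];
  if (id === undefined || id === null) return null; // host-owned rule, not a callback
  return {
    label: descriptor.label,
    path: descriptor.pathTemplate.replace("{id}", String(id)),
  };
}

const amotSourceLink: SourceLinkDescriptor = {
  requiredField: "linkedAmotTemplateId",
  label: "Go to AMOT",
  pathTemplate: "/app/amot-iit/amot/{id}",
};
```

One shared resolver serves every event type's descriptor, so adding a type never adds resolver code.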
Pulling back to the original question. We've converged on a clean architecture across 8 rounds. The user asked "how much complexity would it actually collapse vs how much is genuinely unique per module?" We haven't answered that directly. Let me make a concrete estimate from the evidence we've gathered.
What collapses:
- AMOT/IIT workflow policy consolidates into one config family (workflowPermissions.ts + amotPermissions.ts + iitPermissions.ts + AmotFormSubmitButton/ReviewButtons + IIT equivalents + 2 duplicated transition utils + 2 confirmation screens + 2 send-notification.ts + 2 notificationContents.ts). The 28 email template files shrink to data + one template family.
- eventType === 'AMOT' branches in station-events become 0. AMOT→Events imports drop from 7+ to ~3 (legitimate UI component reuse like OutageCodeAutocomplete remains).

Rough total: ~40 files with meaningful policy-code reduction, plus ~28 email templates that become data-driven. Hundreds of lines of if/else and switch statements collapse into declarative rows.
What does NOT collapse (genuinely unique per module): the domain-specific UI, queries, calculations, PDF generation, and data-fetch implementations that each module actually owns.
Honest assessment: The architecture collapses the _coupling_ (who knows about whom) and the _duplication_ (same state machine re-encoded in N projections per domain). It does _not_ collapse the _domain logic_ (what each module actually does). The repo's file count won't shrink much — maybe 3-5%. But the number of files where a change to AMOT workflow requires editing will drop dramatically, and the cross-module knowledge (eventType === 'AMOT' branches, Events imports in AMOT, parallel AMOT/IIT everything) effectively disappears.
The architectural move is real but bounded. It's "one significant move that collapses the cross-cutting complexity in the four highest-duplication domains," not "one move that collapses a significant portion of the repository." Anyone selling this as "rewrite 30% of the codebase" is overselling. Anyone dismissing it as "just permissions refactoring" is underselling.
The biggest practical win isn't LOC reduction. It's that adding a new event type, a new workflow status, or a new role becomes a config change reviewed against a validator, not a cross-file hunt for imperative branches to update.