TL;DR
- A neurosymbolic system is only as good as its task decomposition. The neural component handles fluent recognition; the symbolic component handles typed structure and validation. Atomization is what tells the orchestrator which is which.
- We decompose every conflict-analysis task into atomic operations against the ACO. Each operation is either NEURAL-recognise or SYMBOLIC-validate-or-query.
- This is what lets DIALECTICA be deterministic where determinism matters (graph state, audit trail) and probabilistic where it has to be (recognition over noisy text).
- Frameworks for agentic decomposition exist (ReAct, the AutoGPT lineage, DSPy, LangGraph). None of them, in our experience, survive contact with adversarial or contested data without a typed scaffold underneath.
The atomization rule
Every analyst-facing operation reduces to a sequence of atomic ops against the kernel. Each op is one of: CREATE <Primitive>, CREATE <Edge>, UPDATE <StatusTransition>, INVALIDATE <Edge>, QUERY @ <time>, VALIDATE <Extension>. Recognition over text emits candidate ops; the validator decides commit or quarantine.
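A minimal sketch of that op vocabulary and the commit-or-quarantine decision, in Python. The names (`OpKind`, `AtomicOp`, `route`) are illustrative stand-ins, not the kernel's actual API:

```python
from dataclasses import dataclass
from enum import Enum

class OpKind(Enum):
    # The six atomic op kinds from the rule above.
    CREATE_PRIMITIVE = "CREATE <Primitive>"
    CREATE_EDGE = "CREATE <Edge>"
    UPDATE_STATUS = "UPDATE <StatusTransition>"
    INVALIDATE_EDGE = "INVALIDATE <Edge>"
    QUERY_AT = "QUERY @ <time>"
    VALIDATE_EXT = "VALIDATE <Extension>"

@dataclass
class AtomicOp:
    kind: OpKind
    payload: dict
    confidence: float  # set by the neural side; symbolic ops are 1.0

def route(op: AtomicOp, schema_ok, committed: list, quarantined: list) -> None:
    """Validator decision: commit the op if it conforms, else quarantine it."""
    if schema_ok(op):
        committed.append(op)
    else:
        quarantined.append(op)
```

Recognition only ever emits `AtomicOp` candidates; nothing reaches the graph without passing through `route`.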
Which side does which work
- Neural (AGON/KAIROS): span extraction, alias resolution, speech-act classification, candidate edge proposal with confidence.
- Symbolic (validator): schema conformance, kernel invariant check, extension invariant check, provenance binding, bi-temporal consistency.
- Neural again (RZN layer): open-question reasoning over the typed graph as a hard scaffold, never as a soft retrieval source.
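The division of labour above can be sketched as a neural proposal type plus a symbolic validator. The field layout and the `ALLOWED_RELATIONS` set are hypothetical stand-ins for the real extension schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateEdge:
    # Emitted by the neural side: a proposed edge plus recognition confidence.
    src: str
    dst: str
    relation: str
    confidence: float
    source_span: tuple  # (doc_id, start, end) — provenance binding
    valid_from: int     # valid time: when the relation held in the world
    tx_time: int        # transaction time: when we learned it

# Hypothetical extension schema: the relations this extension permits.
ALLOWED_RELATIONS = {"supports", "contests", "supersedes"}

def validate(edge: CandidateEdge, now: int) -> list:
    """Symbolic side: return a list of violations; an empty list means commit."""
    errors = []
    if edge.relation not in ALLOWED_RELATIONS:
        errors.append("schema: unknown relation")
    if not edge.source_span:
        errors.append("provenance: missing source span")
    if edge.tx_time > now:
        errors.append("bi-temporal: transaction time in the future")
    return errors
```

Note the asymmetry: the neural side attaches a confidence, but the validator's checks are boolean. Confidence never substitutes for conformance.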
Why most agent frameworks struggle here
ReAct-style loops are tool-call orchestration with no typed memory. AutoGPT-lineage systems accumulate self-generated context until accuracy decays (Letta's Context-Bench documents the ceiling). DSPy and LangGraph give you better composition but no domain ontology. Conflict-domain work demands typed memory plus domain ontology plus bi-temporal honesty; the agent framework is necessary but not sufficient.
What the typed scaffold buys you
- Determinism on graph state: the same atomic ops applied in the same order produce the same graph, regardless of the model.
- Auditability: every op carries a triggering claim or event, a transaction-time stamp, and a source span. The audit log is the engine's honest face.
- Reproducibility under model upgrades: the recognition side can change; the graph state cannot move silently.
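The determinism claim above can be demonstrated with a replay sketch, assuming a simplified op-log format: folding the same ordered log twice yields the same graph fingerprint, and every audit entry carries its transaction-time stamp and source span. The function names are illustrative, not the engine's API:

```python
import hashlib
import json

def apply_log(ops):
    """Fold an ordered op log into graph state. Same log, same order, same state."""
    graph = {"nodes": set(), "edges": set()}
    audit = []
    for op in ops:
        if op["kind"] == "CREATE_PRIMITIVE":
            graph["nodes"].add(op["id"])
        elif op["kind"] == "CREATE_EDGE":
            graph["edges"].add((op["src"], op["dst"]))
        # Every applied op leaves an audit entry: (tx_time, source_span, kind).
        audit.append((op["tx_time"], op["source_span"], op["kind"]))
    return graph, audit

def fingerprint(graph):
    """Canonical, order-independent hash of graph state for replay comparison."""
    canon = json.dumps({"nodes": sorted(graph["nodes"]),
                        "edges": sorted(graph["edges"])})
    return hashlib.sha256(canon.encode()).hexdigest()
```

Replaying the log under a new recognition model and comparing fingerprints is exactly the reproducibility check: if the committed ops are the same, the hashes match; if the model proposes different ops, the divergence is visible in the audit trail, never silent.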