Institutional reasoning fails at handoff. The analyst leaves; the file becomes opaque. Generic AI stores text. The knowledge layer stores typed reasoning that survives the analyst — because the kernel is fixed, the extensions are logged, and the provenance is mandatory.
Inheritance is what makes a code library reusable. It is also what makes an institution survive the people who staff it. Both are the same problem: something that costs a lot to build must be useful to whoever shows up next, without forcing them to rebuild it.
Institutional knowledge work fails at handoff. The analyst rotates; the desk officer moves on; the mediator retires. What gets inherited today is a folder of PDFs and a few decks. What needs to be inherited is the reasoning: which actor owns which interest, which commitment is still active, which event triggered which policy shift, which narrative stopped being credible last March.
Generic AI does not solve this. It stores text and generates text. It does not maintain typed structure across sessions. The 2025 work on context engineering and Letta’s Context-Bench document the same failure mode: as agents accumulate self-generated context, accuracy decays in ways that look fluent and sound confident. An institution that stores its reasoning in a chat transcript pays that bill twice: once on the way in, once on the way back out.
The TACITUS knowledge layer treats inheritance as a first-class architectural property. The kernel is fixed across cases. Per-case extensions are logged, named, and reviewable. Every claim has a source span. Every commitment is bi-temporal: the graph records both when a fact held in the world and when the system learned it. The graph survives the analyst because the analyst was never the storage layer in the first place; the typed graph was.
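A minimal sketch of what those properties imply in code, assuming nothing about the actual TACITUS schema: the class names, field names, and the `active_on` query below are illustrative inventions, not the real API. The point is that a claim carries its source span, a commitment carries both valid time and record time, and a query over the graph needs no analyst in the loop.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass(frozen=True)
class SourceSpan:
    """Mandatory provenance: every claim points back to a span in a source document."""
    doc_id: str
    start: int
    end: int

@dataclass
class Commitment:
    """Bi-temporal record: when the commitment held in the world (valid time)
    versus when the graph learned about it (transaction time)."""
    actor: str
    text: str
    source: SourceSpan          # provenance is not optional
    valid_from: date
    valid_to: Optional[date]    # None means still active
    recorded_at: date

def active_on(c: Commitment, day: date) -> bool:
    """Answer 'which commitment is still active?' from the graph alone."""
    return c.valid_from <= day and (c.valid_to is None or day < c.valid_to)

# Hypothetical example record, for illustration only.
c = Commitment(
    actor="ministry_x",
    text="grants ceasefire monitoring access",
    source=SourceSpan(doc_id="memo-2024-117", start=402, end=488),
    valid_from=date(2024, 3, 1),
    valid_to=None,
    recorded_at=date(2024, 3, 4),
)
print(active_on(c, date(2025, 1, 1)))  # True: still active, provenance attached
```

The separation of `valid_from`/`valid_to` from `recorded_at` is what lets a successor ask not only "what was true in March" but "what did we believe in March", which is the question a chat transcript cannot answer.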
Placeholder paragraph. The full note is being written. Send comments to hello@tacitus.me.