The Architecture

Six layers.
One memory.

What happens under the hood when ResDB surfaces a pattern from your organization's knowledge — explained for the technical and non-technical reader alike.

The homepage explains what ResDB gives you. This page explains how it actually works. Each layer builds on the last. The result is a system that doesn't retrieve information — it remembers it.

01 Encoding

Knowledge encoded
as geometry

Every piece of knowledge ingested into ResDB — a document, a decision, a conversation, a data record — is encoded as a hypervector: a point in an 8,192-dimensional mathematical space.

This is not semantic indexing. ResDB doesn't store text chunks and find similar ones at query time. It encodes each piece of knowledge into a single geometric object that binds four things together simultaneously: what it says, what type of thing it is, when it happened, and where it came from. These layers are composed into one vector — not stored separately and joined later at query time.

In this space, relationships are geometry. Knowledge that shares structural patterns — regardless of the words used to express it — occupies related regions. A decision made in 2021 and the problem it caused in 2024 end up geometrically related, not because a human tagged them, but because the structure of causality is preserved in the encoding.

This is hyperdimensional computing, or HDC — the foundational departure from every other approach to knowledge retrieval. Raw semantic embeddings are transient inputs to the HDC encoder. They are never stored or used for retrieval.

Architecture detail

Vectors are 8,192 dimensions. A fixed basis of role vectors (theme, cause, effect, evidence, counter-evidence) and layer vectors (semantic, temporal, provenance, schema) is deterministically generated and approximately orthogonal — ensuring consistent encoding across the full knowledge base, at any scale. Composition uses the bind and superpose operations standard to hyperdimensional computing.
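The bind and superpose operations named above can be sketched in a few lines. This is a generic hyperdimensional-computing illustration, not ResDB's implementation: bipolar ±1 vectors, and all function and role names (`basis_vector`, `bind`, `superpose`, the strings passed in) are chosen for the example.

```python
import hashlib

import numpy as np

DIM = 8192  # dimensionality stated above

def basis_vector(name: str) -> np.ndarray:
    """Deterministically derive a bipolar role/layer vector from its name.
    Seeding from a hash of the name yields the same basis everywhere, and
    random +/-1 vectors in high dimensions are approximately orthogonal."""
    seed = int.from_bytes(hashlib.sha256(name.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return rng.choice(np.array([-1, 1]), size=DIM)

def bind(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Binding (elementwise multiply) associates a role with a filler.
    For bipolar vectors, binding by the role again recovers the filler."""
    return a * b

def superpose(*vectors: np.ndarray) -> np.ndarray:
    """Superposition (majority sign of the sum) merges bound layers into one
    vector that stays measurably similar to each component."""
    total = np.sum(vectors, axis=0)
    return np.where(total >= 0, 1, -1)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Normalised dot product: ~0 for unrelated vectors, 1 for identical."""
    return float(a @ b) / DIM
```

Encoding one item then looks like `superpose(bind(role_semantic, content), bind(role_temporal, when), bind(role_provenance, source))`: a single vector that remains close to each of its layers while staying near-orthogonal to everything else in the space.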

02 Retrieval

The field settles.
It doesn't search.

When you bring a question to ResDB, it is encoded the same way — as a hypervector in the same 8,192-dimensional space. What follows is not a search.

A process called field iteration begins. The seed activates nearby knowledge, which activates related knowledge, propagating across the hyperdimensional field. Each activation is weighted by semantic alignment, temporal relevance, and provenance strength. Activations spread, compete, and converge — until the field stabilises into a cluster.

That cluster is the result. Not the most similar documents. The pattern that your organization's collective knowledge converges to around your question. Every element mutually reinforces the others. Isolated similar-but-unrelated documents don't make it through — only what holds together as a coherent structure.

This is resonance — not a metaphor, but the actual name of the retrieval mechanic. Multiple attractor basins compete during convergence, preventing any single dominant signal from collapsing the result. The system surfaces the coherent pattern, not the loudest match.

Architecture detail

Field iteration uses attractor basin dynamics. Two thresholds gate basin capture: a minimum affinity to join a basin, and a higher threshold to update the basin prototype. A capacity prior penalty prevents winner-take-all collapse, ensuring multiple coherent patterns can coexist in a single result. The process continues until activation shift falls below a convergence threshold or maximum iterations are reached.
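A toy version of that loop, to make the mechanics concrete. Everything here is illustrative: the threshold values, the weight matrix (built from raw cosine similarity, where the real system combines semantic, temporal, and provenance factors), and the function names are all invented for the sketch.

```python
import numpy as np

JOIN_TAU = 0.3     # minimum affinity to join a basin (illustrative value)
UPDATE_TAU = 0.6   # higher bar to shape the basin prototype (illustrative)
EPS, MAX_ITERS = 1e-4, 50

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

def field_iterate(seed: np.ndarray, weights: np.ndarray,
                  vectors: np.ndarray) -> np.ndarray:
    """Spread activation from the encoded query across the field until it
    settles: each step re-activates every item via its weighted neighbours,
    stopping when the largest activation shift drops below EPS."""
    act = np.clip(vectors @ seed, 0.0, None)   # initial affinity to the seed
    for _ in range(MAX_ITERS):
        new = weights @ act                    # neighbours activate neighbours
        if new.max() > 0:
            new = new / new.max()              # keep activations in [0, 1]
        if np.abs(new - act).max() < EPS:      # convergence: the field settled
            return new
        act = new
    return act

def capture_basin(act: np.ndarray, vectors: np.ndarray):
    """Two-threshold capture: items above JOIN_TAU belong to the cluster, but
    only items above UPDATE_TAU are averaged into the basin prototype."""
    members = np.flatnonzero(act >= JOIN_TAU)
    core = np.flatnonzero(act >= UPDATE_TAU)
    prototype = vectors[core].mean(axis=0) if core.size else None
    return members, prototype
```

The behaviour the prose describes falls out of the loop: a mutually reinforcing cluster keeps pumping activation into itself across iterations, while an item that merely resembles the seed but has no connections into the cluster fades below the join threshold.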

03 Time

Time is structure,
not a filter.

Most knowledge systems treat time as a label — a date you can filter by after the fact. In ResDB, time is intrinsic to the architecture.

Every relationship in the system carries a half-life. Knowledge decays exponentially: recent signals carry more weight automatically, without manual curation or archiving. This decay is applied during field iteration — it shapes which patterns activate and how strongly — not as a post-processing step after retrieval.

Three temporal horizons are available: SHORT (~7 days), MID (~90 days), LONG (~365 days). These govern how quickly a relationship's influence fades. Short-horizon queries surface what's current. Long-horizon queries surface historical patterns that remain structurally relevant even as their recency fades.

The result: ResDB naturally favours what's current, while remaining capable of surfacing historical patterns when the structure of the knowledge demands it. The past is never discarded — it is weighted. This is why the same system can answer "what is our current position on this?" and "what pattern has appeared three times in five years?" with equal reliability.

Architecture detail

Temporal decay is applied per-edge during field iteration using exponential decay, weighted by the age of each relationship relative to its configured half-life. A separate temporal alignment boost activates based on horizon matching — a SHORT-horizon query preferentially activates recent knowledge before decay weighting is even applied. Half-lives are configurable per horizon and per deployment.
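The per-edge weighting reduces to a one-liner. A sketch in Python: the half-life constants come from the horizons above; the function names and the composition with a base weight are illustrative.

```python
import math

HALF_LIVES_DAYS = {"SHORT": 7.0, "MID": 90.0, "LONG": 365.0}  # horizons above

def decay_weight(age_days: float, horizon: str) -> float:
    """Exponential decay: an edge exactly one half-life old keeps half its
    influence, two half-lives old a quarter, and so on — never reaching zero."""
    half_life = HALF_LIVES_DAYS[horizon]
    return math.exp(-math.log(2.0) * age_days / half_life)

def effective_edge_weight(base_weight: float, age_days: float,
                          horizon: str) -> float:
    """Applied per-edge during field iteration, not after retrieval."""
    return base_weight * decay_weight(age_days, horizon)
```

The numbers show how "the past is weighted, never discarded" cashes out: a three-year-old relationship contributes essentially nothing on a SHORT horizon but retains one eighth of its weight on a LONG horizon, three half-lives on.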

04 Provenance

Every result
is fully traceable.

Every piece of knowledge in ResDB carries its full lineage from the moment of ingestion: where it came from, what it was derived from, when it entered the system, how confident the system is in it.

This lineage is hash-chained. Each record in the provenance DAG is cryptographically linked to its parent via SHA-256. If anything in the chain has been altered, it shows. The provenance is tamper-evident and fully auditable at every step.

When ResDB surfaces a result, it rehydrates the complete provenance chain. You receive more than an answer: you receive the answer together with the full chain of evidence behind it, traceable to original sources. For governance, compliance, regulated industries, and any decision that needs to be explained or defended, this is foundational.

Provenance also enables ResDB to surface contradiction explicitly: when one part of the organization holds a belief that another part's evidence contradicts, the lineage structure makes that tension visible rather than averaging it away.

Architecture detail

Provenance is stored as a directed acyclic graph, with each record cryptographically linked to its ancestors via SHA-256 hash chaining. DAG traversal supports forward (descendants), backward (ancestors), and bidirectional queries. Every resonance result includes full provenance rehydration as a standard part of the response — not an optional query parameter.
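A minimal sketch of the hash-chaining idea, using only the Python standard library. The record shape and field names are invented for the example; only the SHA-256 parent-linking mirrors the description above.

```python
import hashlib
import json

def record_hash(payload: dict, parent_hashes: list[str]) -> str:
    """Hash a provenance record together with its parents' hashes, so any
    alteration upstream changes every downstream hash."""
    body = json.dumps({"payload": payload, "parents": sorted(parent_hashes)},
                      sort_keys=True).encode()
    return hashlib.sha256(body).hexdigest()

def verify_chain(records: dict[str, dict]) -> bool:
    """Recompute each record's hash from its stored payload and parent links.
    `records` maps claimed hash -> {"payload": ..., "parents": [...]}."""
    return all(record_hash(r["payload"], r["parents"]) == h
               for h, r in records.items())
```

Because each hash covers the parents' hashes, editing any ancestor record invalidates every descendant: tampering anywhere in the lineage is detectable from the leaves of the DAG.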

05 Self-Regulation

The memory
maintains itself.

Most systems degrade as they scale. More data means more noise, more interference between patterns, lower retrieval coherence. ResDB has a control layer designed specifically to prevent this.

The homeostatic system continuously monitors the health of the memory — not just operational metrics like latency and GPU pressure, but coherence: the measure of attractor stability in the field. When the quality of the memory itself begins to drift, the system responds. It adjusts edge weights, tightens traversal depth, and slows or pauses ingestion if vector drift is detected. It does not wait to be told. It maintains the memory as it grows.

The adaptive control uses Liquid Neural Networks — a class of continuous-time neural network with ODE-based dynamics, developed at MIT — running alongside classical PID control in a hybrid arrangement. The combination is deliberate: PID handles stable, near-linear regimes; LNNs handle the non-linear drift and delayed effects that classical control misses. The system monitors its own performance and switches the blend automatically.

The biological analogy is deliberate: homeostasis. An organism maintaining internal equilibrium as its environment changes. ResDB maintains memory coherence as your organizational knowledge grows. The longer it runs, the more it learns about keeping itself healthy.

Architecture detail

The control loop monitors query latency, memory and GPU pressure, cache efficiency, and coherence score — attractor stability — each against explicit setpoints. Multiple actuated parameters respond to control signals: traversal depth, edge weight scaling, activation sparsity, ingestion rate, and others. The LNN component uses Liquid Time Constant neurons with adaptive time constants; stability is monitored via Lyapunov energy functions. The PID/LNN blend adjusts automatically as regimes shift.
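The PID half of that hybrid is standard control engineering. A sketch in Python: the gains, setpoint, and toy plant are illustrative, and the LNN is reduced to a stand-in output, since its continuous-time dynamics are out of scope here.

```python
class PID:
    """Classical PID loop on one monitored signal, e.g. coherence vs. setpoint."""

    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement: float, dt: float) -> float:
        """Return a control signal from proportional, integral and derivative terms."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

def blend(pid_out: float, lnn_out: float, alpha: float) -> float:
    """Hybrid arrangement: alpha near 1 favours PID (stable, near-linear
    regime); alpha near 0 hands control to the LNN (drift, delayed effects)."""
    return alpha * pid_out + (1.0 - alpha) * lnn_out
```

In a stable regime the PID term alone holds a monitored signal at its setpoint; the document's point is that the blend shifts toward the LNN automatically when that stops being true.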

06 Interface

Familiar interfaces.
Novel engine inside.

The "DB" in ResDB is intentional. We took neuroscience-inspired associative memory and wrapped it in the interfaces your engineering team already knows how to work with. The architecture underneath is new. The way you talk to it doesn't have to be.

RQL — Resonant Query Language — reads like SQL. Most engineers pick it up in an hour. But where SQL instructs a query planner to find records, RQL parameterises the field: what to seed with, which time horizons to activate, what trust constraints to apply, what form the result should take. Counterfactual queries remove a key assumption or data source and measure how the pattern shifts — stress-testing organizational beliefs before committing to a decision. Predictive queries project forward on a long temporal horizon, surfacing structural tendencies in the knowledge base. Both are native to the retrieval mechanic, not bolt-on features.

MCP (Model Context Protocol): any agent or tool that already speaks MCP — Claude, Cursor, and a growing list of others — connects to ResDB's memory directly. No custom adapter. No middleware. The agent queries organizational memory the same way it calls any other MCP server.

A2A (Agent-to-Agent Protocol): ResDB participates natively in multi-agent pipelines as a memory node. Agents query it, receive resonance results with full provenance, and pass them downstream. Memory becomes a first-class participant in orchestrated agent workflows — not a lookup service bolted on the side.

A REST API covers everything else. Same pattern, any language, any stack.

Architecture detail

RQL is a domain-specific language. Core clauses: WITH seed "..." (encodes the query into the HDC field), USING policy name@version (applies governance constraints), WHERE (property, temporal, and trust filters applied before field iteration), RETURN CLUSTER TOP k [INCLUDE_PROVENANCE] (result shape and depth), COUNTERFACTUAL { ... } (assumption removal and impact measurement), PREDICT horizon=long (forward structural projection). Queries are constraint declarations for the field — not instructions for a query planner.
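Putting those clauses together, an illustrative query only: the clause names come from the list above, while the seed text, policy name, filter field, and values are invented for the example.

```
WITH seed "recurring supplier onboarding failures"
USING policy compliance@3
WHERE trust >= 0.7
RETURN CLUSTER TOP 10 INCLUDE_PROVENANCE
```

Per the clause list, the same seed can instead drive a COUNTERFACTUAL { ... } block to measure how the pattern shifts without a given assumption, or a PREDICT horizon=long query for a forward structural projection.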

Ready to build on it?

Beta available Q2 2026. Limited early access partners accepted now.

Request Early Access
Read the Whitepaper →