# The Living Context Engine Glossary: Every Term, Defined

> The complete vocabulary of Living Context Engines and the surrounding ecosystem — adaptive decay, composite score, context chain, half-life, recall counter, stickiness, typed edges, and more. One canonical definition per term.

- **Category**: Reference
- **Read time**: 11 min read
- **Date**: May 15, 2026
- **Author**: Feather DB Engineering (Engineering Team)
- **URL**: https://getfeather.store/theory/living-context-engine-glossary

---

# The Living Context Engine Glossary: Every Term, Defined

*Reference · Updated May 2026*

---

Alphabetical. One canonical definition per term, with a pointer to the deeper post when one exists. Use this as the citable reference when writing or reading about Living Context Engines.

## Adaptive Decay

A scoring mechanism in which context nodes lose retrieval priority over time, but the decay rate is modulated by recall frequency. Frequently recalled context decays slowly; rarely recalled context decays quickly. Implemented as `recency = 0.5 ** (effective_age / half_life)`, where `effective_age = age_days / stickiness`. [Detail](/theory/adaptive-decay-scoring-formula).

## ANN (Approximate Nearest Neighbor)

An indexing structure that returns the top-k most similar vectors to a query without computing exact distances to every vector in the corpus. HNSW is the dominant ANN structure in 2026.

## BFS Traversal

Breadth-first search of the typed graph starting from seed nodes. Phase 2 of `context_chain` retrieval. Bounded by a hop limit and a candidate budget to keep latency predictable.

## Composite Score

The final scalar a Living Context Engine sorts retrieval results by. Combines similarity, recency, recall frequency, and importance into one value: `((1 - tw) * sim + tw * recency) * importance`.

## Context Chain

The named retrieval API in Feather DB. Takes a query vector, runs ANN search to find seeds, then traverses typed edges for N hops, scoring each node end-to-end.
Returns a connected subgraph rather than a flat list.

## Context Engine Loop

The four-phase feedback structure that turns a memory system into one that learns from use: Read, Reason, Update, Decay. The architectural primitive underneath every Living Context Engine. [Detail](/theory/context-engine-loop-intelligence).

## Decay State

The per-node metadata used by the composite scoring function: insertion timestamp, recall counter, importance multiplier, and (optionally) per-node half-life.

## Edge Type

A semantic label attached to an edge that determines which traversal queries return it. Common types: `derived_from`, `responds_to`, `contradicts`, `variant_of`, `supersedes`, `references`. Untyped edges are usually a sign of a missing schema.

## Effective Age

Calendar age divided by stickiness. The age the decay function actually uses. A piece of context inserted 90 days ago but recalled 10 times has an effective age of ~26 days.

## Embedded Engine

A database that runs as a library inside the application process, not as a separate server. Feather DB is embedded. The opposite is a service-mode database such as Pinecone or a self-hosted Qdrant cluster.

## Feather DB

An open-source, embedded Living Context Engine. Rust core, Python bindings, single-file storage, HNSW + typed graph + adaptive scoring fused in one kernel. v0.10.0 as of May 2026.

## Half-Life

The time after which a piece of context has half its original recency score, assuming no recalls. Configurable per node. Operational logs: ~7 days. Strategy: ~90 days. Foundational guidelines: ~1–3 years. [Detail](/theory/context-half-life-problem).

## HNSW (Hierarchical Navigable Small World)

The dominant ANN graph structure. A multi-layer, skip-list-shaped graph with greedy descent from a top-layer entry point. Provides recall/latency trade-offs via the `ef_search` parameter.

## Hop

One edge traversal in graph BFS. `hops=2` means seeds plus the neighbors of seeds plus the neighbors of those neighbors.
Most queries need 1–2; transitive queries occasionally need 3.

## Importance Multiplier

A per-node scalar that multiplies the composite score. Default 1.0. Reserved values up to ~3.0 for cross-cutting strategic material that should never decay out of retrieval.

## Intelligent Decay

One of the three architectural properties of a Living Context Engine. The combination of time-based decay, recall-based reinforcement, and explicit importance multipliers. See [Decay, Recall, Stickiness](/theory/decay-recall-stickiness).

## Letta (formerly MemGPT)

A stateful agent runtime with explicit memory tiers (core / archival). Strong on single-agent long-running sessions. The memory layer is coupled to the agent runtime, limiting reuse for multi-agent stores.

## Living Context Engine

A memory layer for AI systems with three properties: intelligent decay, relational structure, and a closed feedback loop. The canonical definition is at [What Is a Living Context Engine?](/theory/what-is-living-context-engine).

## Mem0

A conversation-tier memory layer built on top of vector DBs with LLM-as-judge extraction. A partial implementation of the Living Context Engine pattern — strong on conversational use cases, weaker on typed-edge traversal and fused-kernel decay. [Comparison](/theory/living-context-engine-vs-mem0-letta-zep).

## Modality

The type of content a node represents — text, image, video, audio. In a unified-modality store, all modalities live in one index, distinguished by a modality tag. [Detail](/theory/768-dimension-unified-vector-space).

## Node

The unit of context in a Living Context Engine. Owns a vector, a payload, typed outgoing edges, and decay state. Sized to a semantically complete thought, not a chunk of a document.

## Property Graph

A graph whose edges have types and payloads (not just IDs). The retrieval-traversable graph inside a Living Context Engine. Distinguished from a generic graph DB by being fused with the vector index.
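The scoring formulas defined above — stickiness, effective age, recency, and the composite score — compose into a single function. A minimal sketch, using the glossary's own defaults (`tw = 0.3`, importance `1.0`); the function names are illustrative, not the Feather DB API:

```python
import math

def stickiness(recall_count: int) -> float:
    # stickiness = 1 + ln(1 + recall_count)
    return 1.0 + math.log(1.0 + recall_count)

def recency(age_days: float, recall_count: int, half_life: float = 90.0) -> float:
    # effective_age = age_days / stickiness; recency halves every
    # half_life effective days: 0.5 ** (effective_age / half_life)
    effective_age = age_days / stickiness(recall_count)
    return 0.5 ** (effective_age / half_life)

def composite_score(sim: float, age_days: float, recall_count: int,
                    importance: float = 1.0, tw: float = 0.3,
                    half_life: float = 90.0) -> float:
    # ((1 - tw) * sim + tw * recency) * importance
    r = recency(age_days, recall_count, half_life)
    return ((1.0 - tw) * sim + tw * r) * importance

# The Effective Age example: inserted 90 days ago, recalled 10 times
print(round(90 / stickiness(10), 1))  # → 26.5
```

Note how the recall counter only ever enters through stickiness: recalls do not boost the score directly, they slow the clock that recency runs on.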
## RAG (Retrieval-Augmented Generation)

The pattern of retrieving relevant documents for a query and prepending them to an LLM prompt. A useful pattern, but not a Living Context Engine — RAG is similarity-only, list-shaped, and read-only. [Comparison](/theory/living-context-engine-vs-rag).

## Read–Reason–Update–Decay

The four phases of the Context Engine Loop. Read retrieves a connected subgraph; Reason calls the LLM; Update writes the output back; Decay adjusts scoring state. [Detail](/theory/read-reason-update-decay).

## Recall Counter

The per-node integer that counts how many times a node has been successfully retrieved. Feeds into stickiness, which lowers effective age, which raises recency.

## Recency

The time-decay portion of the composite score. `recency = 0.5 ** (effective_age / half_life)`.

## Relational Structure

One of the three architectural properties of a Living Context Engine. The presence of typed graph edges between context nodes, traversable at retrieval time.

## Seed

A node returned by Phase 1 (ANN search). Seeds become the starting points for Phase 2 graph traversal.

## Single-File Storage

An architectural choice where the entire engine state lives in one file on disk. Enables per-agent isolation, trivial backup, and easy checkpointing. [Detail](/theory/single-file-storage-context-engine).

## Stickiness

The logarithmic factor that turns recall count into a reduction of effective age. `stickiness = 1 + ln(1 + recall_count)`.

## Time Weight

The blend parameter (`tw`) that controls how much recency contributes to the composite score versus similarity. 0.0 is pure similarity; 1.0 is pure recency. Default 0.3.

## Typed Edge

An edge with a semantic label. See Edge Type.

## Unified Vector Space

A single embedding space where multiple modalities (text, image, video) coexist with meaningful cross-modal similarity. Requires a unified multimodal encoder like Gemini Embedding 2. [Detail](/theory/768-dimension-unified-vector-space).
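Several entries above — Seed, Hop, BFS Traversal, Edge Type — describe Phase 2 of `context_chain` retrieval. A minimal sketch of that bounded traversal, assuming a plain adjacency map of `(edge_type, neighbor)` pairs in place of the real storage layer; the function signature and data shapes are hypothetical, not the Feather DB API:

```python
from collections import deque

def context_chain_traverse(seeds, edges, hops=2, budget=64, edge_types=None):
    """Bounded BFS from ANN seeds over typed edges (Phase 2 sketch).

    `edges` maps node_id -> list of (edge_type, neighbor_id).
    Both the hop limit and the candidate budget cap the work done,
    which is what keeps traversal latency predictable.
    """
    visited = set(seeds)
    frontier = deque((s, 0) for s in seeds)  # (node, depth from seed)
    result = list(seeds)
    while frontier and len(result) < budget:
        node, depth = frontier.popleft()
        if depth == hops:          # hop limit reached on this branch
            continue
        for edge_type, neighbor in edges.get(node, []):
            if edge_types is not None and edge_type not in edge_types:
                continue           # edge-type filter, per the Edge Type entry
            if neighbor not in visited:
                visited.add(neighbor)
                result.append(neighbor)
                frontier.append((neighbor, depth + 1))
    return result

# hops=2: seeds, their neighbors, and neighbors of those neighbors
edges = {
    "A": [("derived_from", "C")],
    "C": [("responds_to", "D")],
    "D": [("references", "E")],  # three hops out: excluded at hops=2
}
print(context_chain_traverse(["A", "B"], edges))  # → ['A', 'B', 'C', 'D']
```

In the full pipeline the composite score would then rank every node in this subgraph, seeds and traversed neighbors alike, before the connected result is returned.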
## Write-Back

The step in the Context Engine Loop where the agent's output is persisted as a new context node with typed edges to its inputs. The architectural primitive that turns retrieval into memory. [Detail](/theory/closing-the-loop-feather-db).

## Zep

A conversation memory store with a built-in temporal knowledge graph (Graphiti). Strong graph story for conversation-shaped workloads. Service-mode deployment. [Comparison](/theory/living-context-engine-vs-mem0-letta-zep).

---

*Missing a term you want defined? Each entry here links to a fuller piece — start with the [canonical reference](/theory/what-is-living-context-engine).*

---

*This is the machine-readable mirror of the theory post at [getfeather.store/theory/living-context-engine-glossary](https://getfeather.store/theory/living-context-engine-glossary). For the full Feather DB documentation, see [getfeather.store/llms-full.txt](https://getfeather.store/llms-full.txt).*