# Adaptive Decay: The Scoring Formula That Makes Memory Behave Like Memory

> Static vector stores treat a five-year-old document the same as one indexed yesterday. Adaptive decay blends recency, frequency, and importance into a single score — and turns a vector index into a memory.

- **Category**: Technical Deep Dive
- **Read time**: 13 min read
- **Date**: May 14, 2026
- **Author**: Feather DB Engineering (Engineering Team)
- **URL**: https://getfeather.store/theory/adaptive-decay-scoring-formula

---

*Architecture Deep Dive · Adaptive Scoring · May 2026*

---

## The Symptom: A Vector Store That Won't Forget

Run a standard RAG pipeline for six months and you will notice something. Retrieval quality at month one is excellent. At month three, it is uneven. At month six, it has degraded into a kind of permanent staleness — the database has accumulated so much context that every query returns a mixture of the relevant present and the irrelevant past, with no signal to separate them.

This is not a bug in the embedding model. The model still computes similarity correctly. The bug is the assumption that *similarity is the only thing that matters*. In a system designed to behave like memory, similarity is one input. The other inputs are recency, frequency, and importance.

## The Mental Model: Memory as a Decaying Function with Reinforcement

Human memory is not a flat list of facts ranked by relevance. It is a graph of traces, each with its own decay curve, each curve reset and flattened by reuse. The cognitive-science term is the *spacing effect*: a memory accessed today has a flatter forgetting curve than a memory last accessed three years ago.
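The reinforcement idea can be sketched as a toy forgetting-curve model. This is illustrative only — `trace_strength`, its parameters, and the 30-day base half-life are assumptions made for this sketch, not the Feather DB formula:

```python
import math

def trace_strength(days_since_last_recall: float, recalls: int,
                   base_half_life: float = 30.0) -> float:
    """Toy spacing-effect model: each recall flattens the forgetting
    curve by growing the memory's half-life logarithmically."""
    half_life = base_half_life * (1 + math.log(1 + recalls))
    return 0.5 ** (days_since_last_recall / half_life)

# Ninety days after the last recall, a frequently-reused trace is
# far stronger than one that was stored once and never revisited.
rarely_used = trace_strength(90, recalls=0)   # half-life 30d, strength 0.125
often_used  = trace_strength(90, recalls=10)  # half-life ~102d, strength ~0.54
```

The point of the sketch is the shape, not the constants: reuse does not just bump a counter, it changes the slope of the decay curve.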
The adaptive decay formula in Feather DB encodes the same idea:

```python
import math

def adaptive_score(similarity, age_in_days, recall_count,
                   importance=1.0, half_life=90.0, time_weight=0.3):
    stickiness = 1 + math.log(1 + recall_count)
    effective_age = age_in_days / stickiness
    recency = 0.5 ** (effective_age / half_life)
    return ((1 - time_weight) * similarity + time_weight * recency) * importance
```

Four moving parts. Each one captures a different signal that vector similarity cannot.

### Stickiness — Frequency of Use

The `stickiness` term is logarithmic in recall count. A piece of context recalled ten times is treated as if it were ~3.4x younger than its calendar age suggests; a piece recalled a hundred times, ~5.6x younger. The log keeps the curve bounded — no single piece of context can dominate the index just by being retrieved often.

### Effective Age — Calendar Age, Discounted

Raw `age_in_days` is the calendar age. `effective_age` is calendar age divided by stickiness — a piece of frequently-recalled context is treated as much younger than its insertion date suggests. This is the operational definition of "this still matters."

### Recency — Exponential Decay With a Half-Life

Once you have effective age, the recency score is a standard exponential decay with a configurable half-life; the default is 90 days. A piece of context at an effective age of one half-life has recency 0.5; at two half-lives, 0.25; at three, 0.125. The half-life is the lever you tune per use case — operational logs decay fast, strategy briefs decay slowly.

### Importance — A Multiplier for Explicit Priority

Some context is important regardless of when it was written or how often it has been recalled. A founder's strategic principle. A compliance constraint. A safety guardrail. Feather DB lets you mark such context with an explicit `importance` multiplier — 1.0 by default, up to ~3.0 for material the system should never let drift to the background.

## Composing the Final Score

The final score blends similarity and recency in a tunable ratio, then multiplies by importance.
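To see how stickiness and recency interact, here is a self-contained worked example — a minimal sketch, with the helper name and sample numbers chosen for illustration, using the default 90-day half-life and `time_weight = 0.3`:

```python
import math

def score(similarity, age_in_days, recall_count,
          importance=1.0, half_life=90.0, time_weight=0.3):
    # Direct transcription of the four-term kernel described above.
    stickiness = 1 + math.log(1 + recall_count)
    effective_age = age_in_days / stickiness
    recency = 0.5 ** (effective_age / half_life)
    return ((1 - time_weight) * similarity + time_weight * recency) * importance

# Two items with identical similarity (0.80) and calendar age (300 days):
hot  = score(0.80, 300, recall_count=50)  # recalled often -> effective age ~61d
cold = score(0.80, 300, recall_count=0)   # never recalled -> effective age 300d
# hot ≈ 0.75, cold ≈ 0.59: same embedding distance, very different rank.
```

Fifty recalls compress 300 calendar days to roughly 61 effective days, lifting recency from ~0.10 to ~0.63 — enough to separate two items the similarity metric alone cannot tell apart.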
The `time_weight` parameter controls the blend:

- **time_weight = 0.0** — pure semantic similarity. Identical to standard vector search.
- **time_weight = 0.3** — the default. Recency is a meaningful signal but does not dominate.
- **time_weight = 0.7** — heavily time-biased. Use this for operational queries where freshness matters more than fit.

The right setting depends on the workload. A creative-brief retrieval system probably wants 0.2–0.3 (the right brief is the right brief, even if it's old). An ops dashboard retrieving recent events probably wants 0.6–0.8.

## What This Buys You in Practice

Three concrete behaviors fall out of adaptive decay, none of which a static vector store can replicate:

- **Spaced repetition for free.** The more an agent queries a piece of context, the more present it stays. Documentation that is constantly referenced never feels stale. Documentation that no one looks at fades naturally.
- **No manual curation tickets.** No "go through the docs and delete the old stuff" project. The access pattern *is* the curation signal.
- **Graceful degradation under volume.** A store with 10 million nodes does not retrieve worse than a store with 100k. The decay multiplier suppresses the long tail automatically.

## The Operational Lesson

A vector store is a similarity engine. A context engine is a similarity engine plus a model of time. Adaptive decay is what closes the gap. It is also the cheapest line of code in Feather DB — the entire scoring kernel is fewer than 80 lines of Rust — and the one with the highest leverage on output quality.

---

*Part of the architecture series. Next: [HNSW + Typed Edges](/theory/hnsw-typed-edges-fusion).*

---

*This is the machine-readable mirror of the theory post at [getfeather.store/theory/adaptive-decay-scoring-formula](https://getfeather.store/theory/adaptive-decay-scoring-formula). For the full Feather DB documentation, see [getfeather.store/llms-full.txt](https://getfeather.store/llms-full.txt).*
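The ranking effect of `time_weight` can be sketched with two hypothetical documents — all numbers here are illustrative, not taken from a real workload: an old document that matches the query well, and a fresh one that matches it less well.

```python
def blend(similarity, recency, time_weight, importance=1.0):
    # The similarity/recency blend from the scoring formula, importance included.
    return ((1 - time_weight) * similarity + time_weight * recency) * importance

old_doc = {"similarity": 0.90, "recency": 0.10}  # strong match, stale
new_doc = {"similarity": 0.70, "recency": 0.95}  # weaker match, fresh

# Pure similarity (time_weight = 0.0): the stale document wins, 0.90 vs 0.70.
assert blend(**old_doc, time_weight=0.0) > blend(**new_doc, time_weight=0.0)

# Heavily time-biased (time_weight = 0.7): the fresh one wins, 0.875 vs 0.34.
assert blend(**old_doc, time_weight=0.7) < blend(**new_doc, time_weight=0.7)
```

The crossover is the whole point of the knob: the same two documents, the same embeddings, and the ranking flips purely on how much the workload values freshness.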