Theory · 10 min read · May 14, 2026

The Context Half-Life Problem: Modeling Knowledge Like a Physicist

Not all context decays at the same rate. A safety guardrail has a half-life measured in years; an ad performance log has one measured in days. Modeling those rates explicitly is the difference between a useful memory and a noisy archive.

Feather DB Engineering
Engineering Team


Theory · Living Context Engine Series · May 2026


Knowledge Isn't Uniform

A common mistake in memory system design is to assume one decay rate applies across the corpus. It doesn't. Different categories of context decay at fundamentally different rates, and modeling those rates explicitly is the difference between a memory that ages gracefully and one that turns into noise.

Borrow from physics. Every radioactive isotope has a characteristic half-life: carbon-14's is 5,730 years, iodine-131's is 8 days. Both obey the same exponential decay law, but the time constants differ by a factor of roughly 260,000. You wouldn't model both with the same constant. Context is no different.
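The same decay law with two different time constants can be sketched directly; the isotope half-lives are real physical values, and the function is purely illustrative:

```python
def remaining_fraction(age_days: float, half_life_days: float) -> float:
    """Fraction of the original quantity left after age_days,
    under the exponential decay law 0.5 ** (t / T)."""
    return 0.5 ** (age_days / half_life_days)

CARBON_14_DAYS = 5730 * 365.25   # ~5,730 years, in days
IODINE_131_DAYS = 8.0

# After 30 days, iodine-131 is mostly gone; carbon-14 is untouched.
print(remaining_fraction(30, IODINE_131_DAYS))   # ~0.074
print(remaining_fraction(30, CARBON_14_DAYS))    # ~1.0
```

One law, two constants: the entire difference in behavior lives in the denominator of the exponent.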

Five Half-Life Regimes

In practice, the context an AI agent works with falls into roughly five regimes:

1. Operational Trace — Half-Life ≈ 1–7 Days

Ad performance traces. Model inference logs. Support ticket transcripts. These are useful for short-window analysis and lose value rapidly once the window closes. A 30-day-old ad performance log rarely informs today's decision.

2. Tactical Context — Half-Life ≈ 14–30 Days

Campaign briefs, weekly status documents, sprint plans. Relevant during the cycle, semi-relevant for the next cycle, mostly archival after that.

3. Strategic Context — Half-Life ≈ 60–120 Days

Quarterly briefs, audience research, competitor intelligence reports. Relevant across a quarter or two; slow to age out.

4. Foundational Context — Half-Life ≈ 1–3 Years

Brand guidelines, foundational strategy, audience personas, market thesis. Aging matters but slowly.

5. Invariant Context — Effective Half-Life → ∞

Safety guardrails, compliance constraints, regulatory baselines, ethical principles. These should not decay materially. Encode them with explicit importance multipliers and very long half-lives.
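The five regimes reduce to a small lookup table of default half-lives. The names and midpoint values below are illustrative choices drawn from the ranges above, not an official Feather DB API:

```python
# Hypothetical regime table; values are midpoints of the ranges above, in days.
REGIME_HALF_LIFE_DAYS = {
    "operational": 4.0,          # 1-7 days
    "tactical": 21.0,            # 14-30 days
    "strategic": 90.0,           # 60-120 days
    "foundational": 730.0,       # 1-3 years
    "invariant": float("inf"),   # should not decay
}

def default_half_life(regime: str) -> float:
    return REGIME_HALF_LIFE_DAYS[regime]
```

Note that an infinite half-life falls out of the recency formula for free: `0.5 ** (age / inf)` is `0.5 ** 0.0`, which is exactly `1.0`, so invariant context never loses recency weight.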

How Feather DB Encodes Per-Node Half-Life

Feather DB attaches the half-life as a per-node parameter, defaulting to a store-wide value (typically 90 days). At retrieval time, the recency curve uses the node's own half-life:

recency_for_node = 0.5 ** (effective_age / node.half_life)

This is a minor implementation detail with major architectural consequences. A single store can house operational logs and brand guidelines simultaneously, and the retrieval kernel ranks them on the same scale without forcing one regime to dominate the other.
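A minimal sketch of the per-node recency computation described above. The `Node` type and the fallback logic are assumptions for illustration; only the formula and the 90-day store default come from the text:

```python
from dataclasses import dataclass
from typing import Optional

STORE_DEFAULT_HALF_LIFE_DAYS = 90.0  # store-wide default from the text

@dataclass
class Node:
    # half_life_days=None means "fall back to the store default"
    half_life_days: Optional[float] = None

def recency_for_node(node: Node, effective_age_days: float) -> float:
    half_life = node.half_life_days or STORE_DEFAULT_HALF_LIFE_DAYS
    return 0.5 ** (effective_age_days / half_life)

# An operational log (2-day half-life) and a brand guideline (3-year
# half-life) score on the same 0..1 scale at the same age:
log = Node(half_life_days=2.0)
guideline = Node(half_life_days=3 * 365.0)
print(recency_for_node(log, 30))        # near zero
print(recency_for_node(guideline, 30))  # near one
```

Because every node's score lands in the same 0..1 range, the retrieval kernel can rank a two-day log against a three-year guideline without any cross-store merge step.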

How to Pick a Half-Life

The half-life encodes a question: "how long until this is half as useful as it is now?" Answer that question per category, in calendar units that feel honest, then round to one of the five regimes above. You will not need more granularity than that. Because the score is exponential in age, missing the true half-life by a factor of two squares or square-roots the recency weight; rankings degrade gracefully rather than flipping.
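To see how forgiving the exponential is, compare the recency weight under a true half-life against estimates that are off by a factor of two in either direction (illustrative numbers):

```python
def recency(age_days: float, half_life_days: float) -> float:
    return 0.5 ** (age_days / half_life_days)

age = 30.0
true_h = 30.0

exact = recency(age, true_h)            # 0.5
too_long = recency(age, 2 * true_h)     # sqrt(0.5) ~ 0.707
too_short = recency(age, true_h / 2)    # 0.5 ** 2 = 0.25

# Overestimating the half-life by 2x square-roots the weight;
# underestimating by 2x squares it. Both are bounded distortions.
assert abs(too_long - exact ** 0.5) < 1e-12
assert abs(too_short - exact ** 2) < 1e-12
```

A factor-of-two error moves a 0.5 weight to roughly 0.71 or 0.25, not to zero or one, which is why the five coarse regimes are granular enough.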

The Architectural Lesson

Treating half-life as a per-node parameter is what makes a single memory store work for an entire organization. Without it, you would need separate stores for separate decay regimes and a merge layer to query across them — exactly the multi-bucket problem the architecture is designed to avoid.

The physics analogy is more than rhetorical. Half-life is a real time constant on real decay processes, and treating it that way — explicitly, per-category, configurable per-node — produces a memory that ages with the same kind of regular law that everything else in nature does. That is the Living Context Engine's quiet structural advantage.


Part of the Living Context Engine series. Next: From Vector Store to Living Engine: A Migration Guide.