# Living Context Engine for Performance Marketing: How Agencies Win With Compounding Memory

> Performance marketing generates more signal than any other domain — and stores almost none of it usefully. A Living Context Engine turns briefs, executions, post-mortems, and competitor moves into one connected graph that gets smarter every campaign.

- **Category**: Use Case
- **Read time**: 13 min read
- **Date**: May 15, 2026
- **Author**: Feather DB Engineering (Engineering Team)
- **URL**: https://getfeather.store/theory/living-context-engine-performance-marketing

---

# Living Context Engine for Performance Marketing: How Agencies Win With Compounding Memory

*Use Case · Performance Marketing · May 2026*

---

## The Signal That Goes Nowhere

A typical mid-sized performance marketing agency runs hundreds of campaigns a year. Every campaign produces a brief, a strategy document, dozens of creative executions, audience research, competitor scans, mid-flight optimizations, and a post-mortem. That's an enormous stream of structured signal — most of it generated by humans, all of it valuable for the next campaign.

In practice, that signal goes nowhere useful. Briefs live in Google Docs. Executions live in DAM systems. Post-mortems live in Slack threads or Notion. The AI tools the team uses see none of it — they see the new brief in isolation and produce generic output. Six months in, the AI feels exactly as smart as it did on day one, because functionally it is.

A Living Context Engine is the substrate that captures all of it as connected context. The agency's institutional memory becomes the AI's working memory.
## The Graph Shape

### Node Types

| Node | Content | Half-life |
| --- | --- | --- |
| Brief | Strategy brief, campaign objectives, audience | 180 days |
| Execution | Ad copy, hero image, video cut, landing page | 90 days |
| Performance | CTR, conversion rate, CPA, ROAS for an execution | 30 days |
| Competitor move | Competitor creative or messaging asset | 60 days |
| Audience insight | Persona research, segment analysis | 365 days |
| Brand guideline | Voice, palette, prohibited claims | 730 days |
| Post-mortem | What worked, what didn't, why | 365 days (high importance) |

### Edge Types

- `derived_from` — Execution → Brief
- `responds_to` — Brief → Competitor move
- `measured_by` — Execution → Performance
- `variant_of` — Execution → Execution (sister creative)
- `contradicts` — Post-mortem → Brief (when the strategy was wrong)
- `references` — Brief → Audience insight
- `obeys` — Brief → Brand guideline

## The Loop in Action

### 1. New campaign briefed

A planner writes a Q3 brand-x brief. The brief is added as a Brief node, with `references` edges to relevant audience insights and `obeys` edges to the brand guidelines.

### 2. Read — context for creative

The creative agent calls `context_chain`:

```python
chain = db.context_chain(
    embed(brief.summary),
    k=8,
    hops=2,
    edge_types=["derived_from", "responds_to", "measured_by", "variant_of"],
)
```

The returned subgraph includes: similar past briefs, the executions derived from them, the performance scores attached to those executions, the audience insights they referenced, recent competitor moves, and the post-mortems from previous quarters.

### 3. Reason — generate creative

The agent generates a set of executions — headlines, hero image concepts, video scripts. Because the subgraph is connected, the agent knows what worked last quarter, what the competitor is doing, and what the brand voice forbids — without anyone manually prepending that context to the prompt.

### 4. Update — persist as new graph nodes

Each generated execution is added as an Execution node, with a `derived_from` edge to the brief, `variant_of` edges to sister executions, and an `obeys` edge to the brand guideline it follows. The graph densifies.

### 5. Decay — performance is the signal

As performance data arrives (impressions, clicks, conversions), Performance nodes are added with `measured_by` edges. This is the most valuable signal in marketing, and the graph puts it to work: strong-performing executions get importance boosts that propagate back to the Brief and the Audience insight, while underperforming ones decay normally.

## What This Buys an Agency

### Brief-Aware Creative Generation

The agent for a new brief operates with the full graph of prior briefs, prior executions, and prior performance. Generic output stops happening — not because the model is smarter, but because the substrate carries the agency's earned intelligence.

### Competitor-Reactive Strategy

A competitor move added to the graph as a Competitor move node connects (via similarity + typed edges) to the briefs and executions it's most likely to affect. The agent for the next brief sees the competitor activity in context — not as a manual "did you see what brand-y did?" prompt.

### Cross-Modal Coherence

Using [Gemini Embedding 2's unified 768-dim space](/theory/768-dimension-unified-vector-space), text + image + video executions all live in one index. Coherence between a script and its hero image becomes measurable. Incoherent creative (a script that says one thing while the image says another) is flagged automatically.

### Post-Mortem Compounding

Post-mortems are the most undercaptured signal in agency life. Wired as high-importance nodes with edges to the briefs they critique, they reshape future retrievals. A post-mortem that says "this audience segment doesn't respond to luxury framing" will, three quarters later, suppress luxury-framed creative for that segment in the agent's connected subgraph.
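The reinforcement-and-decay behavior of step 5 can be sketched in a store-agnostic way. Nothing below is Feather DB's API: `HALF_LIFE_DAYS`, `decayed_importance`, `propagate_boost`, and the 0.5 damping factor are all illustrative assumptions, though the half-life values themselves come from the node-type table above.

```python
# Hypothetical half-lives (days) per node type, taken from the table above.
HALF_LIFE_DAYS = {
    "performance": 30,
    "execution": 90,
    "brief": 180,
    "audience_insight": 365,
}

def decayed_importance(importance, age_days, kind):
    """Plain exponential decay: importance halves once per half-life."""
    return importance * 0.5 ** (age_days / HALF_LIFE_DAYS[kind])

def propagate_boost(nodes, edges, start_id, boost, damping=0.5, floor=1e-3):
    """Push a performance boost back through the graph.

    `nodes` maps id -> {"importance": float}; `edges` maps id -> list of
    (target_id, edge_type) pairs. A strong Performance node boosts the
    Execution it measures, which in turn boosts the Brief it derived
    from, each hop damped by `damping`. The `seen` set cuts cycles.
    """
    frontier, seen = [(start_id, boost)], set()
    while frontier:
        node_id, b = frontier.pop()
        if node_id in seen or b < floor:
            continue
        seen.add(node_id)
        nodes[node_id]["importance"] += b
        for target_id, _edge_type in edges.get(node_id, []):
            frontier.append((target_id, b * damping))
```

The damping factor is the key design choice here: it decides how strongly a winning execution elevates the brief and audience insight it traces back to, relative to its own boost.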
## What This Replaces

- **The "creative brief library" folder.** Briefs are first-class nodes; relevant ones surface via retrieval.
- **The "competitive scan" Notion page nobody reads.** Competitor moves are first-class; they connect to the strategy graph automatically.
- **The "what did we learn last quarter?" planning meeting.** The graph carries last quarter's learnings into this quarter's retrievals.
- **The persistent feeling that AI creative tools "don't know our brand."** Because they do now — the substrate carries it.

## An Agency-Scale Snapshot

```python
# Multi-client structure: one .feather file per client
client_dbs = {client: DB.open(f"{client}.feather", dim=768) for client in clients}

def brief_to_creative(client, brief_text, llm):
    db = client_dbs[client]
    # Read: pull the connected subgraph around the new brief
    chain = db.context_chain(embed(brief_text), k=10, hops=2)
    # Reason: generate executions with that subgraph as context
    executions = llm.generate_creative_set(format_context(chain), brief_text)
    # Update: persist the brief and its executions as new nodes
    brief_id = add_node(db, brief_text, kind="brief")
    for ex in executions:
        ex_id = add_node(db, ex.text, kind="execution")
        db.link(ex_id, brief_id, edge_type="derived_from")
    for n in chain.nodes:
        if n.metadata.get("kind") == "audience_insight":
            db.link(brief_id, n.id, edge_type="references")
    db.save()
    return executions
```

## Why This Is the Right Workload

Performance marketing has three properties that make it the ideal workload for a Living Context Engine: a high volume of signal per unit time, structured human artifacts (briefs and post-mortems), and a clean downstream metric (campaign performance) to drive importance reinforcement. Agencies that wire this substrate now build a compounding advantage — institutional memory the AI actually sees — that they will not give up easily.
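The coherence check described under "Cross-Modal Coherence" above reduces to comparing vectors in the shared embedding space. A minimal sketch, assuming each asset has already been embedded into the same unified space; the `coherence_flags` helper, the asset names, the toy 2-dim vectors in the usage note, and the 0.6 threshold are illustrative assumptions, not Feather DB's API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def coherence_flags(assets, threshold=0.6):
    """Flag pairs of campaign assets whose embeddings disagree.

    `assets` is a list of (name, vector) pairs for one campaign's
    creatives (script, hero image, video cut), all embedded into the
    same space. Pairs whose similarity falls below `threshold` are
    returned for human review.
    """
    flagged = []
    for i in range(len(assets)):
        for j in range(i + 1, len(assets)):
            (name_a, vec_a), (name_b, vec_b) = assets[i], assets[j]
            sim = cosine(vec_a, vec_b)
            if sim < threshold:
                flagged.append((name_a, name_b, sim))
    return flagged
```

With unit vectors pointing in different directions, e.g. `coherence_flags([("script", [1.0, 0.0]), ("hero_image", [0.0, 1.0])])`, the orthogonal pair is flagged; the threshold that separates "incoherent" from "acceptably different" would have to be tuned per campaign.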
---

*Related: [The Context Layer Performance Marketing Actually Needs](/theory/the-context-layer-performance-marketing-actually-needs) · [Earlier: Agencies adopting Living Context](/theory/living-context-performance-marketing-agencies).*

---

*This is the machine-readable mirror of the theory post at [getfeather.store/theory/living-context-engine-performance-marketing](https://getfeather.store/theory/living-context-engine-performance-marketing). For the full Feather DB documentation, see [getfeather.store/llms-full.txt](https://getfeather.store/llms-full.txt).*