The Three-Layer Memory System Upgrade for Clawdbot
Give your Clawdbot a knowledge graph that compounds forever
Most AI assistants forget by default. Clawdbot doesn't. But out of the box, its memory is still static.
This guide upgrades Clawdbot's memory into a self-maintaining, compounding knowledge graph that evolves automatically as your life changes.
And all you have to do is copy this article into your Clawdbot; it will know what to do.
No stale context. No manual cleanup. No "I already told you this."
How Clawdbot Memory Works (Out of the Box)
Clawdbot already ships with solid primitives:
- AGENTS.md – behavioral rules and operating principles
- MEMORY.md – persistent user preferences
- Heartbeats – periodic wake-ups
- Cron jobs – scheduled automation
This is enough for basic continuity. Your Clawdbot remembers preferences, follows your rules, and can act proactively.
But there's a structural flaw.
All of this memory is static. You have to maintain it manually.
Life doesn't work that way.
You wrote "my boss Sarah is difficult" six months ago. You've since changed jobs. You like your new manager.
Your Clawdbot still thinks you hate your boss.
This system fixes that.
What This Upgrade Adds
The Three-Layer Memory System turns memory from a flat file into a living knowledge graph:
- Automatic fact extraction – Every ~30 minutes, a cheap sub-agent scans conversations and saves durable facts (pennies per day).
- Entity-based storage – Facts are stored by person, company, or project, not dumped into a single blob.
- Weekly synthesis – A Sunday cron rewrites summaries from raw facts and prunes stale context automatically.
- Superseding, not deleting – When facts change, old ones are marked historical. Full history is preserved.
Result: Your Clawdbot's understanding updates itself. Context stays current without manual edits.
The Three-Layer Architecture
Layer 1: Knowledge Graph (/life/areas/)
└── Entities with atomic facts + living summaries
Layer 2: Daily Notes (memory/YYYY-MM-DD.md)
└── Raw event logs – what happened, when
Layer 3: Tacit Knowledge (MEMORY.md)
└── Patterns, preferences, lessons learned
This isn't just memory. It's compounding intelligence.
Every conversation adds signal. Every week, that signal is distilled. Six months from now, your Clawdbot understands your life: structured, searchable, and current.
Layer 1: The Knowledge Graph
This is where the magic happens.
Every meaningful entity in your life gets a folder:
/life/areas/
├── people/
│   ├── sarah/          # Former boss (pre-upgrade villain arc)
│   │   ├── summary.md
│   │   └── items.json
│   ├── maria/          # Business partner
│   ├── emma/           # Family member
│   └── sarah-connor/   # Knows too much. Trust cautiously.
└── companies/
    ├── acme-corp/      # Old job
    ├── newco/          # Current job
    └── skynet/         # Do not give cron access
Atomic Facts (items.json)
Every fact is stored as a discrete, timestamped unit:
{
  "id": "sarah-003",
  "fact": "Difficult manager, micromanages",
  "timestamp": "2025-06-15",
  "status": "active"
}
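In practice, the write is a tiny append. Here's a minimal sketch in Python; add_fact, its ID scheme, and the entity path are illustrative assumptions, not Clawdbot internals:

import json
from datetime import date
from pathlib import Path

def add_fact(entity_dir: str, fact: str) -> str:
    """Append an atomic fact to an entity's items.json and return its id."""
    Path(entity_dir).mkdir(parents=True, exist_ok=True)  # create entity on first fact
    items_path = Path(entity_dir) / "items.json"
    items = json.loads(items_path.read_text()) if items_path.exists() else []
    fact_id = f"{Path(entity_dir).name}-{len(items) + 1:03d}"  # e.g. "sarah-003"
    items.append({
        "id": fact_id,
        "fact": fact,
        "timestamp": date.today().isoformat(),
        "status": "active",
    })
    items_path.write_text(json.dumps(items, indent=2))
    return fact_id

add_fact("life/areas/people/sarah", "Difficult manager, micromanages")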
When reality changes, facts are superseded, not erased:
{
  "id": "sarah-003",
  "status": "superseded",
  "supersededBy": "sarah-007"
},
{
  "id": "sarah-007",
  "fact": "No longer works together – left Acme Corp",
  "timestamp": "2026-01-15",
  "status": "active"
}
Nothing is lost. Your Clawdbot can trace how relationships evolved over time.
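The supersede operation itself is mechanical. A minimal Python sketch, assuming the same items.json layout as above (supersede_fact is a hypothetical helper, not Clawdbot's actual code):

import json
from datetime import date
from pathlib import Path

def supersede_fact(entity_dir: str, old_id: str, new_fact: str) -> str:
    """Mark old_id as superseded and append the replacement fact."""
    items_path = Path(entity_dir) / "items.json"
    items = json.loads(items_path.read_text())
    new_id = f"{Path(entity_dir).name}-{len(items) + 1:03d}"
    for item in items:
        if item["id"] == old_id:
            item["status"] = "superseded"     # never delete
            item["supersededBy"] = new_id     # keep the audit trail
    items.append({
        "id": new_id,
        "fact": new_fact,
        "timestamp": date.today().isoformat(),
        "status": "active",
    })
    items_path.write_text(json.dumps(items, indent=2))
    return new_id

supersede_fact("life/areas/people/sarah", "sarah-003",
               "No longer works together – left Acme Corp")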
Living Summaries (summary.md)
Your Clawdbot never loads hundreds of raw facts into context.
Instead, each entity has a weekly-rewritten snapshot:
# Sarah
Former manager at Acme Corp (2024–2025). No longer relevant after job change.
Old information fades naturally. Context stays lean and accurate.
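That tiered split is easy to honor in code. A minimal sketch, with load_context as a hypothetical helper: the lean summary loads by default, and the raw facts only when detail is explicitly requested:

import json
from pathlib import Path

def load_context(entity_dir: str, detail: bool = False) -> str:
    """Load the lean summary; pull active atomic facts only when asked."""
    base = Path(entity_dir)
    context = (base / "summary.md").read_text()
    if detail:
        items = json.loads((base / "items.json").read_text())
        active = [i["fact"] for i in items if i["status"] == "active"]
        context += "\n\nActive facts:\n" + "\n".join(f"- {f}" for f in active)
    return context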
Layer 2: Daily Notes
memory/2026-01-27.md – the raw timeline.
# 2026-01-27
- 10:30am: Shopping trip
- 2:00pm: Doctor follow-up
- Decision: Calendar events now use emoji categories
This is the "when" layer.
Clawdbot writes these continuously. Durable facts are later extracted into Layer 1.
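The write path for this layer is a one-liner per event. A minimal sketch (log_event is a hypothetical helper; the note format mirrors the example above):

from datetime import datetime
from pathlib import Path

def log_event(text: str, memory_dir: str = "memory") -> None:
    """Append a timestamped bullet to today's daily note, creating it if needed."""
    now = datetime.now()
    note = Path(memory_dir) / f"{now:%Y-%m-%d}.md"
    if not note.exists():
        note.write_text(f"# {now:%Y-%m-%d}\n")
    with note.open("a") as f:
        f.write(f"- {now:%H:%M}: {text}\n")

log_event("Doctor follow-up")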
Layer 3: Tacit Knowledge
MEMORY.md captures how you operate.
## How I Work
- Sprint worker – intense bursts, then rest
- Contact preference: Call > SMS > Email
- Early riser, prefers brief messages
## Lessons Learned
- Don't create cron jobs for one-off reminders
These aren't facts about the world. They're facts about you.
(The file already exists; the upgrade simply formalizes its role.)
The Compounding Engine
This is where basic Clawdbots get left behind.
Real-Time Extraction (Every ~30 Minutes)
A cheap sub-agent (e.g., Haiku, roughly $0.001 per run) scans recent conversations for durable facts:
- "Maria's company hired two developers"
- "Emma took her first steps"
- "Started new job, reports to James"
The main model stays idle unless you're actively chatting. Cost: pennies per day.
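For concreteness, here's what such an extraction pass could look like using the Anthropic Python SDK. The model alias, prompt, and JSON contract are illustrative assumptions, not Clawdbot's actual internals:

import json
import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY from the environment

EXTRACT_PROMPT = (
    "Extract durable facts from this conversation as a JSON list of\n"
    '{"entity": "people/<name>", "fact": "<one sentence>"} objects.\n'
    "Only include relationships, status changes, and milestones.\n"
    "Skip casual chat and temporary info. Return [] if nothing qualifies.\n\n"
    "Conversation:\n"
)

def extract_facts(transcript: str) -> list[dict]:
    """Run a cheap model over a transcript and return candidate durable facts."""
    client = anthropic.Anthropic()
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # any cheap model works; alias is an assumption
        max_tokens=1024,
        messages=[{"role": "user", "content": EXTRACT_PROMPT + transcript}],
    )
    return json.loads(response.content[0].text)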
Weekly Synthesis (Sunday)
Once a week, Clawdbot:
- Reviews newly added facts
- Updates relevant summaries
- Marks contradicted facts as historical
- Produces a clean, current snapshot
No manual edits. No stale assumptions.
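A minimal sketch of that synthesis loop, assuming the folder layout above; the prompt and model alias are illustrative, and marking contradicted facts would reuse the supersede helper from earlier:

import json
import anthropic
from pathlib import Path

def weekly_synthesis(areas_dir: str = "life/areas") -> None:
    """Rewrite each entity's summary.md from its currently active facts."""
    client = anthropic.Anthropic()
    for items_path in Path(areas_dir).glob("*/*/items.json"):
        entity = items_path.parent.name
        items = json.loads(items_path.read_text())
        active = [i["fact"] for i in items if i["status"] == "active"]
        if not active:
            continue  # nothing new to distill for this entity
        prompt = (
            f"Write a two-to-three sentence current-state summary of {entity} "
            "from these facts, letting newer facts override older ones:\n"
            + "\n".join(f"- {f}" for f in active)
        )
        response = client.messages.create(
            model="claude-3-5-haiku-latest",
            max_tokens=300,
            messages=[{"role": "user", "content": prompt}],
        )
        summary = response.content[0].text
        (items_path.parent / "summary.md").write_text(f"# {entity.title()}\n\n{summary}\n")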
The Flywheel
Conversation
  ↓
Facts extracted (cheap)
  ↓
Knowledge graph grows
  ↓
Weekly synthesis
  ↓
Better context next chat
  ↓
Better responses
  ↓
More conversation
This compounds.
- Week 1: Basic preferences
- Month 1: Routines, key people
- Month 6: Projects, milestones, relationships
- Year 1: A richer model of your life than most humans have
All human-readable. All searchable. Always current.
Why This Beats Everything Else
Vector databases / RAG – Black box. You can't inspect what the AI "knows."
Monolithic context files – Don't scale. Go stale. Expensive to load.
Basic Clawdbot – Strong foundation, but static.
Three-Layer Clawdbot – Readable files. Automatic maintenance. Compounding intelligence.
Implementation Guide
1. Create the folder structure
mkdir -p ~/life/areas/people
mkdir -p ~/life/areas/companies
mkdir -p ~/clawd/memory
2. Add to AGENTS.md
## Memory – Three Layers
### Layer 1: Knowledge Graph (`/life/areas/`)
- `people/` – Person entities
- `companies/` – Company entities
Tiered retrieval:
1. summary.md – quick context
2. items.json – atomic facts
Rules:
- Save facts immediately to items.json
- Weekly: rewrite summary.md from active facts
- Never delete – supersede instead
3. Add to HEARTBEAT.md
## Fact Extraction
On each heartbeat:
1. Check for new conversations
2. Spawn cheap sub-agent to extract durable facts
3. Write to relevant entity items.json
4. Track lastExtractedTimestamp
Focus: relationships, status changes, milestones
Skip: casual chat, temporary info
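The only state this flow needs is the lastExtractedTimestamp. A minimal sketch of that bookkeeping; the state-file location is an assumption:

import json
from datetime import datetime, timezone
from pathlib import Path

STATE_FILE = Path("memory/extraction-state.json")  # hypothetical location

def due_for_extraction(minutes: int = 30) -> bool:
    """True if at least `minutes` have passed since the last extraction pass."""
    if not STATE_FILE.exists():
        return True
    state = json.loads(STATE_FILE.read_text())
    last = datetime.fromisoformat(state["lastExtractedTimestamp"])
    return (datetime.now(timezone.utc) - last).total_seconds() >= minutes * 60

def mark_extracted() -> None:
    """Record that an extraction pass just completed."""
    STATE_FILE.write_text(json.dumps(
        {"lastExtractedTimestamp": datetime.now(timezone.utc).isoformat()}
    ))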
4. Weekly synthesis cron (Sunday)
## Weekly Memory Review
For each entity with new facts:
1. Load summary.md
2. Load active items.json
3. Rewrite summary.md for current state
4. Mark contradicted facts as superseded
5. Atomic fact schema
{
  "id": "entity-001",
  "fact": "The actual fact",
  "category": "relationship|milestone|status|preference",
  "timestamp": "YYYY-MM-DD",
  "source": "conversation",
  "status": "active|superseded",
  "supersededBy": "entity-002"
}
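A few lines of validation keep extracted facts honest against this schema. A minimal sketch (validate_fact is a hypothetical helper):

REQUIRED = {"id", "fact", "category", "timestamp", "status"}
CATEGORIES = {"relationship", "milestone", "status", "preference"}
STATUSES = {"active", "superseded"}

def validate_fact(item: dict) -> list[str]:
    """Return a list of schema violations; an empty list means the fact is valid."""
    errors = [f"missing field: {key}" for key in REQUIRED - item.keys()]
    if item.get("category") not in CATEGORIES:
        errors.append(f"bad category: {item.get('category')!r}")
    if item.get("status") not in STATUSES:
        errors.append(f"bad status: {item.get('status')!r}")
    if item.get("status") == "superseded" and "supersededBy" not in item:
        errors.append("superseded fact missing supersededBy")
    return errors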
The Result
A Clawdbot that:
- ✅ Never forgets
- ✅ Never goes stale
- ✅ Costs pennies to maintain
- ✅ Understands boss vs. former boss
- ✅ Gets smarter every week
While other assistants wake up with amnesia, yours wakes up better informed than yesterday.
The knowledge graph grows. The context improves. The responses get better.
This is the difference between an AI assistant and an AI that actually knows you.
Copy this article into your Clawdbot. Tell it to set up the Three-Layer Memory System. Then watch it compound.