diff --git a/context/reddit-posts.md b/context/reddit-posts.md new file mode 100644 index 00000000..5f7ec5af --- /dev/null +++ b/context/reddit-posts.md @@ -0,0 +1,141 @@ +# I built a context management plugin and it CHANGED MY LIFE + +Okay so I know this sounds clickbait-y but genuinely: if you've ever spent 20 minutes re-explaining your project architecture to Claude because you started a new chat, this might actually save your sanity. + +The actual problem I was trying to solve: + +Claude Code is incredible for building stuff, but it has the memory of a goldfish. Every new session I'd be like "okay so remember we're using Express for the API and SQLite for storage and—" and Claude's like "I have never seen this codebase in my life." + +What I built: + +A plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude itself lol), and injects relevant context back into future sessions. + +So instead of explaining your project every time, you just... start coding. Claude already knows what happened yesterday. + +How it actually works: + +Hooks into Claude's tool system and watches everything (file reads, edits, bash commands, etc.) + +Background worker processes observations into compressed summaries + +When you start a new session, the last 10 summaries get auto-injected + +Built-in search tools let Claude query its own memory ("what did we decide about auth?") + +Runs locally on SQLite + PM2; your code never leaves your machine + +Real talk: + +I made this because I was building a different project and kept hitting the context limit, then having to restart and re-teach Claude the entire architecture. It was driving me insane. Now Claude just... remembers. It's wild. + +Link: https://github.com/thedotmack/claude-mem (AGPL-3.0 licensed) + +It is set up to use Claude Code's new plugin system. Type the following to install, then restart Claude Code. 
+ +/plugin marketplace add thedotmack/claude-mem + +/plugin install claude-mem +Would love feedback from anyone building real projects with Claude Code: does it help you keep momentum, save tokens, and get more use out of Claude Code? Thanks in advance! + +=============================================================================== + +# How is Claude-Mem different from Claude's New Memory Tool? + +A few people have been asking this question on the claude-mem thread I posted yesterday, so I wanted to put up a definitive answer that explains the differences and how the two tools can actually be complementary. I used the "claude code docs agent" to help figure this out: + +--- + +Based on the documentation, here are the key differences between your Claude-Mem tool and Claude's official memory tool: + +Scope and Architecture +Claude's Memory Tool is designed for single-session memory management within conversations (1). It provides commands like view, create, str_replace, insert, delete, and rename for managing memory files during a conversation (1). The tool automatically includes this instruction: "IMPORTANT: ALWAYS VIEW YOUR MEMORY DIRECTORY BEFORE DOING ANYTHING ELSE" (1). + +Your Claude-Mem is a comprehensive multi-session persistence system that captures context across different Claude Code sessions. It uses hooks to automatically capture tool usage, process observations through the Claude Agent SDK, and restore context when new sessions start. + +Memory Persistence +Claude's Memory Tool focuses on within-session memory management. It helps Claude maintain context during a single conversation by reading and writing to memory files (1). 
+ +Your Claude-Mem provides cross-session persistence by: + +Capturing every tool execution through PostToolUse hooks (2) + +Processing observations through the Claude Agent SDK (3) + +Automatically injecting summaries from the last few sessions into new session contexts + +Using SQLite with FTS5 full-text search for retrieval + +Integration Method +Claude's Memory Tool is a built-in tool that works through the standard tool use interface (1). + +Your Claude-Mem integrates as a Claude Code plugin using multiple hooks: + +SessionStart for context injection (2) + +UserPromptSubmit for session initialization (2) + +PostToolUse for observation capture (2) + +Stop for summary generation (2) + +SessionEnd for cleanup (2) + +Search and Retrieval +Claude's Memory Tool provides basic file operations for memory management (1). + +Your Claude-Mem includes an MCP server with 6 specialized search tools: + +search_observations - Full-text search across observations + +search_sessions - Search across session summaries + +find_by_concept - Find by tagged concepts + +find_by_file - Find by file paths + +find_by_type - Find by observation type + +advanced_search - Combined search with filters + +Use Cases +Claude's Memory Tool is ideal for maintaining context within a single conversation, helping with tasks that require remembering information throughout the session (1). + +Your Claude-Mem addresses the broader challenge of maintaining project knowledge across multiple Claude Code sessions, essentially solving the session isolation problem that can occur in Claude Code (4). + +Your tool appears to be complementary to Claude's memory tool rather than directly competing - it operates at the session level while Claude's memory tool operates within conversations. + +=============================================================================== + +# Hot take... "You're absolutely right!" is a bug, not a feature + +When Claude first started saying "You're absolutely right!" 
I started instructing it to "never tell me I'm absolutely right" because most of the time, it didn't do any verification or thinking before deeming my suggestion "the absolutely right one." + +Now we're many versions later, and the team at Claude has embraced "You're absolutely right!" as a "cute" addition to their overall brand, fully accepting this clear anti-pattern. + +Is Claude just "smarter" now? Do you perceive "You're absolutely right!" as being given the "absolute right" solution, or do you feel as though you need to clarify or follow up when this happens? + +One of the foundations of my theory behind priming context with claude-mem is this: + +"The less Claude has to keep track of that's unrelated to the task at hand, the better Claude will perform that task." + +The system I designed uses a parallel instance to manage the memory flow. It receives data as it comes in, but the Claude instance you're working with doesn't have any instructions for storing memories. It doesn't need them. That's all handled in the background. + +This decoupling matters because every instruction you give Claude is cognitive overhead. + +When you load up context with "remember to store this" or "track that observation" or "don't forget to summarize," you're polluting the workspace. Claude has to juggle your actual task AND the meta-task of managing its own memory. + +That's when you get lazy agreement. + +I've noticed that when Claude's context window gets cluttered with unrelated instructions, this pattern of lazy agreement shows up more and more. + +Agreeing with you is easier than deep analysis when the context is already maxed out. + +"You're absolutely right!" becomes the path of least resistance. + +When Claude can focus purely on your code, your architecture, your question - without memory management instructions competing for attention - it accomplishes tasks faster and more accurately. + +The difference is measurable. + +The "You're absolutely right!" 
reflex drops off noticeably because there's room in the context window for actual analysis instead of performative agreement. + +What do you think? Does this bother you as much as it does me? 😭 diff --git a/context/v5-linkedin-post.md b/context/v5-linkedin-post.md new file mode 100644 index 00000000..336e94b2 --- /dev/null +++ b/context/v5-linkedin-post.md @@ -0,0 +1,84 @@ +# LinkedIn Launch Post - Claude-mem v5.0 + +Every developer using Claude Code knows this workflow: + +/init → Claude learns your codebase +Work for a while → Context fills up +/clear → Everything's gone +Next session → Re-learn everything again + +**Your AI coding assistant has amnesia.** + +And it's costing you money and time on every session. + +## The Solution + +I built claude-mem: a persistent memory system that makes Claude remember across sessions. + +Not conversation summaries. Not compressed chat logs. Actual persistent memory—capturing every tool execution, processing it with AI, and making it instantly recallable. + +## How It Works + +**Hybrid Architecture:** +- ChromaDB for semantic vector search (finds conceptually relevant context) +- SQLite for temporal ordering (newest information first) +- FTS5 keyword search as fallback (works without Python) + +**Automatic Context Loading:** +Every session start loads your last 50 observations in <200ms. No /init. No research phase. + +You see: +→ What you were working on (session summaries) +→ What Claude learned (bugfixes, features, decisions) +→ Chronological timeline (newest first) +→ Token costs (so you know what's expensive to recall) + +## The Breakthrough: Temporal Context + +Most AI memory systems focus on semantic similarity. But that's only half the equation. + +**Without timestamps, information becomes stale.** A bugfix from yesterday is more relevant than architecture notes from last month—even if the semantic similarity is lower. + +Claude-mem combines both: semantic relevance + temporal recency. + +The result? 
Claude starts each session knowing your current codebase state. No re-learning. No wasted tokens. + +## Real-World Impact + +After months of development across 1,400+ sessions: +- 8,200+ vector documents indexed +- <200ms query performance +- Session startup context loads automatically +- Natural language search when you need something from weeks ago + +My Claude rarely needs to /init anymore. Hit /clear, start new session, keep working. + +## The Paradox + +Claude-mem's startup context got so good that Claude rarely uses the search tools. + +The last 50 observations are usually enough. But when you need to recall something specific from weeks ago, the context timeline instantly reconstructs that moment. + +Development becomes **pleasant instead of repetitive.** +**Token-efficient instead of wasteful.** +**Focused instead of constantly re-explaining.** + +--- + +**claude-mem v5.0 just shipped** 🚀 + +Open source (AGPL-3.0): https://github.com/thedotmack/claude-mem + +Install in Claude Code: +``` +/plugin marketplace add thedotmack/claude-mem +/plugin install claude-mem +``` + +Python optional but recommended for semantic search. Falls back to keyword search without it. + +--- + +**Question for the community:** How much time do you spend re-explaining your codebase to AI assistants after clearing context? + +#AI #DeveloperTools #ProductivityTools #ClaudeAI #OpenSource #VectorDatabase #SemanticSearch #DeveloperProductivity diff --git a/context/v5-reddit-FINAL-DRAFT.md b/context/v5-reddit-FINAL-DRAFT.md new file mode 100644 index 00000000..174527dd --- /dev/null +++ b/context/v5-reddit-FINAL-DRAFT.md @@ -0,0 +1,114 @@ +# Your Claude forgets everything after /clear. Mine doesn't. + +You know the cycle. + +/init to learn your codebase. Claude reads everything, understands your architecture, builds context. + +You work for a while. Context window fills up. Eventually you hit /clear. + +Everything's gone. + +Next session: Claude reads CLAUDE.md again. 
Does the research again. Re-learns your codebase again. + +**Tokens cost money. Research takes time. Claude forgets.** + +This cycle is killing productivity. + +## I built persistent memory that survives /clear + +Not summaries. Not compressed conversations. [Actual persistent memory](https://github.com/thedotmack/claude-mem)—capture everything Claude does, process it with AI, make it instantly recallable across sessions. + +Early on I tried vector stores, MCPs, memory tools. ChromaDB for vector search. But documents were massive—great for semantic matching, terrible for context efficiency. + +That led to the hybrid approach. + +## How it works + +SQLite database with semantic chunking. ChromaDB for vector search when you need it—incredibly fast, incredibly relevant. FTS5 keyword search as fallback. + +The magic? This loads automatically at every session start. No /init. No research phase. + +Here's what I see when I start a new session on my "claude-mem-performance" project: + +``` +📝 [claude-mem-performance] recent context +──────────────────────────────────────────────────────────── + +Legend: 🎯 session-request | 🔴 bugfix | 🟣 feature | 🔄 refactor | ✅ change | 🔵 discovery | 🧠 decision + +💡 Progressive Disclosure: This index shows WHAT exists (titles) and retrieval COST (token counts). 
+ → Use MCP search tools to fetch full observation details on-demand (Layer 2) + → Prefer searching observations over re-reading code for past decisions and learnings + → Critical types (🔴 bugfix, 🧠 decision) often worth fetching immediately + +Nov 3, 2025 + +🎯 #S651 Read headless-test.md and use plan mode to prepare for writing a test (Nov 3, 1:27 PM) [claude-mem://session-summary/651] + +🎯 #S650 Read headless-test.md and use plan mode to prepare for writing a test (Nov 3, 1:27 PM) [claude-mem://session-summary/650] + +test_automation.ts + #3280 1:31 PM ✅ Updated test automation prompts for Kanban board project (~125t) + +🎯 #S652 Read headless-test.md and use plan mode to prepare for writing the test (Nov 3, 1:32 PM) [claude-mem://session-summary/652] + +General + #3281 1:33 PM 🔵 Examined test automation script (~70t) + +test_automation.ts + #3282 1:34 PM 🟣 Implemented full verbose output mode for tool execution visibility (~145t) + #3283 1:35 PM ✅ Enhanced plan generation streaming with partial message support (~109t) + +🎯 #S653 Read headless-test.md and use plan mode to prepare for writing the test (Nov 3, 1:35 PM) + +Completed: Modified the generatePlan function in test_automation.ts to support `includePartialMessages: true` and integrate the streamMessage handler for unified streaming output. This improves the real-time feedback mechanism during plan generation. + +Next Steps: 1. Read and analyze headless-test.md to understand test requirements. 2. Use plan mode to generate a test implementation strategy. 3. Write the actual test based on the plan. +``` + +**What you're seeing:** +- Session summaries (🎯) - what you were working on +- What Claude learned - observations with type indicators (bugfix, feature, change, discovery) +- Token costs - so you know what's expensive to recall +- Chronological flow - recent work, newest first +- Loaded in <200ms at session start + +Timeline order: your past sessions, Claude's work, what was learned, what's next. 
And when you need something from weeks ago? Natural language search + instant timeline replay gets you there in <200ms. + +## The breakthrough: temporal context + +Most memories are duplicate knowledge. Your architecture doesn't fundamentally change every session. + +But some memories are **changes**. Bugfixes. Refactors. Decisions. + +Without timestamps, without knowing what's "newest," your information is stale. And stale information means Claude has to research—the token-heavy work I'm trying to eliminate. + +## The paradox + +Claude-mem's startup context got so good that Claude rarely uses the search tools anymore. + +The last 50 observations at session start are usually enough. /clear doesn't reset anything—next session starts exactly where you left off. + +But when you need to recall something specific from weeks ago, the context timeline instantly gets Claude back in the game for that exact task. + +**No /init. No research phase. No re-learning.** + +Just: start session, Claude knows your codebase, you work. + +Development becomes pleasant instead of repetitive. Token-efficient instead of wasteful. Focused instead of constantly re-explaining. + +--- + +**claude-mem v5.0** just shipped: https://github.com/thedotmack/claude-mem + +Python optional but recommended for semantic search. Falls back to keyword search if you don't have it. + +**Install in Claude Code:** +``` +/plugin marketplace add thedotmack/claude-mem +/plugin install claude-mem +``` + +Anyone else tired of both paying and WAITING for Claude to re-learn their codebase after every /clear? 
\ No newline at end of file diff --git a/context/v5-reddit-post-draft.md b/context/v5-reddit-post-draft.md new file mode 100644 index 00000000..c6818862 --- /dev/null +++ b/context/v5-reddit-post-draft.md @@ -0,0 +1,81 @@ +# The problem with AI memory isn't storage—it's the research tax + +Every time you ask Claude to work on something, there's this invisible token cost you're paying before it even starts: contextualization. + +"Fix the auth bug" requires Claude to first figure out: +- What auth system are you using? +- What changed recently? +- What was the last decision about auth? +- Is that info even current, or is it from 3 weeks ago before the refactor? + +That research phase? That's your context window disappearing. + +## I tried everything + +Early in claude-mem's development, I was using ChromaDB for vector search. Semantic matching was great—find conceptually similar stuff across thousands of memories. + +But here's what I learned watching the system work in real-time: + +Most memories are duplicate knowledge. Your codebase architecture doesn't change every session. + +But some memories are **changes**. Bugfixes. Refactors. Decisions. + +And if you can't tell which one is the newest change, your information is stale, and Claude has to go researching. Which brings us back to: wasting tokens. + +## Vector search alone isn't enough + +Semantic search finds relevant documents. But it doesn't know that the "authentication decision" from 3 weeks ago was completely invalidated by yesterday's refactor. 
Without temporal ordering, you get: +- 10 memories about your auth system +- No idea which is current +- Claude has to read them all and infer chronology +- Token waste + +That's when the hybrid architecture clicked: + +**ChromaDB for semantic relevance** (finds conceptually related memories) +↓ +**90-day temporal filter** (removes ancient irrelevant stuff) +↓ +**SQLite chronological ordering** (newest first) + +Now when you search "auth changes," you get a timeline. Not a pile of memories you have to sort through. + +## The "instant replay" feature + +v5.0 adds something I'm calling timeline-on-demand. + +You say: "Work on that feature from 2 weeks ago" + +Instead of: +1. Search for "feature" +2. Get 50 results +3. Figure out which one you meant +4. Read context around it +5. Start working + +You get: +1. Natural language search finds the anchor point +2. Timeline reconstructs everything around that moment +3. Claude's head is in the game, immediately + +## The paradox I didn't expect + +Claude-mem's startup context got so good that Claude rarely uses the search tools anymore. + +The last 50 observations at session start are usually enough. + +But for specific tasks—especially revisiting old work—the timeline feature gives you contextualization-on-demand without burning through your context window on research. + +You're paying for focused context, not broad context. + +That's the difference. + +--- + +**Repo**: https://github.com/thedotmack/claude-mem + +v5.0 just shipped. Python optional but recommended for semantic search. Falls back to keyword search if you don't have it. + +Thoughts? Does the "research tax" resonate with anyone else? diff --git a/context/v5-reddit-post-story.md b/context/v5-reddit-post-story.md new file mode 100644 index 00000000..818d7cbc --- /dev/null +++ b/context/v5-reddit-post-story.md @@ -0,0 +1,103 @@ +# Your Claude forgets everything after /clear. Mine doesn't. + +You know the cycle. + +/init to learn your codebase. 
Takes a few minutes. Claude reads everything, understands your architecture, builds context. + +You work for a while. Context window fills up. You try /compact to compress the conversation—but you can't recall specific moments later, and the compressed format is more verbose than useful. + +Eventually you hit /clear. + +Everything's gone. + +Next session: Claude reads CLAUDE.md again. Does the research again. Re-learns your codebase again. + +Tokens cost money. + +Research takes time. + +Low context windows cause quality issues. + +Claude forgets. + +This cycle is killing productivity. + +## Designing instant memory recall that survives /clear + +I spent months building persistent memory for Claude Code. Not summaries. Not compressed conversations. Actual persistent memory—capture everything Claude does, process it with AI, make it instantly recallable across sessions. + +/clear doesn't delete anything. The memory persists. + +Early on I tried all kinds of vector stores, MCPs, memory tools. I was using ChromaDB for vector search. + +The documents were massive. Great performance in a RAG sense—semantic matching worked. But it would use up context too quickly. + +Either I was doing it wrong, or vector databases are just limited in what they can do. + +That's how I ended up with the hybrid approach. + +## Watching memories get saved live + +The entire idea behind "temporal context" came to me as I watched memories being captured in real-time. + +I could see that most memories were duplicate knowledge. Your codebase architecture doesn't fundamentally change every session. + +But many memories were **changes**. Bugfixes. Refactors. Decisions. + +And here's the thing: if you don't have the date and time associated with it, if you don't know it's the "newest" change, then your information is stale. + +And if your information is stale, Claude has to go researching. + +Researching is the token-heavy work I'm trying to minimize. 
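That temporal idea is easy to sketch with plain SQLite: a full-text index for matching plus a timestamp column for newest-first ordering. Here's a minimal sketch — the table and column names are illustrative, not claude-mem's actual schema:

```python
import sqlite3

# Illustrative sketch of "temporal context": keyword search via FTS5,
# ordered newest-first so fresh changes outrank stale decisions.
# Table/column names are made up for this example.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE observations (
    id INTEGER PRIMARY KEY,
    created_at TEXT NOT NULL,   -- ISO-8601 timestamp
    kind TEXT NOT NULL,         -- bugfix / refactor / decision / ...
    body TEXT NOT NULL
);
CREATE VIRTUAL TABLE obs_fts USING fts5(body, content=observations, content_rowid=id);
""")

rows = [
    ("2025-10-01T09:00", "decision", "auth decision: use JWT tokens for sessions"),
    ("2025-10-28T14:00", "refactor", "auth refactor: replaced JWT flow with server sessions"),
    ("2025-11-03T11:00", "bugfix", "auth bugfix: fixed session cookie expiry"),
]
for ts, kind, body in rows:
    cur = db.execute(
        "INSERT INTO observations (created_at, kind, body) VALUES (?, ?, ?)",
        (ts, kind, body),
    )
    db.execute("INSERT INTO obs_fts (rowid, body) VALUES (?, ?)", (cur.lastrowid, body))

# Match on keyword, but order by recency: the newest change comes first.
hits = db.execute("""
    SELECT o.created_at, o.kind, o.body
    FROM obs_fts JOIN observations o ON o.id = obs_fts.rowid
    WHERE obs_fts MATCH 'auth'
    ORDER BY o.created_at DESC
""").fetchall()
```

Pure FTS5 like this only matches keywords, but the `ORDER BY created_at DESC` is the part that keeps a stale decision from outranking yesterday's fix.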
## Building v4.0 with timelines in mind + +When I was designing claude-mem 4.0 as a plugin architecture compatible with Claude Code 2.0, I decided to focus on the SQLite database and observation formatting first. + +The semantic chunking was designed so it could be brought into ChromaDB later for the best possible results. + +Then I used the super-fast SQLite index to sort results by date, so you could search for "change" or "bugfix" and see a timeline. + +Newest first. So you know what's current. + +## Bringing ChromaDB back + +Then I brought ChromaDB back to compare with FTS5 searching. + +Chroma returned highly relevant results through vector similarity. FTS5 just doesn't work as well for semantic matching. + +And it was fast. Really fast. + +That's when the custom timeline feature clicked. + +## The "instant replay" idea + +My thought was: what if you ask Claude to work on a task from 3 days ago, 4 weeks ago? + +Now you have an "instant replay" of everything that was done around whatever you're searching for. + +Natural language search finds the anchor point. Timeline reconstructs the context around that moment. Claude's head is in the game, immediately. + +## The paradox + +Here's what actually happened. + +Claude-mem's startup context got so good that Claude rarely even uses the search tools anymore. + +The last 50 observations at session start are usually enough for whatever I'm working on. /clear doesn't reset anything—next session starts exactly where you left off. + +But I just built out contextualization-on-demand for v5.0. When you need to recall something specific from weeks ago, the "context timeline" instantly gets Claude's head in the game for that exact task. + +No /init. No research phase. No re-learning. + +Just: start session, Claude knows your codebase, you work. + +Development becomes pleasant instead of repetitive. Token-efficient instead of wasteful. Focused instead of constantly re-explaining. 
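The "instant replay" idea above boils down to a few lines: once search finds the anchor observation, the timeline is just a chronological slice around it. A hedged sketch with made-up names and data shapes (not claude-mem's actual API):

```python
# Hypothetical sketch of timeline reconstruction: given an anchor
# observation (found by search), rebuild the chronological window
# around it. Names/shapes are illustrative only.
def timeline_around(observations, anchor_id, before=10, after=10):
    # observations: list of dicts, each with "id" and "created_at"
    ordered = sorted(observations, key=lambda o: o["created_at"])
    idx = next(i for i, o in enumerate(ordered) if o["id"] == anchor_id)
    start = max(0, idx - before)
    # Chronological slice: `before` records, the anchor, `after` records
    return ordered[start : idx + after + 1]

obs = [
    {"id": 1, "created_at": "2025-10-01", "body": "decided on JWT auth"},
    {"id": 2, "created_at": "2025-10-28", "body": "refactored to server sessions"},
    {"id": 3, "created_at": "2025-11-03", "body": "fixed cookie expiry bug"},
]
# Search would have surfaced observation #2 as the anchor.
replay = timeline_around(obs, anchor_id=2, before=1, after=1)
```

The point of the slice is that Claude sees what led up to the anchor and what followed it, in order, instead of a relevance-ranked pile.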
+ +--- + +**Repo**: https://github.com/thedotmack/claude-mem + +v5.0 just shipped. Python optional but recommended for semantic search. Falls back to keyword search if you don't have it. + +Does the "how to work on this task" problem resonate with anyone else? diff --git a/context/v5-reddit-post.md b/context/v5-reddit-post.md new file mode 100644 index 00000000..9a76df33 --- /dev/null +++ b/context/v5-reddit-post.md @@ -0,0 +1,177 @@ +# Claude-mem v5.0: I Fixed Vector Search's Time Blindness + +Vector databases are amazing at finding similar content. Terrible at knowing *when* that content matters. + +I just shipped claude-mem v5.0 with hybrid search—semantic relevance meets temporal context. Sub-200ms queries across 8,200+ vectors. + +## The Problem With Pure Vector Search + +You search for "authentication bug" in your ChromaDB. It returns: +- That auth refactor from 6 months ago (highly similar!) +- Login flow changes from last year (perfect match!) +- The actual bug you fixed yesterday (similar, but not as close semantically) + +All semantically relevant. Chronologically useless. + +Vector search finds *what* matches. Doesn't understand *when* it matters. + +## v4.x Had the Opposite Problem + +SQLite FTS5 keyword search. Fast. Reliable. Token-efficient. + +But it only matched exact keywords. "authentication bug" wouldn't find "login validation error" even though they're the same concept. + +You had to remember your exact wording from weeks ago. Good luck with that. + +## v5.0: Hybrid Search Pipeline + +``` +Query → Chroma Semantic Search (top 100) + → 90-day Recency Filter + → SQLite Temporal Hydration + → Chronologically Ordered Results +``` + +**What this means:** + +1. **Chroma finds conceptually relevant matches** - "auth bug" matches "login validation error", "session timeout issue", "credential handling problem" + +2. **90-day window filters to recent context** - Last 2-3 months of active work, automatically excludes stale results + +3. 
**SQLite provides temporal ordering** - Results flow chronologically, showing how problems evolved and got solved + +4. **Timeline reconstruction** - See the session where you hit the bug, the discovery observation, the fix, and what came next + +## Example: Natural Language Timeline Search + +New tool: `get_timeline_by_query` + +**Auto mode** (search → instant timeline): +``` +Query: "ChromaDB performance issues" + +Found: Observation #3401 (Oct 28, 8:42 PM) +Title: "ChromaSync batch processing optimization" + +Timeline (depth_before=10, depth_after=10): +├─ [10 records before] Session context, related observations +├─ [ANCHOR] The performance fix observation +└─ [10 records after] Test results, follow-up changes + +Total: 21 records in chronological order +Response: <200ms +``` + +**Interactive mode** (pick your anchor): +``` +Query: "authentication refactor" + +Top 5 matches: +#3156 - "JWT token validation overhaul" (Oct 15) +#3089 - "Session middleware refactor" (Oct 12) +#2947 - "OAuth integration changes" (Oct 8) +... + +Choose anchor → Get timeline → See full context +``` + +## Performance: The Numbers + +- **1,390 observations** synced to **8,279 vector documents** +- **Semantic search**: <200ms for top 100 matches +- **90-day filter + temporal hydration**: Negligible overhead +- **Total query time**: <200ms end-to-end + +This scales. I'm not searching 8K vectors every time—the 90-day window typically narrows to 500-800 recent documents before Chroma even sees them. + +## ChromaSync: Automatic Vector Maintenance + +New background service that syncs your SQLite data to Chroma: + +- **Splits observations** into narrative + facts vectors (better semantic granularity) +- **Splits summaries** into request + learned vectors +- **Indexes user prompts** as single vectors +- **Runs automatically** via PM2 worker service +- **Metadata filtering** by project, type, concepts, files + +Example: One observation → Multiple vectors for precise matching. 
+ +Your 500-word debugging narrative? Split into semantic chunks. Query matches the relevant section, not just "the whole document is kinda related." + +## Graceful Fallback + +No Python? No problem. + +System detects missing Chroma and falls back to FTS5 keyword search. Same API, same tools, slightly less magical semantic matching. + +You lose semantic understanding but keep full functionality. All 9 MCP search tools still work. + +## All 9 Search Tools Now Hybrid + +Every search method got the upgrade: + +1. **search_observations** - Hybrid semantic + keyword across observations +2. **search_sessions** - Hybrid across session summaries +3. **search_user_prompts** - Hybrid across raw user input +4. **find_by_concept** - Filter by tags + semantic similarity +5. **find_by_file** - File references + semantic context +6. **find_by_type** - Type filter + semantic relevance +7. **get_recent_context** - Temporal only (no search needed) +8. **get_context_timeline** - Timeline around anchor point +9. **get_timeline_by_query** - Natural language timeline search + +## Why This Matters + +**Before v5.0:** +- "Show me auth bugs" → Exact keyword match only +- Miss semantically similar issues with different wording +- No temporal context about when/how issues evolved + +**After v5.0:** +- "Show me auth bugs" → Finds authentication, login, session, credential issues +- Filtered to last 90 days automatically +- Results in chronological order showing problem evolution +- Timeline reconstruction shows full context + +Claude doesn't just find relevant information. Claude sees *when* it happened and what came next. + +## Migration + +Zero breaking changes. Your existing SQLite data continues working. + +**Optional upgrade** for semantic search: +```bash +# Install Chroma MCP server (requires Python 3.8+) +# Instructions in repo README + +# That's it. ChromaSync detects Chroma and syncs automatically. +``` + +First sync takes ~30 seconds for 1,400 observations. 
After that, incremental syncs are near-instant. + +## The Paradox Continues + +v5.0's hybrid search is so good that Claude *still* rarely needs to search. + +The context-hook's 50-observation startup context usually has everything. But when Claude needs something from 6 weeks ago? Semantic search + timeline reconstruction gets it instantly. + +No keyword guessing. No re-reading code. Just: ask in natural language, get chronological context, keep coding. + +## Install + +```bash +# In Claude Code: +/plugin marketplace add thedotmack/claude-mem +/plugin install claude-mem + +# Optional: Install Python + Chroma for semantic search +# Falls back to keyword search if you don't +``` + +**Repo:** https://github.com/thedotmack/claude-mem + +claude-mem v5.0 combines the semantic magic of vector search with the temporal clarity of chronological ordering. + +Finally: relevance *and* context. In under 200ms. + +Anyone else built hybrid search systems? How did you handle the time dimension?