Refactor worker management and cleanup hooks
- Removed ensureWorkerRunning calls from multiple hooks (cleanup, context, new, save, summary) to streamline code and avoid unnecessary checks.
- Introduced fixed port usage for worker communication across hooks.
- Enhanced error handling in newHook, saveHook, and summaryHook to provide clearer messages for worker connection issues.
- Updated worker service to start without health checks, relying on PM2 for management.
- Cached Claude executable path to optimize repeated calls.
- Improved logging for better traceability of worker actions and errors.
@@ -1,84 +0,0 @@

# LinkedIn Launch Post - Claude-mem v5.0

Every developer using Claude Code knows this workflow:

/init → Claude learns your codebase
Work for a while → Context fills up
/clear → Everything's gone
Next session → Re-learn everything again

**Your AI coding assistant has amnesia.**

And it's costing you money and time on every session.

## The Solution

I built claude-mem: a persistent memory system that makes Claude remember across sessions.

Not conversation summaries. Not compressed chat logs. Actual persistent memory—capturing every tool execution, processing it with AI, and making it instantly recallable.

## How It Works

**Hybrid Architecture:**
- ChromaDB for semantic vector search (finds conceptually relevant context)
- SQLite for temporal ordering (newest information first)
- FTS5 keyword search as fallback (works without Python)

**Automatic Context Loading:**
Every session start loads your last 50 observations in <200ms. No /init. No research phase.

You see:
→ What you were working on (session summaries)
→ What Claude learned (bugfixes, features, decisions)
→ Chronological timeline (newest first)
→ Token costs (so you know what's expensive to recall)
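That startup load is, at its core, one ordered SQLite query. A minimal sketch of the idea, with a hypothetical `observations` schema standing in for claude-mem's real one:

```python
import sqlite3

def startup_context(conn, limit=50):
    """Load the most recent observations, newest first, for session start."""
    return conn.execute(
        "SELECT id, title FROM observations ORDER BY created_at DESC LIMIT ?",
        (limit,),
    ).fetchall()

# Demo with a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE observations (id INTEGER PRIMARY KEY, title TEXT, created_at INTEGER)"
)
conn.executemany(
    "INSERT INTO observations VALUES (?, ?, ?)",
    [(i, f"obs {i}", i) for i in range(1, 101)],
)
rows = startup_context(conn, limit=3)
print([r[0] for r in rows])  # → [100, 99, 98]
```

The `DESC` ordering is what makes "newest first" cheap: with an index on `created_at`, the query stops after `limit` rows.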

## The Breakthrough: Temporal Context

Most AI memory systems focus on semantic similarity. But that's only half the equation.

**Without timestamps, information becomes stale.** A bugfix from yesterday is more relevant than architecture notes from last month—even if the semantic similarity is lower.

Claude-mem combines both: semantic relevance + temporal recency.

The result? Claude starts each session knowing your current codebase state. No re-learning. No wasted tokens.

## Real-World Impact

After months of development across 1,400+ sessions:
- 8,200+ vector documents indexed
- <200ms query performance
- Session startup context loads automatically
- Natural language search when you need something from weeks ago

My Claude rarely needs to /init anymore. Hit /clear, start a new session, keep working.

## The Paradox

Claude-mem's startup context got so good that Claude rarely uses the search tools.

The last 50 observations are usually enough. But when you need to recall something specific from weeks ago, the context timeline instantly reconstructs that moment.

Development becomes **pleasant instead of repetitive.**
**Token-efficient instead of wasteful.**
**Focused instead of constantly re-explaining.**

---

**claude-mem v5.0 just shipped** 🚀

Open source (AGPL-3.0): https://github.com/thedotmack/claude-mem

Install in Claude Code:
```
/plugin marketplace add thedotmack/claude-mem
/plugin install claude-mem
```

Python optional but recommended for semantic search. Falls back to keyword search without it.

---

**Question for the community:** How much time do you spend re-explaining your codebase to AI assistants after clearing context?

#AI #DeveloperTools #ProductivityTools #ClaudeAI #OpenSource #VectorDatabase #SemanticSearch #DeveloperProductivity
@@ -1,114 +0,0 @@

# Your Claude forgets everything after /clear. Mine doesn't.

You know the cycle.

/init to learn your codebase. Claude reads everything, understands your architecture, builds context.

You work for a while. Context window fills up. Eventually you hit /clear.

Everything's gone.

Next session: Claude reads CLAUDE.md again. Does the research again. Re-learns your codebase again.

**Tokens cost money. Research takes time. Claude forgets.**

This cycle is killing productivity.

## I built persistent memory that survives /clear

Not summaries. Not compressed conversations. [Actual persistent memory](https://github.com/thedotmack/claude-mem)—capture everything Claude does, process it with AI, make it instantly recallable across sessions.

Early on I tried vector stores, MCPs, memory tools. ChromaDB for vector search. But the documents were massive—great for semantic matching, terrible for context efficiency.

That led to the hybrid approach.

## How it works

SQLite database with semantic chunking. ChromaDB for vector search when you need it—incredibly fast, incredibly relevant. FTS5 keyword search as fallback.

The magic? This loads automatically at every session start. No /init. No research phase.

Here's what I see when I start a new session on my "claude-mem-performance" project:

```
📝 [claude-mem-performance] recent context
────────────────────────────────────────────────────────────

Legend: 🎯 session-request | 🔴 bugfix | 🟣 feature | 🔄 refactor | ✅ change | 🔵 discovery | 🧠 decision

💡 Progressive Disclosure: This index shows WHAT exists (titles) and retrieval COST (token counts).
→ Use MCP search tools to fetch full observation details on-demand (Layer 2)
→ Prefer searching observations over re-reading code for past decisions and learnings
→ Critical types (🔴 bugfix, 🧠 decision) often worth fetching immediately

Nov 3, 2025

🎯 #S651 Read headless-test.md and use plan mode to prepare for writing a test (Nov 3, 1:27 PM) [claude-mem://session-summary/651]

🎯 #S650 Read headless-test.md and use plan mode to prepare for writing a test (Nov 3, 1:27 PM) [claude-mem://session-summary/650]

test_automation.ts
#3280 1:31 PM ✅ Updated test automation prompts for Kanban board project (~125t)

🎯 #S652 Read headless-test.md and use plan mode to prepare for writing the test (Nov 3, 1:32 PM) [claude-mem://session-summary/652]

General
#3281 1:33 PM 🔵 Examined test automation script (~70t)

test_automation.ts
#3282 1:34 PM 🟣 Implemented full verbose output mode for tool execution visibility (~145t)
#3283 1:35 PM ✅ Enhanced plan generation streaming with partial message support (~109t)

🎯 #S653 Read headless-test.md and use plan mode to prepare for writing the test (Nov 3, 1:35 PM)

Completed: Modified the generatePlan function in test_automation.ts to support `includePartialMessages: true` and integrate the streamMessage handler for unified streaming output. This improves the real-time feedback mechanism during plan generation.

Next Steps: 1. Read and analyze headless-test.md to understand test requirements. 2. Use plan mode to generate a test implementation strategy. 3. Write the actual test based on the plan.
```

**What you're seeing:**
- Session summaries (🎯) - what you were working on
- What Claude learned - observations with type indicators (bugfix, feature, change, discovery)
- Token costs - so you know what's expensive to recall
- Chronological flow - recent work, newest first
- Loaded in <200ms at session start

Timeline order: your past sessions, Claude's work, what was learned, what's next.

And when you need something from weeks ago? Natural language search + instant timeline replay gets you there in <200ms.

## The breakthrough: temporal context

Most memories are duplicate knowledge. Your architecture doesn't fundamentally change every session.

But some memories are **changes**. Bugfixes. Refactors. Decisions.

Without timestamps, without knowing what's "newest," your information is stale. And stale information means Claude has to research—the token-heavy work I'm trying to eliminate.

## The paradox

Claude-mem's startup context got so good that Claude rarely uses the search tools anymore.

The last 50 observations at session start are usually enough. /clear doesn't reset anything—next session starts exactly where you left off.

But when you need to recall something specific from weeks ago, the context timeline instantly gets Claude back in the game for that exact task.

**No /init. No research phase. No re-learning.**

Just: start session, Claude knows your codebase, you work.

Development becomes pleasant instead of repetitive. Token-efficient instead of wasteful. Focused instead of constantly re-explaining.

---

**claude-mem v5.0** just shipped: https://github.com/thedotmack/claude-mem

Python optional but recommended for semantic search. Falls back to keyword search if you don't have it.

**Install in Claude Code:**
```
/plugin marketplace add thedotmack/claude-mem
/plugin install claude-mem
```

Anyone else tired of both paying and WAITING for Claude to re-learn their codebase after every /clear?
@@ -1,81 +0,0 @@

# The problem with AI memory isn't storage—it's the research tax

Every time you ask Claude to work on something, there's an invisible token cost you're paying before it even starts: contextualization.

"Fix the auth bug" requires Claude to first figure out:
- What auth system are you using?
- What changed recently?
- What was the last decision about auth?
- Is that info even current, or is it from 3 weeks ago before the refactor?

That research phase? That's your context window disappearing.

## I tried everything

Early in claude-mem's development, I was using ChromaDB for vector search. Semantic matching was great—find conceptually similar stuff across thousands of memories.

But here's what I learned watching the system work in real time:

Most memories are duplicate knowledge. Your codebase architecture doesn't change every session.

But some memories are **changes**. Bugfixes. Refactors. Decisions.

And if you can't tell which one is the newest change, your information is stale, and Claude has to go researching. Which brings us back to: wasting tokens.

## Vector search alone isn't enough

Semantic search finds relevant documents. But it doesn't know that the "authentication decision" from 3 weeks ago was completely invalidated by yesterday's refactor.

Without temporal ordering, you get:
- 10 memories about your auth system
- No idea which is current
- Claude has to read them all and infer chronology
- Token waste

That's when the hybrid architecture clicked:

**ChromaDB for semantic relevance** (finds conceptually related memories)
↓
**90-day temporal filter** (removes ancient irrelevant stuff)
↓
**SQLite chronological ordering** (newest first)

Now when you search "auth changes," you get a timeline. Not a pile of memories you have to sort through.

## The "instant replay" feature

v5.0 adds something I'm calling timeline-on-demand.

You say: "Work on that feature from 2 weeks ago"

Instead of:
1. Search for "feature"
2. Get 50 results
3. Figure out which one you meant
4. Read context around it
5. Start working

You get:
1. Natural language search finds the anchor point
2. Timeline reconstructs everything around that moment
3. Claude's head is in the game, immediately

## The paradox I didn't expect

Claude-mem's startup context got so good that Claude rarely uses the search tools anymore.

The last 50 observations at session start are usually enough.

But for specific tasks—especially revisiting old work—the timeline feature gives you contextualization-on-demand without burning through your context window on research.

You're paying for focused context, not broad context.

That's the difference.

---

**Repo**: https://github.com/thedotmack/claude-mem

v5.0 just shipped. Python optional but recommended for semantic search. Falls back to keyword search if you don't have it.

Thoughts? Does the "research tax" resonate with anyone else?
@@ -1,103 +0,0 @@

# Your Claude forgets everything after /clear. Mine doesn't.

You know the cycle.

/init to learn your codebase. Takes a few minutes. Claude reads everything, understands your architecture, builds context.

You work for a while. Context window fills up. You try /compact to compress the conversation—but you can't recall specific moments later, and the compressed format is more verbose than useful.

Eventually you hit /clear.

Everything's gone.

Next session: Claude reads CLAUDE.md again. Does the research again. Re-learns your codebase again.

Tokens cost money.

Research takes time.

Low context windows cause quality issues.

Claude forgets.

This cycle is killing productivity.

## Designing instant memory recall that survives /clear

I spent months building persistent memory for Claude Code. Not summaries. Not compressed conversations. Actual persistent memory—capture everything Claude does, process it with AI, make it instantly recallable across sessions.

/clear doesn't delete anything. The memory persists.

Early on I tried all kinds of vector stores, MCPs, memory tools. I was using ChromaDB for vector search.

The documents were massive. Great performance in a RAG sense—semantic matching worked. But it used up context too quickly.

Either I was doing it wrong, or vector databases are just limited in what they can do.

That's how I ended up with the hybrid approach.

## Watching memories get saved live

The entire idea behind "temporal context" came to me as I watched memories being captured in real time.

I could see that most memories were duplicate knowledge. Your codebase architecture doesn't fundamentally change every session.

But many memories were **changes**. Bugfixes. Refactors. Decisions.

And here's the thing: if you don't have the date and time associated with a memory, if you don't know it's the "newest" change, then your information is stale.

And if your information is stale, Claude has to go researching.

Researching is the token-heavy work I'm trying to minimize.

## Building v4.0 with timelines in mind

When I was designing claude-mem 4.0 as a plugin architecture compatible with Claude Code 2.0, I decided to focus on the SQLite database and observation formatting first.

The semantic chunking was architected so it could be brought into ChromaDB later for the best possible results.

Then I used the super-fast SQLite index to sort results by date, so you could search for "change" or "bugfix" and see a timeline.

Newest first. So you know what's current.

## Bringing ChromaDB back

Then I brought ChromaDB back to compare with FTS5 searching.

Chroma returned very relevant results via vector relations. FTS5 just doesn't work as well for semantic matching.

And it was fast. Really fast.

That's when the custom timeline feature clicked.

## The "instant replay" idea

My thought was: what if you ask Claude to work on a task from 3 days ago, or 4 weeks ago?

Now you have an "instant replay" of everything that was done around whatever you're searching for.

Natural language search finds the anchor point. The timeline reconstructs the context around that moment. Claude's head is in the game, immediately.

## The paradox

Here's what actually happened.

Claude-mem's startup context got so good that Claude rarely even uses the search tools anymore.

The last 50 observations at session start are usually enough for whatever I'm working on. /clear doesn't reset anything—next session starts exactly where you left off.

But I just built out contextualization-on-demand for v5.0. When you need to recall something specific from weeks ago, the "context timeline" instantly gets Claude's head in the game for that exact task.

No /init. No research phase. No re-learning.

Just: start session, Claude knows your codebase, you work.

Development becomes pleasant instead of repetitive. Token-efficient instead of wasteful. Focused instead of constantly re-explaining.

---

**Repo**: https://github.com/thedotmack/claude-mem

v5.0 just shipped. Python optional but recommended for semantic search. Falls back to keyword search if you don't have it.

Does the "how to work on this task" problem resonate with anyone else?
@@ -1,177 +0,0 @@

# Claude-mem v5.0: I Fixed Vector Search's Time Blindness

Vector databases are amazing at finding similar content. Terrible at knowing *when* that content matters.

I just shipped claude-mem v5.0 with hybrid search—semantic relevance meets temporal context. Sub-200ms queries across 8,200+ vectors.

## The Problem With Pure Vector Search

You search for "authentication bug" in your ChromaDB. It returns:
- That auth refactor from 6 months ago (highly similar!)
- Login flow changes from last year (perfect match!)
- The actual bug you fixed yesterday (similar, but not as close semantically)

All semantically relevant. Chronologically useless.

Vector search finds *what* matches. It doesn't understand *when* it matters.

## v4.x Had the Opposite Problem

SQLite FTS5 keyword search. Fast. Reliable. Token-efficient.

But it only matched exact keywords. "authentication bug" wouldn't find "login validation error" even though they're the same concept.

You had to remember your exact wording from weeks ago. Good luck with that.

## v5.0: Hybrid Search Pipeline

```
Query → Chroma Semantic Search (top 100)
      → 90-day Recency Filter
      → SQLite Temporal Hydration
      → Chronologically Ordered Results
```
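A minimal sketch of how those last three stages could be wired. The semantic step is represented by a hand-written list of (id, similarity) pairs standing in for a Chroma query result, and the `observations` schema is invented for illustration:

```python
import sqlite3
import time

NINETY_DAYS = 90 * 24 * 3600

def hybrid_search(conn, semantic_hits, now, top_k=100):
    """Take semantic hits (id, similarity) ranked by relevance, keep the
    top_k, drop anything older than 90 days, and re-order the survivors
    chronologically (newest first) using SQLite timestamps."""
    ids = [oid for oid, _ in semantic_hits[:top_k]]
    placeholders = ",".join("?" * len(ids))
    return conn.execute(
        f"SELECT id, title, created_at FROM observations "
        f"WHERE id IN ({placeholders}) AND created_at >= ? "
        f"ORDER BY created_at DESC",
        (*ids, now - NINETY_DAYS),
    ).fetchall()

# Demo with a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE observations (id INTEGER PRIMARY KEY, title TEXT, created_at INTEGER)"
)
now = int(time.time())
conn.executemany(
    "INSERT INTO observations VALUES (?, ?, ?)",
    [
        (1, "auth refactor", now - 200 * 24 * 3600),    # too old: filtered out
        (2, "login validation fix", now - 86400),        # yesterday
        (3, "session timeout bugfix", now - 7 * 86400),  # last week
    ],
)
# Pretend the vector store ranked the stale refactor highest by similarity.
hits = [(1, 0.95), (3, 0.80), (2, 0.75)]
print([row[0] for row in hybrid_search(conn, hits, now)])  # → [2, 3]
```

Note how the highest-similarity hit (the 200-day-old refactor) disappears, and yesterday's fix comes first: that is the whole point of the temporal hydration step.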

**What this means:**

1. **Chroma finds conceptually relevant matches** - "auth bug" matches "login validation error", "session timeout issue", "credential handling problem"

2. **90-day window filters to recent context** - the last 2-3 months of active work, with stale results excluded automatically

3. **SQLite provides temporal ordering** - results flow chronologically, showing how problems evolved and got solved

4. **Timeline reconstruction** - see the session where you hit the bug, the discovery observation, the fix, and what came next

## Example: Natural Language Timeline Search

New tool: `get_timeline_by_query`

**Auto mode** (search → instant timeline):
```
Query: "ChromaDB performance issues"

Found: Observation #3401 (Oct 28, 8:42 PM)
Title: "ChromaSync batch processing optimization"

Timeline (depth_before=10, depth_after=10):
├─ [10 records before] Session context, related observations
├─ [ANCHOR] The performance fix observation
└─ [10 records after] Test results, follow-up changes

Total: 21 records in chronological order
Response: <200ms
```
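The window reconstruction itself is two ordered queries around the anchor's timestamp. A sketch using the `depth_before`/`depth_after` parameters from the example above, with a hypothetical `records` table:

```python
import sqlite3

def timeline_around(conn, anchor_id, depth_before=10, depth_after=10):
    """Return record ids for a chronological window around an anchor:
    up to depth_before older records, the anchor, then up to depth_after
    newer records."""
    (anchor_ts,) = conn.execute(
        "SELECT created_at FROM records WHERE id = ?", (anchor_id,)
    ).fetchone()
    # Walk backwards from the anchor, then flip back to chronological order.
    before = conn.execute(
        "SELECT id FROM records WHERE created_at < ? ORDER BY created_at DESC LIMIT ?",
        (anchor_ts, depth_before),
    ).fetchall()
    after = conn.execute(
        "SELECT id FROM records WHERE created_at > ? ORDER BY created_at ASC LIMIT ?",
        (anchor_ts, depth_after),
    ).fetchall()
    return [r[0] for r in reversed(before)] + [anchor_id] + [r[0] for r in after]

# Demo: 30 records with increasing timestamps.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, created_at INTEGER)")
conn.executemany("INSERT INTO records VALUES (?, ?)", [(i, i * 100) for i in range(1, 31)])
print(timeline_around(conn, 15, depth_before=3, depth_after=3))  # → [12, 13, 14, 15, 16, 17, 18]
```

With depth 10 on each side, a fully populated window is the 21 records the example output reports.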

**Interactive mode** (pick your anchor):
```
Query: "authentication refactor"

Top 5 matches:
#3156 - "JWT token validation overhaul" (Oct 15)
#3089 - "Session middleware refactor" (Oct 12)
#2947 - "OAuth integration changes" (Oct 8)
...

Choose anchor → Get timeline → See full context
```

## Performance: The Numbers

- **1,390 observations** synced to **8,279 vector documents**
- **Semantic search**: <200ms for top 100 matches
- **90-day filter + temporal hydration**: negligible overhead
- **Total query time**: <200ms end-to-end

This scales. I'm not searching 8K vectors every time—the 90-day window typically narrows to 500-800 recent documents before Chroma even sees them.

## ChromaSync: Automatic Vector Maintenance

New background service that syncs your SQLite data to Chroma:

- **Splits observations** into narrative + facts vectors (better semantic granularity)
- **Splits summaries** into request + learned vectors
- **Indexes user prompts** as single vectors
- **Runs automatically** via PM2 worker service
- **Metadata filtering** by project, type, concepts, files

Example: One observation → Multiple vectors for precise matching.

Your 500-word debugging narrative? Split into semantic chunks. Query matches the relevant section, not just "the whole document is kinda related."
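A minimal sketch of the narrative + facts split on the storage side. The `chunks` schema is invented for illustration; in the real service these pieces become separate Chroma vectors rather than SQLite rows:

```python
import sqlite3

def store_observation(conn, obs_id, narrative, facts):
    """Store one observation as separate chunks, one row for the narrative
    and one per fact, so a later query can match the relevant piece instead
    of one big document."""
    conn.executemany(
        "INSERT INTO chunks (obs_id, kind, text) VALUES (?, ?, ?)",
        [(obs_id, "narrative", narrative)] + [(obs_id, "fact", f) for f in facts],
    )

# Demo: one observation becomes three chunks.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE chunks (obs_id INTEGER, kind TEXT, text TEXT)")
store_observation(
    conn,
    1,
    "Debugged the auth flow for an hour before spotting the race.",
    ["Token refresh raced the logout handler", "Fix: serialize refresh behind a lock"],
)
print(conn.execute("SELECT COUNT(*) FROM chunks").fetchone()[0])  # → 3
```

A search for "logout race" can then hit the single relevant fact chunk instead of retrieving the whole narrative.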

## Graceful Fallback

No Python? No problem.

The system detects missing Chroma and falls back to FTS5 keyword search. Same API, same tools, slightly less magical semantic matching.

You lose semantic understanding but keep full functionality. All 9 MCP search tools still work.
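The "same API either way" shape can be sketched as a plain try/except around the semantic path. Both inner functions are stand-ins, not claude-mem's actual code: the first represents the Chroma-backed path (and here always fails, simulating a missing install), the second a naive keyword match standing in for FTS5:

```python
def semantic_search(query):
    # Stand-in for the Chroma-backed path; in this sketch it always
    # raises, simulating an environment without Python/Chroma.
    raise RuntimeError("chroma unavailable")

def keyword_search(query, docs):
    # Stand-in for the FTS5 path: keep docs sharing any query word.
    words = set(query.lower().split())
    return [d for d in docs if words & set(d.lower().split())]

def search(query, docs):
    """One entry point: try semantic search, fall back to keywords."""
    try:
        return semantic_search(query)
    except Exception:
        return keyword_search(query, docs)

docs = ["fixed login validation error", "updated README wording"]
print(search("login error", docs))  # → ['fixed login validation error']
```

Callers never see which backend answered, which is why all the search tools keep working without Python.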

## All 9 Search Tools Now Hybrid

Every search method got the upgrade:

1. **search_observations** - Hybrid semantic + keyword across observations
2. **search_sessions** - Hybrid across session summaries
3. **search_user_prompts** - Hybrid across raw user input
4. **find_by_concept** - Filter by tags + semantic similarity
5. **find_by_file** - File references + semantic context
6. **find_by_type** - Type filter + semantic relevance
7. **get_recent_context** - Temporal only (no search needed)
8. **get_context_timeline** - Timeline around anchor point
9. **get_timeline_by_query** - Natural language timeline search

## Why This Matters

**Before v5.0:**
- "Show me auth bugs" → Exact keyword match only
- Miss semantically similar issues with different wording
- No temporal context about when/how issues evolved

**After v5.0:**
- "Show me auth bugs" → Finds authentication, login, session, credential issues
- Filtered to last 90 days automatically
- Results in chronological order showing problem evolution
- Timeline reconstruction shows full context

Claude doesn't just find relevant information. Claude sees *when* it happened and what came next.

## Migration

Zero breaking changes. Your existing SQLite data continues working.

**Optional upgrade** for semantic search:
```bash
# Install Chroma MCP server (requires Python 3.8+)
# Instructions in repo README

# That's it. ChromaSync detects Chroma and syncs automatically.
```

First sync takes ~30 seconds for 1,400 observations. After that, incremental syncs are near-instant.

## The Paradox Continues

v5.0's hybrid search is so good that Claude *still* rarely needs to search.

The context-hook's 50-observation startup context usually has everything. But when Claude needs something from 6 weeks ago? Semantic search + timeline reconstruction gets it instantly.

No keyword guessing. No re-reading code. Just: ask in natural language, get chronological context, keep coding.

## Install

```bash
# In Claude Code:
/plugin marketplace add thedotmack/claude-mem
/plugin install claude-mem

# Optional: Install Python + Chroma for semantic search
# Falls back to keyword search if you don't
```

**Repo:** https://github.com/thedotmack/claude-mem

claude-mem v5.0 combines the semantic magic of vector search with the temporal clarity of chronological ordering.

Finally: relevance *and* context. In under 200ms.

Anyone else built hybrid search systems? How did you handle the time dimension?