Merge pull request #477 from thedotmack/bugfix/mcp-clarity

Token Optimizations
Alex Newman
2025-12-29 00:17:03 -05:00
committed by GitHub
39 changed files with 328 additions and 4977 deletions
+17 -37
@@ -17,7 +17,7 @@ Claude Desktop can access your claude-mem memory database through the **mem-sear
Before installing the skill, ensure:
1. **claude-mem is installed** and the worker service is running
2. **MCP server is configured** in Claude Desktop (the skill uses the `mem-search` MCP server)
2. **MCP server is configured** in Claude Desktop (the skill uses the `mcp-search` MCP server)
### Verify Worker is Running
@@ -28,31 +28,9 @@ curl http://localhost:37777/api/health
## Installation
### Step 1: Download the Skill
### Step 1: Configure MCP Server
Download the skill package from the repository:
<Card title="mem-search.zip" icon="download" href="https://github.com/thedotmack/claude-mem/raw/main/plugin/skills/mem-search.zip">
Download the mem-search skill for Claude Desktop
</Card>
Or build from source:
```bash
npm run build # Generates plugin/skills/mem-search.zip
```
### Step 2: Install in Claude Desktop
1. Open **Claude Desktop**
2. Go to **Settings** (gear icon)
3. Navigate to **Skills**
4. Click **Install Skill** or drag the `mem-search.zip` file
5. Confirm installation
### Step 3: Configure MCP Server
The skill requires the `mem-search` MCP server. Add this to your Claude Desktop configuration:
The skill requires the `mcp-search` MCP server. Add this to your Claude Desktop configuration:
<Tabs>
<Tab title="macOS">
@@ -61,7 +39,7 @@ The skill requires the `mem-search` MCP server. Add this to your Claude Desktop
```json
{
"mcpServers": {
"mem-search": {
"mcp-search": {
"command": "node",
"args": [
"/Users/YOUR_USERNAME/.claude/plugins/marketplaces/thedotmack/plugin/scripts/mcp-server.cjs"
@@ -77,7 +55,7 @@ The skill requires the `mem-search` MCP server. Add this to your Claude Desktop
```json
{
"mcpServers": {
"mem-search": {
"mcp-search": {
"command": "node",
"args": [
"C:\\Users\\YOUR_USERNAME\\.claude\\plugins\\marketplaces\\thedotmack\\plugin\\scripts\\mcp-server.cjs"
@@ -93,7 +71,7 @@ The skill requires the `mem-search` MCP server. Add this to your Claude Desktop
Replace `YOUR_USERNAME` with your actual username. Restart Claude Desktop after editing the configuration.
</Warning>
### Step 4: Restart Claude Desktop
### Step 2: Restart Claude Desktop
Close and reopen Claude Desktop for the MCP server configuration to take effect.
@@ -111,19 +89,21 @@ Once installed, the skill auto-activates when you ask about past work:
## Available MCP Tools
The skill provides access to these MCP tools:
The skill provides three core MCP tools following a 3-layer workflow pattern:
| Tool | Description |
|------|-------------|
| `search` | Unified search across observations, sessions, and prompts |
| `search` | Search memory index. Returns compact results with IDs for filtering |
| `timeline` | Get chronological context around a query or observation ID |
| `get_observation` | Fetch a single observation by ID |
| `get_observations` | Fetch multiple observations efficiently |
| `get_session` | Fetch session summary by ID |
| `get_prompt` | Fetch user prompt by ID |
| `get_recent_context` | Get recent timeline items |
| `get_context_timeline` | Get timeline around a specific observation |
| `help` | Load detailed usage instructions |
| `get_observations` | Fetch full observation details by ID (use after filtering with search/timeline) |
### Token-Efficient Workflow
1. **Search** → Get index with IDs (~50-100 tokens/result)
2. **Timeline** → Get context around interesting results
3. **Get Observations** → Fetch full details ONLY for filtered IDs
This 3-layer approach provides ~10x token savings compared to fetching full details upfront.
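As a sketch of the three layers in sequence (IDs and project name are hypothetical):

```
search(query="auth bug", limit=20, project="my-project")                     # 1. index with IDs
timeline(anchor=11131, depth_before=3, depth_after=3, project="my-project")  # 2. context around a hit
get_observations(ids=[11131, 10942])                                         # 3. full details for filtered IDs only
```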
## Troubleshooting
+1 -1
@@ -1,6 +1,6 @@
{
"mcpServers": {
"mem-search": {
"mcp-search": {
"type": "stdio",
"command": "${CLAUDE_PLUGIN_ROOT}/scripts/mcp-server.cjs"
}
+1 -1
@@ -1,6 +1,6 @@
{
"name": "claude-mem-plugin",
"version": "8.2.5",
"version": "8.2.6",
"private": true,
"description": "Runtime dependencies for claude-mem bundled hooks",
"type": "module",
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
-329
@@ -1,329 +0,0 @@
---
name: mem-search
description: Search claude-mem's persistent cross-session memory database. Use when user asks "did we already solve this?", "how did we do X last time?", or needs work from previous sessions.
---
# Memory Search
Search past work across all sessions. Simple workflow: search → get IDs → fetch details by ID.
## When to Use
Use when users ask about PREVIOUS sessions (not current conversation):
- "Did we already fix this?"
- "How did we solve X last time?"
- "What happened last week?"
## The Workflow
**ALWAYS follow this exact flow:**
1. **Search** - Get an index of results with IDs
2. **Timeline** - Get context around top results to understand what was happening
3. **Review** - Look at titles/dates/context, pick relevant IDs
4. **Fetch** - Get full details ONLY for those IDs
### Step 1: Search Everything
Use the `search` MCP tool:
**Key parameters:**
- `query` - Search term
- `limit` - Number of results (e.g. `limit: 20`); request a larger index when necessary
- `project` - Project name (required)
**Example:**
```
search(query="authentication", limit=20, project="my-project")
```
**Returns:**
```
| ID | Time | T | Title | Read | Work |
|----|------|---|-------|------|------|
| #11131 | 3:48 PM | 🟣 | Added JWT authentication | ~75 | 🛠️ 450 |
| #10942 | 2:15 PM | 🔴 | Fixed auth token expiration | ~50 | 🛠️ 200 |
```
### Step 2: Get Timeline Context
You MUST understand what was happening around a result before fetching full details.
Use the `timeline` MCP tool:
**Example with observation ID:**
```
timeline(anchor=11131, depth_before=3, depth_after=3, project="my-project")
```
**Example with query (finds anchor automatically):**
```
timeline(query="authentication", depth_before=3, depth_after=3, project="my-project")
```
**Returns exactly `depth_before + 1 + depth_after` items** - observations, sessions, and prompts interleaved chronologically around the anchor.
**When to use:**
- User asks "what was happening when..."
- Need to understand sequence of events
- Want broader context around a specific observation
### Step 3: Pick IDs
Review the index results (and timeline if used). Identify which IDs are actually relevant. Discard the rest.
### Step 4: Fetch by ID
For each relevant ID, fetch full details using MCP tools:
**Fetch multiple observations (ALWAYS use for 2+ IDs):**
```
get_observations(ids=[11131, 10942, 10855])
```
**With ordering and limit:**
```
get_observations(
ids=[11131, 10942, 10855],
orderBy="date_desc",
limit=10,
project="my-project"
)
```
**Fetch single observation (only when fetching exactly 1):**
```
get_observation(id=11131)
```
**Fetch session:**
```
get_session(id=2005) # Just the number from S2005
```
**Fetch prompt:**
```
get_prompt(id=5421)
```
**ID formats:**
- Observations: Just the number (11131)
- Sessions: Just the number (2005) from "S2005"
- Prompts: Just the number (5421)
**Batch optimization:**
- **ALWAYS use `get_observations` for 2+ observations**
- 10-100x more efficient than individual fetches
- Single HTTP request vs N requests
- Returns all results in one response
- Supports ordering and filtering
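As an illustration (hypothetical IDs), the batched form replaces N individual calls with one:

```
# Instead of three separate fetches:
get_observation(id=11131)
get_observation(id=10942)
get_observation(id=10855)

# Make one batch request:
get_observations(ids=[11131, 10942, 10855])
```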
## Search Parameters
**Basic:**
- `query` - What to search for (optional when filters are provided)
- `limit` - How many results (default 20)
- `project` - Filter by project name (required)
**Filters (optional):**
- `type` - Filter to "observations", "sessions", or "prompts"
- `dateStart` - Start date (YYYY-MM-DD or epoch timestamp)
- `dateEnd` - End date (YYYY-MM-DD or epoch timestamp)
- `obs_type` - Filter observations by type (comma-separated): bugfix, feature, decision, discovery, change
## Examples
**Find recent bug fixes:**
Use the `search` MCP tool with filters:
```
search(query="bug", type="observations", obs_type="bugfix", limit=20, project="my-project")
```
**Find what happened last week:**
Use date filters:
```
search(type="observations", dateStart="2025-11-11", limit=20, project="my-project")
```
**Search everything:**
Simple query search:
```
search(query="database migration", limit=20, project="my-project")
```
**Get detailed instructions:**
Use the `help` tool to load full instructions on-demand:
```
help(topic="workflow") # Get 4-step workflow
help(topic="search_params") # Get parameters reference
help(topic="examples") # Get usage examples
help(topic="all") # Get complete guide
```
## Why This Workflow?
**Token efficiency:**
- **Search results:** ~50-100 tokens per result (table index)
- **Full observation:** ~500-1000 tokens each
- **10x savings** - only fetch full when you know it's relevant
**Batch fetching:**
- **Individual fetches:** 10 HTTP requests, ~5-10s latency
- **Batch fetch:** 1 HTTP request, ~0.5-1s latency
- **10-100x faster** for multi-observation queries
**Clarity:**
- See everything first (table index)
- Get timeline context around interesting results
- Pick what matters based on context
- Fetch details only for what you need (batch when possible)
---
**Remember:**
- ALWAYS get timeline context to understand what was happening
- ALWAYS use `get_observations` when fetching 2+ observations
- The workflow is optimized: search → timeline → batch fetch = 10-100x faster
---
## Tool Reference
Comprehensive parameter documentation for all memory tools. For MCP usage, call `help(topic="search")` to load specific tool docs.
### search
Search across all memory types (observations, sessions, prompts).
**Parameters:**
- `query` (string, optional) - Search term for full-text search
- `limit` (number, optional) - Maximum results to return. Default: 20, Max: 100
- `offset` (number, optional) - Number of results to skip. Default: 0
- `project` (string, required) - Project name to filter by
- `type` (string, optional) - Filter by type: "observations", "sessions", "prompts"
- `dateStart` (string, optional) - Start date filter (YYYY-MM-DD or epoch ms)
- `dateEnd` (string, optional) - End date filter (YYYY-MM-DD or epoch ms)
- `obs_type` (string, optional) - Filter observations by type (comma-separated): bugfix, feature, decision, discovery, change
- `orderBy` (string, optional) - Sort order: "date_desc" (default), "date_asc", "relevance"
**Returns:** Table of results with IDs, timestamps, types, titles
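For instance, the optional filters and sort order compose in a single call (values are hypothetical):

```
search(query="auth", type="observations", obs_type="bugfix,feature",
       dateStart="2025-11-01", orderBy="relevance", limit=10, project="my-project")
```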
### timeline
Get chronological context around a specific point in time or observation.
**Parameters:**
- `anchor` (number, optional) - Observation ID to center timeline around. If not provided, uses most recent result from query
- `query` (string, optional) - Search term to find anchor automatically (if anchor not provided)
- `depth_before` (number, optional) - Items before anchor. Default: 5, Max: 20
- `depth_after` (number, optional) - Items after anchor. Default: 5, Max: 20
- `project` (string, required) - Project name to filter by
**Returns:** Exactly `depth_before + 1 + depth_after` items in chronological order, with observations, sessions, and prompts interleaved
### get_recent_context
Get the most recent observations from current or recent sessions.
**Parameters:**
- `limit` (number, optional) - Maximum observations to return. Default: 10, Max: 50
- `project` (string, required) - Project name to filter by
**Returns:** Recent observations in reverse chronological order
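Example call (project name hypothetical):

```
get_recent_context(limit=10, project="my-project")
```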
### get_context_timeline
Get timeline context around a specific observation ID.
**Parameters:**
- `anchor` (number, required) - Observation ID to center timeline around
- `depth_before` (number, optional) - Items before anchor. Default: 5, Max: 20
- `depth_after` (number, optional) - Items after anchor. Default: 5, Max: 20
- `project` (string, optional) - Project name to filter by
**Returns:** Timeline items centered on the anchor observation
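Example call (anchor ID hypothetical):

```
get_context_timeline(anchor=11131, depth_before=5, depth_after=5, project="my-project")
```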
### get_observation
Fetch a single observation by ID with full details.
**Parameters:**
- `id` (number, required) - Observation ID to fetch
**Returns:** Complete observation object with title, subtitle, narrative, facts, concepts, files, timestamps
### get_observations
Batch fetch multiple observations by IDs. Always prefer this over individual fetches for 2+ observations.
**Parameters:**
- `ids` (array of numbers, required) - Array of observation IDs to fetch
- `orderBy` (string, optional) - Sort order: "date_desc" (default), "date_asc"
- `limit` (number, optional) - Maximum observations to return. Default: no limit
- `project` (string, optional) - Project name to filter by
**Returns:** Array of complete observation objects, 10-100x faster than individual fetches
### get_session
Fetch a single session by ID with metadata.
**Parameters:**
- `id` (number, required) - Session ID to fetch (just the number, not "S2005" format)
**Returns:** Session object with ID, start time, end time, project, model info
### get_prompt
Fetch a single prompt by ID with full text.
**Parameters:**
- `id` (number, required) - Prompt ID to fetch
**Returns:** Prompt object with ID, text, timestamp, session reference
### help
Load detailed instructions for specific topics or all documentation.
**Parameters:**
- `topic` (string, optional) - Specific topic to load: "workflow", "search", "timeline", "get_recent_context", "get_context_timeline", "get_observation", "get_observations", "get_session", "get_prompt", "all". Default: "all"
**Returns:** Formatted documentation for the requested topic
@@ -1,124 +0,0 @@
# Search by Concept
Find observations tagged with specific concepts.
## When to Use
- User asks: "What discoveries did we make?"
- User asks: "What patterns did we identify?"
- User asks: "What gotchas did we encounter?"
- Looking for observations with semantic tags
## Command
```bash
curl -s "http://localhost:37777/api/search/by-concept?concept=discovery&format=index&limit=5"
```
## Parameters
- **concept** (required): Concept tag to search for
- `discovery` - New discoveries and insights
- `problem-solution` - Problems and their solutions
- `what-changed` - Change descriptions
- `how-it-works` - Explanations of mechanisms
- `pattern` - Identified patterns
- `gotcha` - Edge cases and gotchas
- `change` - General changes
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
- **dateStart/dateEnd**: Filter by date range (optional)
## When to Use Each Format
**Use format=index for:**
- Quick overviews of observations by concept
- Finding IDs for deeper investigation
- Listing multiple results
- **Token cost: ~50-100 per result**
**Use format=full for:**
- Complete details including narrative, facts, files, concepts
- Understanding the full context of specific observations
- **Token cost: ~500-1000 per result**
## Example Response (format=index)
```json
{
"concept": "discovery",
"count": 3,
"format": "index",
"results": [
{
"id": 1240,
"type": "discovery",
"title": "Worker service uses PM2 for process management",
"subtitle": "Discovered persistent background worker pattern",
"created_at_epoch": 1699564800000,
"project": "claude-mem",
"concepts": ["discovery", "how-it-works"]
}
]
}
```
## How to Present Results
For format=index, present as a compact list:
```markdown
Found 3 observations tagged with "discovery":
🔵 **#1240** Worker service uses PM2 for process management
> Discovered persistent background worker pattern
> Nov 9, 2024 • claude-mem
> Tags: discovery, how-it-works
🔵 **#1241** FTS5 full-text search enables instant searches
> SQLite FTS5 virtual tables provide sub-100ms search
> Nov 9, 2024 • claude-mem
> Tags: discovery, pattern
```
For complete formatting guidelines, see [formatting.md](formatting.md).
## Available Concepts
| Concept | Description | When to Use |
|---------|-------------|-------------|
| `discovery` | New discoveries and insights | Finding what was learned |
| `problem-solution` | Problems and their solutions | Finding how issues were resolved |
| `what-changed` | Change descriptions | Understanding what changed |
| `how-it-works` | Explanations of mechanisms | Learning how things work |
| `pattern` | Identified patterns | Finding design patterns |
| `gotcha` | Edge cases and gotchas | Learning about pitfalls |
| `change` | General changes | Tracking modifications |
## Error Handling
**Missing concept parameter:**
```json
{"error": "Missing required parameter: concept"}
```
Fix: Add the concept parameter
**Invalid concept:**
```json
{"error": "Invalid concept: foobar. Valid concepts: discovery, problem-solution, what-changed, how-it-works, pattern, gotcha, change"}
```
Fix: Use one of the valid concept values
## Tips
1. Use format=index first to see overview
2. Start with limit=5-10 to avoid token overload
3. Combine concepts with type filtering for precision
4. Use `discovery` for learning what was found during investigation
5. Use `problem-solution` for finding how past issues were resolved
**Token Efficiency:**
- Start with format=index (~50-100 tokens per result)
- Use format=full only for relevant items (~500-1000 tokens per result)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
@@ -1,127 +0,0 @@
# Search by File
Find all work related to a specific file path.
## When to Use
- User asks: "What changed in auth/login.ts?"
- User asks: "What work was done on this file?"
- User asks: "Show me the history of src/services/worker.ts"
- Looking for all observations that reference a file
## Command
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=src/services/worker-service.ts&format=index&limit=10"
```
## Parameters
- **filePath** (required): File path to search for (supports partial matching)
- Full path: `src/services/worker-service.ts`
- Partial path: `worker-service.ts`
- Directory: `src/hooks/`
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
- **dateStart/dateEnd**: Filter by date range (optional)
## When to Use Each Format
**Use format=index for:**
- Quick overviews of work on a file
- Finding IDs for deeper investigation
- Listing multiple changes
- **Token cost: ~50-100 per result**
**Use format=full for:**
- Complete details including narrative, facts, files, concepts
- Understanding the full context of specific changes
- **Token cost: ~500-1000 per result**
## Example Response (format=index)
```json
{
"filePath": "src/services/worker-service.ts",
"count": 8,
"format": "index",
"results": [
{
"id": 1245,
"type": "refactor",
"title": "Simplified worker health check logic",
"subtitle": "Removed redundant PM2 status check",
"created_at_epoch": 1699564800000,
"project": "claude-mem",
"files": ["src/services/worker-service.ts", "src/services/worker-utils.ts"]
}
]
}
```
## How to Present Results
For format=index, present as a compact list:
```markdown
Found 8 observations related to "src/services/worker-service.ts":
🔄 **#1245** Simplified worker health check logic
> Removed redundant PM2 status check
> Nov 9, 2024 • claude-mem
> Files: worker-service.ts, worker-utils.ts
🟣 **#1246** Added SSE endpoint for real-time updates
> Implemented Server-Sent Events for viewer UI
> Nov 8, 2024 • claude-mem
> Files: worker-service.ts
```
For complete formatting guidelines, see [formatting.md](formatting.md).
## Partial Path Matching
The file path parameter supports partial matching:
```bash
# These all match "src/services/worker-service.ts"
curl -s "http://localhost:37777/api/search/by-file?filePath=worker-service.ts&format=index"
curl -s "http://localhost:37777/api/search/by-file?filePath=services/worker&format=index"
curl -s "http://localhost:37777/api/search/by-file?filePath=worker-service&format=index"
```
## Directory Searches
Search for all work in a directory:
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=src/hooks/&format=index&limit=20"
```
## Error Handling
**Missing filePath parameter:**
```json
{"error": "Missing required parameter: filePath"}
```
Fix: Add the filePath parameter
**No results found:**
```json
{"filePath": "nonexistent.ts", "count": 0, "results": []}
```
Response: "No observations found for 'nonexistent.ts'. Try a partial path or check the spelling."
## Tips
1. Use format=index first to see overview of all changes
2. Start with partial paths (e.g., filename only) for broader matches
3. Use full paths when you need specific file matches
4. Combine with dateStart to see recent changes: `?filePath=worker.ts&dateStart=2024-11-01`
5. Use directory searches to see all work in a module
**Token Efficiency:**
- Start with format=index (~50-100 tokens per result)
- Use format=full only for relevant items (~500-1000 tokens per result)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
@@ -1,123 +0,0 @@
# Search by Type
Find observations by type: bugfix, feature, refactor, decision, discovery, or change.
## When to Use
- User asks: "What bugs did we fix?"
- User asks: "What features did we add?"
- User asks: "What decisions did we make?"
- Looking for specific types of work
## Command
```bash
curl -s "http://localhost:37777/api/search/by-type?type=bugfix&format=index&limit=5"
```
## Parameters
- **type** (required): One or more types (comma-separated)
- `bugfix` - Bug fixes
- `feature` - New features
- `refactor` - Code refactoring
- `decision` - Architectural/design decisions
- `discovery` - Discoveries and insights
- `change` - General changes
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
- **dateStart/dateEnd**: Filter by date range (optional)
## When to Use Each Format
**Use format=index for:**
- Quick overviews of work by type
- Finding IDs for deeper investigation
- Listing multiple results
- **Token cost: ~50-100 per result**
**Use format=full for:**
- Complete details including narrative, facts, files, concepts
- Understanding the full context of specific observations
- **Token cost: ~500-1000 per result**
## Example Response (format=index)
```json
{
"type": "bugfix",
"count": 5,
"format": "index",
"results": [
{
"id": 1235,
"type": "bugfix",
"title": "Fixed token expiration edge case",
"subtitle": "Handled race condition in refresh flow",
"created_at_epoch": 1699564800000,
"project": "api-server"
}
]
}
```
## How to Present Results
For format=index, present as a compact list with type emojis:
```markdown
Found 5 bugfixes:
🔴 **#1235** Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024 • api-server
🔴 **#1236** Resolved memory leak in worker
> Fixed event listener cleanup
> Nov 8, 2024 • worker-service
```
**Type Emojis:**
- 🔴 bugfix
- 🟣 feature
- 🔄 refactor
- 🔵 discovery
- 🧠 decision
- ✅ change
For complete formatting guidelines, see [formatting.md](formatting.md).
## Multiple Types
To search for multiple types:
```bash
curl -s "http://localhost:37777/api/search/by-type?type=bugfix,feature&format=index&limit=10"
```
## Error Handling
**Missing type parameter:**
```json
{"error": "Missing required parameter: type"}
```
Fix: Add the type parameter
**Invalid type:**
```json
{"error": "Invalid type: foobar. Valid types: bugfix, feature, refactor, decision, discovery, change"}
```
Fix: Use one of the valid type values
## Tips
1. Use format=index first to see overview
2. Start with limit=5-10 to avoid token overload
3. Combine with dateStart for recent work: `?type=bugfix&dateStart=2024-11-01`
4. Use project filtering when working on one codebase
**Token Efficiency:**
- Start with format=index (~50-100 tokens per result)
- Use format=full only for relevant items (~500-1000 tokens per result)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
@@ -1,251 +0,0 @@
# Common Workflows
Step-by-step guides for typical user requests using the search API.
## Workflow 1: Understanding Past Work
**User asks:** "What did we do last session?" or "Catch me up on recent work"
**Steps:**
1. **Get recent context** (fastest path):
```bash
curl -s "http://localhost:37777/api/context/recent?limit=3"
```
2. **Present as narrative:**
```markdown
## Recent Work
### Session #545 - Nov 9, 2024
Implemented JWT authentication system
**Completed:**
- Added token-based auth with refresh tokens
- Created JWT signing and verification logic
**Key Learning:** JWT expiration requires careful handling of refresh race conditions
```
**Why this workflow:**
- Single request gets both sessions and observations
- Optimized for "catch me up" questions
- ~1,500-2,500 tokens for 3 sessions
---
## Workflow 2: Finding Specific Bug Fixes
**User asks:** "What bugs did we fix?" or "Show me recent bug fixes"
**Steps:**
1. **Search by type** (index format first):
```bash
curl -s "http://localhost:37777/api/search/by-type?type=bugfix&format=index&limit=5"
```
2. **Review index results**, identify relevant items
3. **Get full details** for specific bugs:
```bash
curl -s "http://localhost:37777/api/search/by-type?type=bugfix&format=full&limit=1&offset=2"
```
4. **Present findings:**
```markdown
Found 5 bug fixes:
🔴 **#1235** Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024 • api-server
[Click for full details on #1235]
```
**Why this workflow:**
- Progressive disclosure: index first, full details selectively
- Type-specific search is more efficient than generic search
- ~250-500 tokens for index, ~750-1000 per full detail
---
## Workflow 3: Understanding File History
**User asks:** "What changed in auth/login.ts?" or "Show me work on this file"
**Steps:**
1. **Search by file** (index format):
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=auth/login.ts&format=index&limit=10"
```
2. **Review chronological changes**
3. **Get full details** for specific changes:
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=auth/login.ts&format=full&limit=1&offset=3"
```
4. **Present as file timeline:**
```markdown
## History of auth/login.ts
🟣 **#1230** Added JWT authentication (Nov 9)
🔴 **#1235** Fixed token expiration bug (Nov 9)
🔄 **#1240** Refactored auth flow (Nov 8)
```
**Why this workflow:**
- File-specific search finds all related work
- Index format shows chronological overview
- Selective full details for deep dives
---
## Workflow 4: Timeline Investigation
**User asks:** "What was happening when we deployed?" or "Show me context around that bug fix"
**Steps:**
1. **Find the event** using search:
```bash
curl -s "http://localhost:37777/api/search/observations?query=deployment&format=index&limit=5"
```
2. **Note observation ID** (e.g., #1234)
3. **Get timeline context**:
```bash
curl -s "http://localhost:37777/api/timeline/context?anchor=1234&depth_before=10&depth_after=10"
```
4. **Present as chronological narrative:**
```markdown
## Timeline: Deployment
### Before (10 records)
**2:45 PM** - 🟣 Prepared deployment scripts
**2:50 PM** - 💬 User asked: "Are we ready to deploy?"
### ⭐ Anchor Point (2:55 PM)
🎯 **Observation #1234**: Deployed to production
### After (10 records)
**3:00 PM** - 🔴 Fixed post-deployment routing issue
```
**Why this workflow:**
- Timeline shows temporal context (what happened before/after)
- Captures causality between events
- All record types (observations, sessions, prompts) interleaved
---
## Workflow 5: Quick Timeline (One Request)
**User asks:** "Timeline of authentication work"
**Steps:**
1. **Use timeline-by-query** (auto mode):
```bash
curl -s "http://localhost:37777/api/timeline/by-query?query=authentication&mode=auto&depth_before=10&depth_after=10"
```
2. **Present timeline directly:**
```markdown
## Timeline: Authentication
**Best Match:** 🟣 Observation #1234 - Implemented JWT authentication
### Context (21 records)
[... timeline around best match ...]
```
**Why this workflow:**
- Single request combines search + timeline
- Fastest path when query is specific
- Auto mode uses top result as anchor
**Alternative:** Use interactive mode for broad queries:
```bash
curl -s "http://localhost:37777/api/timeline/by-query?query=auth&mode=interactive&limit=5"
```
Then choose anchor manually.
---
## Workflow 6: Search Composition
**User asks:** "What features did we add to the authentication system recently?"
**Steps:**
1. **Combine filters** for precision:
```bash
curl -s "http://localhost:37777/api/search/observations?query=authentication&type=feature&dateStart=2024-11-01&format=index&limit=10"
```
2. **Review filtered results**
3. **Get full details** for relevant features:
```bash
curl -s "http://localhost:37777/api/search/observations?query=authentication&type=feature&format=full&limit=1&offset=2"
```
4. **Present findings:**
```markdown
Found 10 authentication features added in November:
🟣 **#1234** Implemented JWT authentication (Nov 9)
🟣 **#1236** Added refresh token rotation (Nov 9)
🟣 **#1238** Implemented OAuth2 flow (Nov 7)
```
**Why this workflow:**
- Multiple filters narrow results before requesting full details
- Type + query + dateStart/dateEnd = precise targeting
- Progressive disclosure: index first, full details selectively
---
## Workflow Selection Guide
| User Request | Workflow | Operation | Token Cost |
|--------------|----------|-----------|------------|
| "What did we do last session?" | #1 | recent-context | 1,500-2,500 |
| "What bugs did we fix?" | #2 | by-type | 500-3,000 |
| "What changed in file.ts?" | #3 | by-file | 500-3,000 |
| "What was happening then?" | #4 | search → timeline | 3,500-6,000 |
| "Timeline of X work" | #5 | timeline-by-query | 3,000-4,000 |
| "Recent features added?" | #6 | observations + filters | 500-3,000 |
## General Principles
1. **Start with index format** - Always use `format=index` first
2. **Use specialized tools** - by-type, by-file, by-concept when applicable
3. **Compose operations** - Combine search + timeline for investigations
4. **Filter early** - Use type, dateStart/dateEnd, project to narrow before expanding
5. **Progressive disclosure** - Load full details only for relevant items
## Token Budget Awareness
**Quick queries** (500-1,500 tokens):
- Recent context (limit=3)
- Index search (limit=5-10)
- Filtered searches
**Medium queries** (1,500-4,000 tokens):
- Recent context (limit=5-10)
- Full details (3-5 items)
- Timeline (depth 10/10)
**Deep queries** (4,000-8,000 tokens):
- Timeline (depth 20/20)
- Full details (10+ items)
- Multiple composed operations
Always start with minimal token investment, expand only when needed.
@@ -1,403 +0,0 @@
# Response Formatting Guidelines
How to present search results to users for maximum clarity and usefulness.
## General Principles
1. **Progressive disclosure** - Show index results first, full details on demand
2. **Visual hierarchy** - Use emojis, bold, and structure for scannability
3. **Context-aware** - Tailor presentation to user's question
4. **Actionable** - Include IDs for follow-up queries
5. **Token-efficient** - Balance detail with token budget
---
## Format: Index Results
**When to use:** First response to searches, overviews, multiple results
**Structure:**
```markdown
Found {count} results for "{query}":
{emoji} **#{id}** {title}
> {subtitle}
> {date} • {project}
```
**Example:**
```markdown
Found 5 results for "authentication":
🟣 **#1234** Implemented JWT authentication
> Added token-based auth with refresh tokens
> Nov 9, 2024 • api-server
🔴 **#1235** Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024 • api-server
```
**Type Emojis:**
- 🔴 bugfix
- 🟣 feature
- 🔄 refactor
- 🔵 discovery
- 🧠 decision
- ✅ change
- 🎯 session
- 💬 prompt
**What to include:**
- ✅ ID (for follow-up)
- ✅ Type emoji
- ✅ Title
- ✅ Subtitle (if available)
- ✅ Date (human-readable)
- ✅ Project name
- ❌ Don't include full narrative/facts/files in index format
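A minimal renderer for this index format could look like the following sketch. The emoji map and fields mirror the structure above; treating `created_at_epoch` as milliseconds and rendering times in UTC are assumptions based on the example API responses:

```python
from datetime import datetime, timezone

TYPE_EMOJI = {"bugfix": "🔴", "feature": "🟣", "refactor": "🔄",
              "discovery": "🔵", "decision": "🧠", "change": "✅",
              "session": "🎯", "prompt": "💬"}

def render_index(results):
    """Render index-format results as the compact list shown above."""
    blocks = []
    for r in results:
        # %-d drops the leading zero (POSIX strftime)
        date = datetime.fromtimestamp(r["created_at_epoch"] / 1000,
                                      tz=timezone.utc).strftime("%b %-d, %Y")
        blocks.append(f'{TYPE_EMOJI.get(r["type"], "•")} **#{r["id"]}** {r["title"]}\n'
                      f'> {r.get("subtitle", "")}\n'
                      f'> {date} • {r["project"]}')
    return "\n\n".join(blocks)
```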
---
## Format: Full Results
**When to use:** User requests details, specific items selected from index
**Structure:**
```markdown
## {emoji} {type} #{id}: {title}
**Summary:** {subtitle}
**What happened:**
{narrative}
**Key Facts:**
- {fact1}
- {fact2}
**Files modified:**
- {file1}
- {file2}
**Concepts:** {concepts}
**Date:** {human_readable_date}
**Project:** {project}
```
**Example:**
```markdown
## 🟣 Feature #1234: Implemented JWT authentication
**Summary:** Added token-based auth with refresh tokens
**What happened:**
Implemented a complete JWT authentication system with access and refresh tokens. Access tokens expire after 15 minutes, refresh tokens after 7 days. Added token signing with RS256 algorithm and proper key rotation infrastructure.
**Key Facts:**
- Access tokens use 15-minute expiration
- Refresh tokens stored in httpOnly cookies
- RS256 algorithm with key rotation support
- Token refresh endpoint handles race conditions gracefully
**Files modified:**
- src/auth/jwt.ts (created)
- src/auth/refresh.ts (created)
- src/middleware/auth.ts (modified)
**Concepts:** how-it-works, pattern
**Date:** November 9, 2024 at 2:55 PM
**Project:** api-server
```
**What to include:**
- ✅ Full title with emoji and ID
- ✅ Summary/subtitle
- ✅ Complete narrative
- ✅ All key facts
- ✅ All files (with status: created/modified/deleted)
- ✅ Concepts/tags
- ✅ Precise timestamp
- ✅ Project name
---
## Format: Timeline Results
**When to use:** Temporal investigations, "what was happening" questions
**Structure:**
```markdown
## Timeline: {anchor_description}
### Before ({count} records)
**{time}** - {emoji} {type} #{id}: {title}
**{time}** - {emoji} {type} #{id}: {title}
### ⭐ Anchor Point ({time})
{emoji} **{type} #{id}**: {title}
### After ({count} records)
**{time}** - {emoji} {type} #{id}: {title}
**{time}** - {emoji} {type} #{id}: {title}
```
**Example:**
```markdown
## Timeline: Deployment
### Before (10 records)
**2:30 PM** - 🟣 #1230: Prepared deployment scripts
**2:45 PM** - 🔄 #1232: Updated configuration files
**2:50 PM** - 💬 User asked: "Are we ready to deploy?"
### ⭐ Anchor Point (2:55 PM)
🎯 **Session #545**: Deployed to production
### After (10 records)
**3:00 PM** - 🔴 #1235: Fixed post-deployment routing issue
**3:10 PM** - 🔵 #1236: Discovered caching behavior in production
**3:15 PM** - 🧠 #1237: Decided to add health check endpoint
```
**What to include:**
- ✅ Chronological ordering (oldest to newest)
- ✅ Human-readable times (not epochs)
- ✅ Clear anchor point marker (⭐)
- ✅ Mix of all record types (observations, sessions, prompts)
- ✅ Concise titles (not full narratives)
- ✅ Type emojis for quick scanning
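The human-readable clock times above come from stored epoch values; one way to derive them is sketched below (milliseconds and UTC are assumptions based on the example responses):

```python
from datetime import datetime, timezone

def human_time(epoch_ms):
    """Format epoch milliseconds as a short clock time, e.g. '2:55 PM'."""
    dt = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
    return dt.strftime("%I:%M %p").lstrip("0")  # drop the leading zero
```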
---
## Format: Session Summaries
**When to use:** Recent context, "what did we do" questions
**Structure:**
```markdown
## Recent Work on {project}
### 🎯 Session #{id} - {date}
**Request:** {user_request}
**Completed:**
- {completion1}
- {completion2}
**Key Learning:** {learning}
**Observations:**
- {emoji} **#{obs_id}** {obs_title}
- Files: {file_list}
```
**Example:**
```markdown
## Recent Work on api-server
### 🎯 Session #545 - November 9, 2024
**Request:** Add JWT authentication with refresh tokens
**Completed:**
- Implemented token-based auth with refresh logic
- Added JWT signing and verification
- Created refresh token rotation
**Key Learning:** JWT expiration requires careful handling of refresh race conditions
**Observations:**
- 🟣 **#1234** Implemented JWT authentication
- Files: jwt.ts, refresh.ts, middleware/auth.ts
- 🔴 **#1235** Fixed token expiration edge case
- Files: refresh.ts
```
**What to include:**
- ✅ Session ID and date
- ✅ Original user request
- ✅ What was completed (bulleted list)
- ✅ Key learnings/insights
- ✅ Linked observations with file lists
- ✅ Clear hierarchy (session → observations)
---
## Format: User Prompts
**When to use:** "What did I ask" questions, prompt searches
**Structure:**
```markdown
Found {count} user prompts:
💬 **Prompt #{id}** (Session #{session_id})
> "{preview_text}"
> {date} • {project}
```
**Example:**
```markdown
Found 5 user prompts about "authentication":
💬 **Prompt #1250** (Session #545)
> "How do I implement JWT authentication with refresh tokens? I need to handle token expiration..."
> Nov 9, 2024 • api-server
💬 **Prompt #1251** (Session #546)
> "The auth tokens are expiring too quickly. Can you help debug the refresh flow?"
> Nov 8, 2024 • api-server
```
**What to include:**
- ✅ Prompt ID
- ✅ Session ID (for context linking)
- ✅ Preview text (200 chars for index, full text for full format)
- ✅ Date and project
- ✅ Quote formatting for prompt text
---
## Error Responses
**No results found:**
```markdown
No results found for "{query}". Try:
- Different search terms
- Broader keywords
- Checking spelling
- Using partial paths (for file searches)
```
**Service unavailable:**
```markdown
The search service isn't available. Check if the worker is running:

    npm run worker:status

If the worker is stopped, restart it:

    npm run worker:restart
```
**Invalid parameters:**
```markdown
Invalid search parameters:
- {parameter}: {error_message}
See the [API help](help.md) for valid parameter options.
```
---
## Context-Aware Presentation
Tailor formatting to user's question:
**"What bugs did we fix?"**
→ Use index format, emphasize date/type, group by recency
**"How did we implement X?"**
→ Use full format for best match, include complete narrative and files
**"What was happening when..."**
→ Use timeline format, emphasize chronology and causality
**"Catch me up on recent work"**
→ Use session summary format, focus on high-level accomplishments
---
## Token Budget Guidelines
**Minimal presentation (~100-200 tokens):**
- Index format with 3-5 results
- Compact list structure
- Essential metadata only
**Standard presentation (~500-1,000 tokens):**
- Index format with 10-15 results
- Include subtitles and context
- Clear formatting and emojis
**Detailed presentation (~1,500-3,000 tokens):**
- Full format for 2-3 items
- Complete narratives and facts
- Timeline with 20-30 records
**Comprehensive presentation (~5,000+ tokens):**
- Multiple full results
- Deep timelines (40+ records)
- Session summaries with observations
Always start minimal, expand only when needed.
---
## Markdown Best Practices
1. **Use headers (##, ###)** for hierarchy
2. **Bold important elements** (IDs, titles, dates)
3. **Quote user text** (prompts, questions)
4. **Bullet lists** for facts and files
5. **Code blocks** for commands and examples
6. **Emojis** for type indicators
7. **Horizontal rules (---)** for section breaks
8. **Blockquotes (>)** for subtitles and previews
---
## Examples by Use Case
### Use Case 1: Quick Overview
User: "What did we do last session?"
```markdown
## Recent Work
### 🎯 Session #545 - November 9, 2024
Implemented JWT authentication system
**Key accomplishment:** Added token-based auth with refresh tokens
**Key learning:** JWT expiration requires careful handling of refresh race conditions
```
### Use Case 2: Specific Investigation
User: "How did we implement JWT authentication?"
```markdown
## 🟣 Feature #1234: Implemented JWT authentication
**What happened:**
Implemented a complete JWT authentication system with access and refresh tokens. Access tokens expire after 15 minutes, refresh tokens after 7 days. Added token signing with RS256 algorithm.
**Files:**
- src/auth/jwt.ts (created)
- src/auth/refresh.ts (created)
- src/middleware/auth.ts (modified)
**Key insight:** Refresh race conditions require atomic token exchange logic.
```
### Use Case 3: Timeline Investigation
User: "What was happening around the deployment?"
```markdown
## Timeline: Deployment
[... chronological timeline with before/after context ...]
```
Choose presentation style based on user's question and information needs.
# API Help
Get comprehensive API documentation for all search endpoints.
## When to Use
- User asks: "What search operations are available?"
- User asks: "How do I use the search API?"
- Need reference documentation for endpoints
- Want to see all available parameters
## Command
```bash
curl -s "http://localhost:37777/api/help"
```
## Response Structure
Returns complete API documentation:
```json
{
"version": "6.5.0",
"base_url": "http://localhost:37777/api",
"endpoints": [
{
"path": "/search/observations",
"method": "GET",
"description": "Search observations using full-text search",
"parameters": [
{
"name": "query",
"required": true,
"type": "string",
"description": "Search terms"
},
{
"name": "format",
"required": false,
"type": "string",
"default": "full",
"options": ["index", "full"],
"description": "Response format"
}
],
"example": "curl -s \"http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5\""
}
]
}
```
## How to Present Results
Present as reference documentation:
```markdown
## claude-mem Search API Reference
Base URL: `http://localhost:37777/api`
### Search Operations
**1. Search Observations**
- **Endpoint:** `GET /search/observations`
- **Description:** Search observations using full-text search
- **Parameters:**
- `query` (required, string): Search terms
- `format` (optional, string): "index" or "full" (default: "full")
- `limit` (optional, number): Max results (default: 20, max: 100)
- **Example:**

      curl -s "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5"

[... continue for all endpoints ...]
```
## Endpoint Categories
The API help response organizes endpoints by category:
1. **Full-Text Search**
- `/search/observations`
- `/search/sessions`
- `/search/prompts`
2. **Filtered Search**
- `/search/by-type`
- `/search/by-concept`
- `/search/by-file`
3. **Context Retrieval**
- `/context/recent`
- `/timeline/context`
- `/timeline/by-query`
4. **Utilities**
- `/help`
## Common Parameters
Many endpoints share these parameters:
- **format**: "index" (summary) or "full" (complete details)
- **limit**: Number of results to return
- **offset**: Number of results to skip (for pagination)
- **project**: Filter by project name
- **dateStart/dateEnd**: Filter by date range
- `dateStart`: Start date (YYYY-MM-DD or epoch timestamp)
- `dateEnd`: End date (YYYY-MM-DD or epoch timestamp)
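A client helper that normalizes both accepted date forms to epoch milliseconds might look like this sketch (the milliseconds convention is inferred from the `created_at_epoch` fields in example responses; the seconds-vs-milliseconds heuristic is an assumption):

```python
from datetime import datetime, timezone

def to_epoch_ms(value):
    """Normalize 'YYYY-MM-DD' or an epoch timestamp string to epoch milliseconds."""
    if value.isdigit():
        n = int(value)
        # Heuristic: values this large are already milliseconds
        return n if n > 10_000_000_000 else n * 1000
    dt = datetime.strptime(value, "%Y-%m-%d").replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)
```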
## Error Handling
**Worker not running:**
Connection refused error. Response: "The search API isn't available. Check if the worker is running: `npm run worker:status`"
**Invalid endpoint:**
```json
{"error": "Not found"}
```
Response: "Invalid API endpoint. Use /api/help to see available endpoints."
## Tips
1. Save help response for reference during investigation
2. Use examples as starting point for your queries
3. Check required parameters before making requests
4. Refer to format options for each endpoint
5. All endpoints use GET method with query parameters
**Token Efficiency:**
- Help response: ~2,000-3,000 tokens (complete API reference)
- Use sparingly - refer to operation-specific docs instead
- Keep help response cached for repeated reference
## When to Use Help
**Use help when:**
- Starting to use the search API
- Need complete parameter reference
- Forgot which endpoints are available
- Want to see all options at once
**Don't use help when:**
- You know which operation you need (use operation-specific docs)
- Just need examples (use common-workflows.md)
- Token budget is limited (help is comprehensive)
## Alternative to Help Endpoint
Instead of calling `/api/help`, you can:
1. **Use SKILL.md** - Quick decision guide with operation links
2. **Use operation docs** - Detailed guides for specific endpoints
3. **Use common-workflows.md** - Step-by-step examples
4. **Use formatting.md** - Response presentation templates
The help endpoint is most useful when you need complete API reference in one response.
## API Versioning
The help response includes version information:
```json
{
"version": "6.5.0"
}
```
Check version to ensure compatibility with documentation.
# Search Observations (Semantic + Full-Text Hybrid)
Search all observations using natural language queries.
## When to Use
- User asks: "How did we implement authentication?"
- User asks: "What bugs did we fix?"
- User asks: "What features did we add?"
- Looking for past work by keyword or topic
## Command
```bash
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5"
```
## Parameters
- **query** (optional): Natural language search query - uses semantic search (ChromaDB) for ranking with SQLite FTS5 fallback (e.g., "authentication", "bug fix", "database migration"). Can be omitted for filter-only searches.
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
- **dateStart/dateEnd**: Filter by date range (optional); each accepts YYYY-MM-DD format or an epoch timestamp
- **obs_type**: Filter by observation type (comma-separated): bugfix, feature, refactor, decision, discovery, change (optional)
- **concepts**: Filter by concept tags (comma-separated, optional)
- **files**: Filter by file paths (comma-separated, optional)
**Important**: When omitting `query`, you MUST provide at least one filter (project, dateStart/dateEnd, obs_type, concepts, or files)
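This query-or-filter rule can be enforced client-side before issuing a request; a sketch (the parameter names follow the list above; the URL-building helper itself is illustrative):

```python
from urllib.parse import urlencode

FILTER_PARAMS = ("project", "dateStart", "dateEnd", "obs_type", "concepts", "files")

def build_search_url(query=None, **params):
    """Build a /search/observations URL, enforcing the query-or-filter rule."""
    if not query and not any(params.get(p) for p in FILTER_PARAMS):
        raise ValueError("Either query or filters required for search")
    if query:
        params["query"] = query
    return "http://localhost:37777/api/search/observations?" + urlencode(params)
```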
## When to Use Each Format
**Use format=index for:**
- Quick overviews
- Finding IDs for deeper investigation
- Listing multiple results
- **Token cost: ~50-100 per result**
**Use format=full for:**
- Complete details including narrative, facts, files, concepts
- Understanding the full context of specific observations
- **Token cost: ~500-1000 per result**
## Example Response (format=index)
```json
{
"query": "authentication",
"count": 5,
"format": "index",
"results": [
{
"id": 1234,
"type": "feature",
"title": "Implemented JWT authentication",
"subtitle": "Added token-based auth with refresh tokens",
"created_at_epoch": 1699564800000,
"project": "api-server",
"score": 0.95
}
]
}
```
## How to Present Results
For format=index, present as a compact list:
```markdown
Found 5 results for "authentication":
1. **#1234** [feature] Implemented JWT authentication
> Added token-based auth with refresh tokens
> Nov 9, 2024 • api-server
2. **#1235** [bugfix] Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024 • api-server
```
**Include:** ID (for follow-up), type emoji (🔴 bugfix, 🟣 feature, 🔄 refactor, 🔵 discovery, 🧠 decision, ✅ change), title, subtitle, date, project.
For complete formatting guidelines, see [formatting.md](formatting.md).
## Filter-Only Examples
Search without query text (direct SQLite filtering):
```bash
# Get all observations from November 2025
curl -s "http://localhost:37777/api/search/observations?dateStart=2025-11-01&format=index"
# Get all bug fixes from a specific project
curl -s "http://localhost:37777/api/search/observations?obs_type=bugfix&project=api-server&format=index"
# Get all observations from last 7 days
curl -s "http://localhost:37777/api/search/observations?dateStart=2025-11-11&format=index"
```
## Error Handling
**Missing query and filters:**
```json
{"error": "Either query or filters required for search"}
```
Fix: Provide either a query parameter OR at least one filter (project, dateStart/dateEnd, obs_type, concepts, files)
**No results found:**
```json
{"query": "foobar", "count": 0, "results": []}
```
Response: "No results found for 'foobar'. Try different search terms."
## Tips
1. Be specific: "authentication JWT" > "auth"
2. Start with format=index and limit=5-10
3. Use project filtering when working on one codebase
4. If no results, try broader terms or check spelling
**Token Efficiency:**
- Start with format=index (~50-100 tokens per result)
- Use format=full only for relevant items (~500-1000 tokens per result)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
# Search User Prompts (Full-Text)
Search raw user prompts to find what was actually asked across all sessions.
## When to Use
- User asks: "What did I ask about authentication?"
- User asks: "Find my question about database migrations"
- User asks: "When did I ask about testing?"
- Looking for specific user questions or requests
## Command
```bash
curl -s "http://localhost:37777/api/search/prompts?query=authentication&format=index&limit=5"
```
## Parameters
- **query** (required): Search terms (e.g., "authentication", "how do I", "bug fix")
- **format**: "index" (truncated prompts) or "full" (complete prompt text). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
- **dateStart/dateEnd**: Filter by date range (optional)
## When to Use Each Format
**Use format=index for:**
- Quick overviews of what was asked
- Finding prompt IDs for full text
- Listing multiple prompts
- **Token cost: ~50-100 per result (truncated to 200 chars)**
**Use format=full for:**
- Complete prompt text
- Understanding the full user request
- **Token cost: Variable (depends on prompt length, typically 100-300 tokens)**
## Example Response (format=index)
```json
{
"query": "authentication",
"count": 5,
"format": "index",
"results": [
{
"id": 1250,
"session_id": "S545",
"prompt_preview": "How do I implement JWT authentication with refresh tokens? I need to handle token expiration...",
"created_at_epoch": 1699564800000,
"project": "api-server"
}
]
}
```
## How to Present Results
For format=index, present as a compact list:
```markdown
Found 5 user prompts about "authentication":
💬 **Prompt #1250** (Session #545)
> "How do I implement JWT authentication with refresh tokens? I need to handle token expiration..."
> Nov 9, 2024 • api-server
💬 **Prompt #1251** (Session #546)
> "The auth tokens are expiring too quickly. Can you help debug the refresh flow?"
> Nov 8, 2024 • api-server
```
For complete formatting guidelines, see [formatting.md](formatting.md).
## What Gets Searched
User prompts search covers:
- All user messages sent to Claude Code
- Raw text as typed by the user
- Multi-turn conversations (each message is a separate prompt)
- Questions, requests, commands, and clarifications
## Error Handling
**Missing query parameter:**
```json
{"error": "Missing required parameter: query"}
```
Fix: Add the query parameter
**No results found:**
```json
{"query": "foobar", "count": 0, "results": []}
```
Response: "No user prompts found for 'foobar'. Try different search terms."
## Tips
1. Use exact phrases in quotes for precise matches: `?query="how do I"` (URL-encode as `%22how%20do%20I%22` when using curl)
2. Start with format=index to see preview, then get full text if needed
3. Use dateStart to find recent questions: `?query=bug&dateStart=2024-11-01`
4. Prompts show what was asked, sessions/observations show what was done
5. Combine with session search to see both question and answer
**Token Efficiency:**
- Start with format=index (~50-100 tokens per result, prompt truncated to 200 chars)
- Use format=full only for relevant items (100-300 tokens per result)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
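The 200-character preview can be reproduced client-side when only full prompt text is available (the length and ellipsis style follow the examples above):

```python
def prompt_preview(text, limit=200):
    """Truncate a prompt to the index-format preview length."""
    return text if len(text) <= limit else text[:limit].rstrip() + "..."
```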
## When to Use Prompts vs Sessions
**Use prompts search when:**
- Looking for specific user questions
- Trying to remember what was asked
- Finding original request wording
**Use sessions search when:**
- Looking for what was accomplished
- Understanding work summaries
- Getting high-level context
**Combine both when:**
- Understanding the full conversation (what was asked + what was done)
- Investigating how a request was interpreted
# Get Recent Context
Get recent session summaries and observations for a project.
## When to Use
- User asks: "What did we do last session?"
- User asks: "What have we been working on recently?"
- User asks: "Catch me up on recent work"
- Starting a new session and need context
## Command
```bash
curl -s "http://localhost:37777/api/context/recent?project=api-server&limit=3"
```
## Parameters
- **project**: Project name (defaults to current working directory basename)
- **limit**: Number of recent sessions to retrieve (default: 3, max: 10)
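The default-project behavior can be mirrored when building the request; a sketch (the cwd-basename rule is stated above; clamping `limit` to 10 follows the documented maximum):

```python
import os
from urllib.parse import urlencode

def recent_context_url(project=None, limit=3):
    """Build the recent-context URL, defaulting project to the cwd basename."""
    project = project or os.path.basename(os.getcwd())
    return ("http://localhost:37777/api/context/recent?"
            + urlencode({"project": project, "limit": min(limit, 10)}))
```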
## Response Structure
Returns combined context from recent sessions:
```json
{
"project": "api-server",
"limit": 3,
"sessions": [
{
"id": 545,
"session_id": "S545",
"title": "Implemented JWT authentication system",
"request": "Add JWT authentication with refresh tokens",
"completion": "Implemented token-based auth with refresh logic",
"learnings": "JWT expiration requires careful handling of refresh race conditions",
"created_at_epoch": 1699564800000,
"observations": [
{
"id": 1234,
"type": "feature",
"title": "Implemented JWT authentication",
"subtitle": "Added token-based auth with refresh tokens",
"files": ["src/auth/jwt.ts", "src/auth/refresh.ts"]
}
]
}
]
}
```
## How to Present Results
Present as a chronological narrative:
```markdown
## Recent Work on api-server
### Session #545 - Nov 9, 2024
**Request:** Add JWT authentication with refresh tokens
**Completed:**
- Implemented token-based auth with refresh logic
- Added JWT signing and verification
- Created refresh token rotation
**Key Learning:** JWT expiration requires careful handling of refresh race conditions
**Observations:**
- 🟣 **#1234** Implemented JWT authentication
- Files: jwt.ts, refresh.ts
```
For complete formatting guidelines, see [formatting.md](formatting.md).
## Default Project Detection
If no project parameter is provided, uses current working directory:
```bash
# Auto-detects project from current directory
curl -s "http://localhost:37777/api/context/recent?limit=3"
```
## Error Handling
**No sessions found:**
```json
{"project": "new-project", "sessions": []}
```
Response: "No recent sessions found for 'new-project'. This might be a new project."
**Worker not running:**
Connection refused error. Inform the user to check that the worker is running: `npm run worker:status`
## Tips
1. Start with limit=3 for quick overview (default)
2. Increase to limit=5-10 for deeper context
3. Recent context is perfect for session start
4. Combines both sessions and observations in one request
5. Use this when user asks "what did we do last time?"
**Token Efficiency:**
- limit=3 sessions: ~1,500-2,500 tokens (includes observations)
- limit=5 sessions: ~2,500-4,000 tokens
- limit=10 sessions: ~5,000-8,000 tokens
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
## When to Use Recent Context
**Use recent-context when:**
- Starting a new session
- User asks about recent work
- Need quick catch-up on project activity
- Want both sessions and observations together
**Don't use recent-context when:**
- Looking for specific topics (use search instead)
- Need timeline around specific event (use timeline instead)
- Want only observations or only sessions (use search operations)
## Comparison with Other Operations
| Operation | Use Case | Token Cost |
|-----------|----------|------------|
| recent-context | Quick catch-up on recent work | 1,500-4,000 |
| sessions search | Find sessions by topic | 50-100 per result (index) |
| observations search | Find specific implementations | 50-100 per result (index) |
| timeline | Context around specific point | 3,000-6,000 |
Recent context is optimized for "what happened recently?" questions with minimal token usage.
# Search Sessions (Full-Text)
Search session summaries using natural language queries.
## When to Use
- User asks: "What did we work on last week?"
- User asks: "What sessions involved database work?"
- User asks: "Show me sessions where we fixed bugs"
- Looking for past sessions by topic or theme
## Command
```bash
curl -s "http://localhost:37777/api/search/sessions?query=authentication&format=index&limit=5"
```
## Parameters
- **query** (required): Search terms (e.g., "authentication", "database migration", "bug fixes")
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
- **dateStart/dateEnd**: Filter by date range (optional)
## When to Use Each Format
**Use format=index for:**
- Quick overviews of past sessions
- Finding session IDs for deeper investigation
- Listing multiple sessions
- **Token cost: ~50-100 per result**
**Use format=full for:**
- Complete session summaries with requests, completions, learnings
- Understanding the full context of a session
- **Token cost: ~500-1000 per result**
## Example Response (format=index)
```json
{
"query": "authentication",
"count": 3,
"format": "index",
"results": [
{
"id": 545,
"session_id": "S545",
"title": "Implemented JWT authentication system",
"subtitle": "Added token-based auth with refresh tokens",
"created_at_epoch": 1699564800000,
"project": "api-server"
}
]
}
```
## How to Present Results
For format=index, present as a compact list:
```markdown
Found 3 sessions about "authentication":
🎯 **Session #545** Implemented JWT authentication system
> Added token-based auth with refresh tokens
> Nov 9, 2024 • api-server
🎯 **Session #546** Fixed authentication token expiration
> Resolved race condition in token refresh flow
> Nov 8, 2024 • api-server
```
For complete formatting guidelines, see [formatting.md](formatting.md).
## Session Summary Structure
Full session summaries include:
- **Session request**: What the user asked for
- **What was completed**: Summary of work done
- **Key learnings**: Important insights and discoveries
- **Files modified**: List of changed files
- **Observations**: Links to detailed observations
## Error Handling
**Missing query parameter:**
```json
{"error": "Missing required parameter: query"}
```
Fix: Add the query parameter
**No results found:**
```json
{"query": "foobar", "count": 0, "results": []}
```
Response: "No sessions found for 'foobar'. Try different search terms."
## Tips
1. Be specific: "JWT authentication implementation" > "auth"
2. Start with format=index and limit=5-10
3. Use dateStart for recent sessions: `?query=auth&dateStart=2024-11-01`
4. Sessions provide high-level overview, observations provide details
5. Use project filtering when working on one codebase
**Token Efficiency:**
- Start with format=index (~50-100 tokens per result)
- Use format=full only for relevant items (~500-1000 tokens per result)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
## When to Use Sessions vs Observations
**Use sessions search when:**
- Looking for high-level work summaries
- Understanding what was done in past sessions
- Getting overview of recent activity
**Use observations search when:**
- Looking for specific implementation details
- Finding bugs, features, or decisions
- Need fine-grained context about code changes
# Timeline by Query
Search for observations and get timeline context in a single request. Combines search + timeline into one operation.
## When to Use
- User asks: "What was happening when we worked on authentication?"
- User asks: "Show me context around bug fixes"
- User asks: "Timeline of database work"
- Need to find something then see temporal context
## MCP Tool
Use the `get_timeline_by_query` MCP tool:
```
# Auto mode: Uses top search result as timeline anchor
get_timeline_by_query(query="authentication", mode="auto", depth_before=10, depth_after=10)
# Interactive mode: Shows top N search results for manual selection
get_timeline_by_query(query="authentication", mode="interactive", limit=5)
```
## Parameters
- **query** (required): Search terms (e.g., "authentication", "bug fix", "database")
- **mode**: Search mode
- `auto` (default): Automatically uses top search result as timeline anchor
- `interactive`: Returns top N search results for manual anchor selection
- **depth_before**: Records before anchor (default: 10, max: 50) - for auto mode
- **depth_after**: Records after anchor (default: 10, max: 50) - for auto mode
- **limit**: Number of search results (default: 5, max: 20) - for interactive mode
- **project**: Filter by project name (optional)
## Auto Mode (Recommended)
Automatically gets timeline around best match:
```
get_timeline_by_query(query="JWT authentication", mode="auto", depth_before=10, depth_after=10)
```
**Response:**
```json
{
"query": "JWT authentication",
"mode": "auto",
"best_match": {
"id": 1234,
"type": "feature",
"title": "Implemented JWT authentication",
"score": 0.95
},
"timeline": [
// ... timeline records around observation #1234
]
}
```
**When to use auto mode:**
- You're confident the top result is what you want
- Want fastest path to timeline context
- Query is specific enough for accurate top result
## Interactive Mode
Shows top search results for manual review:
```
get_timeline_by_query(query="authentication", mode="interactive", limit=5)
```
**Response:**
```json
{
"query": "authentication",
"mode": "interactive",
"top_matches": [
{
"id": 1234,
"type": "feature",
"title": "Implemented JWT authentication",
"subtitle": "Added token-based auth with refresh tokens",
"score": 0.95
},
{
"id": 1240,
"type": "bugfix",
"title": "Fixed authentication token expiration",
"subtitle": "Resolved race condition in refresh flow",
"score": 0.87
}
],
"next_step": "Use get_context_timeline(anchor=<id>, depth_before=10, depth_after=10)"
}
```
**When to use interactive mode:**
- Query is broad and may have multiple relevant results
- Want to review options before getting timeline
- Not sure which result is most relevant
## How to Present Results
**For auto mode:**
```markdown
## Timeline: JWT authentication
**Best Match:** 🟣 Observation #1234 - Implemented JWT authentication (score: 0.95)
### Before (10 records)
**2:45 PM** - 🟣 Added authentication middleware
### ⭐ Anchor Point (2:55 PM)
🟣 **Observation #1234**: Implemented JWT authentication
### After (10 records)
**3:00 PM** - 🎯 Session completed: JWT authentication system
```
**For interactive mode:**
```markdown
Found 5 matches for "authentication":
1. 🟣 **#1234** Implemented JWT authentication (score: 0.95)
> Added token-based auth with refresh tokens
2. 🔴 **#1240** Fixed authentication token expiration (score: 0.87)
> Resolved race condition in refresh flow
To see timeline context, pass the chosen observation ID to the timeline operation.
```
For complete formatting guidelines, see [formatting.md](formatting.md).
## Error Handling
**Missing query parameter:**
```json
{"error": "Missing required parameter: query"}
```
Fix: Add the query parameter
**No results found:**
```json
{"query": "foobar", "top_matches": []}
```
Response: "No results found for 'foobar'. Try different search terms."
## Tips
1. **Use auto mode** for specific queries: "JWT authentication implementation"
2. **Use interactive mode** for broad queries: "authentication"
3. Start with depth 10/10 for balanced context
4. Be specific in queries for better auto mode accuracy
5. This is the fastest way to find and explore context in one request
**Token Efficiency:**
- Auto mode: ~3,000-4,000 tokens (search + timeline)
- Interactive mode: ~500-1,000 tokens (search results only)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
## Workflow Comparison
**timeline-by-query (auto):**
1. One request → get timeline around best match
2. ~3,000 tokens
**timeline-by-query (interactive) → timeline:**
1. First request → see top matches (~500 tokens)
2. Second request → get timeline for chosen match (~3,000 tokens)
3. Total: ~3,500 tokens
**observations search → timeline:**
1. Search observations (~500 tokens)
2. Get timeline for chosen result (~3,000 tokens)
3. Total: ~3,500 tokens
Use auto mode when you're confident about the query. Use interactive mode or separate search when you want more control.
## When to Use Timeline-by-Query
**Use timeline-by-query when:**
- Need to find something AND see temporal context
- Want one-request convenience (auto mode)
- Investigating "what was happening when we worked on X?"
- Don't have observation ID already
**Don't use timeline-by-query when:**
- Already have observation ID (use timeline instead)
- Just need search results (use observations search)
- Need recent work overview (use recent-context)
@@ -1,171 +0,0 @@
# Get Context Timeline
Get a chronological timeline of observations, sessions, and prompts around a specific point in time.
## When to Use
- User asks: "What was happening when we deployed?"
- User asks: "Show me context around that bug fix"
- User asks: "What happened before and after that change?"
- Need temporal context around an event
## MCP Tool
Use the `get_context_timeline` MCP tool:
```
get_context_timeline(anchor=1234, depth_before=10, depth_after=10)
get_context_timeline(anchor="S545", depth_before=10, depth_after=10)
get_context_timeline(anchor="2024-11-09T12:00:00Z", depth_before=10, depth_after=10)
```
## Parameters
- **anchor** (required): Point in time to center timeline
- Observation ID: `1234`
- Session ID: `S545`
- ISO timestamp: `2024-11-09T12:00:00Z`
- **depth_before**: Number of records before anchor (default: 10, max: 50)
- **depth_after**: Number of records after anchor (default: 10, max: 50)
- **project**: Filter by project name (optional)
## Response Structure
Returns unified chronological timeline:
```json
{
"anchor": 1234,
"depth_before": 10,
"depth_after": 10,
"total_records": 21,
"timeline": [
{
"record_type": "observation",
"id": 1230,
"type": "feature",
"title": "Added authentication middleware",
"created_at_epoch": 1699564700000
},
{
"record_type": "prompt",
"id": 1250,
"session_id": "S545",
"prompt_preview": "How do I add JWT authentication?",
"created_at_epoch": 1699564750000
},
{
"record_type": "observation",
"id": 1234,
"type": "feature",
"title": "Implemented JWT authentication",
"created_at_epoch": 1699564800000,
"is_anchor": true
},
{
"record_type": "session",
"id": 545,
"session_id": "S545",
"title": "Implemented JWT authentication system",
"created_at_epoch": 1699564900000
}
]
}
```
## How to Present Results
Present as chronological narrative with anchor highlighted:
```markdown
## Timeline around Observation #1234
### Before (10 records)
**2:45 PM** - 🟣 Observation #1230: Added authentication middleware
**2:50 PM** - 💬 User asked: "How do I add JWT authentication?"
### ⭐ Anchor Point (2:55 PM)
🟣 **Observation #1234**: Implemented JWT authentication
### After (10 records)
**3:00 PM** - 🎯 Session #545 completed: Implemented JWT authentication system
**3:05 PM** - 🔴 Observation #1235: Fixed token expiration edge case
```
For complete formatting guidelines, see [formatting.md](formatting.md).
## Anchor Types
**Observation ID:**
- Use when you know the specific observation ID
- Example: `anchor=1234`
**Session ID:**
- Use when you want context around a session
- Example: `anchor=S545`
**ISO Timestamp:**
- Use when you know approximate time
- Example: `anchor=2024-11-09T14:30:00Z`
## Error Handling
**Missing anchor parameter:**
```json
{"error": "Missing required parameter: anchor"}
```
Fix: Add the anchor parameter
**Anchor not found:**
```json
{"error": "Anchor not found: 9999"}
```
Response: "Observation #9999 not found. Check the ID or try a different anchor."
**Invalid timestamp:**
```json
{"error": "Invalid timestamp format"}
```
Fix: Use ISO 8601 format: `2024-11-09T14:30:00Z`
## Tips
1. Start with depth_before=10, depth_after=10 for balanced context
2. Increase depth for broader investigation (max: 50 each)
3. Use observation IDs from search results as anchors
4. Timelines show all record types interleaved chronologically
5. Perfect for understanding "what was happening when X occurred"
**Token Efficiency:**
- depth 10/10: ~3,000-4,000 tokens (21 records)
- depth 20/20: ~6,000-8,000 tokens (41 records)
- depth 50/50: ~15,000-20,000 tokens (101 records)
- See [../principles/progressive-disclosure.md](../principles/progressive-disclosure.md)
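The record counts above follow directly from the depth parameters (records before + the anchor itself + records after):

```shell
# total records returned = depth_before + anchor + depth_after
depth_before=10
depth_after=10
total=$((depth_before + 1 + depth_after))
echo "$total"   # 21 records at depth 10/10
```
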
## When to Use Timeline
**Use timeline when:**
- Need context around specific event
- Understanding sequence of events
- Investigating "what was happening then?"
- Want all record types (observations, sessions, prompts) together
**Don't use timeline when:**
- Just need recent work (use recent-context)
- Looking for specific topics (use search)
- Don't have an anchor point (use timeline-by-query)
## Comparison with Timeline-by-Query
| Feature | timeline | timeline-by-query |
|---------|----------|-------------------|
| Requires anchor | Yes (ID or timestamp) | No (uses search query) |
| Best for | Known event investigation | Finding then exploring context |
| Steps | 1 (direct timeline) | 2 (search + timeline) |
| Use when | You have observation ID | You have search term |
Timeline is faster when you already know the anchor point.
@@ -1,176 +0,0 @@
# Anti-Pattern Catalogue
Common mistakes to avoid when using the HTTP search API. These anti-patterns address LLM training biases and prevent token-wasting behaviors.
## Anti-Pattern 1: Skipping Index Format
**The Mistake:**
```bash
# ❌ Bad: Jump straight to full format
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=full&limit=20"
```
**Why It's Wrong:**
- 20 × 750 tokens = 15,000 tokens
- May hit MCP token limits
- 99% wasted on irrelevant results
**The Correction:**
```bash
# ✅ Good: Start with index, review, then request full selectively
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5"
# Review results, identify relevant items
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=full&limit=1&offset=2"
```
**What It Teaches:**
Progressive disclosure isn't optional - it's essential for scale.
**LLM Behavior Insight:**
LLMs trained on code examples may have seen `format=full` as "more complete" and default to it.
---
## Anti-Pattern 2: Over-Requesting Results
**The Mistake:**
```bash
# ❌ Bad: Request limit=20 without reviewing index first
curl -s "http://localhost:37777/api/search/observations?query=auth&format=index&limit=20"
```
**Why It's Wrong:**
- Most of 20 results will be irrelevant
- Wastes tokens and time
- Overwhelms review process
**The Correction:**
```bash
# ✅ Good: Start small, paginate if needed
curl -s "http://localhost:37777/api/search/observations?query=auth&format=index&limit=5"
# If needed, paginate:
curl -s "http://localhost:37777/api/search/observations?query=auth&format=index&limit=5&offset=5"
```
**What It Teaches:**
Start small (limit=3-5), review, paginate if needed.
**LLM Behavior Insight:**
LLMs may think "more results = more thorough" without considering relevance.
---
## Anti-Pattern 3: Ignoring Tool Specialization
**The Mistake:**
```bash
# ❌ Bad: Use generic search for everything
curl -s "http://localhost:37777/api/search/observations?query=bugfix&format=index&limit=10"
```
**Why It's Wrong:**
- Specialized tools (by-type, by-concept, by-file) are more efficient
- Generic search mixes all result types
- Misses filtering optimization
**The Correction:**
```bash
# ✅ Good: Use specialized endpoint when applicable
curl -s "http://localhost:37777/api/search/by-type?type=bugfix&format=index&limit=10"
```
**What It Teaches:**
The decision tree exists for a reason - follow it.
**LLM Behavior Insight:**
LLMs may gravitate toward "general purpose" tools to avoid decision-making.
---
## Anti-Pattern 4: Loading Full Context Prematurely
**The Mistake:**
```bash
# ❌ Bad: Request full format before understanding what's relevant
curl -s "http://localhost:37777/api/search/observations?query=database&format=full&limit=10"
```
**Why It's Wrong:**
- Can't filter relevance without seeing index first
- Wastes tokens on irrelevant full details
- 10 × 750 = 7,500 tokens for potentially zero useful results
**The Correction:**
```bash
# ✅ Good: Index first to identify relevance
curl -s "http://localhost:37777/api/search/observations?query=database&format=index&limit=10"
# Identify relevant: #1234 and #1250
curl -s "http://localhost:37777/api/search/observations?query=database+1234&format=full&limit=1"
curl -s "http://localhost:37777/api/search/observations?query=database+1250&format=full&limit=1"
```
**What It Teaches:**
Filtering is a prerequisite for expansion.
**LLM Behavior Insight:**
LLMs may try to "get everything at once" to avoid multiple tool calls.
---
## Anti-Pattern 5: Not Using Timeline Tools
**The Mistake:**
```bash
# ❌ Bad: Search for individual observations separately
curl -s "http://localhost:37777/api/search/observations?query=before+deployment"
curl -s "http://localhost:37777/api/search/observations?query=during+deployment"
curl -s "http://localhost:37777/api/search/observations?query=after+deployment"
```
**Why It's Wrong:**
- Misses context around events
- Inefficient (N searches vs 1 timeline)
- Temporal relationships lost
**The Correction:**
```bash
# ✅ Good: Use timeline tool for contextual investigation
curl -s "http://localhost:37777/api/timeline/by-query?query=deployment&depth_before=10&depth_after=10"
```
**What It Teaches:**
Tool composition - some tools are designed to work together.
**LLM Behavior Insight:**
LLMs may not naturally discover tool composition patterns.
---
## Why These Anti-Patterns Matter
**Addresses LLM Training Bias:**
LLMs default to "load everything" behavior from web scraping training data where thoroughness was rewarded.
**Teaches Protocol Awareness:**
HTTP APIs and MCP have real token limits that can break the system.
**Prevents User Frustration:**
Token limit errors confuse users and break workflows.
**Builds Good Habits:**
Anti-patterns teach the "why" behind best practices.
**Makes Implicit Explicit:**
Surfaces mental models that experienced users internalize but novices miss.
---
## What Happens If These Are Ignored
- **No progressive disclosure**: Every search loads limit=20 in full format → token exhaustion
- **Over-requesting**: 15,000 token searches for 2 relevant results
- **Wrong tool**: Generic search when specialized filters would be 10x faster
- **Premature expansion**: Load full details before knowing relevance
- **Missing composition**: Single-tool thinking, missing powerful multi-step workflows
**Bottom Line:** These anti-patterns waste 5-10x more tokens than necessary and frequently cause system failures.
@@ -1,120 +0,0 @@
# Progressive Disclosure Pattern (MANDATORY)
**Core Principle**: Find the smallest set of high-signal tokens first (index format), then drill down to full details only for relevant items.
## The 4-Step Workflow
### Step 1: Start with Index Format
**Action:**
- Use `format=index` (default in most operations)
- Set `limit=3-5` (not 20)
- Review titles and dates ONLY
**Token Cost:** ~50-100 tokens per result
**Why:** Minimal token investment for maximum signal. Get overview before committing to full details.
**Example:**
```bash
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5"
```
**Response:**
```json
{
"query": "authentication",
"count": 5,
"format": "index",
"results": [
{
"id": 1234,
"type": "feature",
"title": "Implemented JWT authentication",
"subtitle": "Added token-based auth with refresh tokens",
"created_at_epoch": 1699564800000,
"project": "api-server"
}
]
}
```
### Step 2: Identify Relevant Items
**Cognitive Task:**
- Scan index results for relevance
- Note which items need full details
- Discard irrelevant items
**Why:** Human-in-the-loop filtering before expensive operations. Don't load full details for items you'll ignore.
### Step 3: Request Full Details (Selectively)
**Action:**
- Use `format=full` ONLY for specific items of interest
- Target by ID or use refined search query
**Token Cost:** ~500-1000 tokens per result
**Principle:** Load only what you need
**Example:**
```bash
# After reviewing index, get full details for observation #1234
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=full&limit=1&offset=2"
```
**Why:** Targeted token expenditure with high ROI. 10x cost difference means selectivity matters.
### Step 4: Refine with Filters (If Needed)
**Techniques:**
- Use `type`, `dateStart`/`dateEnd`, `concepts`, `files` filters
- Narrow scope BEFORE requesting more results
- Use `offset` for pagination instead of large limits
**Why:** Reduce the result set first, then expand selectively. Don't load 20 results when filters could narrow the set to 3.
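A sketch of filter-first narrowing using the filter names listed above (the exact query-string spellings, e.g. `dateStart`/`dateEnd`, are assumptions based on that list):

```shell
# Narrow by type and date range before asking for more results
base="http://localhost:37777/api/search/observations"
filtered_url="$base?query=auth&type=bugfix&dateStart=2024-11-01&dateEnd=2024-11-09&format=index&limit=5"

# Paginate within the narrowed set rather than raising limit
next_page_url="${filtered_url}&offset=5"

# With a running worker: curl -s "$filtered_url"
echo "$next_page_url"
```
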
## Token Budget Awareness
**Costs:**
- Index result: ~50-100 tokens
- Full result: ~500-1000 tokens
- 10x cost difference
**Starting Points:**
- Start with `limit=3-5` (not 20)
- Reduce limit if hitting token errors
**Savings Example:**
- Naive: 10 items × 750 tokens (avg full) = 7,500 tokens
- Progressive: (5 items × 75 tokens index) + (2 items × 750 tokens full) = 1,875 tokens
- **Savings: 5,625 tokens (75% reduction)**
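The savings arithmetic above checks out:

```shell
naive=$((10 * 750))                # 10 full results at ~750 tokens each
progressive=$((5 * 75 + 2 * 750))  # 5 index rows + 2 full results
saved=$((naive - progressive))
pct=$((100 * saved / naive))
echo "naive=$naive progressive=$progressive saved=$saved (${pct}%)"
```
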
## What Problems This Solves
1. **Token exhaustion**: Without this, LLMs load everything in full format (9,000+ tokens for 10 items)
2. **Poor signal-to-noise**: Loading full details for irrelevant items wastes tokens
3. **MCP limits**: Large payloads hit protocol limits (system failures)
4. **Inefficiency**: Loading 20 full results when only 2 are relevant
## How It Scales
**With 10 records:**
- Index (500 tokens) → Full (2,000 tokens for 2 relevant) = 2,500 tokens
- Without pattern: Full (10,000 tokens for all 10) = 4x more expensive
**With 1,000 records:**
- Index (500 tokens for top 5) → Full (1,000 tokens for 1 relevant) = 1,500 tokens
- Without pattern: Would hit MCP limits before seeing relevant data
## Context Engineering Alignment
This pattern implements core context engineering principles:
- **Just-in-time context**: Load data dynamically at runtime
- **Progressive disclosure**: Lightweight identifiers (index) → full details as needed
- **Token efficiency**: Minimal high-signal tokens first, expand selectively
- **Attention budget**: Treat context as finite resource with diminishing returns
Always start with the smallest set of high-signal tokens that maximize likelihood of desired outcome.
@@ -1,89 +0,0 @@
---
name: troubleshoot
description: Diagnose and fix claude-mem installation issues. Checks worker status, database integrity, service health, dependencies, and provides automated fixes for common problems.
---
# Claude-Mem Troubleshooting Skill
Diagnose and resolve installation and operational issues with the claude-mem plugin.
## When to Use This Skill
**Invoke this skill when:**
- Memory not persisting after `/clear`
- Viewer UI empty or not loading
- Worker service not running
- Database missing or corrupted
- Port conflicts
- Missing dependencies
- "Nothing is remembered" complaints
- Search results empty when they shouldn't be
**Do NOT invoke** for feature requests or usage questions (use regular documentation for that).
## Quick Decision Guide
Once the skill is loaded, choose the appropriate operation:
**What's the problem?**
- "Nothing is being remembered" → [operations/common-issues.md](operations/common-issues.md#nothing-remembered)
- "Viewer is empty" → [operations/common-issues.md](operations/common-issues.md#viewer-empty)
- "Worker won't start" → [operations/common-issues.md](operations/common-issues.md#worker-not-starting)
- "Want to run full diagnostics" → [operations/diagnostics.md](operations/diagnostics.md)
- "Need automated fix" → [operations/automated-fixes.md](operations/automated-fixes.md)
## Available Operations
Choose the appropriate operation file for detailed instructions:
### Diagnostic Workflows
1. **[Full System Diagnostics](operations/diagnostics.md)** - Comprehensive step-by-step diagnostic workflow
2. **[Worker Diagnostics](operations/worker.md)** - Bun worker-specific troubleshooting
3. **[Database Diagnostics](operations/database.md)** - Database integrity and data checks
### Issue Resolution
4. **[Common Issues](operations/common-issues.md)** - Quick fixes for frequently encountered problems
5. **[Automated Fixes](operations/automated-fixes.md)** - One-command fix sequences
### Reference
6. **[Quick Commands](operations/reference.md)** - Essential commands for troubleshooting
## Quick Start
**Fast automated fix (try this first):**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm run worker:stop; \
npm install && \
npm run worker:start && \
sleep 3 && \
curl -s http://127.0.0.1:37777/health
```
Expected output: `{"status":"ok"}`
If that doesn't work, proceed to detailed diagnostics.
## Response Format
When troubleshooting:
1. **Identify the symptom** - What's the user reporting?
2. **Choose operation file** - Use the decision guide above
3. **Follow steps systematically** - Don't skip diagnostic steps
4. **Report findings** - Tell user what you found and what was fixed
5. **Verify resolution** - Confirm the issue is resolved
## Technical Notes
- **Worker port:** Default 37777 (configurable via `CLAUDE_MEM_WORKER_PORT`)
- **Database location:** `~/.claude-mem/claude-mem.db`
- **Plugin location:** `~/.claude/plugins/marketplaces/thedotmack/`
- **Worker PID file:** `~/.claude-mem/worker.pid`
## Error Reporting
If troubleshooting doesn't resolve the issue, collect diagnostic data and direct the user to:
https://github.com/thedotmack/claude-mem/issues
See [operations/diagnostics.md](operations/diagnostics.md#reporting-issues) for details on what to collect.
@@ -1,206 +0,0 @@
# Automated Fix Sequences
One-command fix sequences for common claude-mem issues.
## Quick Fix: Complete Reset and Restart
**Use when:** General issues, worker not responding, after updates
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm run worker:stop; \
npm install && \
npm run worker:start && \
sleep 3 && \
curl -s http://127.0.0.1:37777/health
```
**Expected output:** `{"status":"ok"}`
**What it does:**
1. Stops the worker (if running)
2. Ensures dependencies are installed
3. Starts worker
4. Waits for startup
5. Verifies health
## Fix: Worker Not Running
**Use when:** Worker status shows it's not running
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm run worker:start && \
sleep 2 && \
npm run worker:status
```
**Expected output:** Worker running with PID and health OK
## Fix: Dependencies Missing
**Use when:** Worker won't start due to missing packages
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm install && \
npm run worker:restart
```
## Fix: Stale PID File
**Use when:** Worker reports running but health check fails
```bash
rm -f ~/.claude-mem/worker.pid && \
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm run worker:start && \
sleep 2 && \
curl -s http://127.0.0.1:37777/health
```
**Expected output:** `{"status":"ok"}`
## Fix: Port Conflict
**Use when:** Error shows port already in use
```bash
# Change to port 37778
mkdir -p ~/.claude-mem && \
echo '{"CLAUDE_MEM_WORKER_PORT":"37778"}' > ~/.claude-mem/settings.json && \
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm run worker:restart && \
sleep 2 && \
curl -s http://127.0.0.1:37778/health
```
**Expected output:** `{"status":"ok"}`
## Fix: Database Issues
**Use when:** Database appears corrupted or out of sync
```bash
# Backup and test integrity
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup && \
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;" && \
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm run worker:restart
```
**If integrity check fails, recreate database:**
```bash
# WARNING: This deletes all memory data
mv ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.old && \
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm run worker:restart
```
## Fix: Clean Reinstall
**Use when:** All else fails, nuclear option
```bash
# Backup data first
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup 2>/dev/null
# Stop worker
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:stop
# Clean PID file
rm -f ~/.claude-mem/worker.pid
# Reinstall dependencies
rm -rf node_modules && \
npm install
# Start worker
npm run worker:start && \
sleep 3 && \
curl -s http://127.0.0.1:37777/health
```
## Fix: Clear Old Logs
**Use when:** Want to start with fresh logs
```bash
# Archive old logs
tar -czf ~/.claude-mem/logs-archive-$(date +%Y-%m-%d).tar.gz ~/.claude-mem/logs/*.log 2>/dev/null
# Remove logs older than 7 days
find ~/.claude-mem/logs/ -name "worker-*.log" -mtime +7 -delete
# Restart worker for fresh log
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
```
**Note:** Logs auto-rotate daily, so manual cleanup is rarely needed.
## Verification Commands
**After running any fix, verify with these:**
```bash
# Check worker status
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:status
# Check health
curl -s http://127.0.0.1:37777/health
# Check database
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
# Check viewer
curl -s http://127.0.0.1:37777/api/stats
# Check logs for errors
grep -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log | tail -20
```
**All checks should pass:**
- Worker status: Shows PID and "Health: OK"
- Health endpoint: `{"status":"ok"}`
- Database: Shows count (may be 0 if new)
- Stats: Returns JSON with counts
- Logs: No recent errors
## One-Line Complete Diagnostic
**Quick health check:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && npm run worker:status && curl -s http://127.0.0.1:37777/health && echo " ✓ All systems OK"
```
## Troubleshooting the Fixes
**If automated fix fails:**
1. Run the diagnostic script from [diagnostics.md](diagnostics.md)
2. Check specific error in worker logs:
```bash
tail -50 ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```
3. Try manual worker start to see detailed error:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
bun plugin/scripts/worker-service.js
```
4. Use the bug report tool:
```bash
npm run bug-report
```
## Common Error Patterns and Fixes
| Error Pattern | Likely Cause | Quick Fix |
|---------------|--------------|-----------|
| `EADDRINUSE` | Port conflict | Change port in settings.json |
| `SQLITE_ERROR` | Database corruption | Run integrity check, recreate if needed |
| `ENOENT` | Missing files | Run `npm install` |
| `Module not found` | Dependency issue | Clean reinstall |
| Connection refused | Worker not running | `npm run worker:start` |
| Stale PID | Old PID file | Remove `~/.claude-mem/worker.pid` |
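The table above can be applied in one pass by grepping a worker log for all of the patterns at once (shown here against a sample line; the real log path comes from this guide):

```shell
# One regex covering the error patterns from the table above
patterns="EADDRINUSE|SQLITE_ERROR|ENOENT|Module not found|Connection refused"

# Against the real log:
#   grep -nE "$patterns" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Demo with a sample log line:
hits=$(echo "Error: listen EADDRINUSE :::37777" | grep -cE "$patterns")
echo "$hits"   # 1
```
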
@@ -1,237 +0,0 @@
# Common Issue Resolutions
Quick fixes for frequently encountered claude-mem problems.
## Issue: Nothing is Remembered After `/clear` {#nothing-remembered}
**Symptoms:**
- Data doesn't persist across sessions
- Context is empty after `/clear`
- Search returns no results for past work
**Root cause:** `/clear` marks sessions complete, but the data should still persist. A failure here suggests:
- Worker not processing observations
- Database not being written to
- Context hook not reading from database
**Fix:**
1. Verify worker is running:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:status
```
2. Check database has recent observations:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations WHERE created_at > datetime('now', '-1 day');"
```
3. Restart worker and start new session:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
```
4. Create a test observation: `/skill version-bump` then cancel
5. Check if observation appears in viewer:
```bash
open http://127.0.0.1:37777
# Or manually check database:
sqlite3 ~/.claude-mem/claude-mem.db "SELECT * FROM observations ORDER BY created_at DESC LIMIT 1;"
```
## Issue: Viewer Empty After Every Claude Restart {#viewer-empty}
**Symptoms:**
- Viewer shows no data at http://127.0.0.1:37777
- Stats endpoint returns all zeros
- Database appears empty in UI
**Root cause:**
- Database being recreated on startup (shouldn't happen)
- Worker reading from wrong database location
- Database permissions issue
**Fix:**
1. Check database file exists and has data:
```bash
ls -lh ~/.claude-mem/claude-mem.db
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
```
2. Check file permissions:
```bash
ls -la ~/.claude-mem/claude-mem.db
# Should be readable/writable by your user
```
3. Verify worker is using correct database path in logs:
```bash
grep "Database" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```
4. Test viewer connection manually:
```bash
curl -s http://127.0.0.1:37777/api/stats
# Should show non-zero counts if data exists
```
## Issue: Old Memory in Claude {#old-memory}
**Symptoms:**
- Context contains outdated observations
- Irrelevant past work appearing in sessions
- Context feels stale
**Root cause:** Context hook injecting stale observations
**Fix:**
1. Check the observation count setting:
```bash
grep CLAUDE_MEM_CONTEXT_OBSERVATIONS ~/.claude-mem/settings.json
```
2. Default is 50 observations - you can adjust this:
```json
{
"env": {
"CLAUDE_MEM_CONTEXT_OBSERVATIONS": "25"
}
}
```
3. Check database for actual observation dates:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, project, title FROM observations ORDER BY created_at DESC LIMIT 10;"
```
4. Consider filtering by project if working on multiple codebases
## Issue: Worker Not Starting {#worker-not-starting}
**Symptoms:**
- Worker status shows not running or error
- Health check fails
- Viewer not accessible
**Root cause:**
- Port already in use
- Bun not installed
- Missing dependencies
**Fix:**
1. Try manual worker start to see error:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
bun plugin/scripts/worker-service.js
# Should start server on port 37777 or show error
```
2. If port in use, change it:
```bash
mkdir -p ~/.claude-mem
echo '{"CLAUDE_MEM_WORKER_PORT":"37778"}' > ~/.claude-mem/settings.json
```
3. If dependencies missing:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm install
npm run worker:start
```
## Issue: Search Results Empty
**Symptoms:**
- Search skill returns no results
- API endpoints return empty arrays
- Know there's data but can't find it
**Root cause:**
- FTS5 tables not synchronized
- Wrong project filter
- Database not being queried correctly
**Fix:**
1. Check if observations exist in database:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
```
2. Check FTS5 table sync:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations_fts;"
# Should match observation count
```
3. Try search via API directly:
```bash
curl "http://127.0.0.1:37777/api/search/observations?query=test&format=index"
```
4. If FTS5 out of sync, restart worker (triggers reindex):
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
```
## Issue: Port Conflicts
**Symptoms:**
- Worker won't start
- Error: "EADDRINUSE: address already in use"
- Health check fails
**Fix:**
1. Check what's using port 37777:
```bash
lsof -i :37777
```
2. Either kill the conflicting process or change claude-mem port:
```bash
mkdir -p ~/.claude-mem
echo '{"CLAUDE_MEM_WORKER_PORT":"37778"}' > ~/.claude-mem/settings.json
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
```
## Issue: Database Corrupted
**Symptoms:**
- SQLite errors in logs
- Worker crashes on startup
- Queries fail
**Fix:**
1. Backup the database:
```bash
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup
```
2. Try to repair:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
3. If repair fails, recreate (loses data):
```bash
rm ~/.claude-mem/claude-mem.db
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
# Worker will create new database
```
## Prevention Tips
**Keep claude-mem healthy:**
- Regularly check viewer UI to see if observations are being captured
- Monitor database size (shouldn't grow unbounded)
- Update plugin when new versions are released
- Keep Claude Code updated
**Performance tuning:**
- Adjust `CLAUDE_MEM_CONTEXT_OBSERVATIONS` if context is too large/small
- Use `/clear` to mark sessions complete and start fresh
- Use search skill to query specific memories instead of loading everything
@@ -1,409 +0,0 @@
# Database Diagnostics
SQLite database troubleshooting for claude-mem.
## Database Overview
Claude-mem uses SQLite3 for persistent storage:
- **Location:** `~/.claude-mem/claude-mem.db`
- **Library:** bun:sqlite (native Bun SQLite, synchronous)
- **Features:** FTS5 full-text search, triggers, indexes
- **Tables:** observations, sessions, user_prompts, observations_fts, sessions_fts, prompts_fts
## Basic Database Checks
### Check Database Exists
```bash
# Check file exists
ls -lh ~/.claude-mem/claude-mem.db
# Check file size
du -h ~/.claude-mem/claude-mem.db
# Check permissions
ls -la ~/.claude-mem/claude-mem.db
```
**Expected:**
- File exists
- Size: 100KB - 10MB+ (depends on usage)
- Permissions: Readable/writable by your user
### Check Database Integrity
```bash
# Run integrity check
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
**Expected output:** `ok`
**If errors appear:**
- Database corrupted
- Backup immediately: `cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup`
- Consider recreating (data loss)
## Data Inspection
### Count Records
```bash
# Observation count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
# Session count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM sessions;"
# User prompt count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM user_prompts;"
# FTS5 table counts (should match main tables)
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations_fts;"
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM sessions_fts;"
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM prompts_fts;"
```
### View Recent Records
```bash
# Recent observations
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
created_at,
type,
title,
project
FROM observations
ORDER BY created_at DESC
LIMIT 10;
"
# Recent sessions
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
created_at,
request,
project
FROM sessions
ORDER BY created_at DESC
LIMIT 5;
"
# Recent user prompts
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
created_at,
prompt
FROM user_prompts
ORDER BY created_at DESC
LIMIT 10;
"
```
### Check Projects
```bash
# List all projects
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT DISTINCT project
FROM observations
ORDER BY project;
"
# Count observations per project
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
project,
COUNT(*) as count
FROM observations
GROUP BY project
ORDER BY count DESC;
"
```
## Database Schema
### View Table Structure
```bash
# List all tables
sqlite3 ~/.claude-mem/claude-mem.db ".tables"
# Show observations table schema
sqlite3 ~/.claude-mem/claude-mem.db ".schema observations"
# Show all schemas
sqlite3 ~/.claude-mem/claude-mem.db ".schema"
```
### Expected Tables
- `observations` - Main observation records
- `observations_fts` - FTS5 virtual table for full-text search
- `sessions` - Session summary records
- `sessions_fts` - FTS5 virtual table for session search
- `user_prompts` - User prompt records
- `prompts_fts` - FTS5 virtual table for prompt search
## FTS5 Synchronization
The FTS5 tables should stay synchronized with main tables via triggers.
### Check FTS5 Sync
```bash
# Compare counts
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
(SELECT COUNT(*) FROM observations) as observations,
(SELECT COUNT(*) FROM observations_fts) as observations_fts,
(SELECT COUNT(*) FROM sessions) as sessions,
(SELECT COUNT(*) FROM sessions_fts) as sessions_fts,
(SELECT COUNT(*) FROM user_prompts) as prompts,
(SELECT COUNT(*) FROM prompts_fts) as prompts_fts;
"
```
**Expected:** All pairs should match (observations = observations_fts, etc.)
### Fix FTS5 Desync
If FTS5 counts don't match, triggers may have failed. Restart worker to rebuild:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
```
The worker will rebuild FTS5 indexes on startup if they're out of sync.
## Common Database Issues
### Issue: Database Doesn't Exist
**Cause:** First run, or database was deleted
**Fix:** Database will be created automatically on first observation. No action needed.
### Issue: Database is Empty (0 Records)
**Cause:**
- New installation (normal)
- Data was deleted
- Worker not processing observations
**Fix:**
1. Create test observation (use any skill and cancel)
2. Check worker logs for errors:
```bash
tail -50 ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```
3. Verify observation appears in database
### Issue: Database Permission Denied
**Cause:** Incorrect file permissions, or the database is owned by a different user
**Fix:**
```bash
# Check ownership
ls -la ~/.claude-mem/claude-mem.db
# Fix permissions (if needed)
chmod 644 ~/.claude-mem/claude-mem.db
# If the file is owned by another user, chown may require sudo
chown $USER ~/.claude-mem/claude-mem.db
```
### Issue: Database Locked
**Cause:**
- Multiple processes accessing database
- Crash left lock file
- Long-running transaction
**Fix:**
```bash
# Check for lock file
ls -la ~/.claude-mem/claude-mem.db-wal
ls -la ~/.claude-mem/claude-mem.db-shm
# Remove lock files (only if worker is stopped!)
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:stop
rm ~/.claude-mem/claude-mem.db-wal ~/.claude-mem/claude-mem.db-shm
npm run worker:start
```
### Issue: Database Growing Too Large
**Cause:** Too many observations accumulated
**Check size:**
```bash
du -h ~/.claude-mem/claude-mem.db
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
```
**Options:**
1. Delete old observations (manual cleanup):
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
DELETE FROM observations
WHERE created_at < datetime('now', '-90 days');
"
```
2. Vacuum to reclaim space:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "VACUUM;"
```
3. Archive and start fresh:
```bash
mv ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.archive
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
```
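Options 1 and 2 above can be combined into a single retention helper. The 90-day default window and the database path are assumptions you can adjust; always back up before running it.

```shell
#!/bin/bash
# Prune observations older than N days (default 90), then vacuum to reclaim space.
prune_observations() {
  local db="$1" days="${2:-90}"
  if [ ! -f "$db" ]; then
    echo "skip: $db not found"
    return 1
  fi
  sqlite3 "$db" "DELETE FROM observations WHERE created_at < datetime('now', '-$days days');"
  sqlite3 "$db" "VACUUM;"
  echo "pruned: observations older than $days days"
}

# Example (back up first!):
# cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup
# prune_observations ~/.claude-mem/claude-mem.db 90
```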
## Database Recovery
### Backup Database
**Before any destructive operations:**
```bash
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup
```
### Restore from Backup
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:stop
cp ~/.claude-mem/claude-mem.db.backup ~/.claude-mem/claude-mem.db
npm run worker:start
```
### Export Data
Export to JSON for safekeeping:
```bash
# Export observations
sqlite3 ~/.claude-mem/claude-mem.db -json "SELECT * FROM observations;" > observations.json
# Export sessions
sqlite3 ~/.claude-mem/claude-mem.db -json "SELECT * FROM sessions;" > sessions.json
# Export prompts
sqlite3 ~/.claude-mem/claude-mem.db -json "SELECT * FROM user_prompts;" > prompts.json
```
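The three exports above can be looped into one helper that writes a dated export directory. The table list matches the schema described earlier; the `-json` output mode requires a reasonably recent sqlite3.

```shell
#!/bin/bash
# Export each main table to a JSON file in the given directory.
export_tables() {
  local db="$1" outdir="$2"
  if [ ! -f "$db" ]; then
    echo "skip: $db not found"
    return 1
  fi
  mkdir -p "$outdir"
  for table in observations sessions user_prompts; do
    sqlite3 "$db" -json "SELECT * FROM $table;" > "$outdir/$table.json"
  done
  echo "exported to $outdir"
}

# Example:
# export_tables ~/.claude-mem/claude-mem.db ~/claude-mem-export-$(date +%Y-%m-%d)
```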
### Recreate Database
**WARNING: Data loss. Backup first!**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
# Stop worker
npm run worker:stop
# Backup current database
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.old
# Delete database
rm ~/.claude-mem/claude-mem.db
# Start worker (creates new database)
npm run worker:start
```
## Database Statistics
### Storage Analysis
```bash
# Database file size
du -h ~/.claude-mem/claude-mem.db
# Record counts by type
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
type,
COUNT(*) as count
FROM observations
GROUP BY type
ORDER BY count DESC;
"
# Observations per month
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
strftime('%Y-%m', created_at) as month,
COUNT(*) as count
FROM observations
GROUP BY month
ORDER BY month DESC;
"
# Average observation size (characters)
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
AVG(LENGTH(content)) as avg_content_length,
MAX(LENGTH(content)) as max_content_length
FROM observations;
"
```
## Advanced Queries
### Find Specific Observations
```bash
# Search by keyword (FTS5)
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT title, created_at
FROM observations_fts
WHERE observations_fts MATCH 'authentication'
ORDER BY created_at DESC;
"
# Find by type
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT title, created_at
FROM observations
WHERE type = 'bugfix'
ORDER BY created_at DESC
LIMIT 10;
"
# Find by file path
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT title, created_at
FROM observations
WHERE file_path LIKE '%auth%'
ORDER BY created_at DESC;
"
```
## Database Maintenance
### Regular Maintenance Tasks
```bash
# Analyze for query optimization
sqlite3 ~/.claude-mem/claude-mem.db "ANALYZE;"
# Rebuild FTS5 indexes
sqlite3 ~/.claude-mem/claude-mem.db "
INSERT INTO observations_fts(observations_fts) VALUES('rebuild');
INSERT INTO sessions_fts(sessions_fts) VALUES('rebuild');
INSERT INTO prompts_fts(prompts_fts) VALUES('rebuild');
"
# Vacuum to reclaim space
sqlite3 ~/.claude-mem/claude-mem.db "VACUUM;"
```
**Run monthly to keep database healthy.**
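The three maintenance commands above can be bundled into one function suitable for scheduling. This is a sketch: the FTS5 rebuild statements mirror the ones shown above, and the database path is the claude-mem default.

```shell
#!/bin/bash
# Run the monthly maintenance pass: ANALYZE, FTS5 rebuild, VACUUM.
maintain_db() {
  local db="$1"
  if [ ! -f "$db" ]; then
    echo "skip: $db not found"
    return 1
  fi
  sqlite3 "$db" "ANALYZE;"
  sqlite3 "$db" "
    INSERT INTO observations_fts(observations_fts) VALUES('rebuild');
    INSERT INTO sessions_fts(sessions_fts) VALUES('rebuild');
    INSERT INTO prompts_fts(prompts_fts) VALUES('rebuild');
  "
  sqlite3 "$db" "VACUUM;"
  echo "maintenance complete"
}

# To schedule: save this as a script that ends with
#   maintain_db ~/.claude-mem/claude-mem.db
# then add a crontab entry (hypothetical path), e.g. 1st of the month at 3am:
#   0 3 1 * * bash ~/.claude-mem/maintain.sh
```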
@@ -1,309 +0,0 @@
# Full System Diagnostics
Comprehensive step-by-step diagnostic workflow for claude-mem issues.
## Diagnostic Workflow
Run these checks systematically to identify the root cause:
### 1. Check Worker Status
First, verify if the worker service is running:
```bash
# Check worker status using npm script
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:status
# Or check health endpoint directly
curl -s http://127.0.0.1:37777/health
```
**Expected output from npm run worker:status:**
```
✓ Worker is running (PID: 12345)
Port: 37777
Uptime: 45m
Health: OK
```
**Expected output from health endpoint:** `{"status":"ok"}`
**If worker not running:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:start
```
**If health endpoint fails but worker reports running:**
Check for stale PID file:
```bash
cat ~/.claude-mem/worker.pid
ps -p $(cat ~/.claude-mem/worker.pid 2>/dev/null | grep -o '"pid":[0-9]*' | grep -o '[0-9]*') 2>/dev/null || echo "Stale PID - worker not actually running"
# Only if the PID is stale: remove the file and restart
rm ~/.claude-mem/worker.pid
npm run worker:start
```
### 2. Check Worker Service Health
Test if the worker service responds to HTTP requests:
```bash
# Default port is 37777
curl -s http://127.0.0.1:37777/health
# Check custom port from settings
PORT=$(cat ~/.claude-mem/settings.json 2>/dev/null | grep CLAUDE_MEM_WORKER_PORT | grep -o '[0-9]\+' || echo "37777")
curl -s http://127.0.0.1:$PORT/health
```
**Expected output:** `{"status":"ok"}`
**If connection refused:**
- Worker not running → Go back to step 1
- Port conflict → Check what's using the port:
```bash
lsof -i :37777 || netstat -tlnp | grep 37777
```
### 3. Check Database
Verify the database exists and contains data:
```bash
# Check if database file exists
ls -lh ~/.claude-mem/claude-mem.db
# Check database size (should be > 0 bytes)
du -h ~/.claude-mem/claude-mem.db
# Query database for observation count (requires sqlite3)
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) as observation_count FROM observations;" 2>&1
# Query for session count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) as session_count FROM sessions;" 2>&1
# Check recent observations
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, type, title FROM observations ORDER BY created_at DESC LIMIT 5;" 2>&1
```
**Expected:**
- Database file exists (typically 100KB - 10MB+)
- Contains observations and sessions
- Recent observations visible
**If database missing or empty:**
- New installation - this is normal, database will populate as you work
- After `/clear` - sessions are marked complete but not deleted, data should persist
- Corrupted database - backup and recreate:
```bash
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup
# Worker will recreate on next observation
```
### 4. Check Dependencies Installation
Verify all required npm packages are installed:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
# Check for critical packages
ls node_modules/@anthropic-ai/claude-agent-sdk 2>&1 | head -1
ls node_modules/express 2>&1 | head -1
# Check if Bun is available
bun --version 2>&1
```
**Expected:** All critical packages present, Bun installed
**If dependencies missing:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm install
```
### 5. Check Worker Logs
Review recent worker logs for errors:
```bash
# View logs using npm script
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:logs
# View today's log file directly
cat ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Last 50 lines
tail -50 ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Check for specific errors
grep -iE "error|exception|failed" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log | tail -20
```
**Common error patterns to look for:**
- `SQLITE_ERROR` - Database issues
- `EADDRINUSE` - Port conflict
- `ENOENT` - Missing files
- `Module not found` - Dependency issues
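The error patterns above can be counted in one pass, which makes it easy to see which failure class dominates a log. The pattern strings match the list above; the log path is the claude-mem default.

```shell
#!/bin/bash
# Count occurrences of each known error pattern in a worker log file.
scan_log() {
  local log="$1"
  if [ ! -f "$log" ]; then
    echo "skip: $log not found"
    return 1
  fi
  for pattern in SQLITE_ERROR EADDRINUSE ENOENT "Module not found"; do
    echo "$pattern: $(grep -c "$pattern" "$log")"
  done
}

# Example:
# scan_log ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```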
### 6. Test Viewer UI
Check if the web viewer is accessible:
```bash
# Test viewer endpoint
curl -s http://127.0.0.1:37777/ | head -20
# Test stats endpoint
curl -s http://127.0.0.1:37777/api/stats
```
**Expected:**
- `/` returns HTML page with React viewer
- `/api/stats` returns JSON with database counts
### 7. Check Port Configuration
Verify port settings and availability:
```bash
# Check if custom port is configured
cat ~/.claude-mem/settings.json 2>/dev/null
cat ~/.claude/settings.json 2>/dev/null
# Check what's listening on default port
lsof -i :37777 2>&1 || netstat -tlnp 2>&1 | grep 37777
# Test connectivity
nc -zv 127.0.0.1 37777 2>&1
```
## Full System Diagnosis Script
Run this comprehensive diagnostic script to collect all information:
```bash
#!/bin/bash
echo "=== Claude-Mem Troubleshooting Report ==="
echo ""
echo "1. Environment"
echo " OS: $(uname -s)"
echo " Node version: $(node --version 2>/dev/null || echo 'N/A')"
echo " Bun version: $(bun --version 2>/dev/null || echo 'N/A')"
echo ""
echo "2. Plugin Installation"
echo " Plugin directory exists: $([ -d ~/.claude/plugins/marketplaces/thedotmack ] && echo 'YES' || echo 'NO')"
echo " Package version: $(grep '"version"' ~/.claude/plugins/marketplaces/thedotmack/package.json 2>/dev/null | head -1)"
echo ""
echo "3. Database"
echo " Database exists: $([ -f ~/.claude-mem/claude-mem.db ] && echo 'YES' || echo 'NO')"
echo " Database size: $(du -h ~/.claude-mem/claude-mem.db 2>/dev/null | cut -f1)"
echo " Observation count: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM observations;' 2>/dev/null || echo 'N/A')"
echo " Session count: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM sessions;' 2>/dev/null || echo 'N/A')"
echo ""
echo "4. Worker Service"
echo " Worker PID file: $([ -f ~/.claude-mem/worker.pid ] && echo 'EXISTS' || echo 'MISSING')"
if [ -f ~/.claude-mem/worker.pid ]; then
WORKER_PID=$(cat ~/.claude-mem/worker.pid 2>/dev/null | grep -o '"pid":[0-9]*' | grep -o '[0-9]*')
echo " Worker PID: $WORKER_PID"
echo " Process running: $(ps -p $WORKER_PID >/dev/null 2>&1 && echo 'YES' || echo 'NO (stale PID)')"
fi
echo " Health check: $(curl -s http://127.0.0.1:37777/health 2>/dev/null || echo 'FAILED')"
echo ""
echo "5. Configuration"
echo " Port setting: $(cat ~/.claude-mem/settings.json 2>/dev/null | grep CLAUDE_MEM_WORKER_PORT || echo 'default (37777)')"
echo " Observation count: $(cat ~/.claude-mem/settings.json 2>/dev/null | grep CLAUDE_MEM_CONTEXT_OBSERVATIONS || echo 'default (50)')"
echo " Model: $(cat ~/.claude-mem/settings.json 2>/dev/null | grep CLAUDE_MEM_MODEL || echo 'default (claude-sonnet-4-5)')"
echo ""
echo "6. Recent Activity"
echo " Latest observation: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT created_at FROM observations ORDER BY created_at DESC LIMIT 1;' 2>/dev/null || echo 'N/A')"
echo " Latest session: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT created_at FROM sessions ORDER BY created_at DESC LIMIT 1;' 2>/dev/null || echo 'N/A')"
echo ""
echo "7. Logs"
echo " Today's log file: $([ -f ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log ] && echo 'EXISTS' || echo 'MISSING')"
echo " Log file size: $(du -h ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log 2>/dev/null | cut -f1 || echo 'N/A')"
echo " Recent errors: $(grep -c -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log 2>/dev/null || echo '0')"
echo ""
echo "=== End Report ==="
```
Save this as `/tmp/claude-mem-diagnostics.sh` and run:
```bash
bash /tmp/claude-mem-diagnostics.sh
```
## Quick Diagnostic One-Liners
```bash
# Full status check
npm run worker:status && curl -s http://127.0.0.1:37777/health && echo " - All systems OK"
# Database stats
echo "DB: $(du -h ~/.claude-mem/claude-mem.db | cut -f1) | Obs: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM observations;' 2>/dev/null) | Sessions: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM sessions;' 2>/dev/null)"
# Recent errors
grep -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log 2>/dev/null | tail -5 || echo "No recent errors"
# Port check
lsof -i :37777 || echo "Port 37777 is free"
# Worker process check
ps aux | grep -E "bun.*worker-service" | grep -v grep || echo "Worker not running"
```
## Automated Fix Sequence
If diagnostics show issues, run this automated fix sequence:
```bash
#!/bin/bash
echo "Running automated fix sequence..."
# 1. Stop worker if running
echo "1. Stopping worker..."
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:stop
# 2. Clean stale PID if exists
echo "2. Cleaning stale PID file..."
rm -f ~/.claude-mem/worker.pid
# 3. Reinstall dependencies
echo "3. Reinstalling dependencies..."
npm install
# 4. Start worker
echo "4. Starting worker..."
npm run worker:start
# 5. Wait for startup
echo "5. Waiting for worker to start..."
sleep 3
# 6. Verify health
echo "6. Verifying health..."
curl -s http://127.0.0.1:37777/health || echo "Worker health check FAILED"
echo "Fix sequence complete!"
```
## Reporting Issues
If troubleshooting doesn't resolve the issue, run the built-in bug report tool:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run bug-report
```
This will collect:
1. Full diagnostic report
2. Worker logs
3. System information
4. Configuration details
5. Database stats
Post the generated report to: https://github.com/thedotmack/claude-mem/issues
@@ -1,207 +0,0 @@
# Quick Commands Reference
Essential commands for troubleshooting claude-mem.
## Worker Management
```bash
# Check worker status
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:status
# Start worker
npm run worker:start
# Restart worker
npm run worker:restart
# Stop worker
npm run worker:stop
# View logs
npm run worker:logs
# View today's log file
cat ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Last 50 lines
tail -50 ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Follow logs in real-time
tail -f ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```
## Health Checks
```bash
# Check worker health (default port)
curl -s http://127.0.0.1:37777/health
# Check viewer stats
curl -s http://127.0.0.1:37777/api/stats
# Open viewer in browser
open http://127.0.0.1:37777
# Test custom port
PORT=37778
curl -s http://127.0.0.1:$PORT/health
```
## Database Queries
```bash
# Observation count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
# Session count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM sessions;"
# Recent observations
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, type, title FROM observations ORDER BY created_at DESC LIMIT 10;"
# Recent sessions
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, request FROM sessions ORDER BY created_at DESC LIMIT 5;"
# Database size
du -h ~/.claude-mem/claude-mem.db
# Database integrity check
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
# Projects in database
sqlite3 ~/.claude-mem/claude-mem.db "SELECT DISTINCT project FROM observations ORDER BY project;"
```
## Configuration
```bash
# View current settings
cat ~/.claude-mem/settings.json
cat ~/.claude/settings.json
# Change worker port
# Note: this replaces the whole file — merge by hand if you have other keys
echo '{"CLAUDE_MEM_WORKER_PORT":"37778"}' > ~/.claude-mem/settings.json
# Change context observation count
# Edit ~/.claude-mem/settings.json and add:
#   { "CLAUDE_MEM_CONTEXT_OBSERVATIONS": "25" }
# Change AI model
# Edit ~/.claude-mem/settings.json and add:
#   { "CLAUDE_MEM_MODEL": "claude-sonnet-4-5" }
```
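Writing a one-key JSON object to `settings.json` replaces any other keys already in the file. A hedged alternative, assuming `python3` is available, merges a single key into the existing settings instead:

```shell
#!/bin/bash
# Merge one key into a JSON settings file without clobbering other keys.
merge_setting() {
  local file="$1" key="$2" value="$3"
  python3 - "$file" "$key" "$value" <<'PY'
import json, os, sys

path, key, value = sys.argv[1:4]
data = {}
if os.path.exists(path):
    with open(path) as f:
        data = json.load(f)
data[key] = value  # set or overwrite just this key
with open(path, "w") as f:
    json.dump(data, f, indent=2)
PY
}

# Example: change the worker port while keeping existing settings
# merge_setting ~/.claude-mem/settings.json CLAUDE_MEM_WORKER_PORT 37778
```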
## Plugin Management
```bash
# Navigate to plugin directory
cd ~/.claude/plugins/marketplaces/thedotmack/
# Check plugin version
grep '"version"' package.json
# Reinstall dependencies
npm install
# View package.json
cat package.json
```
## Port Diagnostics
```bash
# Check what's using port 37777
lsof -i :37777
netstat -tlnp | grep 37777
# Test port connectivity
nc -zv 127.0.0.1 37777
curl -v http://127.0.0.1:37777/health
```
## Log Analysis
```bash
# Search logs for errors
grep -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Search for specific keyword
grep "keyword" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Search across all log files
grep -i "error" ~/.claude-mem/logs/worker-*.log
# Last 100 error lines
grep -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log | tail -100
# Follow logs in real-time
tail -f ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```
## File Locations
```bash
# Plugin directory
~/.claude/plugins/marketplaces/thedotmack/
# Database
~/.claude-mem/claude-mem.db
# Settings
~/.claude-mem/settings.json
~/.claude/settings.json
# Chroma vector database
~/.claude-mem/chroma/
# Worker logs (daily rotation)
~/.claude-mem/logs/worker-*.log
# Worker PID file
~/.claude-mem/worker.pid
```
## System Information
```bash
# OS version
uname -a
# Node version
node --version
# NPM version
npm --version
# Bun version
bun --version
# SQLite version
sqlite3 --version
# Check disk space
df -h ~/.claude-mem/
```
## One-Line Diagnostics
```bash
# Full worker status check
npm run worker:status && curl -s http://127.0.0.1:37777/health
# Quick health check
curl -s http://127.0.0.1:37777/health && echo " - Worker is healthy"
# Database stats
echo "Observations: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM observations;')" && echo "Sessions: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM sessions;')"
# Recent errors
grep -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log | tail -10
# Port check
lsof -i :37777 || echo "Port 37777 is free"
```
@@ -1,362 +0,0 @@
# Worker Service Diagnostics
Bun worker-specific troubleshooting for claude-mem.
## Worker Overview
The claude-mem worker is a persistent background service managed by Bun. It:
- Runs Express.js server on port 37777 (default)
- Processes observations asynchronously
- Serves the viewer UI
- Provides search API endpoints
## Check Worker Status
### Basic Status Check
```bash
# Check worker status using npm script
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:status
# Or check health endpoint directly
curl -s http://127.0.0.1:37777/health
```
**Expected npm run worker:status output:**
```
✓ Worker is running (PID: 12345)
Port: 37777
Uptime: 45m
Health: OK
```
**Expected health endpoint output:**
```json
{"status":"ok"}
```
**Status indicators:**
- `Worker is running` - Worker running correctly
- `Worker is not running` - Worker stopped or crashed
- Connection refused - Worker not running
- Timeout - Worker hung (restart needed)
### Detailed Worker Info
```bash
# View PID file
cat ~/.claude-mem/worker.pid
# Check process details
ps aux | grep "bun.*worker-service"
```
## Worker Health Endpoint
The worker exposes a health endpoint at `/health`:
```bash
# Check health (default port)
curl -s http://127.0.0.1:37777/health
# With custom port
PORT=$(grep CLAUDE_MEM_WORKER_PORT ~/.claude-mem/settings.json | grep -o '[0-9]\+' || echo "37777")
curl -s http://127.0.0.1:$PORT/health
```
**Expected response:** `{"status":"ok"}`
**Error responses:**
- Connection refused - Worker not running
- Timeout - Worker hung (restart needed)
- Empty response - Worker crashed mid-request
## Worker Logs
### View Recent Logs
```bash
# View logs using npm script
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:logs
# View today's log file directly
cat ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Last 50 lines of today's log
tail -50 ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Follow logs in real-time
tail -f ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```
### Search Logs for Errors
```bash
# Find errors in today's log
grep -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Find exceptions
grep -i "exception" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Find failed requests
grep -i "failed" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# All error patterns
grep -iE "error|exception|failed|crash" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# Search across all log files
grep -iE "error|exception|failed|crash" ~/.claude-mem/logs/worker-*.log
```
### Common Log Patterns
**Good startup:**
```
Worker service started on port 37777
Database initialized
Express server listening
```
**Database errors:**
```
Error: SQLITE_ERROR
Error initializing database
Database locked
```
**Port conflicts:**
```
Error: listen EADDRINUSE
Port 37777 already in use
```
**Crashes:**
```
Worker process exited with code 1
Worker restarting...
```
## Starting the Worker
### Basic Start
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:start
```
### Force Restart
```bash
# Restart worker (stops and starts)
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:restart
# Or manually stop and start
npm run worker:stop
npm run worker:start
```
## Stopping the Worker
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm run worker:stop
```
## Worker Not Starting
### Diagnostic Steps
1. **Try manual start to see error:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
bun plugin/scripts/worker-service.js
```
This runs the worker directly, showing full error output.
2. **Check Bun installation:**
```bash
which bun
bun --version
```
If Bun is not found, run `npm install` (it auto-installs Bun).
3. **Check dependencies:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
ls node_modules/@anthropic-ai/claude-agent-sdk
ls node_modules/express
```
4. **Check port availability:**
```bash
lsof -i :37777
```
If the port is in use, either kill that process or change the claude-mem port.
5. **Check PID file:**
```bash
cat ~/.claude-mem/worker.pid
```
If the PID file exists but the process is dead, remove the stale PID file:
```bash
rm ~/.claude-mem/worker.pid
npm run worker:start
```
### Common Fixes
**Dependencies missing:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm install
npm run worker:start
```
**Port conflict:**
```bash
echo '{"CLAUDE_MEM_WORKER_PORT":"37778"}' > ~/.claude-mem/settings.json
npm run worker:restart
```
**Stale PID file:**
```bash
rm ~/.claude-mem/worker.pid
npm run worker:start
```
## Worker Crashing Repeatedly
If the worker keeps restarting (check the logs for repeated startup messages):
### Find the Cause
1. **Check error logs:**
```bash
grep -i "error" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log | tail -100
```
2. **Look for crash pattern:**
```bash
grep -A 5 "exited with code" ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
```
3. **Run worker in foreground to see crashes:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
bun plugin/scripts/worker-service.js
```
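Step 2 above can be made quantitative: count exit events in the log and flag a crash loop when they pile up. The `exited with code` string matches the crash pattern shown in the log-patterns section; the threshold of 3 is an arbitrary assumption.

```shell
#!/bin/bash
# Count crash/restart events recorded in a worker log file.
crash_count() {
  local log="$1"
  if [ ! -f "$log" ]; then
    echo 0
    return 1
  fi
  grep -c "exited with code" "$log"
}

# Example: flag a crash loop if more than 3 exits appear in today's log
# [ "$(crash_count ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log)" -gt 3 ] && echo "crash loop suspected"
```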
### Common Crash Causes
**Database corruption:**
```bash
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
If the check fails, back up and recreate the database.
**Out of memory:**
Check whether the database has grown too large or the worker has a memory leak. Restart:
```bash
npm run worker:restart
```
**Port conflict race condition:**
Another process may be grabbing the port intermittently. Change the port:
```bash
echo '{"CLAUDE_MEM_WORKER_PORT":"37778"}' > ~/.claude-mem/settings.json
npm run worker:restart
```
## Worker Management Commands
```bash
# Check status
npm run worker:status
# Start worker
npm run worker:start
# Stop worker
npm run worker:stop
# Restart worker
npm run worker:restart
# View logs
npm run worker:logs
# Check health endpoint
curl -s http://127.0.0.1:37777/health
# View PID
cat ~/.claude-mem/worker.pid
# View today's log file
cat ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# List all log files
ls -lh ~/.claude-mem/logs/worker-*.log
```
## Log File Management
Worker logs are stored in `~/.claude-mem/logs/` with daily rotation:
```bash
# View today's log
cat ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log
# View yesterday's log
cat ~/.claude-mem/logs/worker-$(date -d "yesterday" +%Y-%m-%d).log # Linux
cat ~/.claude-mem/logs/worker-$(date -v-1d +%Y-%m-%d).log # macOS
# List all logs
ls -lh ~/.claude-mem/logs/
# Clean old logs (older than 7 days)
find ~/.claude-mem/logs/ -name "worker-*.log" -mtime +7 -delete
# Archive logs
tar -czf ~/claude-mem-logs-backup-$(date +%Y-%m-%d).tar.gz ~/.claude-mem/logs/
```
**Note:** Logs auto-rotate daily. No manual flush required.
## Testing Worker Endpoints
Once the worker is running, test all endpoints:
```bash
# Health check
curl -s http://127.0.0.1:37777/health
# Viewer HTML
curl -s http://127.0.0.1:37777/ | head -20
# Stats API
curl -s http://127.0.0.1:37777/api/stats
# Search API
curl -s "http://127.0.0.1:37777/api/search?query=test&limit=5"
# Recent context
curl -s "http://127.0.0.1:37777/api/context/recent?limit=3"
```
All should return appropriate responses (HTML for viewer, JSON for APIs).
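The endpoint checks above can be wrapped into a pass/fail probe. This is a sketch: it assumes the default port, treats only HTTP 200 as a pass, and the endpoint list mirrors the one above.

```shell
#!/bin/bash
# Probe an endpoint and report PASS/FAIL based on the HTTP status code.
probe() {
  local url="$1" status
  status=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" 2>/dev/null)
  if [ "$status" = "200" ]; then
    echo "PASS $url"
  else
    echo "FAIL $url (status: ${status:-none})"
  fi
}

# Example run over the endpoints listed above (default port assumed):
# for path in /health / /api/stats "/api/search?query=test&limit=5" "/api/context/recent?limit=3"; do
#   probe "http://127.0.0.1:37777$path"
# done
```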
## Troubleshooting Quick Reference
| Problem | Command | Expected Result |
|---------|---------|----------------|
| Check if running | `npm run worker:status` | Shows PID and uptime |
| Worker not running | `npm run worker:start` | Worker starts successfully |
| Worker crashed | `npm run worker:restart` | Worker restarts |
| View recent errors | `grep -i error ~/.claude-mem/logs/worker-$(date +%Y-%m-%d).log \| tail -20` | Shows recent errors |
| Port in use | `lsof -i :37777` | Shows process using port |
| Stale PID | `rm ~/.claude-mem/worker.pid && npm run worker:start` | Removes stale PID and starts |
| Dependencies missing | `npm install && npm run worker:start` | Installs deps and starts |
@@ -191,28 +191,11 @@ async function buildHooks() {
console.log(`${hook.name} built (${sizeInKB} KB)`);
}
// Build mem-search skill zip for Claude Desktop
console.log('\n📦 Building mem-search skill zip for Claude Desktop...');
const { execSync } = await import('child_process');
const zipOutput = 'plugin/skills/mem-search.zip';
// Remove old zip if exists
if (fs.existsSync(zipOutput)) {
fs.unlinkSync(zipOutput);
}
// Create zip from mem-search skill directory
execSync(`cd plugin/skills && zip -r mem-search.zip mem-search/`, { stdio: 'pipe' });
const zipStats = fs.statSync(zipOutput);
console.log(`✓ mem-search.zip built (${(zipStats.size / 1024).toFixed(2)} KB)`);
console.log('\n✅ All hooks, worker service, and MCP server built successfully!');
console.log(` Output: ${hooksDir}/`);
console.log(` - Hooks: *-hook.js`);
console.log(` - Worker: worker-service.cjs`);
console.log(` - MCP Server: mcp-server.cjs`);
console.log(` - Skills: plugin/skills/`);
console.log(` - Desktop Skill: plugin/skills/mem-search.zip`);
console.log('\n💡 Note: Dependencies will be auto-installed on first hook execution');
} catch (error) {
@@ -15,14 +15,14 @@ function main() {
console.log('Finding duplicate observations...');
const duplicateObsQuery = db['db'].prepare(`
SELECT sdk_session_id, title, subtitle, type, COUNT(*) as count, GROUP_CONCAT(id) as ids
SELECT memory_session_id, title, subtitle, type, COUNT(*) as count, GROUP_CONCAT(id) as ids
FROM observations
GROUP BY sdk_session_id, title, subtitle, type
GROUP BY memory_session_id, title, subtitle, type
HAVING count > 1
`);
const duplicateObs = duplicateObsQuery.all() as Array<{
sdk_session_id: string;
memory_session_id: string;
title: string;
subtitle: string;
type: string;
@@ -50,14 +50,14 @@ function main() {
console.log('\n\nFinding duplicate summaries...');
const duplicateSumQuery = db['db'].prepare(`
SELECT sdk_session_id, request, completed, learned, COUNT(*) as count, GROUP_CONCAT(id) as ids
SELECT memory_session_id, request, completed, learned, COUNT(*) as count, GROUP_CONCAT(id) as ids
FROM session_summaries
GROUP BY sdk_session_id, request, completed, learned
GROUP BY memory_session_id, request, completed, learned
HAVING count > 1
`);
const duplicateSum = duplicateSumQuery.all() as Array<{
sdk_session_id: string;
memory_session_id: string;
request: string;
completed: string;
learned: string;
@@ -37,76 +37,7 @@ const WORKER_BASE_URL = `http://${WORKER_HOST}:${WORKER_PORT}`;
*/
const TOOL_ENDPOINT_MAP: Record<string, string> = {
'search': '/api/search',
'timeline': '/api/timeline',
'get_recent_context': '/api/context/recent',
'get_context_timeline': '/api/context/timeline',
'help': '/api/instructions'
};
/**
* Detailed parameter schemas for each tool
*/
const TOOL_SCHEMAS: Record<string, any> = {
search: {
query: { type: 'string', description: 'Full-text search query' },
type: { type: 'string', description: 'Filter by type: tool_use, tool_result, prompt, summary' },
obs_type: { type: 'string', description: 'Observation type filter' },
concepts: { type: 'string', description: 'Comma-separated concept tags' },
files: { type: 'string', description: 'Comma-separated file paths' },
project: { type: 'string', description: 'Project name filter' },
dateStart: { type: ['string', 'number'], description: 'Start date (ISO or timestamp)' },
dateEnd: { type: ['string', 'number'], description: 'End date (ISO or timestamp)' },
limit: { type: 'number', description: 'Max results (default: 10)' },
offset: { type: 'number', description: 'Result offset for pagination' },
orderBy: { type: 'string', description: 'Sort order: created_at, relevance' }
},
timeline: {
query: { type: 'string', description: 'Search query to find anchor point' },
anchor: { type: 'number', description: 'Observation ID as timeline center' },
depth_before: { type: 'number', description: 'Observations before anchor (default: 5)' },
depth_after: { type: 'number', description: 'Observations after anchor (default: 5)' },
type: { type: 'string', description: 'Filter by type' },
concepts: { type: 'string', description: 'Comma-separated concept tags' },
files: { type: 'string', description: 'Comma-separated file paths' },
project: { type: 'string', description: 'Project name filter' }
},
get_recent_context: {
limit: { type: 'number', description: 'Max results (default: 20)' },
type: { type: 'string', description: 'Filter by type' },
concepts: { type: 'string', description: 'Comma-separated concept tags' },
files: { type: 'string', description: 'Comma-separated file paths' },
project: { type: 'string', description: 'Project name filter' },
dateStart: { type: ['string', 'number'], description: 'Start date' },
dateEnd: { type: ['string', 'number'], description: 'End date' }
},
get_context_timeline: {
anchor: { type: 'number', description: 'Observation ID (required)', required: true },
depth_before: { type: 'number', description: 'Observations before anchor' },
depth_after: { type: 'number', description: 'Observations after anchor' },
type: { type: 'string', description: 'Filter by type' },
concepts: { type: 'string', description: 'Comma-separated concept tags' },
files: { type: 'string', description: 'Comma-separated file paths' },
project: { type: 'string', description: 'Project name filter' }
},
get_observations: {
ids: { type: 'array', items: { type: 'number' }, description: 'Array of observation IDs (required)', required: true },
orderBy: { type: 'string', description: 'Sort order' },
limit: { type: 'number', description: 'Max results' },
project: { type: 'string', description: 'Project filter' }
},
help: {
operation: { type: 'string', description: 'Operation type: "observations", "timeline", "sessions", etc.' },
topic: { type: 'string', description: 'Specific topic for help' }
},
get_observation: {
id: { type: 'number', description: 'Observation ID (required)', required: true }
},
get_session: {
id: { type: 'number', description: 'Session ID (required)', required: true }
},
get_prompt: {
id: { type: 'number', description: 'Prompt ID (required)', required: true }
}
'timeline': '/api/timeline'
};
/**
@@ -154,47 +85,6 @@ async function callWorkerAPI(
}
}
/**
* Call Worker HTTP API with path parameter (GET)
*/
async function callWorkerAPIWithPath(
endpoint: string,
id: number
): Promise<{ content: Array<{ type: 'text'; text: string }>; isError?: boolean }> {
logger.debug('HTTP', 'Worker API request (path)', undefined, { endpoint, id });
try {
const url = `${WORKER_BASE_URL}${endpoint}/${id}`;
const response = await fetch(url);
if (!response.ok) {
const errorText = await response.text();
throw new Error(`Worker API error (${response.status}): ${errorText}`);
}
const data = await response.json();
logger.debug('HTTP', 'Worker API success (path)', undefined, { endpoint, id });
// Wrap raw data in MCP format
return {
content: [{
type: 'text' as const,
text: JSON.stringify(data, null, 2)
}]
};
} catch (error: any) {
logger.error('HTTP', 'Worker API error (path)', undefined, { endpoint, id, error: error.message });
return {
content: [{
type: 'text' as const,
text: `Error calling Worker API: ${error.message}`
}],
isError: true
};
}
}
/**
* Call Worker HTTP API with POST body
*/
@@ -260,38 +150,42 @@ async function verifyWorkerConnection(): Promise<boolean> {
*/
const tools = [
{
name: 'get_schema',
description: 'Get parameter schema for a tool. Call get_schema(tool_name) for details',
name: '__IMPORTANT',
  description: `3-LAYER WORKFLOW (ALWAYS FOLLOW):
1. search(query) → Get index with IDs (~50-100 tokens/result)
2. timeline(anchor=ID) → Get context around interesting results
3. get_observations([IDs]) → Fetch full details ONLY for filtered IDs
NEVER fetch full details without filtering first. 10x token savings.`,
inputSchema: {
type: 'object',
properties: { tool_name: { type: 'string' } },
required: ['tool_name']
properties: {}
},
    handler: async (args: any) => {
      // Validate tool_name to prevent prototype pollution
      const toolName = args.tool_name;
      if (typeof toolName !== 'string' || !Object.hasOwn(TOOL_SCHEMAS, toolName)) {
        return {
          content: [{
            type: 'text' as const,
            text: `Unknown tool: ${toolName}\n\nAvailable tools: ${Object.keys(TOOL_SCHEMAS).join(', ')}`
          }],
          isError: true
        };
      }
      const schema = TOOL_SCHEMAS[toolName];
      return {
        content: [{
          type: 'text' as const,
          text: `# ${toolName} Parameters\n\n${JSON.stringify(schema, null, 2)}`
        }]
      };
    }
    handler: async () => ({
      content: [{
        type: 'text' as const,
        text: `# Memory Search Workflow

**3-Layer Pattern (ALWAYS follow this):**

1. **Search** - Get index of results with IDs
   \`search(query="...", limit=20, project="...")\`
   Returns: Table with IDs, titles, dates (~50-100 tokens/result)

2. **Timeline** - Get context around interesting results
   \`timeline(anchor=<ID>, depth_before=3, depth_after=3)\`
   Returns: Chronological context showing what was happening

3. **Fetch** - Get full details ONLY for relevant IDs
   \`get_observations(ids=[...])\`  # ALWAYS batch for 2+ items
   Returns: Complete details (~500-1000 tokens/result)

**Why:** 10x token savings. Never fetch full details without filtering first.`
      }]
    })
},
{
name: 'search',
description: 'Search memory. All parameters optional - call get_schema("search") for details',
description: 'Step 1: Search memory. Returns index with IDs. Params: query, limit, project, type, obs_type, dateStart, dateEnd, offset, orderBy',
inputSchema: {
type: 'object',
properties: {},
@@ -304,7 +198,7 @@ const tools = [
},
{
name: 'timeline',
description: 'Timeline context. All parameters optional - call get_schema("timeline") for details',
description: 'Step 2: Get context around results. Params: anchor (observation ID) OR query (finds anchor automatically), depth_before, depth_after, project',
inputSchema: {
type: 'object',
properties: {},
@@ -315,78 +209,16 @@ const tools = [
return await callWorkerAPI(endpoint, args);
}
},
{
name: 'get_recent_context',
description: 'Recent context. All parameters optional - call get_schema("get_recent_context") for details',
inputSchema: {
type: 'object',
properties: {},
additionalProperties: true
},
handler: async (args: any) => {
const endpoint = TOOL_ENDPOINT_MAP['get_recent_context'];
return await callWorkerAPI(endpoint, args);
}
},
{
name: 'get_context_timeline',
description: 'Timeline around observation ID',
inputSchema: {
type: 'object',
properties: {
anchor: {
type: 'number',
description: 'Observation ID (required). Optional params: get_schema("get_context_timeline")'
}
},
required: ['anchor'],
additionalProperties: true
},
handler: async (args: any) => {
const endpoint = TOOL_ENDPOINT_MAP['get_context_timeline'];
return await callWorkerAPI(endpoint, args);
}
},
{
name: 'help',
description: 'Get detailed docs. All parameters optional - call get_schema("help") for details',
inputSchema: {
type: 'object',
properties: {},
additionalProperties: true
},
handler: async (args: any) => {
const endpoint = TOOL_ENDPOINT_MAP['help'];
return await callWorkerAPI(endpoint, args);
}
},
{
name: 'get_observation',
description: 'Fetch observation by ID',
inputSchema: {
type: 'object',
properties: {
id: {
type: 'number',
description: 'Observation ID (required)'
}
},
required: ['id']
},
handler: async (args: any) => {
return await callWorkerAPIWithPath('/api/observation', args.id);
}
},
{
name: 'get_observations',
description: 'Batch fetch observations',
description: 'Step 3: Fetch full details for filtered IDs. Params: ids (array of observation IDs, required), orderBy, limit, project',
inputSchema: {
type: 'object',
properties: {
ids: {
type: 'array',
items: { type: 'number' },
description: 'Array of observation IDs (required). Optional params: get_schema("get_observations")'
description: 'Array of observation IDs to fetch (required)'
}
},
required: ['ids'],
@@ -395,47 +227,13 @@ const tools = [
handler: async (args: any) => {
return await callWorkerAPIPost('/api/observations/batch', args);
}
},
{
name: 'get_session',
description: 'Fetch session by ID',
inputSchema: {
type: 'object',
properties: {
id: {
type: 'number',
description: 'Session ID (required)'
}
},
required: ['id']
},
handler: async (args: any) => {
return await callWorkerAPIWithPath('/api/session', args.id);
}
},
{
name: 'get_prompt',
description: 'Fetch prompt by ID',
inputSchema: {
type: 'object',
properties: {
id: {
type: 'number',
description: 'Prompt ID (required)'
}
},
required: ['id']
},
handler: async (args: any) => {
return await callWorkerAPIWithPath('/api/prompt', args.id);
}
}
];
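The 3-layer pattern that the `__IMPORTANT` tool instructs clients to follow can be sketched from the caller's side. This is a hypothetical sketch: `ToolCall` and `threeLayerLookup` are illustrative names, and the real MCP client invocation API depends on your host.

```typescript
// Hypothetical client-side shape for invoking the mem-search tools.
type ToolCall = (name: string, args: Record<string, unknown>) => Promise<unknown>;

async function threeLayerLookup(callTool: ToolCall, query: string) {
  // Layer 1: cheap index search (~50-100 tokens per result)
  const index = (await callTool('search', { query, limit: 20 })) as { id: number }[];
  if (index.length === 0) return [];

  // Layer 2: timeline context around the most promising hit
  await callTool('timeline', { anchor: index[0].id, depth_before: 3, depth_after: 3 });

  // Layer 3: one batched fetch of full details, only for the filtered IDs
  const ids = index.slice(0, 3).map(r => r.id);
  return callTool('get_observations', { ids });
}
```

The point of the ordering is cost control: the expensive `get_observations` call happens once, batched, and only after the cheap index has narrowed the candidates.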
// Create the MCP server
const server = new Server(
{
name: 'mem-search-server',
name: 'mcp-search-server',
version: '1.0.0',
},
{
+6 -6
@@ -122,7 +122,7 @@ const colors = {
interface Observation {
id: number;
sdk_session_id: string;
memory_session_id: string;
type: string;
title: string | null;
subtitle: string | null;
@@ -138,7 +138,7 @@ interface Observation {
interface SessionSummary {
id: number;
sdk_session_id: string;
memory_session_id: string;
request: string | null;
investigated: string | null;
learned: string | null;
@@ -246,7 +246,7 @@ export async function generateContext(input?: ContextInput, useColors: boolean =
// Get recent observations
const observations = db.db.prepare(`
SELECT
id, sdk_session_id, type, title, subtitle, narrative,
id, memory_session_id, type, title, subtitle, narrative,
facts, concepts, files_read, files_modified, discovery_tokens,
created_at, created_at_epoch
FROM observations
@@ -262,7 +262,7 @@ export async function generateContext(input?: ContextInput, useColors: boolean =
// Get recent summaries
const recentSummaries = db.db.prepare(`
SELECT id, sdk_session_id, request, investigated, learned, completed, next_steps, created_at, created_at_epoch
SELECT id, memory_session_id, request, investigated, learned, completed, next_steps, created_at, created_at_epoch
FROM session_summaries
WHERE project = ?
ORDER BY created_at_epoch DESC
@@ -275,10 +275,10 @@ export async function generateContext(input?: ContextInput, useColors: boolean =
if (config.showLastMessage && observations.length > 0) {
const currentSessionId = input?.session_id;
const priorSessionObs = observations.find(obs => obs.sdk_session_id !== currentSessionId);
const priorSessionObs = observations.find(obs => obs.memory_session_id !== currentSessionId);
if (priorSessionObs) {
const priorSessionId = priorSessionObs.sdk_session_id;
const priorSessionId = priorSessionObs.memory_session_id;
const dashedCwd = cwdToDashed(cwd);
const transcriptPath = path.join(homedir(), '.claude', 'projects', dashedCwd, `${priorSessionId}.jsonl`);
const messages = extractPriorMessages(transcriptPath);
+97 -15
@@ -44,6 +44,7 @@ export class SessionStore {
this.ensureDiscoveryTokensColumn();
this.createPendingMessagesTable();
this.renameSessionIdColumns();
this.repairSessionIdColumnRename();
}
/**
@@ -583,22 +584,25 @@ export class SessionStore {
   * - sdk_session_id → memory_session_id (memory agent's session for resume)
*/
private renameSessionIdColumns(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(17) as SchemaVersion | undefined;
if (applied) return;
// Check if columns are already renamed (idempotent check)
const sessionsInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
const hasContentSessionId = sessionsInfo.some(col => col.name === 'content_session_id');
if (hasContentSessionId) {
// Already renamed, just record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(17, new Date().toISOString());
return;
}
logger.info('DB', 'Renaming session ID columns for semantic clarity');
// Begin transaction for atomic rename
this.db.run('BEGIN TRANSACTION');
try {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(17) as SchemaVersion | undefined;
if (applied) return;
logger.info('DB', 'Renaming session ID columns for semantic clarity');
// Check if columns are already renamed (idempotent check)
const sessionsInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
const hasContentSessionId = sessionsInfo.some(col => col.name === 'content_session_id');
if (hasContentSessionId) {
// Already renamed, just record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(17, new Date().toISOString());
return;
}
// SQLite 3.25+ supports ALTER TABLE RENAME COLUMN
// Rename in sdk_sessions table
this.db.run('ALTER TABLE sdk_sessions RENAME COLUMN claude_session_id TO content_session_id');
@@ -616,16 +620,90 @@ export class SessionStore {
// Rename in user_prompts table
this.db.run('ALTER TABLE user_prompts RENAME COLUMN claude_session_id TO content_session_id');
// Commit transaction
this.db.run('COMMIT');
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(17, new Date().toISOString());
logger.info('DB', 'Successfully renamed session ID columns');
} catch (error: any) {
// Rollback on error
this.db.run('ROLLBACK');
logger.error('DB', 'Session ID column rename migration error', undefined, error);
throw error;
}
}
/**
* Repair session ID column renames (migration 19)
* Migration 17 may have been recorded but failed to actually rename columns.
* This migration checks each table and renames if needed (idempotent).
*/
private repairSessionIdColumnRename(): void {
try {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(19) as SchemaVersion | undefined;
if (applied) return;
logger.info('DB', 'Checking session ID column renames (repair migration)');
let repairsNeeded = false;
// Check and fix sdk_sessions
const sessionsInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
if (sessionsInfo.some(col => col.name === 'claude_session_id')) {
logger.info('DB', 'Repairing sdk_sessions columns');
this.db.run('ALTER TABLE sdk_sessions RENAME COLUMN claude_session_id TO content_session_id');
this.db.run('ALTER TABLE sdk_sessions RENAME COLUMN sdk_session_id TO memory_session_id');
repairsNeeded = true;
}
// Check and fix pending_messages
const pendingInfo = this.db.query('PRAGMA table_info(pending_messages)').all() as TableColumnInfo[];
if (pendingInfo.some(col => col.name === 'claude_session_id')) {
logger.info('DB', 'Repairing pending_messages columns');
this.db.run('ALTER TABLE pending_messages RENAME COLUMN claude_session_id TO content_session_id');
repairsNeeded = true;
}
// Check and fix observations
const obsInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
if (obsInfo.some(col => col.name === 'sdk_session_id')) {
logger.info('DB', 'Repairing observations columns');
this.db.run('ALTER TABLE observations RENAME COLUMN sdk_session_id TO memory_session_id');
repairsNeeded = true;
}
// Check and fix session_summaries
const summariesInfo = this.db.query('PRAGMA table_info(session_summaries)').all() as TableColumnInfo[];
if (summariesInfo.some(col => col.name === 'sdk_session_id')) {
logger.info('DB', 'Repairing session_summaries columns');
this.db.run('ALTER TABLE session_summaries RENAME COLUMN sdk_session_id TO memory_session_id');
repairsNeeded = true;
}
// Check and fix user_prompts
const promptsInfo = this.db.query('PRAGMA table_info(user_prompts)').all() as TableColumnInfo[];
if (promptsInfo.some(col => col.name === 'claude_session_id')) {
logger.info('DB', 'Repairing user_prompts columns');
this.db.run('ALTER TABLE user_prompts RENAME COLUMN claude_session_id TO content_session_id');
repairsNeeded = true;
}
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(19, new Date().toISOString());
if (repairsNeeded) {
logger.info('DB', 'Session ID column rename repairs completed');
} else {
logger.info('DB', 'No session ID column repairs needed');
}
} catch (error: any) {
logger.error('DB', 'Session ID column rename repair error', undefined, error);
throw error;
}
}
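The heart of the repair migration above is its idempotency check: inspect `PRAGMA table_info` for each table and rename only when the old column is still present. The decision logic can be isolated as a pure function, sketched below (`ColumnInfo` mirrors the row shape SQLite returns; `plannedRenames` is an illustrative name, not part of the codebase):

```typescript
// Row shape of interest from `PRAGMA table_info(<table>)`.
interface ColumnInfo { name: string }

// Emit only the ALTER statements whose old column still exists,
// so running the repair twice produces no second round of work.
function plannedRenames(
  table: string,
  columns: ColumnInfo[],
  renames: Record<string, string> // old column name -> new column name
): string[] {
  return Object.entries(renames)
    .filter(([oldName]) => columns.some(c => c.name === oldName))
    .map(([oldName, newName]) =>
      `ALTER TABLE ${table} RENAME COLUMN ${oldName} TO ${newName}`
    );
}
```

Separating the check from the execution keeps each table's repair safe to re-run, which is exactly why migration 19 can recover from a migration 17 that was recorded but never applied.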
/**
* Update the memory session ID for a session
* Called by SDKAgent when it captures the session ID from the first SDK message
@@ -1147,6 +1225,10 @@ export class SessionStore {
const nowEpoch = now.getTime();
// Pure INSERT OR IGNORE - no updates, no complexity
// NOTE: memory_session_id is initialized to contentSessionId as a placeholder for FK purposes.
// The REAL memory session ID is captured by SDKAgent from the first SDK response
// and stored via updateMemorySessionId(). The resume logic checks if memorySessionId
// differs from contentSessionId before using it - see SDKAgent.startSession().
this.db.prepare(`
INSERT OR IGNORE INTO sdk_sessions
(content_session_id, memory_session_id, project, user_prompt, started_at, started_at_epoch, status)
+2 -2
@@ -74,7 +74,7 @@ Tips:
const id = `#S${session.id}`;
const time = this.formatTime(session.created_at_epoch);
const icon = '🎯';
const title = session.request || `Session ${session.sdk_session_id?.substring(0, 8) || 'unknown'}`;
const title = session.request || `Session ${session.memory_session_id?.substring(0, 8) || 'unknown'}`;
return `| ${id} | ${time} | ${icon} | ${title} | - | - |`;
}
@@ -137,7 +137,7 @@ Tips:
const id = `#S${session.id}`;
const time = this.formatTime(session.created_at_epoch);
const icon = '🎯';
const title = session.request || `Session ${session.sdk_session_id?.substring(0, 8) || 'unknown'}`;
const title = session.request || `Session ${session.memory_session_id?.substring(0, 8) || 'unknown'}`;
// Use ditto mark if same time as previous row
const timeDisplay = time === lastTime ? '″' : time;
+11 -4
@@ -64,22 +64,29 @@ export class SDKAgent {
// Create message generator (event-driven)
const messageGenerator = this.createMessageGenerator(session);
// CRITICAL: Only resume if memorySessionId is a REAL captured SDK session ID,
// not the placeholder (which equals contentSessionId). The placeholder is set
// for FK purposes but would cause the bug where we try to resume the USER's session!
const hasRealMemorySessionId = session.memorySessionId &&
session.memorySessionId !== session.contentSessionId;
logger.info('SDK', 'Starting SDK query', {
sessionDbId: session.sessionDbId,
contentSessionId: session.contentSessionId,
memorySessionId: session.memorySessionId,
resume_parameter: session.memorySessionId || '(none - fresh start)',
hasRealMemorySessionId,
resume_parameter: hasRealMemorySessionId ? session.memorySessionId : '(none - fresh start)',
lastPromptNumber: session.lastPromptNumber
});
// Run Agent SDK query loop
// Use memorySessionId for resume (captured from previous SDK response) if available
// Only resume if we have a REAL captured memory session ID (not the placeholder)
const queryResult = query({
prompt: messageGenerator,
options: {
model: modelId,
// Only resume if we have a captured memory session ID from previous SDK interaction
...(session.memorySessionId && { resume: session.memorySessionId }),
// Only resume if memorySessionId differs from contentSessionId (meaning it was captured)
...(hasRealMemorySessionId && { resume: session.memorySessionId }),
disallowedTools,
abortController: session.abortController,
pathToClaudeCodeExecutable: claudePath
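The `hasRealMemorySessionId` guard above can be reduced to a small, testable function. This is a sketch under assumed types (`SessionState` and `buildQueryOptions` are illustrative names): `resume` is included only when the memory session ID was genuinely captured, i.e. differs from the `contentSessionId` placeholder.

```typescript
// Minimal shape of the session fields the guard depends on (assumed).
interface SessionState {
  contentSessionId: string;
  memorySessionId: string | null;
}

// Build SDK query options, adding `resume` only for a real captured ID.
// The placeholder (memorySessionId === contentSessionId) must NOT resume,
// or we would try to resume the USER's session instead of the agent's.
function buildQueryOptions(session: SessionState, model: string) {
  const hasRealMemorySessionId =
    !!session.memorySessionId &&
    session.memorySessionId !== session.contentSessionId;
  return {
    model,
    ...(hasRealMemorySessionId && { resume: session.memorySessionId }),
  };
}
```

The conditional-spread idiom (`...(cond && { key })`) keeps the `resume` key entirely absent, rather than present with an undefined value, which matters for option bags that distinguish "not set" from "set to undefined".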
+3 -3
@@ -1376,13 +1376,13 @@ export class SearchManager {
lines.push('');
for (const session of sessions) {
if (!session.sdk_session_id) continue;
if (!session.memory_session_id) continue;
lines.push('---');
lines.push('');
if (session.has_summary) {
const summary = this.sessionStore.getSummaryForSession(session.sdk_session_id);
const summary = this.sessionStore.getSummaryForSession(session.memory_session_id);
if (summary) {
const promptLabel = summary.prompt_number ? ` (Prompt #${summary.prompt_number})` : '';
lines.push(`**Summary${promptLabel}**`);
@@ -1432,7 +1432,7 @@ export class SearchManager {
lines.push(`**Request:** ${session.user_prompt}`);
}
const observations = this.sessionStore.getObservationsForSession(session.sdk_session_id);
const observations = this.sessionStore.getObservationsForSession(session.memory_session_id);
if (observations.length > 0) {
lines.push('');
lines.push(`**Observations (${observations.length}):**`);