Mem-search enhancements: table output, simplified API, Sonnet default, and removed fake URIs (#317)
* feat: Add batch fetching for observations and update documentation

  - Implemented a new endpoint for fetching multiple observations by IDs in a single request.
  - Updated the DataRoutes to include a POST /api/observations/batch endpoint.
  - Enhanced SKILL.md documentation to reflect changes in the search process and batch fetching capabilities.
  - Increased the default limit for search results from 5 to 40 for better usability.

* feat!: Fix timeline parameter passing with SearchManager alignment

  BREAKING CHANGE: Timeline MCP tools now use standardized parameter names

  - anchor_id → anchor
  - before → depth_before
  - after → depth_after
  - obs_type → type (timeline tool only)

  Fixes timeline endpoint failures caused by a parameter name mismatch between the MCP layer and SearchManager. Adds new SessionStore methods for fetching prompts and session summaries by ID.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* docs: reframe timeline parameter fix as bug fix, not breaking change

  The timeline tools were completely broken due to the parameter name mismatch. There's nothing to migrate from since the old parameters never worked.

  Co-authored-by: Alex Newman <thedotmack@users.noreply.github.com>

* Refactor mem-search documentation and optimize API tool definitions

  - Updated SKILL.md to emphasize batch fetching for observations, clarifying usage and efficiency.
  - Removed deprecated tools from mcp-server.ts and streamlined tool definitions for clarity.
  - Enhanced formatting in FormattingService.ts for better output readability.
  - Adjusted SearchManager.ts to improve result headers and removed unnecessary search tips from combined text.

* Refactor FormattingService and SearchManager for table-based output

  - Updated FormattingService to format search results as tables, including methods for formatting observations, sessions, and user prompts.
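The new batch route named above (POST /api/observations/batch) suggests a straightforward client call. A minimal sketch of how a client might build that request, assuming a `{ ids: [...] }` body shape — the route comes from the commit, but the payload shape and port are assumptions, not confirmed API:

```typescript
// Hypothetical client-side request builder for the batch endpoint.
// Assumed: body shape { ids: number[] }, default worker port 37777.
interface BatchRequest {
  url: string;
  method: "POST";
  headers: Record<string, string>;
  body: string;
}

function buildBatchRequest(ids: number[], port = 37777): BatchRequest {
  return {
    url: `http://localhost:${port}/api/observations/batch`,
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ ids }),
  };
}

const req = buildBatchRequest([11131, 10942, 10855]);
console.log(req.body); // {"ids":[11131,10942,10855]}
```

The descriptor could then be passed to `fetch(req.url, req)` against a running worker.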
  - Removed JSON format handling from SearchManager and streamlined result formatting to consistently use table format.
  - Enhanced readability and consistency in search tips and formatting logic.
  - Introduced token estimation for observations and improved time formatting.

* refactor: update documentation and API references for version bump and search functionalities

* Refactor code structure for improved readability and maintainability

* chore: change default model from haiku to sonnet

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat: unify timeline formatting across search and context services

  Extract shared timeline formatting utilities into a reusable module to align the MCP search output format with context-generator's date/file-grouped format.

  Changes:
  - Create src/shared/timeline-formatting.ts with reusable utilities (parseJsonArray, formatDateTime, formatTime, formatDate, toRelativePath, extractFirstFile, groupByDate)
  - Refactor context-generator.ts to use shared utilities
  - Update SearchManager.search() to use date/file grouping
  - Add search-specific row formatters to FormattingService
  - Fix timeline methods to extract actual file paths from metadata instead of hardcoding 'General'
  - Remove Work column from search output (kept in context output)

  Result: Consistent date/file-grouped markdown formatting across both systems while maintaining their different column requirements.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* refactor: remove redundant legend from search output

  Remove the legend from search/timeline results since it's already shown in SessionStart context. Saves ~30 tokens per search result.
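The `groupByDate` utility named in the timeline-formatting changes above can be illustrated with a small sketch. The item shape (`createdAt` field) and signature are assumptions for illustration; the real helper in src/shared/timeline-formatting.ts may differ:

```typescript
// Illustrative date-grouping utility in the spirit of groupByDate.
// Assumed: items carry an ISO timestamp in a `createdAt` field.
interface TimelineItem {
  createdAt: string; // e.g. "2025-11-17T15:48:45Z"
  title: string;
}

function groupByDate(items: TimelineItem[]): Map<string, TimelineItem[]> {
  const groups = new Map<string, TimelineItem[]>();
  for (const item of items) {
    const day = item.createdAt.slice(0, 10); // "YYYY-MM-DD" prefix
    const bucket = groups.get(day) ?? [];
    bucket.push(item);
    groups.set(day, bucket);
  }
  return groups;
}

const grouped = groupByDate([
  { createdAt: "2025-11-17T15:48:45Z", title: "Added JWT authentication" },
  { createdAt: "2025-11-17T16:02:00Z", title: "Refactor auth middleware" },
  { createdAt: "2025-11-16T14:15:22Z", title: "Fixed auth token expiration" },
]);
console.log(grouped.get("2025-11-17")?.length); // 2
```

A `Map` keeps insertion order, which is why date/file-grouped rendering can iterate it directly without re-sorting the day buckets.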
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Refactor session summary rendering to remove links

  - Removed link generation for session summaries in context generation and search manager.
  - Updated output formatting to exclude links while maintaining the session summary structure.
  - Adjusted related components in TimelineService to ensure consistency across the application.

* fix: move skillPath declaration outside try block to fix scoping bug

  The skillPath variable was declared inside the try block but referenced in the catch block for error logging. Since const is block-scoped, this would cause a ReferenceError when the error handler executes. Moved the skillPath declaration before the try block so it's accessible in both try and catch scopes.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: address PR #317 code review feedback

  **Critical Fixes:**
  - Replace happy_path_error__with_fallback debug calls with proper logger methods in mcp-server.ts
  - All HTTP API calls now use logger.debug/error for consistent logging

  **Code Quality Improvements:**
  - Extract 90-day recency window magic numbers to named constants
  - Added RECENCY_WINDOW_DAYS and RECENCY_WINDOW_MS constants in SearchManager

  **Documentation:**
  - Document model cost implications of the Haiku → Sonnet upgrade in CHANGELOG
  - Provide a clear migration path for users who want to revert to Haiku

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* refactor: simplify CHANGELOG - remove cost documentation

  Removed model cost comparison documentation per user feedback. Kept only the technical code quality improvements.
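The skillPath scoping bug fixed above is a common pattern. A sketch with illustrative names (the real function and paths in the codebase may differ) — the broken shape is shown as a comment because a block-scoped `const` referenced from the catch block doesn't even compile in TypeScript:

```typescript
// Broken shape (won't compile / throws ReferenceError in plain JS):
//   try {
//     const skillPath = `${baseDir}/SKILL.md`;
//     ...
//   } catch (err) {
//     console.error(`Failed at ${skillPath}`); // out of scope!
//   }
//
// Fixed pattern: declare before the try so both scopes can read it.
function installSkill(baseDir: string): string {
  const skillPath = `${baseDir}/SKILL.md`; // hoisted out of the try block
  try {
    // ... copy/validate the skill file (omitted in this sketch) ...
    return skillPath;
  } catch (err) {
    // skillPath is in scope here, so error logging works
    console.error(`Failed to install skill at ${skillPath}:`, err);
    throw err;
  }
}

console.log(installSkill("/tmp/claude-mem")); // /tmp/claude-mem/SKILL.md
```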
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: claude[bot] <41898282+claude[bot]@users.noreply.github.com>
Co-authored-by: Alex Newman <thedotmack@users.noreply.github.com>
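The "10x" token figure quoted in the SKILL.md changes in this commit (index rows at ~50-100 tokens vs full observations at ~500-1000) can be checked with back-of-envelope arithmetic; the midpoint constants here are assumptions for illustration:

```typescript
// Rough token math behind the index-first workflow.
const INDEX_TOKENS_PER_RESULT = 75;  // midpoint of the ~50-100 range
const FULL_TOKENS_PER_RESULT = 750;  // midpoint of the ~500-1000 range

function tokensForWorkflow(results: number, fetched: number): number {
  // Index everything, then fetch full details only for the picked IDs.
  return results * INDEX_TOKENS_PER_RESULT + fetched * FULL_TOKENS_PER_RESULT;
}

// 20 results indexed, 2 actually fetched, vs fetching all 20 in full:
const smart = tokensForWorkflow(20, 2);     // 1500 + 1500 = 3000
const naive = 20 * FULL_TOKENS_PER_RESULT;  // 15000
console.log(naive / smart); // 5
```

The savings grow with the number of results: the fewer IDs you actually fetch in full, the closer the ratio gets to the raw per-result cost difference.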
File diff suppressed because one or more lines are too long
@@ -10,6 +10,7 @@ Search past work across all sessions. Simple workflow: search → get IDs → fe
 ## When to Use

 Use when users ask about PREVIOUS sessions (not current conversation):

 - "Did we already fix this?"
 - "How did we solve X last time?"
 - "What happened last week?"
@@ -19,47 +20,57 @@ Use when users ask about PREVIOUS sessions (not current conversation):
 **ALWAYS follow this exact flow:**

 1. **Search** - Get an index of results with IDs
-2. **Timeline** (optional) - Get context around top results to understand what was happening
+2. **Timeline** - Get context around top results to understand what was happening
 3. **Review** - Look at titles/dates/context, pick relevant IDs
 4. **Fetch** - Get full details ONLY for those IDs

 ### Step 1: Search Everything

-```bash
-curl "http://localhost:37777/api/search?query=authentication&format=index&limit=5"
-```
+Use the `search` MCP tool:

 **Required parameters:**

 - `query` - Search term
-- `format=index` - ALWAYS start with index (lightweight)
-- `limit=5` - Start small (3-5 results)
+- `limit: 20` - You can request large indexes as necessary
+- `project` - Project name (required)

+**Example:**
+
+```
+search(query="authentication", limit=20, project="my-project")
+```

 **Returns:**

 ```
-1. [feature] Added JWT authentication
-   Date: 11/17/2025, 3:48:45 PM
-   ID: 11131
-
-2. [bugfix] Fixed auth token expiration
-   Date: 11/16/2025, 2:15:22 PM
-   ID: 10942
-```
+| ID | Time | T | Title | Read | Work |
+|----|------|---|-------|------|------|
+| #11131 | 3:48 PM | 🟣 | Added JWT authentication | ~75 | 🛠️ 450 |
+| #10942 | 2:15 PM | 🔴 | Fixed auth token expiration | ~50 | 🛠️ 200 |
+```

-### Step 2: Get Timeline Context (Optional)
+### Step 2: Get Timeline Context

-When you need to understand "what was happening" around a result:
+You MUST understand "what was happening" around a result.

-```bash
-# Get timeline around an observation ID
-curl "http://localhost:37777/api/timeline?anchor=11131&depth_before=3&depth_after=3"
-
-# Or use query to find + get timeline in one step
-curl "http://localhost:37777/api/timeline?query=authentication&depth_before=3&depth_after=3"
-```
+Use the `timeline` MCP tool:
+
+**Example with observation ID:**
+
+```
+timeline(anchor=11131, depth_before=3, depth_after=3, project="my-project")
+```
+
+**Example with query (finds anchor automatically):**
+
+```
+timeline(query="authentication", depth_before=3, depth_after=3, project="my-project")
+```
+
+**Returns exactly `depth_before + 1 + depth_after` items** - observations, sessions, and prompts interleaved chronologically around the anchor.

 **When to use:**

 - User asks "what was happening when..."
 - Need to understand sequence of events
 - Want broader context around a specific observation
@@ -70,34 +81,68 @@ Review the index results (and timeline if used). Identify which IDs are actually
 ### Step 4: Fetch by ID

-For each relevant ID, fetch full details:
+For each relevant ID, fetch full details using MCP tools:

-```bash
-# Fetch observation
-curl "http://localhost:37777/api/observation/11131"
-
-# Fetch session
-curl "http://localhost:37777/api/session/2005"
-
-# Fetch prompt
-curl "http://localhost:37777/api/prompt/5421"
-```
+**Fetch multiple observations (ALWAYS use for 2+ IDs):**
+
+```
+get_batch_observations(ids=[11131, 10942, 10855])
+```
+
+**With ordering and limit:**
+
+```
+get_batch_observations(
+  ids=[11131, 10942, 10855],
+  orderBy="date_desc",
+  limit=10,
+  project="my-project"
+)
+```
+
+**Fetch single observation (only when fetching exactly 1):**
+
+```
+get_observation(id=11131)
+```
+
+**Fetch session:**
+
+```
+get_session(id=2005)  # Just the number from S2005
+```
+
+**Fetch prompt:**
+
+```
+get_prompt(id=5421)
+```
+
+**ID formats:**
+
+- Observations: Just the number (11131)
+- Sessions: Just the number (2005) from "S2005"
+- Prompts: Just the number (5421)
+
+**Batch optimization:**
+
+- **ALWAYS use `get_batch_observations` for 2+ observations**
+- 10-100x more efficient than individual fetches
+- Single HTTP request vs N requests
+- Returns all results in one response
+- Supports ordering and filtering

 ## Search Parameters

 **Basic:**

 - `query` - What to search for (required)
-- `format` - "index" or "full" (always use "index" first)
-- `limit` - How many results (default 5, max 100)
+- `limit` - How many results (default 20)
+- `project` - Filter by project name (required)

 **Filters (optional):**

 - `type` - Filter to "observations", "sessions", or "prompts"
-- `project` - Filter by project name
 - `dateStart` - Start date (YYYY-MM-DD or epoch timestamp)
 - `dateEnd` - End date (YYYY-MM-DD or epoch timestamp)
 - `obs_type` - Filter observations by type (comma-separated): bugfix, feature, decision, discovery, change
@@ -105,39 +150,65 @@ curl "http://localhost:37777/api/prompt/5421"
 ## Examples

 **Find recent bug fixes:**

-```bash
-curl "http://localhost:37777/api/search?query=bug&type=observations&obs_type=bugfix&format=index&limit=5"
-```
+Use the `search` MCP tool with filters:
+
+```
+search(query="bug", type="observations", obs_type="bugfix", limit=20, project="my-project")
+```

 **Find what happened last week:**

-```bash
-curl "http://localhost:37777/api/search?query=&type=observations&dateStart=2025-11-11&format=index&limit=10"
-```
+Use date filters:
+
+```
+search(type="observations", dateStart="2025-11-11", limit=20, project="my-project")
+```

 **Search everything:**

-```bash
-curl "http://localhost:37777/api/search?query=database+migration&format=index&limit=5"
-```
+Simple query search:
+
+```
+search(query="database migration", limit=20, project="my-project")
+```
+
+**Get detailed instructions:**
+
+Use the `progressive_description` tool to load full instructions on-demand:
+
+```
+progressive_description(topic="workflow")       # Get 4-step workflow
+progressive_description(topic="search_params")  # Get parameters reference
+progressive_description(topic="examples")       # Get usage examples
+progressive_description(topic="all")            # Get complete guide
+```

 ## Why This Workflow?

 **Token efficiency:**

-- Index format: ~50-100 tokens per result
-- Full format: ~500-1000 tokens per result
-- **10x difference** - only fetch full when you know it's relevant
+- **Search results:** ~50-100 tokens per result (table index)
+- **Full observation:** ~500-1000 tokens each
+- **10x savings** - only fetch full when you know it's relevant
+
+**Batch fetching:**
+
+- **Individual fetches:** 10 HTTP requests, ~5-10s latency
+- **Batch fetch:** 1 HTTP request, ~0.5-1s latency
+- **10-100x faster** for multi-observation queries

 **Clarity:**

-- See everything first
-- Pick what matters
-- Get details only for what you need
-
-## Error Handling
-
-If search fails, tell the user the worker isn't available and suggest:
-
-```bash
-pm2 list  # Check if worker is running
-```
+- See everything first (table index)
+- Get timeline context around interesting results
+- Pick what matters based on context
+- Fetch details only for what you need (batch when possible)

 ---

-**Remember:** ALWAYS search with format=index first. ALWAYS fetch by ID for details. The IDs are there for a reason - USE THEM.
+**Remember:**
+
+- ALWAYS get timeline context to understand what was happening
+- ALWAYS use `get_batch_observations` when fetching 2+ observations
+- The workflow is optimized: search → timeline → batch fetch = 10-100x faster
@@ -95,7 +95,7 @@ echo '{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}' > ~/.claude-mem/settings.json
 # Change AI model
 {
   "env": {
-    "CLAUDE_MEM_MODEL": "claude-haiku-4-5"
+    "CLAUDE_MEM_MODEL": "claude-sonnet-4-5"
   }
 }
 ```