fix: context hook updates and cleanup (#150)

* fix(context-hook): update savings message to reference mem-search skill

Changed "Use claude-mem search" to "Use the mem-search skill" for clarity.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* chore: delete outdated docs, experiments, and test results

Removed:
- docs/context/ (moved to private/)
- experiment/ (obsolete)
- test-results/ (stale)
- tests/ (outdated)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

* fix(user-message-hook): update support link to Discord community

---------

Co-authored-by: Claude <noreply@anthropic.com>
Commit 01be3156fb (parent 155465f52a), authored by Alex Newman on 2025-11-28 20:17:44 -05:00, committed by GitHub. 87 changed files with 7 additions and 22289 deletions.
@@ -1,542 +0,0 @@
# Chroma Vector Database - Hybrid Semantic Search
## Overview
Claude-Mem v5.0.0 introduced **Chroma**, a vector database that enables semantic search across your memory stream. Combined with SQLite's FTS5 keyword search, this creates a powerful **hybrid search architecture** that finds contextually relevant observations using both meaning and keywords.
**Key Benefits:**
- 🧠 **Semantic Search** - Find observations by meaning, not just keywords
- 🔍 **Hybrid Architecture** - Combines semantic similarity with keyword matching
- ⏱️ **Recency Filtering** - Focus on recent 90 days for relevant context
- **Fast Performance** - Semantic search under 200ms with 8,000+ documents
- 🔄 **Auto-Sync** - ChromaSync service keeps vectors updated automatically
## What is Chroma?
[ChromaDB](https://www.trychroma.com/) is an open-source vector database designed for AI applications. It stores text as **vector embeddings** - mathematical representations that capture semantic meaning.
**Example:**
```
Query: "authentication bug"
Keyword Match: Must contain both "authentication" AND "bug"
Semantic Match: Also finds "login error", "auth failure", "sign-in issue"
```
Semantic search understands that "authentication bug" is conceptually similar to "login error" even though they share no keywords.
## Architecture
### Hybrid Search Flow
```
┌──────────────────────────────────────────────────────────────┐
│ User Query: "How does authentication work?" │
└──────────────────────────────────────────────────────────────┘
┌─────────────────┴─────────────────┐
↓ ↓
┌──────────────────────┐ ┌──────────────────────┐
│ Chroma Semantic │ │ SQLite FTS5 │
│ Vector Similarity │ │ Keyword Search │
│ │ │ │
│ Finds conceptually │ │ Finds exact/fuzzy │
│ similar observations │ │ keyword matches │
└──────────────────────┘ └──────────────────────┘
↓ ↓
└─────────────────┬─────────────────┘
┌─────────────────────────────────┐
│ Merge Results │
│ - Deduplicate by ID │
│ - Sort by relevance + recency │
│ - Filter by 90-day window │
└─────────────────────────────────┘
┌─────────────────────────────────┐
│ Return Top Matches │
│ Semantic + Keyword combined │
└─────────────────────────────────┘
```
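The merge step at the bottom of the flow can be sketched as a small function. The `Hit` shape and the tie-breaking rule are assumptions for illustration, not the actual implementation:
```typescript
// Illustrative merge of semantic and keyword hits (assumed shapes)
interface Hit {
  id: string;
  score: number;     // relevance, higher is better
  timestamp: number; // epoch ms
}

const NINETY_DAYS_MS = 90 * 24 * 60 * 60 * 1000;

function mergeResults(semantic: Hit[], keyword: Hit[], now: number = Date.now()): Hit[] {
  const byId = new Map<string, Hit>();
  for (const hit of [...semantic, ...keyword]) {
    const seen = byId.get(hit.id);
    if (!seen || hit.score > seen.score) byId.set(hit.id, hit); // deduplicate by ID
  }
  return [...byId.values()]
    .filter((h) => h.timestamp >= now - NINETY_DAYS_MS) // 90-day window
    .sort((a, b) => b.score - a.score || b.timestamp - a.timestamp); // relevance, then recency
}
```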
### ChromaSync Service
The **ChromaSync** service (`src/services/sync/ChromaSync.ts`) automatically synchronizes observations to Chroma:
**When Observations Are Synced:**
1. **Session Summary** - After each session completes, all new observations synced
2. **Worker Startup** - On initialization, checks for unsynced observations
3. **Manual Trigger** - Can force sync via internal API (development only)
**What Gets Embedded:**
- Observation ID (unique identifier)
- Title (compressed learning statement)
- Narrative (detailed explanation)
- Project path (for project-specific filtering)
- Timestamp (for recency filtering)
- Concepts (semantic tags)
- File references (associated code files)
**Embedding Model:**
- Currently using Chroma's default embedding function
- Future: Configurable embedding models (e.g., OpenAI, sentence-transformers)
### Data Structure
**SQLite (Source of Truth):**
```sql
CREATE TABLE observations (
id INTEGER PRIMARY KEY,
title TEXT,
narrative TEXT,
facts TEXT,
concepts TEXT,
files TEXT,
type TEXT,
projectPath TEXT,
createdAt INTEGER
);
```
**Chroma (Vector Embeddings):**
```typescript
{
ids: ["obs_12345"],
embeddings: [[0.123, -0.456, ...]], // 384-dimensional vector
documents: ["Title: Authentication flow\nNarrative: Implemented..."],
metadatas: [{
type: "feature",
project: "claude-mem",
timestamp: 1698765432000,
concepts: "pattern,architecture"
}]
}
```
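The mapping between the two stores can be sketched as a pure function. Field names come from the schema above; the function name and return shape are hypothetical:
```typescript
// Hypothetical mapping from an SQLite observation row to a Chroma payload
interface ObservationRow {
  id: number;
  title: string;
  narrative: string;
  type: string;
  projectPath: string;
  concepts: string;
  createdAt: number; // epoch ms
}

function toChromaRecord(row: ObservationRow) {
  return {
    ids: [`obs_${row.id}`],
    documents: [`Title: ${row.title}\nNarrative: ${row.narrative}`],
    metadatas: [{
      type: row.type,
      project: row.projectPath,
      timestamp: row.createdAt,
      concepts: row.concepts,
    }],
  };
}
```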
## How Semantic Search Works
### Vector Embeddings
Text is converted to high-dimensional vectors that capture meaning:
```
"user authentication" → [0.12, -0.34, 0.56, ..., 0.78]
"login system" → [0.15, -0.32, 0.54, ..., 0.81]
"database schema" → [-0.45, 0.67, -0.23, ..., 0.12]
```
Notice: "user authentication" and "login system" have similar vectors (close in vector space), while "database schema" is distant.
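That "closeness" is cosine similarity over the vectors. A toy sketch using the 4-dimensional example values above (real embeddings have hundreds of dimensions):
```typescript
// Cosine similarity: dot(a, b) / (|a| * |b|)
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const auth = [0.12, -0.34, 0.56, 0.78];   // "user authentication"
const login = [0.15, -0.32, 0.54, 0.81];  // "login system"
const schema = [-0.45, 0.67, -0.23, 0.12]; // "database schema"

// "user authentication" is closer to "login system" than to "database schema"
console.log(cosine(auth, login) > cosine(auth, schema)); // true
```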
### Similarity Search
Chroma uses **cosine similarity** to find nearest neighbors:
```typescript
// Query: "authentication bug" embedded as a vector (illustrative values)
const queryVector = [0.14, -0.33, 0.55, /* … */ 0.79];
// Find observations with similar vectors (sketched chromadb-style call)
const results = await collection.query({
  queryEmbeddings: [queryVector],
  nResults: 10,
  where: { timestamp: { $gte: Date.now() - 90 * 24 * 60 * 60 * 1000 } }, // 90 days
});
```
**Result Ranking:**
- Higher cosine similarity = more semantically similar
- Filtered by 90-day recency window
- Combined with keyword matches from FTS5
## 90-Day Recency Filtering
Why 90 days?
**Rationale:**
- Recent context more likely relevant to current work
- Prevents very old observations from diluting results
- Balances completeness with relevance
- Reduces vector search space for faster queries
**Implementation:**
```typescript
const ninetyDaysAgo = Date.now() - (90 * 24 * 60 * 60 * 1000);
// Chroma metadata filter
where: {
timestamp: { $gte: ninetyDaysAgo }
}
// SQLite equivalent: WHERE createdAt >= ?
```
**Configurable?**
- Not currently user-configurable
- Hard-coded in `src/servers/search-server.ts`
- Future: Add `CLAUDE_MEM_RECENCY_DAYS` environment variable
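A minimal helper tying the two filters together might look like this (the helper name and return shape are illustrative, not the actual code in `search-server.ts`):
```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Build matching recency filters for both stores from a single cutoff
function recencyFilters(days: number, now: number = Date.now()) {
  const cutoff = now - days * DAY_MS;
  return {
    chromaWhere: { timestamp: { $gte: cutoff } }, // Chroma metadata filter
    sqliteParams: [cutoff],                       // bound to "WHERE createdAt >= ?"
  };
}
```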
## MCP Tool Integration
All 9 MCP search tools benefit from hybrid search:
### search_observations (Hybrid)
```typescript
// Keyword-only (v4.x)
search_observations(query: "authentication")
// Returns: Observations containing "authentication"
// Hybrid semantic + keyword (v5.x)
search_observations(query: "authentication")
// Returns: Observations with "authentication" PLUS semantically similar:
// - "login system"
// - "user credentials"
// - "session management"
```
### get_timeline_by_query (Semantic-First)
```typescript
// Uses Chroma to find best match, then builds timeline
get_timeline_by_query(
query: "when did we implement the viewer UI?",
mode: "auto",
depth_before: 10,
depth_after: 10
)
// Chroma finds: Observation #4057 "Web-Based Viewer UI for Real-Time Memory Stream"
// Returns: Timeline with 10 observations before + anchor + 10 after
```
### Benefits Across All Tools
- **find_by_concept**: Semantic similarity finds related concepts
- **find_by_file**: Finds semantically similar code changes
- **find_by_type**: Better relevance ranking within type
- **get_recent_context**: Prioritizes semantically relevant recent context
## Performance
### Benchmarks (8,279 vector documents)
| Operation | Time | Notes |
|-----------|------|-------|
| **Semantic Query** | 150-200ms | 90-day window, top 10 results |
| **Keyword Query (FTS5)** | 5-10ms | Full-text search |
| **Hybrid Query** | 160-220ms | Combined semantic + keyword |
| **Initial Sync** | 2-5 min | First-time embedding of all observations |
| **Incremental Sync** | 100-500ms | 1-10 new observations per session |
### Memory Usage
- **Chroma DB Size**: ~50MB for 8,000 observations
- **Embeddings**: 384 dimensions × 4 bytes = 1.5KB per observation
- **Metadata**: ~500 bytes per observation (project, type, timestamp)
- **Total**: ~2KB per observation in Chroma
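A back-of-envelope estimator from the numbers above (raw payload only; this excludes index overhead, which is why the on-disk database is larger):
```typescript
// Rough per-observation storage estimate, not a measurement
const EMBEDDING_BYTES = 384 * 4; // 384 float32 dimensions = 1536 B ≈ 1.5 KB
const METADATA_BYTES = 500;      // project, type, timestamp, concepts

function estimateChromaBytes(observations: number): number {
  return observations * (EMBEDDING_BYTES + METADATA_BYTES); // ≈ 2 KB each
}
```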
### Optimization Tips
1. **Reduce vector dimensions**: Use smaller embedding models (future)
2. **Adjust recency window**: Narrow to 30/60 days for faster queries
3. **Limit result count**: Request fewer results (n_results=5 vs 10)
4. **Project filtering**: Add project filter to metadata query
## Installation & Dependencies
### Python Requirement
Chroma requires Python 3.7 or newer to be installed:
**Check Python:**
```bash
python3 --version
# Should show: Python 3.7.x or higher
```
**Install Python (if needed):**
- **macOS**: `brew install python3`
- **Windows**: Download from [python.org](https://www.python.org/downloads/)
- **Linux**: `apt-get install python3` or `yum install python3`
### ChromaDB Installation
Chroma is installed automatically as an npm dependency:
```bash
npm install
# Installs: chromadb (Python package via node-gyp bindings)
```
**Manual Installation (if auto-install fails):**
```bash
pip3 install chromadb
```
### Troubleshooting Installation
**Error: "Python not found"**
```bash
# Set Python path explicitly
export PYTHON=/usr/local/bin/python3
npm install
```
**Error: "chromadb module not found"**
```bash
# Reinstall chromadb
pip3 install --upgrade chromadb
# Verify installation
python3 -c "import chromadb; print(chromadb.__version__)"
```
**Error: "node-gyp build failed"**
```bash
# Install build tools
# macOS: xcode-select --install
# Windows: npm install --global windows-build-tools
# Linux: apt-get install build-essential
```
## Configuration
### Environment Variables
Currently no user-configurable settings. Future options:
```json
// Proposed for future versions
{
"env": {
"CLAUDE_MEM_CHROMA_ENABLED": "true", // Enable/disable Chroma
"CLAUDE_MEM_CHROMA_PATH": "~/.claude-mem/chroma", // DB location
"CLAUDE_MEM_EMBEDDING_MODEL": "default", // Embedding model choice
"CLAUDE_MEM_RECENCY_DAYS": "90", // Recency window
"CLAUDE_MEM_VECTOR_DIM": "384" // Embedding dimensions
}
}
```
### Disabling Chroma (Future)
To disable semantic search and use keyword-only:
```json
{
"env": {
"CLAUDE_MEM_CHROMA_ENABLED": "false"
}
}
```
Falls back to SQLite FTS5 keyword search only.
## Database Maintenance
### Location
```
~/.claude-mem/chroma/
├── chroma.sqlite3 # Chroma metadata database
└── index/ # Vector index files
└── *.bin # Binary vector data
```
### Backup
```bash
# Backup entire Chroma directory
cp -r ~/.claude-mem/chroma ~/.claude-mem/chroma.backup
# Restore from backup
rm -rf ~/.claude-mem/chroma
cp -r ~/.claude-mem/chroma.backup ~/.claude-mem/chroma
```
### Reset Chroma (Force Resync)
```bash
# Delete Chroma database
rm -rf ~/.claude-mem/chroma
# Restart worker to trigger full resync
npm run worker:restart
# Check logs for sync progress
npm run worker:logs
```
**Note**: Resync can take 2-5 minutes for thousands of observations.
### Disk Space Management
**Chroma grows with observations:**
- 1,000 observations ≈ 5MB
- 10,000 observations ≈ 50MB
- 100,000 observations ≈ 500MB
**Cleanup old observations:**
```sql
-- Delete observations older than 1 year
-- This will trigger Chroma resync on next startup
sqlite3 ~/.claude-mem/claude-mem.db \
"DELETE FROM observations WHERE createdAt < strftime('%s', 'now', '-1 year') * 1000;"
```
## Advanced Usage
### Direct Chroma Queries (Development)
For debugging or custom queries:
```typescript
import { ChromaSync } from './services/sync/ChromaSync';
const sync = new ChromaSync();
await sync.initialize();
// Query Chroma directly
const results = await sync.query({
queryTexts: ["authentication implementation"],
nResults: 10,
where: {
type: "feature",
timestamp: { $gte: Date.now() - 90 * 24 * 60 * 60 * 1000 } // 90 days
}
});
console.log(results.ids, results.distances, results.documents);
```
### Custom Embedding Models (Future)
Chroma supports multiple embedding models:
```typescript
// Future configuration
const sync = new ChromaSync({
embeddingModel: "sentence-transformers/all-MiniLM-L6-v2", // Smaller, faster
// or: "text-embedding-ada-002" (OpenAI, requires API key)
// or: "all-mpnet-base-v2" (Higher quality, slower)
});
```
### Metadata Filtering
Chroma supports advanced metadata queries:
```typescript
// Find observations by type and project
results = await sync.query({
queryTexts: ["API design"],
where: {
$and: [
{ type: { $in: ["decision", "feature"] } },
{ project: "claude-mem" }
]
}
});
// Find recent observations
results = await sync.query({
queryTexts: ["database schema"],
where: {
timestamp: { $gte: Date.now() - 30 * 24 * 60 * 60 * 1000 } // 30 days
}
});
```
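The filter semantics above can be sketched as a local evaluator. This mimics the `$and`/`$in`/`$gte` behavior for illustration; it is not Chroma's query engine:
```typescript
type Where = Record<string, any>;

// Evaluate a Chroma-style where filter against one metadata record
function matches(meta: Record<string, any>, where: Where): boolean {
  return Object.entries(where).every(([key, cond]) => {
    if (key === "$and") return (cond as Where[]).every((w) => matches(meta, w));
    if (typeof cond === "object" && cond !== null) {
      if ("$in" in cond) return (cond.$in as any[]).includes(meta[key]);
      if ("$gte" in cond) return meta[key] >= cond.$gte;
    }
    return meta[key] === cond; // direct equality
  });
}
```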
## Comparison: Semantic vs Keyword Search
| Aspect | Semantic (Chroma) | Keyword (FTS5) |
|--------|-------------------|----------------|
| **Speed** | 150-200ms | 5-10ms |
| **Accuracy** | High (meaning-based) | Medium (exact match) |
| **Storage** | ~2KB per observation | ~500 bytes per observation |
| **Conceptual Matching** | ✅ Yes | ❌ No |
| **Exact Match** | ❌ Not guaranteed | ✅ Always |
| **Typo Tolerance** | ✅ High | ⚠️ Limited (fuzzy) |
| **Dependencies** | Python + chromadb | None (SQLite built-in) |
| **Recency Bias** | ✅ Built-in (90 days) | Manual filtering |
**Best Practice:** Use hybrid search (both) for optimal results.
## Troubleshooting
### "Chroma not found" Error
**Symptom:** Worker logs show "Chroma not available, using keyword-only search"
**Solution:**
```bash
# Check Python installation
python3 --version
# Reinstall chromadb
pip3 install chromadb
# Restart worker
npm run worker:restart
```
### Slow Query Performance
**Symptom:** Searches taking >1 second
**Solutions:**
1. Reduce recency window (edit `src/servers/search-server.ts`)
2. Limit result count (`nResults: 5` instead of 10)
3. Add project filter to narrow search space
4. Check Chroma index size (may need rebuild)
### Out of Memory Errors
**Symptom:** Worker crashes with "JavaScript heap out of memory"
**Solution:**
```bash
# Increase Node.js heap size
export NODE_OPTIONS="--max-old-space-size=4096"
# Restart worker
npm run worker:restart
```
### Sync Taking Too Long
**Symptom:** Initial Chroma sync takes >10 minutes
**Possible Causes:**
- Large number of observations (>10,000)
- Slow embedding model
- Limited CPU resources
**Solutions:**
1. Let it complete (one-time cost)
2. Delete very old observations to reduce count
3. Close resource-intensive apps during sync
## Future Enhancements
Potential improvements for future versions:
- **Configurable Recency**: User-defined recency window (30/60/90/365 days)
- **Custom Embeddings**: Choose embedding model (quality vs speed trade-off)
- **Incremental Updates**: Update existing vectors instead of full resync
- **Semantic Filters**: Search by semantic concept ("all architectural decisions")
- **Multi-Language Support**: Embeddings optimized for non-English code/docs
- **Clustering**: Auto-cluster related observations for discovery
- **Visualization**: 2D/3D visualization of vector space (similar observations near each other)
## Resources
- **ChromaDB Documentation**: https://docs.trychroma.com/
- **Source Code**: `src/services/sync/ChromaSync.ts`
- **Search Server**: `src/servers/search-server.ts`
- **Python Package**: https://pypi.org/project/chromadb/
---
**Powered by ChromaDB** | **Hybrid Semantic + Keyword Search** | **90-Day Recency Window**
@@ -1,104 +0,0 @@
# Claude-Mem Context Documentation
## What This Folder Is
This `docs/context/` folder contains **internal documentation** - planning documents, design references, audits, and work-in-progress materials that support development but are NOT user-facing.
## Folder Structure
```
docs/
├── public/ ← User-facing Mintlify docs (DO NOT put internal docs there)
│ └── *.mdx - Official documentation
└── context/ ← You are here (Internal documentation)
├── *.md - Planning docs, audits, references
├── *-plan.md - Implementation plans
├── *-audit.md - Code audits and reviews
├── agent-sdk-*.md - SDK reference materials
└── subdirs/ - Organized by topic
```
## What Belongs Here
**Internal Documentation** (`.md` format):
- Planning documents (`*-plan.md`, `*-outline.md`)
- Implementation analysis (`*-audit.md`, `*-code-reference.md`)
- Error tracking (`typescript-errors.md`)
- Design documents not ready for public docs
- PR review responses
- Reference materials (like `agent-sdk-ref.md`)
- Work-in-progress documentation
- Technical investigations and postmortems
- Architecture analysis documents
**Examples from this folder:**
- `mem-search-technical-architecture.md` - Deep technical reference
- `search-architecture-analysis.md` - Implementation analysis
- `agent-sdk-ref.md` - SDK reference for developers
- `typescript-errors.md` - Error tracking during development
- `worker-service-architecture.md` - Internal architecture notes
- `processing-indicator-audit.md` - Code audit document
## What Does NOT Belong Here
**User-Facing Documentation** goes in `/docs/public/`:
- User guides and tutorials
- Official architecture documentation
- Installation instructions
- Configuration guides
- Best practices for users
- Troubleshooting guides
**Rule of Thumb:**
- If a user would read it → `/docs/public/` (as `.mdx`)
- If only developers/contributors need it → `/docs/context/` (as `.md`)
## File Organization
### By Type
- `*-plan.md` - Implementation plans for features
- `*-audit.md` - Code audits and reviews
- `*-postmortem.md` - Analysis of issues or incidents
- `*-reference.md` - Technical reference materials
- `*-analysis.md` - Architecture or design analysis
### By Topic
- Create subdirectories for related documents
- Example: `claude-code/` for Claude Code specific docs
- Example: `architecture/` for internal architecture notes
## Development Workflow
### When to Create Context Docs
1. **Planning Phase** - Before implementing a feature
- Create `feature-name-plan.md`
- Outline implementation steps
- Document decisions and tradeoffs
2. **During Development** - Track issues and decisions
- Create `feature-name-audit.md` for code reviews
- Update `typescript-errors.md` for build issues
- Document gotchas in topic-specific files
3. **After Implementation** - Preserve knowledge
- Create `feature-name-postmortem.md` if issues occurred
- Update architecture analysis documents
- Archive plan docs (don't delete - useful for history)
### Graduating to Public Docs
When internal docs are polished enough for users:
1. Convert `.md` to `.mdx` format
2. Add Mintlify frontmatter
3. Move to appropriate `/docs/public/` subdirectory
4. Add to `docs.json` navigation
5. Keep original in `/docs/context/` for reference
## Summary
**Simple Rule**:
- `/docs/context/` = Internal docs, plans, references, audits ← YOU ARE HERE
- `/docs/public/` = Official user documentation (Mintlify .mdx files)
**Purpose**: This folder preserves development context, design decisions, and technical knowledge that helps contributors understand WHY things work the way they do, even if users don't need those details.
@@ -1,164 +0,0 @@
# CWD Context Fix - Technical Documentation
## Overview
This fix adds working directory (CWD) context propagation through the entire claude-mem pipeline, enabling the SDK agent to have spatial awareness of which directory/repository it's observing.
## Problem Statement
Previously, the SDK agent would:
- Search wrong repositories when analyzing file operations
- Report "file not found" for files that actually exist
- Lack context about which project was being worked on
- Generate inaccurate observations due to spatial confusion
## Solution
The CWD information now flows through the entire system:
```
Hook Input (cwd) → Worker API (cwd) → SessionManager (cwd) → SDK Agent (tool_cwd)
```
## Data Flow
### 1. Hook Layer (`save-hook.ts`)
```typescript
export interface PostToolUseInput {
session_id: string;
cwd: string; // ← Captured from Claude Code
tool_name: string;
tool_input: any;
tool_response: any;
}
```
The hook extracts `cwd` and includes it in the worker API request:
```typescript
body: JSON.stringify({
tool_name,
tool_input,
tool_response,
prompt_number,
cwd: cwd || '' // ← Passed to worker
})
```
### 2. Worker Service (`worker-service.ts`)
```typescript
const { tool_name, tool_input, tool_response, prompt_number, cwd } = req.body;
this.sessionManager.queueObservation(sessionDbId, {
tool_name,
tool_input,
tool_response,
prompt_number,
cwd // ← Forwarded to queue
});
```
### 3. Session Manager (`SessionManager.ts`)
```typescript
session.pendingMessages.push({
type: 'observation',
tool_name: data.tool_name,
tool_input: data.tool_input,
tool_response: data.tool_response,
prompt_number: data.prompt_number,
cwd: data.cwd // ← Included in message queue
});
```
### 4. SDK Agent (`SDKAgent.ts`)
```typescript
content: buildObservationPrompt({
id: 0,
tool_name: message.tool_name!,
tool_input: JSON.stringify(message.tool_input),
tool_output: JSON.stringify(message.tool_response),
created_at_epoch: Date.now(),
cwd: message.cwd // ← Passed to prompt builder
})
```
### 5. Prompt Generation (`prompts.ts`)
```typescript
return `<tool_used>
<tool_name>${obs.tool_name}</tool_name>
<tool_time>${new Date(obs.created_at_epoch).toISOString()}</tool_time>${obs.cwd ? `
<tool_cwd>${obs.cwd}</tool_cwd>` : ''} // ← Included in XML
<tool_input>${JSON.stringify(toolInput, null, 2)}</tool_input>
<tool_output>${JSON.stringify(toolOutput, null, 2)}</tool_output>
</tool_used>`;
```
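Put together, a self-contained version of that builder looks roughly like this (the `Obs` shape is assembled from the excerpts above and is an assumption):
```typescript
interface Obs {
  tool_name: string;
  tool_input: string;   // JSON string
  tool_output: string;  // JSON string
  created_at_epoch: number;
  cwd?: string;         // optional working directory
}

// Build the <tool_used> XML block, including <tool_cwd> only when cwd is present
function buildObservationPrompt(obs: Obs): string {
  const toolInput = JSON.parse(obs.tool_input);
  const toolOutput = JSON.parse(obs.tool_output);
  return `<tool_used>
<tool_name>${obs.tool_name}</tool_name>
<tool_time>${new Date(obs.created_at_epoch).toISOString()}</tool_time>${obs.cwd ? `
<tool_cwd>${obs.cwd}</tool_cwd>` : ''}
<tool_input>${JSON.stringify(toolInput, null, 2)}</tool_input>
<tool_output>${JSON.stringify(toolOutput, null, 2)}</tool_output>
</tool_used>`;
}
```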
## SDK Agent Prompt Changes
The init prompt now includes a "SPATIAL AWARENESS" section:
```
SPATIAL AWARENESS: Tool executions include the working directory (tool_cwd) to help you understand:
- Which repository/project is being worked on
- Where files are located relative to the project root
- How to match requested paths to actual execution paths
```
## Example Usage
When a user executes a read operation in `/home/user/my-project`:
```xml
<tool_used>
<tool_name>ReadTool</tool_name>
<tool_time>2025-11-10T19:18:03.065Z</tool_time>
<tool_cwd>/home/user/my-project</tool_cwd>
<tool_input>
{
"path": "src/index.ts"
}
</tool_input>
<tool_output>
{
"content": "export default..."
}
</tool_output>
</tool_used>
```
The SDK agent now knows:
1. The operation happened in `/home/user/my-project`
2. The file `src/index.ts` is relative to that directory
3. Which repository context to search when generating observations
## Testing
8 comprehensive tests validate the CWD propagation:
```bash
npx tsx --test tests/cwd-propagation.test.ts
```
All tests verify:
- Type interfaces include `cwd` fields
- Hook extracts and passes `cwd`
- Worker accepts and forwards `cwd`
- SDK agent includes `cwd` in prompts
- End-to-end flow is correct
## Benefits
1. **Spatial Awareness**: SDK agent knows which directory/repository it's observing
2. **Accurate Path Matching**: Can verify if requested paths match executed paths
3. **Better Summaries**: Won't search wrong repositories or report false negatives
4. **Works with All Models**: Even Haiku benefits from correct context (no need for Opus workaround)
## Backward Compatibility
- `cwd` is optional in all interfaces (`cwd?: string`)
- Missing `cwd` values are handled gracefully (defaults to empty string)
- Existing observations without `cwd` continue to work
- No database migration required (CWD is transient, not persisted)
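The graceful-degradation rule can be sketched in a few lines (names are illustrative, not the real types):
```typescript
// cwd is optional at every layer and degrades to an empty string
interface ObservationMessage {
  tool_name: string;
  cwd?: string; // optional for backward compatibility
}

function normalizeCwd(msg: ObservationMessage): string {
  return msg.cwd ?? ""; // missing cwd handled gracefully
}
```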
## Related Issues
Fixes issue #73 (CWD context missing from SDK agent)
@@ -1,219 +0,0 @@
# Response to PR Review #47
## Executive Summary
Thank you for the thorough review. Most of the "issues" identified are actually **intentional architectural decisions** made to solve production failures. The comprehensive analysis docs (JUST-FUCKING-RUN-IT.md, LINE-BY-LINE-CASCADING-BULLSHIT.md) document why these changes were necessary.
However, you've identified **2 legitimate issues** that need fixing:
1. **Race condition in worker startup** - Valid concern, needs fixing
2. **Watch mode in production** - Appears to be unintentional leftover from development
The other concerns are **working as intended** based on documented architectural decisions.
---
## Detailed Response to Each Concern
### ⚠️ Issue #1: Race Condition in Worker Health Check - **VALID CONCERN**
**Review Comment**: "The spawn() call inside the close event handler is non-blocking, but the function returns immediately. Hooks may attempt HTTP requests before worker has started."
**Our Response**: **You're absolutely right**. This is a legitimate race condition we need to fix.
**However**, the suggested fixes (async/await health check, retry loops) are exactly what we intentionally removed because they were causing production failures (see Observation #3602, #3600).
**Proposed Solution**:
The hooks already have proper error handling for `ECONNREFUSED` with actionable user messages:
```typescript
if (error.cause?.code === 'ECONNREFUSED' || error.name === 'TimeoutError' || error.message.includes('fetch failed')) {
throw new Error("There's a problem with the worker. If you just updated, type `pm2 restart claude-mem-worker` in your terminal to continue");
}
```
We should either:
1. Document this as expected behavior (fire-and-forget spawn)
2. Add a single synchronous `pm2 list` check after spawn to verify startup
3. Keep the current approach and rely on hook error messages
**We will NOT re-add**: Retry loops, health check polling, or arbitrary delays. Those caused the 100% failure rate we just fixed.
---
### ⚠️ Issue #2: Removed Health Endpoint Information - **INTENTIONAL**
**Review Comment**: "This removes useful debugging information. When troubleshooting production issues, knowing the PID, active sessions count, and port would be valuable."
**Our Documentation**:
- **Observation #3616**: "Simplified Health Check Endpoint to Minimal Response"
- **Observation #3601**: "Minimum Parameters = Minimum Bugs"
- **Observation #3600**: "Comprehensive Analysis of Cascading Architectural Problems"
**Why We Did This**:
1. **HTTP 200 = Alive**: If the endpoint responds, the worker is healthy. Period.
2. **Diagnostic fields provided no actionable value**: PID, activeSessions, chromaSynced didn't help debug the actual production failures
3. **Part of 87% code reduction**: worker-utils.ts went from 113 lines → 15 lines
4. **Health checks were hiding real problems**: Retry logic masked that startup sequence was broken
**Original Problem**:
- Worker startup: 4-5 seconds (actual)
- Health check timeout: 3 seconds (configured)
- Result: **100% user failure rate**
The detailed health response didn't help diagnose this - fixing the startup sequence (HTTP server first) did.
**Response**: **Will not change**. The health endpoint serves one purpose: availability signal. Use PM2 commands for diagnostics:
- `pm2 list` - See PID, status, memory
- `pm2 logs claude-mem-worker` - See application logs
- `npm run worker:logs` - Convenience wrapper
---
### ⚠️ Issue #3: Auto-Session Creation Without Validation - **NEEDS FIXING**
**Review Comment**: "Uses non-null assertion (dbSession!) without checking if dbSession is actually null. If getSessionById() returns null, this will throw at runtime."
**Our Response**: **You're absolutely right**. This is a legitimate bug.
**Action Required**: Add null checks to `handleObservation` and `handleSummarize` like already exist in `handleInit`:
```typescript
const dbSession = db.getSessionById(sessionDbId);
if (!dbSession) {
db.close();
res.status(404).json({ error: 'Session not found in database' });
return;
}
```
**This needs to be fixed before merge.**
---
### ⚠️ Issue #4: Removed Observation Counter - **INTENTIONAL**
**Review Comment**: "Was this used for generating correlation IDs for logging? If so, is there now no way to correlate observations within a session for debugging?"
**Our Documentation**:
- **Observation #3621-3627**: Complete removal of observation counter and correlation IDs
- **Observation #3602**: "Architectural Decision: Remove Health Checks and Arbitrary Delays"
- **Observation #3612**: "Worker Service Simplification Strategy"
**Why We Removed It**:
1. **Over-engineering**: Provided per-observation tracking when session-level identification was sufficient
2. **Part of cascading complexity**: Correlation IDs were monitoring infrastructure for complexity that shouldn't exist
3. **Session-level debugging is sufficient**: Most issues diagnosed by knowing which session, not which observation #5 within that session
4. **Database IDs provide uniqueness**: Once stored, observations have DB IDs for precise identification
**The Problem It Was Solving (That No Longer Needs Solving)**:
- Tracking individual observations through worker pipeline
- Monitoring Chroma sync success/failure per observation
- Detailed per-observation timing metrics
**Why That's Unnecessary**:
- Session-level logging is sufficient for debugging
- Database IDs provide uniqueness after storage
- The monitoring was masking real problems (startup sequence)
**Response**: **Will not change**. This was part of the simplification strategy that fixed production failures.
---
### ⚠️ Issue #5: PM2 Watch Mode in Production - **VALID CONCERN**
**Review Comment**: "Watch mode causes PM2 to restart the process whenever files change. This is useful during development but potentially problematic in production."
**Our Investigation**:
- **Observation #3631**: Documents what watch mode does, but **no observation documents WHY we enabled it**
- **Observation #3611**: PM2 config was "drastically simplified" by removing 21 unnecessary parameters
- **Watch mode was kept** during this aggressive simplification
**Conclusion**: **This appears to be unintentional** - likely enabled for development and inadvertently left enabled.
**Action Required**: Either:
1. **Disable watch mode** (recommended) - Users aren't developing, they're using the plugin
2. **Document it as intentional** if there's a reason we want auto-restart on file changes
**This should be addressed before merge** - likely by disabling watch mode.
---
### ⚠️ Issue #6: Duplicate Port Constant - **ACKNOWLEDGED**
**Review Comment**: "FIXED_PORT constant is defined in 5 places. Creates maintenance burden."
**Our Response**: **Fair point**. This is technical debt we can clean up.
**However**, it's low priority because:
- Port is unlikely to change
- All values are currently consistent
- Not causing production issues
**Action**: Add to backlog for post-merge cleanup. Export from worker-utils.ts and import elsewhere.
---
## Summary of Actions
### Must Fix Before Merge:
1. **Add null checks to auto-session creation** in handleObservation and handleSummarize
2. **Decide on watch mode** - Disable unless there's a documented reason to keep it
### Will Not Change (Intentional Decisions):
1. **Health endpoint simplification** - Part of solving 100% failure rate
2. **Removed observation counter** - Part of simplification strategy
3. **Removed health check system** - Was causing production failures
4. **Fire-and-forget worker spawn** - Hooks have proper error handling
### Race Condition Discussion Needed:
1. 🤔 **Worker startup race condition** - Valid concern, but retry loops caused the original failures. Options:
- Keep current approach (hooks handle ECONNREFUSED gracefully)
- Add single synchronous `pm2 list` check after spawn
- Document as expected behavior
### Nice to Have (Post-Merge):
1. 📋 **Consolidate FIXED_PORT constant** - Technical debt cleanup
---
## Key Documentation References
The architectural decisions are comprehensively documented in:
1. **JUST-FUCKING-RUN-IT.md** (Observation #3602)
- Architectural decision to remove health checks
- Philosophy: Trust PM2, let HTTP timeouts be the health check
2. **LINE-BY-LINE-CASCADING-BULLSHIT.md** (Observation #3600)
- Root cause analysis of how health checks caused 100% failure rate
- Documents cascade from arbitrary 3000ms timeout → retry loops → race conditions
3. **MINIMUM-PARAMETERS.md** (Observation #3601)
- Quantified impact: 21 unnecessary PM2 parameters, ~160 lines deleted
- Philosophy: "Minimum parameters = minimum bugs"
4. **STUPID-SHIT-THAT-BROKE-PRODUCTION.md** (Observation #3597)
- 8 critical issues causing 100% user failure rate
- Includes worker crashing on Chroma failures despite data already in SQLite
These documents explain **why** the simplifications were necessary - they weren't arbitrary removal of useful features, they were targeted fixes for production failures.
---
## Production Context
**Before This PR**:
- 100% user failure rate after v4.x release
- Worker startup took 4-5 seconds but health checks timed out at 3 seconds
- `stdio: 'ignore'` eliminated all debugging visibility
- Worker crashed on Chroma failures despite data safely in SQLite
- ChromaSync initialized in constructor, blocking HTTP server
- 113 lines of health check code with retry loops masking real problems
**After This PR**:
- HTTP server starts immediately
- Worker stays alive through Chroma failures (graceful degradation)
- Errors are visible (`stdio: 'inherit'`)
- Worker-utils.ts: 113 lines → 15 lines (87% reduction)
- Hooks have proper error handling with actionable user messages
- System works with just SQLite FTS5, Chroma is optional enhancement
The "removed observability" was actually **removed complexity that was hiding problems**, not complexity that was helping diagnose them.
# Agent Skills in the SDK
> Extend Claude with specialized capabilities using Agent Skills in the Claude Agent SDK
## Overview
Agent Skills extend Claude with specialized capabilities that Claude autonomously invokes when relevant. Skills are packaged as `SKILL.md` files containing instructions, descriptions, and optional supporting resources.
For comprehensive information about Skills, including benefits, architecture, and authoring guidelines, see the [Agent Skills overview](/en/docs/agents-and-tools/agent-skills/overview).
## How Skills Work with the SDK
When using the Claude Agent SDK, Skills are:
1. **Defined as filesystem artifacts**: Created as `SKILL.md` files in specific directories (`.claude/skills/`)
2. **Loaded from filesystem**: Skills are loaded from configured filesystem locations. You must specify `settingSources` (TypeScript) or `setting_sources` (Python) to load Skills from the filesystem
3. **Automatically discovered**: Once filesystem settings are loaded, Skill metadata is discovered at startup from user and project directories; full content is loaded when triggered
4. **Model-invoked**: Claude autonomously chooses when to use them based on context
5. **Enabled via allowed\_tools**: Add `"Skill"` to your `allowed_tools` to enable Skills
Unlike subagents (which can be defined programmatically), Skills must be created as filesystem artifacts. The SDK does not provide a programmatic API for registering Skills.
<Note>
**Default behavior**: By default, the SDK does not load any filesystem settings. To use Skills, you must explicitly configure `settingSources: ['user', 'project']` (TypeScript) or `setting_sources=["user", "project"]` (Python) in your options.
</Note>
## Using Skills with the SDK
To use Skills with the SDK, you need to:
1. Include `"Skill"` in your `allowed_tools` configuration
2. Configure `settingSources`/`setting_sources` to load Skills from the filesystem
Once configured, Claude automatically discovers Skills from the specified directories and invokes them when relevant to the user's request.
<CodeGroup>
```python Python theme={null}
import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions
async def main():
options = ClaudeAgentOptions(
cwd="/path/to/project", # Project with .claude/skills/
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill", "Read", "Write", "Bash"] # Enable Skill tool
)
async for message in query(
prompt="Help me process this PDF document",
options=options
):
print(message)
asyncio.run(main())
```
```typescript TypeScript theme={null}
import { query } from "@anthropic-ai/claude-agent-sdk";
for await (const message of query({
prompt: "Help me process this PDF document",
options: {
cwd: "/path/to/project", // Project with .claude/skills/
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill", "Read", "Write", "Bash"] // Enable Skill tool
}
})) {
console.log(message);
}
```
</CodeGroup>
## Skill Locations
Skills are loaded from filesystem directories based on your `settingSources`/`setting_sources` configuration:
* **Project Skills** (`.claude/skills/`): Shared with your team via git - loaded when `setting_sources` includes `"project"`
* **User Skills** (`~/.claude/skills/`): Personal Skills across all projects - loaded when `setting_sources` includes `"user"`
* **Plugin Skills**: Bundled with installed Claude Code plugins
## Creating Skills
Skills are defined as directories containing a `SKILL.md` file with YAML frontmatter and Markdown content. The `description` field determines when Claude invokes your Skill.
**Example directory structure**:
```bash theme={null}
.claude/skills/processing-pdfs/
└── SKILL.md
```
For complete guidance on creating Skills, including SKILL.md structure, multi-file Skills, and examples, see:
* [Agent Skills in Claude Code](https://code.claude.com/docs/skills): Complete guide with examples
* [Agent Skills Best Practices](/en/docs/agents-and-tools/agent-skills/best-practices): Authoring guidelines and naming conventions
## Tool Restrictions
<Note>
The `allowed-tools` frontmatter field in SKILL.md is only supported when using Claude Code CLI directly. **It does not apply when using Skills through the SDK**.
When using the SDK, control tool access through the main `allowedTools` option in your query configuration.
</Note>
To restrict tools for Skills in SDK applications, use the `allowedTools` option:
<Note>
Import statements from the first example are assumed in the following code snippets.
</Note>
<CodeGroup>
```python Python theme={null}
options = ClaudeAgentOptions(
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill", "Read", "Grep", "Glob"] # Restricted toolset
)
async for message in query(
prompt="Analyze the codebase structure",
options=options
):
print(message)
```
```typescript TypeScript theme={null}
// Skills can only use Read, Grep, and Glob tools
for await (const message of query({
prompt: "Analyze the codebase structure",
options: {
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill", "Read", "Grep", "Glob"] // Restricted toolset
}
})) {
console.log(message);
}
```
</CodeGroup>
## Discovering Available Skills
To see which Skills are available in your SDK application, simply ask Claude:
<CodeGroup>
```python Python theme={null}
options = ClaudeAgentOptions(
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill"]
)
async for message in query(
prompt="What Skills are available?",
options=options
):
print(message)
```
```typescript TypeScript theme={null}
for await (const message of query({
prompt: "What Skills are available?",
options: {
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill"]
}
})) {
console.log(message);
}
```
</CodeGroup>
Claude will list the available Skills based on your current working directory and installed plugins.
## Testing Skills
Test Skills by asking questions that match their descriptions:
<CodeGroup>
```python Python theme={null}
options = ClaudeAgentOptions(
cwd="/path/to/project",
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill", "Read", "Bash"]
)
async for message in query(
prompt="Extract text from invoice.pdf",
options=options
):
print(message)
```
```typescript TypeScript theme={null}
for await (const message of query({
prompt: "Extract text from invoice.pdf",
options: {
cwd: "/path/to/project",
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill", "Read", "Bash"]
}
})) {
console.log(message);
}
```
</CodeGroup>
Claude automatically invokes the relevant Skill if the description matches your request.
## Troubleshooting
### Skills Not Found
**Check settingSources configuration**: Skills are only loaded when you explicitly configure `settingSources`/`setting_sources`. This is the most common issue:
<CodeGroup>
```python Python theme={null}
# Wrong - Skills won't be loaded
options = ClaudeAgentOptions(
allowed_tools=["Skill"]
)
# Correct - Skills will be loaded
options = ClaudeAgentOptions(
setting_sources=["user", "project"], # Required to load Skills
allowed_tools=["Skill"]
)
```
```typescript TypeScript theme={null}
// Wrong - Skills won't be loaded
const options = {
allowedTools: ["Skill"]
};
// Correct - Skills will be loaded
const options = {
settingSources: ["user", "project"], // Required to load Skills
allowedTools: ["Skill"]
};
```
</CodeGroup>
For more details on `settingSources`/`setting_sources`, see the [TypeScript SDK reference](/en/docs/agent-sdk/typescript#settingsource) or [Python SDK reference](/en/docs/agent-sdk/python#settingsource).
**Check working directory**: The SDK loads Skills relative to the `cwd` option. Ensure it points to a directory containing `.claude/skills/`:
<CodeGroup>
```python Python theme={null}
# Ensure your cwd points to the directory containing .claude/skills/
options = ClaudeAgentOptions(
cwd="/path/to/project", # Must contain .claude/skills/
setting_sources=["user", "project"], # Required to load Skills
allowed_tools=["Skill"]
)
```
```typescript TypeScript theme={null}
// Ensure your cwd points to the directory containing .claude/skills/
const options = {
cwd: "/path/to/project", // Must contain .claude/skills/
settingSources: ["user", "project"], // Required to load Skills
allowedTools: ["Skill"]
};
```
</CodeGroup>
See the "Using Skills with the SDK" section above for the complete pattern.
**Verify filesystem location**:
```bash theme={null}
# Check project Skills
ls .claude/skills/*/SKILL.md
# Check personal Skills
ls ~/.claude/skills/*/SKILL.md
```
### Skill Not Being Used
**Check the Skill tool is enabled**: Confirm `"Skill"` is in your `allowedTools`.
**Check the description**: Ensure it's specific and includes relevant keywords. See [Agent Skills Best Practices](/en/docs/agents-and-tools/agent-skills/best-practices#writing-effective-descriptions) for guidance on writing effective descriptions.
### Additional Troubleshooting
For general Skills troubleshooting (YAML syntax, debugging, etc.), see the [Claude Code Skills troubleshooting section](https://code.claude.com/docs/skills#troubleshooting).
## Related Documentation
### Skills Guides
* [Agent Skills in Claude Code](https://code.claude.com/docs/skills): Complete Skills guide with creation, examples, and troubleshooting
* [Agent Skills Overview](/en/docs/agents-and-tools/agent-skills/overview): Conceptual overview, benefits, and architecture
* [Agent Skills Best Practices](/en/docs/agents-and-tools/agent-skills/best-practices): Authoring guidelines for effective Skills
* [Agent Skills Cookbook](https://github.com/anthropics/claude-cookbooks/tree/main/skills): Example Skills and templates
### SDK Resources
* [Subagents in the SDK](/en/docs/agent-sdk/subagents): Similar filesystem-based agents with programmatic options
* [Slash Commands in the SDK](/en/docs/agent-sdk/slash-commands): User-invoked commands
* [SDK Overview](/en/docs/agent-sdk/overview): General SDK concepts
* [TypeScript SDK Reference](/en/docs/agent-sdk/typescript): Complete API documentation
* [Python SDK Reference](/en/docs/agent-sdk/python): Complete API documentation
# Agent Skills
> Create, manage, and share Skills to extend Claude's capabilities in Claude Code.
This guide shows you how to create, use, and manage Agent Skills in Claude Code. Skills are modular capabilities that extend Claude's functionality through organized folders containing instructions, scripts, and resources.
## Prerequisites
* Claude Code version 1.0 or later
* Basic familiarity with [Claude Code](/en/quickstart)
## What are Agent Skills?
Agent Skills package expertise into discoverable capabilities. Each Skill consists of a `SKILL.md` file with instructions that Claude reads when relevant, plus optional supporting files like scripts and templates.
**How Skills are invoked**: Skills are **model-invoked**—Claude autonomously decides when to use them based on your request and the Skill's description. This is different from slash commands, which are **user-invoked** (you explicitly type `/command` to trigger them).
**Benefits**:
* Extend Claude's capabilities for your specific workflows
* Share expertise across your team via git
* Reduce repetitive prompting
* Compose multiple Skills for complex tasks
Learn more in the [Agent Skills overview](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview).
<Note>
For a deep dive into the architecture and real-world applications of Agent Skills, read our engineering blog: [Equipping agents for the real world with Agent Skills](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills).
</Note>
## Create a Skill
Skills are stored as directories containing a `SKILL.md` file.
### Personal Skills
Personal Skills are available across all your projects. Store them in `~/.claude/skills/`:
```bash theme={null}
mkdir -p ~/.claude/skills/my-skill-name
```
**Use personal Skills for**:
* Your individual workflows and preferences
* Experimental Skills you're developing
* Personal productivity tools
### Project Skills
Project Skills are shared with your team. Store them in `.claude/skills/` within your project:
```bash theme={null}
mkdir -p .claude/skills/my-skill-name
```
**Use project Skills for**:
* Team workflows and conventions
* Project-specific expertise
* Shared utilities and scripts
Project Skills are checked into git and automatically available to team members.
### Plugin Skills
Skills can also come from [Claude Code plugins](/en/plugins). Plugins may bundle Skills that are automatically available when the plugin is installed. These Skills work the same way as personal and project Skills.
## Write SKILL.md
Create a `SKILL.md` file with YAML frontmatter and Markdown content:
```yaml theme={null}
---
name: your-skill-name
description: Brief description of what this Skill does and when to use it
---
# Your Skill Name
## Instructions
Provide clear, step-by-step guidance for Claude.
## Examples
Show concrete examples of using this Skill.
```
**Field requirements**:
* `name`: Must use lowercase letters, numbers, and hyphens only (max 64 characters)
* `description`: Brief description of what the Skill does and when to use it (max 1024 characters)
The `description` field is critical for Claude to discover when to use your Skill. It should include both what the Skill does and when Claude should use it.
See the [best practices guide](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices) for complete authoring guidance including validation rules.
## Add supporting files
Create additional files alongside SKILL.md:
```
my-skill/
├── SKILL.md (required)
├── reference.md (optional documentation)
├── examples.md (optional examples)
├── scripts/
│ └── helper.py (optional utility)
└── templates/
└── template.txt (optional template)
```
Reference these files from SKILL.md:
````markdown theme={null}
For advanced usage, see [reference.md](reference.md).
Run the helper script:
```bash
python scripts/helper.py input.txt
```
````
Claude reads these files only when needed, using progressive disclosure to manage context efficiently.
## Restrict tool access with allowed-tools
Use the `allowed-tools` frontmatter field to limit which tools Claude can use when a Skill is active:
```yaml theme={null}
---
name: safe-file-reader
description: Read files without making changes. Use when you need read-only file access.
allowed-tools: Read, Grep, Glob
---
# Safe File Reader
This Skill provides read-only file access.
## Instructions
1. Use Read to view file contents
2. Use Grep to search within files
3. Use Glob to find files by pattern
```
When this Skill is active, Claude can only use the specified tools (Read, Grep, Glob) without needing to ask for permission. This is useful for:
* Read-only Skills that shouldn't modify files
* Skills with limited scope (e.g., only data analysis, no file writing)
* Security-sensitive workflows where you want to restrict capabilities
If `allowed-tools` is not specified, Claude will ask for permission to use tools as normal, following the standard permission model.
<Note>
`allowed-tools` is only supported for Skills in Claude Code.
</Note>
## View available Skills
Skills are automatically discovered by Claude from three sources:
* Personal Skills: `~/.claude/skills/`
* Project Skills: `.claude/skills/`
* Plugin Skills: bundled with installed plugins
**To view all available Skills**, ask Claude directly:
```
What Skills are available?
```
or
```
List all available Skills
```
This will show all Skills from all sources, including plugin Skills.
**To inspect a specific Skill**, you can also check the filesystem:
```bash theme={null}
# List personal Skills
ls ~/.claude/skills/
# List project Skills (if in a project directory)
ls .claude/skills/
# View a specific Skill's content
cat ~/.claude/skills/my-skill/SKILL.md
```
## Test a Skill
After creating a Skill, test it by asking questions that match your description.
**Example**: If your description mentions "PDF files":
```
Can you help me extract text from this PDF?
```
Claude autonomously decides to use your Skill if it matches the request—you don't need to explicitly invoke it. The Skill activates automatically based on the context of your question.
## Debug a Skill
If Claude doesn't use your Skill, check these common issues:
### Make description specific
**Too vague**:
```yaml theme={null}
description: Helps with documents
```
**Specific**:
```yaml theme={null}
description: Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
```
Include both what the Skill does and when to use it in the description.
### Verify file path
**Personal Skills**: `~/.claude/skills/skill-name/SKILL.md`
**Project Skills**: `.claude/skills/skill-name/SKILL.md`
Check the file exists:
```bash theme={null}
# Personal
ls ~/.claude/skills/my-skill/SKILL.md
# Project
ls .claude/skills/my-skill/SKILL.md
```
### Check YAML syntax
Invalid YAML prevents the Skill from loading. Verify the frontmatter:
```bash theme={null}
cat SKILL.md | head -n 10
```
Ensure:
* Opening `---` on line 1
* Closing `---` before Markdown content
* Valid YAML syntax (no tabs, correct indentation)
### View errors
Run Claude Code with debug mode to see Skill loading errors:
```bash theme={null}
claude --debug
```
## Share Skills with your team
**Recommended approach**: Distribute Skills through [plugins](/en/plugins).
To share Skills via plugin:
1. Create a plugin with Skills in the `skills/` directory
2. Add the plugin to a marketplace
3. Team members install the plugin
For complete instructions, see [Add Skills to your plugin](/en/plugins#add-skills-to-your-plugin).
You can also share Skills directly through project repositories:
### Step 1: Add Skill to your project
Create a project Skill:
```bash theme={null}
mkdir -p .claude/skills/team-skill
# Create SKILL.md
```
### Step 2: Commit to git
```bash theme={null}
git add .claude/skills/
git commit -m "Add team Skill for PDF processing"
git push
```
### Step 3: Team members get Skills automatically
When team members pull the latest changes, Skills are immediately available:
```bash theme={null}
git pull
claude # Skills are now available
```
## Update a Skill
Edit SKILL.md directly:
```bash theme={null}
# Personal Skill
code ~/.claude/skills/my-skill/SKILL.md
# Project Skill
code .claude/skills/my-skill/SKILL.md
```
Changes take effect the next time you start Claude Code. If Claude Code is already running, restart it to load the updates.
## Remove a Skill
Delete the Skill directory:
```bash theme={null}
# Personal
rm -rf ~/.claude/skills/my-skill
# Project
rm -rf .claude/skills/my-skill
git commit -m "Remove unused Skill"
```
## Best practices
### Keep Skills focused
One Skill should address one capability:
**Focused**:
* "PDF form filling"
* "Excel data analysis"
* "Git commit messages"
**Too broad**:
* "Document processing" (split into separate Skills)
* "Data tools" (split by data type or operation)
### Write clear descriptions
Help Claude discover when to use Skills by including specific triggers in your description:
**Clear**:
```yaml theme={null}
description: Analyze Excel spreadsheets, create pivot tables, and generate charts. Use when working with Excel files, spreadsheets, or analyzing tabular data in .xlsx format.
```
**Vague**:
```yaml theme={null}
description: For files
```
### Test with your team
Have teammates use Skills and provide feedback:
* Does the Skill activate when expected?
* Are the instructions clear?
* Are there missing examples or edge cases?
### Document Skill versions
You can document Skill versions in your SKILL.md content to track changes over time. Add a version history section:
```markdown theme={null}
# My Skill
## Version History
- v2.0.0 (2025-10-01): Breaking changes to API
- v1.1.0 (2025-09-15): Added new features
- v1.0.0 (2025-09-01): Initial release
```
This helps team members understand what changed between versions.
## Troubleshooting
### Claude doesn't use my Skill
**Symptom**: You ask a relevant question but Claude doesn't use your Skill.
**Check**: Is the description specific enough?
Vague descriptions make discovery difficult. Include both what the Skill does and when to use it, with key terms users would mention.
**Too generic**:
```yaml theme={null}
description: Helps with data
```
**Specific**:
```yaml theme={null}
description: Analyze Excel spreadsheets, generate pivot tables, create charts. Use when working with Excel files, spreadsheets, or .xlsx files.
```
**Check**: Is the YAML valid?
Run validation to check for syntax errors:
```bash theme={null}
# View frontmatter
cat .claude/skills/my-skill/SKILL.md | head -n 15
# Check for common issues
# - Missing opening or closing ---
# - Tabs instead of spaces
# - Unquoted strings with special characters
```
**Check**: Is the Skill in the correct location?
```bash theme={null}
# Personal Skills
ls ~/.claude/skills/*/SKILL.md
# Project Skills
ls .claude/skills/*/SKILL.md
```
### Skill has errors
**Symptom**: The Skill loads but doesn't work correctly.
**Check**: Are dependencies available?
Claude will automatically install required dependencies (or ask for permission to install them) when it needs them.
**Check**: Do scripts have execute permissions?
```bash theme={null}
chmod +x .claude/skills/my-skill/scripts/*.py
```
**Check**: Are file paths correct?
Use forward slashes (Unix style) in all paths:
**Correct**: `scripts/helper.py`
**Wrong**: `scripts\helper.py` (Windows style)
### Multiple Skills conflict
**Symptom**: Claude uses the wrong Skill or seems confused between similar Skills.
**Be specific in descriptions**: Help Claude choose the right Skill by using distinct trigger terms in your descriptions.
Instead of:
```yaml theme={null}
# Skill 1
description: For data analysis
# Skill 2
description: For analyzing data
```
Use:
```yaml theme={null}
# Skill 1
description: Analyze sales data in Excel files and CRM exports. Use for sales reports, pipeline analysis, and revenue tracking.
# Skill 2
description: Analyze log files and system metrics data. Use for performance monitoring, debugging, and system diagnostics.
```
## Examples
### Simple Skill (single file)
```
commit-helper/
└── SKILL.md
```
```yaml theme={null}
---
name: generating-commit-messages
description: Generates clear commit messages from git diffs. Use when writing commit messages or reviewing staged changes.
---
# Generating Commit Messages
## Instructions
1. Run `git diff --staged` to see changes
2. I'll suggest a commit message with:
- Summary under 50 characters
- Detailed description
- Affected components
## Best practices
- Use present tense
- Explain what and why, not how
```
### Skill with tool permissions
```
code-reviewer/
└── SKILL.md
```
```yaml theme={null}
---
name: code-reviewer
description: Review code for best practices and potential issues. Use when reviewing code, checking PRs, or analyzing code quality.
allowed-tools: Read, Grep, Glob
---
# Code Reviewer
## Review checklist
1. Code organization and structure
2. Error handling
3. Performance considerations
4. Security concerns
5. Test coverage
## Instructions
1. Read the target files using Read tool
2. Search for patterns using Grep
3. Find related files using Glob
4. Provide detailed feedback on code quality
```
### Multi-file Skill
```
pdf-processing/
├── SKILL.md
├── FORMS.md
├── REFERENCE.md
└── scripts/
├── fill_form.py
└── validate.py
```
**SKILL.md**:
````yaml theme={null}
---
name: pdf-processing
description: Extract text, fill forms, merge PDFs. Use when working with PDF files, forms, or document extraction. Requires pypdf and pdfplumber packages.
---
# PDF Processing
## Quick start
Extract text:
```python
import pdfplumber
with pdfplumber.open("doc.pdf") as pdf:
text = pdf.pages[0].extract_text()
```
For form filling, see [FORMS.md](FORMS.md).
For detailed API reference, see [REFERENCE.md](REFERENCE.md).
## Requirements
Packages must be installed in your environment:
```bash
pip install pypdf pdfplumber
```
````
<Note>
List required packages in the description. Packages must be installed in your environment before Claude can use them.
</Note>
Claude loads additional files only when needed.
## Next steps
<CardGroup cols={2}>
<Card title="Authoring best practices" icon="lightbulb" href="https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices">
Write Skills that Claude can use effectively
</Card>
<Card title="Agent Skills overview" icon="book" href="https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview">
Learn how Skills work across Claude products
</Card>
<Card title="Use Skills in the Agent SDK" icon="cube" href="https://docs.claude.com/en/docs/agent-sdk/skills">
Use Skills programmatically with TypeScript and Python
</Card>
<Card title="Get started with Agent Skills" icon="rocket" href="https://docs.claude.com/en/docs/agents-and-tools/agent-skills/quickstart">
Create your first Skill
</Card>
</CardGroup>
# Claude-Mem Architecture v3 to v4 Plan (✅ Completed)
This file exists as a reference to explain the path forward from v3 to v4.
## Core Purpose
Create a lightweight, hook-driven memory system that captures important context during Claude Code sessions and makes it available in future sessions.
**Principles:**
- Hooks should be fast and non-blocking
- SDK agent synthesizes observations, not just stores raw data
- Storage should be simple and queryable
- Users should never notice the memory system working
---
## Understanding the Foundation
### What Claude Code Hooks Actually Do
**SessionStart Hook:**
- Runs when Claude Code starts or resumes
- Can inject context via stdout (plain text) OR JSON `additionalContext`
- This is how we show "What's new" to Claude
**UserPromptSubmit Hook:**
- Runs BEFORE Claude processes the user's message
- Can inject context via stdout OR JSON `additionalContext`
- This is where we initialize per-session tracking
**PostToolUse Hook:**
- Runs AFTER each tool completes successfully
- Gets both tool input and output
- Runs in PARALLEL with other matching hooks
- This is where we observe what Claude is doing
**Stop Hook:**
- Runs when main agent finishes (NOT on user interrupt)
- This is where we finalize the session
- Summary should be a structured response that answers the following:
- What did user request?
- What did you investigate?
- What did you learn?
- What did you do?
- What's next?
- Files read
- Files edited
- Notes
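A hedged sketch of how those answers might be expressed as a structured block for the worker to parse (the tag names and content are hypothetical, not the shipped format):

```xml
<summary>
  <request>Fix the login redirect bug</request>
  <investigated>Auth middleware and session cookie handling</investigated>
  <learned>The redirect loop was caused by a stale session cookie</learned>
  <completed>Patched the cookie expiry check in the middleware</completed>
  <next_steps>Add a regression test for expired sessions</next_steps>
  <files_read>src/middleware/auth.ts</files_read>
  <files_edited>src/middleware/auth.ts</files_edited>
  <notes>None</notes>
</summary>
```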
### How SDK Streaming Actually Works
**Streaming Input Mode (what we need):**
- Persistent session with AsyncGenerator
- Can queue multiple messages
- Supports interruption via `interrupt()` method
- Natural multi-turn conversations
- The SDK maintains conversation state
**Critical insight:** We use "Streaming Input Mode" which creates ONE long-running SDK session per Claude Code session, not multiple short sessions.
**Session ID Management:**
- Session IDs change with each turn of the conversation
- Must capture session ID from the initial system message
- SDK worker needs to track session ID updates continuously, not just capture once
- The first message in the response stream is a system init message with the session_id
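The continuous tracking described above can be sketched with simplified message shapes (stand-ins for the SDK's actual types, which are richer):

```typescript
// Simplified stand-in for SDK stream messages; the real SDK types are richer.
type SDKMessage = { type: string; subtype?: string; session_id?: string };

function trackSessionId(current: string | null, msg: SDKMessage): string | null {
  // The system init message carries the first session_id, and later turns
  // can rotate it, so take the newest value whenever a message carries one.
  return msg.session_id ?? current;
}
```

The worker would call this for every message in the response stream, persisting the latest value so it is available for resumption.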
---
## Architecture
### Visual Overview
```
┌─────────────────────────────────────────────────────────────────┐
│ CLAUDE CODE SESSION │
│ (Main session - user interacting with Claude Code) │
│ │
│ User → Claude → Tools (Read, Edit, Write, Bash, etc.) │
│ │ │
│ │ PostToolUse Hook │
│ ↓ │
│ claude-mem save │
│ (queues observation) │
└─────────────────────────────────────────────────────────────────┘
│ SQLite observation_queue
┌─────────────────────────────────────────────────────────────────┐
│ SDK WORKER PROCESS │
│ (Background process - detached from main session) │
│ │
│ ┌─────────────────────────────────────────────┐ │
│ │ Message Generator (AsyncIterable) │ │
│ │ - Yields initial prompt │ │
│ │ - Polls observation_queue │ │
│ │ - Yields observation prompts │ │
│ └─────────────────────────────────────────────┘ │
│ ↓ │
│ ┌─────────────────────────────────────────────┐ │
│ │ SDK query() → Claude API │ │
│ │ Model: claude-sonnet-4-5 │ │
│ │ No tools needed (text-only synthesis) │ │
│ └─────────────────────────────────────────────┘ │
│ ↓ │
│ ┌─────────────────────────────────────────────┐ │
│ │ Response Handler │ │
│ │ - Parses XML <observation> blocks │ │
│ │ - Parses XML <summary> blocks │ │
│ │ - Writes to SQLite tables │ │
│ └─────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
│ SQLite: observations, session_summaries
┌─────────────────────────────────────────────────────────────────┐
│ NEXT CLAUDE CODE SESSION │
│ │
│ SessionStart Hook → claude-mem context │
│ (Reads from SQLite and injects context) │
└─────────────────────────────────────────────────────────────────┘
```
### What is the SDK agent's job?
The SDK agent is a **synthesis engine**, not a data collector.
It should:
- Receive tool observations as they happen
- Extract meaningful patterns and insights
- Store atomic, searchable observations in SQLite
- Synthesize a human-readable summary at the end
It should NOT:
- Store raw tool outputs
- Try to capture everything
- Make decisions about what Claude Code should do
- Block or slow down the main session
### Session Management Strategy
**Built-in SDK Session Resumption:**
The Agent SDK provides native session resumption capabilities. Instead of manually tracking and rebuilding session state, we can leverage the SDK's built-in features:
```typescript
// Resume a previous SDK session
const resumedResponse = query({
prompt: "Continue where we left off",
options: {
resume: sdkSessionId // Use the session ID captured from init message
}
});
```
**When to use session resumption:**
- User interrupts Claude Code and resumes later
- SDK worker crashes and needs to restart
- Long-running observations that span multiple Claude Code sessions
**Session state tracking:**
- Store SDK session ID in database when captured from init message
- Mark sessions as 'active', 'completed', 'interrupted', or 'failed'
- Use session status to determine whether to resume or start fresh
### How hooks run in parallel
PostToolUse hooks run in parallel. Handle this by:
- Make SDK agent calls async and fire-and-forget
- Use the observation_queue SQLite table to serialize observations
- SDK worker polls this queue and processes observations sequentially
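The serialization step amounts to draining unprocessed rows in strict insertion order, even though parallel hooks inserted them concurrently. An illustrative sketch (the row shape follows the `observation_queue` table; the function itself is not from the codebase):

```typescript
// Parallel PostToolUse hooks insert rows concurrently; the worker drains
// them one at a time in insertion (id) order to serialize processing.
type QueueRow = { id: number; processed_at_epoch: number | null };

function nextBatch(rows: QueueRow[]): QueueRow[] {
  return rows
    .filter((r) => r.processed_at_epoch === null)
    .sort((a, b) => a.id - b.id);
}
```

In practice this filter-and-order would be a SQL query (`WHERE processed_at_epoch IS NULL ORDER BY id`), but the invariant is the same: one consumer, insertion order.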
### What if the user interrupts Claude Code?
Stop hook doesn't run on interrupts. So:
- Observations stay in queue
- Next session continues where the last one left off
- Mark session as 'interrupted' after 24h of inactivity
---
## Database Schema
```sql
-- Tracks SDK streaming sessions
CREATE TABLE sdk_sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
claude_session_id TEXT UNIQUE NOT NULL,
sdk_session_id TEXT UNIQUE NOT NULL,
project TEXT NOT NULL,
user_prompt TEXT,
started_at TEXT NOT NULL,
started_at_epoch INTEGER NOT NULL,
completed_at TEXT,
completed_at_epoch INTEGER,
status TEXT CHECK(status IN ('active', 'completed', 'interrupted', 'failed'))
);
-- Tracks pending observations (message queue)
CREATE TABLE observation_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
sdk_session_id TEXT NOT NULL,
tool_name TEXT NOT NULL,
tool_input TEXT NOT NULL, -- JSON
tool_output TEXT NOT NULL, -- JSON
created_at_epoch INTEGER NOT NULL,
processed_at_epoch INTEGER,
FOREIGN KEY(sdk_session_id) REFERENCES sdk_sessions(sdk_session_id)
);
-- Stores extracted observations (what SDK decides is important)
CREATE TABLE observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
sdk_session_id TEXT NOT NULL,
project TEXT NOT NULL,
text TEXT NOT NULL,
type TEXT NOT NULL, -- 'decision' | 'bugfix' | 'feature' | 'refactor' | 'discovery'
created_at TEXT NOT NULL,
created_at_epoch INTEGER NOT NULL,
FOREIGN KEY(sdk_session_id) REFERENCES sdk_sessions(sdk_session_id)
);
CREATE INDEX idx_observations_project ON observations(project);
CREATE INDEX idx_observations_created ON observations(created_at_epoch DESC);
-- Stores session summaries
CREATE TABLE session_summaries (
id INTEGER PRIMARY KEY AUTOINCREMENT,
sdk_session_id TEXT UNIQUE NOT NULL,
project TEXT NOT NULL,
summary TEXT NOT NULL,
created_at TEXT NOT NULL,
created_at_epoch INTEGER NOT NULL,
FOREIGN KEY(sdk_session_id) REFERENCES sdk_sessions(sdk_session_id)
);
CREATE INDEX idx_summaries_project ON session_summaries(project);
CREATE INDEX idx_summaries_created ON session_summaries(created_at_epoch DESC);
```
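The schema above can be applied idempotently at startup. A minimal sketch, duck-typed over anything with a `run(sql)` method (such as bun:sqlite's `Database`), shown here with an abbreviated statement list rather than the full schema:

```typescript
// Schema statements mirror the tables above; IF NOT EXISTS makes
// repeated startups safe.
const SCHEMA_STATEMENTS: string[] = [
  `CREATE TABLE IF NOT EXISTS observations (
     id INTEGER PRIMARY KEY AUTOINCREMENT,
     sdk_session_id TEXT NOT NULL,
     project TEXT NOT NULL,
     text TEXT NOT NULL,
     type TEXT NOT NULL,
     created_at TEXT NOT NULL,
     created_at_epoch INTEGER NOT NULL
   )`,
  `CREATE INDEX IF NOT EXISTS idx_observations_project ON observations(project)`,
];

// Runs each statement in order and returns how many were executed.
export function applySchema(db: { run(sql: string): unknown }): number {
  for (const sql of SCHEMA_STATEMENTS) db.run(sql);
  return SCHEMA_STATEMENTS.length;
}
```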
---
## Hook Implementation
**IMPORTANT DISTINCTION:**
There are TWO separate hook systems at play here:
1. **Claude Code Hooks** - External command hooks configured in `~/.config/claude-code/settings.json`
- These hooks observe the MAIN Claude Code session
- They run as external commands (like `claude-mem save`)
- This is what we use to capture observations from the user's session
2. **SDK Hooks** - Programmatic hooks configured in TypeScript code via `HookMatcher`
- These hooks would observe the MEMORY SDK agent's own tool usage
- They run as TypeScript callbacks within the SDK worker process
- We're NOT using these (yet) - they're a future enhancement
**Our architecture:** Use Claude Code hooks (external commands) to observe the main session, and run a separate SDK worker process that doesn't need its own hooks.
### 1. SessionStart Hook
**Purpose:** Show user what happened in recent sessions
**Claude Code Hook Config (in settings.json):**
```json
{
"hooks": {
"SessionStart": [{
"matcher": "startup",
"hooks": [{
"type": "command",
"command": "claude-mem context"
}]
}]
}
}
```
**Command: `claude-mem context`**
Flow:
1. Read stdin JSON (session_id, cwd, source, etc.)
2. If source !== "startup", exit immediately
3. Extract project from cwd basename
4. Query SQLite for recent summaries:
```sql
SELECT summary, created_at
FROM session_summaries
WHERE project = ?
ORDER BY created_at_epoch DESC
LIMIT 10
```
5. Format results as human-readable text
6. Output to stdout (Claude Code automatically injects this)
7. Exit with code 0
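Step 5's formatting can be a pure function over the query results; a minimal sketch (the row shape is taken from the session_summaries schema, the function name is an assumption):

```typescript
interface SummaryRow {
  summary: string;
  created_at: string;
}

// Turn recent session summaries into the human-readable block that
// Claude Code injects at session start.
export function formatContext(rows: SummaryRow[]): string {
  if (rows.length === 0) return 'No previous sessions recorded for this project.';
  const lines = rows.map(r => `- [${r.created_at}] ${r.summary}`);
  return ['Recent sessions:', ...lines].join('\n');
}
```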
### 2. UserPromptSubmit Hook
**Purpose:** Initialize SDK memory session in background
**Hook config:**
```json
{
"hooks": {
"UserPromptSubmit": [{
"hooks": [{
"type": "command",
"command": "claude-mem new"
}]
}]
}
}
```
**Command: `claude-mem new`**
Flow:
1. Read stdin JSON (session_id, prompt, cwd, etc.)
2. Extract project from cwd
3. Create SDK session record in database
4. Start SDK session with initialization prompt in background process
5. Save SDK session ID to database
6. Output: `{"continue": true, "suppressOutput": true}`
7. Exit immediately (SDK runs in background daemon/process)
**The Background SDK Process:**
The SDK session should run as a detached background process:
```typescript
// In claude-mem new
const child = spawn('claude-mem', ['sdk-worker', session_id], {
detached: true,
stdio: 'ignore'
});
child.unref();
```
The SDK worker:
```typescript
// claude-mem sdk-worker <session_id>
import { query } from '@anthropic-ai/agent-sdk';
import type { Query, UserMessage } from '@anthropic-ai/agent-sdk';
async function runSDKWorker(sessionId: string) {
const session = await loadSessionFromDB(sessionId);
// Track the SDK session ID from the init message
let sdkSessionId: string | undefined;
const abortController = new AbortController();
// Message generator yields UserMessage objects (role + content)
// This matches the SDK's expected format for streaming input mode
async function* messageGenerator(): AsyncIterable<UserMessage> {
// Initial prompt
yield {
role: "user",
content: buildInitPrompt(session)
};
// Then listen for queued observations
while (session.status === 'active' && !abortController.signal.aborted) {
const observations = await pollObservationQueue(session.sdk_session_id);
for (const obs of observations) {
yield {
role: "user",
content: buildObservationPrompt(obs)
};
markObservationProcessed(obs.id);
}
await sleep(1000); // Poll every second
}
}
// Run SDK session with proper streaming interface
// The query function signature: query({ prompt, options }): Query
const response: Query = query({
prompt: messageGenerator(), // AsyncIterable<UserMessage>
options: {
model: 'claude-sonnet-4-5', // Use documented model name
disallowedTools: ['Glob', 'Grep', 'ListMcpResourcesTool', 'WebSearch'], // More efficient than filtering in hooks
maxTurns: 1000,
cwd: session.cwd,
abortController: abortController
}
});
try {
// Consume responses from the Query AsyncGenerator
for await (const msg of response) {
// Capture session ID from system init message
if (msg.type === 'system' && msg.subtype === 'init') {
sdkSessionId = msg.session_id;
console.log(`SDK session started with ID: ${sdkSessionId}`);
// Store SDK session ID in database for potential resumption
await updateSDKSessionId(session.id, sdkSessionId);
}
if (msg.type === 'text') {
// Use an XML parser library (e.g., fast-xml-parser or similar) to parse observations and summaries
// Parse <observation> blocks and call storeObservation(session_id, project, type, text)
// Parse <summary> blocks, extract all 8 fields, format and call storeSummary(session_id, project, text)
parseAndStoreObservations(msg.content, session);
parseAndStoreSummary(msg.content, session);
}
}
} catch (error) {
if (abortController.signal.aborted) {
console.log('SDK worker interrupted gracefully');
} else {
console.error('SDK worker error:', error);
await markSessionFailed(session.id, error);
}
} finally {
await cleanupSession(session.id);
}
}
// Note: Query and UserMessage types are imported from '@anthropic-ai/agent-sdk'
// Query extends AsyncGenerator and provides interrupt() and setPermissionMode() methods
// UserMessage has the shape: { role: "user", content: string }
```
### 3. PostToolUse Hook
**Purpose:** Queue tool observations for SDK processing
**Hook config:**
```json
{
"hooks": {
"PostToolUse": [{
"matcher": "*",
"hooks": [{
"type": "command",
"command": "claude-mem save"
}]
}]
}
}
```
**Command: `claude-mem save`**
Flow:
1. Read stdin JSON (tool_name, tool_input, tool_output, etc.)
2. Find active SDK session for this project
3. Insert observation into queue:
```sql
INSERT INTO observation_queue
(sdk_session_id, tool_name, tool_input, tool_output, created_at_epoch)
VALUES (?, ?, ?, ?, ?)
```
4. Output: `{"continue": true, "suppressOutput": true}`
5. Exit immediately
**Observation Filtering:**
The `disallowedTools` option (see SDK Worker section) keeps the SDK agent itself from using noisy tools, which removes most filtering work from this hook. You may still want to skip certain high-frequency or low-value main-session tools for performance:
```typescript
// Optional: Skip very frequent or low-value tools
const SKIP_TOOLS = new Set(['TodoWrite', 'ListMcpResourcesTool']);
if (SKIP_TOOLS.has(tool_name)) {
console.log('{"continue": true, "suppressOutput": true}');
process.exit(0);
}
```
**Note:** The SDK worker uses `disallowedTools: ['Glob', 'Grep', 'ListMcpResourcesTool', 'WebSearch']` which is more efficient than hook-level filtering because it prevents the SDK agent from using these tools entirely.
### 4. Stop Hook
**Purpose:** Signal SDK to finalize and generate summary
**Hook config:**
```json
{
"hooks": {
"Stop": [{
"hooks": [{
"type": "command",
"command": "claude-mem summary"
}]
}]
}
}
```
**Command: `claude-mem summary`**
Flow:
1. Read stdin JSON (session_id, cwd, etc.)
2. Find active SDK session
3. Insert special "FINALIZE" message into observation queue:
```sql
INSERT INTO observation_queue
(sdk_session_id, tool_name, tool_input, tool_output, created_at_epoch)
VALUES (?, 'FINALIZE', '{}', '{}', ?)
```
4. Output: `{"continue": true, "suppressOutput": true}`
5. Exit immediately
**SDK Worker Handling:**
When SDK worker sees FINALIZE message:
```typescript
if (obs.tool_name === 'FINALIZE') {
  // Yield in the SDK's UserMessage shape: { role, content }
  yield {
    role: "user",
    content: buildFinalizePrompt(session)
  };
// Wait for SDK to finish processing
await waitForSDKCompletion();
// Update session status
await markSessionCompleted(session.id);
// Exit worker
break;
}
```
---
## SDK Agent Prompts
### Initialization Prompt
```typescript
function buildInitPrompt(session: SDKSession): string {
  const { project, claude_session_id: sessionId, user_prompt: userPrompt } = session;
  return `You are a memory assistant for the "${project}" project.
SESSION CONTEXT
---------------
Session ID: ${sessionId}
User's Goal: ${userPrompt}
Date: ${new Date().toISOString().split('T')[0]}
YOUR ROLE
---------
You will observe tool executions during this Claude Code session. Your job is to:
1. Extract meaningful insights (not just raw data)
2. Store atomic observations in SQLite
3. Focus on: key decisions, patterns discovered, problems solved, technical insights
WHAT TO CAPTURE
----------------
✓ Architecture decisions (e.g., "chose PostgreSQL over MongoDB for ACID guarantees")
✓ Bug fixes (e.g., "fixed race condition in auth middleware by adding mutex")
✓ New features (e.g., "implemented JWT refresh token flow")
✓ Refactorings (e.g., "extracted validation logic into separate service")
✓ Discoveries (e.g., "found that API rate limit is 100 req/min")
✗ NOT routine operations (reading files, listing directories)
✗ NOT work-in-progress (only completed work)
✗ NOT obvious facts (e.g., "TypeScript file has types")
HOW TO STORE OBSERVATIONS
--------------------------
When you identify something worth remembering, output your observation in this EXACT XML format:
\`\`\`xml
<observation>
<type>feature</type>
<text>Implemented JWT token refresh flow with 7-day expiry</text>
</observation>
\`\`\`
Valid types: decision, bugfix, feature, refactor, discovery
Structure requirements:
- <observation> is the root element
- <type> must be one of the 5 valid types (single word)
- <text> contains your concise observation (one sentence preferred)
- No additional fields or nesting
The SDK worker will parse all <observation> blocks from your response and store them in SQLite.
You can include your reasoning before or after the observation block, or just output the observation by itself.
EXAMPLE
-------
Bad: "Read src/auth.ts file"
Good: "Implemented JWT token refresh flow with 7-day expiry"
Wait for tool observations. Acknowledge this message briefly.`;
}
```
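Extracting these blocks on the worker side can be sketched as follows. This is a minimal sketch: because the format is fixed and flat, a regex is adequate here, though a real implementation might prefer an XML parser library (e.g. fast-xml-parser) as noted in the worker code:

```typescript
interface ParsedObservation {
  type: string;
  text: string;
}

const VALID_TYPES = new Set(['decision', 'bugfix', 'feature', 'refactor', 'discovery']);

// Extract every well-formed <observation> block from an agent response,
// dropping any block whose type is not one of the five valid types.
export function parseObservations(response: string): ParsedObservation[] {
  const results: ParsedObservation[] = [];
  const blockRe = /<observation>\s*<type>([^<]+)<\/type>\s*<text>([^<]+)<\/text>\s*<\/observation>/g;
  for (const m of response.matchAll(blockRe)) {
    const type = m[1].trim();
    const text = m[2].trim();
    if (VALID_TYPES.has(type) && text.length > 0) {
      results.push({ type, text });
    }
  }
  return results;
}
```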
### Observation Prompt
```typescript
function buildObservationPrompt(obs: Observation): string {
return `TOOL OBSERVATION
================
Tool: ${obs.tool_name}
Time: ${new Date(obs.created_at_epoch).toISOString()}
Input:
${JSON.stringify(JSON.parse(obs.tool_input), null, 2)}
Output:
${JSON.stringify(JSON.parse(obs.tool_output), null, 2)}
ANALYSIS TASK
-------------
1. Does this observation contain something worth remembering?
2. If YES: Output the observation in this EXACT XML format:
\`\`\`xml
<observation>
<type>feature</type>
<text>Your concise observation here</text>
</observation>
\`\`\`
Requirements:
- Use one of these types: decision, bugfix, feature, refactor, discovery
- Keep text concise (one sentence preferred)
- No markdown formatting inside <text>
- No additional XML fields
3. If NO: Just acknowledge and wait for next observation
Remember: Quality over quantity. Only store meaningful insights.`;
}
```
### Finalization Prompt
```typescript
function buildFinalizePrompt(session: SDKSession): string {
return `SESSION ENDING
==============
The Claude Code session is finishing.
FINAL TASK
----------
1. Review the observations you've stored this session
2. Generate a structured summary that answers these questions:
- What did user request?
- What did you investigate?
- What did you learn?
- What did you do?
- What's next?
- Files read
- Files edited
- Notes
3. Generate the structured summary and output it in this EXACT XML format:
\`\`\`xml
<summary>
<request>Implement JWT authentication system</request>
<investigated>Existing auth middleware, session management, token storage patterns</investigated>
<learned>Current system uses session cookies; no JWT support; race condition in middleware</learned>
<completed>Implemented JWT token + refresh flow with 7-day expiry; fixed race condition with mutex; added token validation middleware</completed>
<next_steps>Add token revocation API endpoint; write integration tests</next_steps>
<files_read>
<file>src/auth.ts</file>
<file>src/middleware/session.ts</file>
<file>src/types/user.ts</file>
</files_read>
<files_edited>
<file>src/auth.ts</file>
<file>src/middleware/auth.ts</file>
<file>src/routes/auth.ts</file>
</files_edited>
<notes>Token secret stored in .env; refresh tokens use rotation strategy</notes>
</summary>
\`\`\`
Structure requirements:
- <summary> is the root element
- All 8 child elements are REQUIRED: request, investigated, learned, completed, next_steps, files_read, files_edited, notes
- <files_read> and <files_edited> must contain <file> child elements (one per file)
- If no files were read/edited, use empty tags: <files_read></files_read>
- Text fields can be multiple sentences but avoid markdown formatting
- Use underscores in element names: next_steps, files_read, files_edited
The SDK worker will parse the <summary> block and extract all fields to store in SQLite.
Generate the summary now in the required XML format.`;
}
```
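Field extraction for the summary block can be sketched the same way (helper names are assumptions; a proper XML parser would also work):

```typescript
interface ParsedSummary {
  request: string;
  investigated: string;
  learned: string;
  completed: string;
  next_steps: string;
  files_read: string[];
  files_edited: string[];
  notes: string;
}

// Pull the inner text of a single flat element like <request>...</request>.
function field(block: string, tag: string): string {
  const m = block.match(new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`));
  return m ? m[1].trim() : '';
}

// Pull all <file> entries from a container like <files_read>...</files_read>.
function files(block: string, tag: string): string[] {
  const inner = field(block, tag);
  return [...inner.matchAll(/<file>([^<]+)<\/file>/g)].map(m => m[1].trim());
}

// Parse the <summary> block emitted by the finalization prompt.
// Returns null when no summary is present in the response.
export function parseSummary(response: string): ParsedSummary | null {
  const m = response.match(/<summary>([\s\S]*?)<\/summary>/);
  if (!m) return null;
  const block = m[1];
  return {
    request: field(block, 'request'),
    investigated: field(block, 'investigated'),
    learned: field(block, 'learned'),
    completed: field(block, 'completed'),
    next_steps: field(block, 'next_steps'),
    files_read: files(block, 'files_read'),
    files_edited: files(block, 'files_edited'),
    notes: field(block, 'notes'),
  };
}
```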
---
## Hook Commands Architecture
All four hook commands (`claude-mem context`, `claude-mem new`, `claude-mem save`, `claude-mem summary`) are implemented as standalone TypeScript functions that:
1. **Use bun:sqlite directly** - No spawning child processes or CLI subcommands
2. **Are self-contained** - Each hook has all the logic it needs
3. **Share a common database layer** - Import from shared `db.ts` module
4. **Never call other claude-mem commands** - All functionality via direct library calls
```typescript
// Example structure
import { Database } from 'bun:sqlite';
import { homedir } from 'node:os';
import { join } from 'node:path';

// '~' is not expanded by SQLite; build an absolute path explicitly
const DB_PATH = join(homedir(), '.claude-mem', 'db.sqlite');

export function contextHook(stdin: HookInput) {
  const db = new Database(DB_PATH);
  // Query and return context directly
  const summaries = db.query('SELECT ...').all();
  console.log(formatContext(summaries));
  db.close();
}

export function saveHook(stdin: HookInput) {
  const db = new Database(DB_PATH);
  // Insert observation directly
  db.run('INSERT INTO observation_queue ...', params);
  db.close();
  console.log('{"continue": true, "suppressOutput": true}');
}
```
**Key principle:** Hooks are fast, synchronous database operations. The SDK worker process is where async/complex logic happens.
---
## Background Process Management
The `claude-mem save` hook just queues observations - processing happens in the background SDK worker process that polls the queue continuously.
The SDK worker is spawned by `claude-mem new` as a detached process and runs for the duration of the Claude Code session.
Benefits:
- Works on all platforms (no systemd/launchd needed)
- Self-contained (spawned and managed by claude-mem itself)
- Simple state management (all state in SQLite)
---
## Advanced SDK Features
### Permission Integration (Future Enhancement)
The SDK provides a permission system that could be integrated with memory for context-aware decisions:
```typescript
canUseTool: async (toolName, input) => {
// Check memory for previous decisions about this tool/context
const previousDecisions = await queryMemoryForTool(toolName, input);
if (previousDecisions.shouldAllow) {
return {
behavior: "allow",
updatedInput: input
};
}
return {
behavior: "ask_user",
message: `This tool was previously flagged. Allow anyway?`
};
}
```
This could enable:
- Learning from previous tool use patterns
- Automatically allowing/denying based on historical context
- Providing smart defaults based on project-specific patterns
**Implementation priority:** Low (add after core functionality is stable)
### SDK Hook Configuration (Alternative to Claude Code Hooks)
Instead of using external command hooks via Claude Code settings.json, the SDK supports native hook configuration:
```typescript
import { HookMatcher } from '@anthropic-ai/agent-sdk';
const response = query({
prompt: messageGenerator(),
options: {
hooks: {
'PreToolUse': [
  { matcher: 'Bash', hooks: [validateBashCommand] },
  { hooks: [logToolUse] } // Applies to all tools
],
'PostToolUse': [
  { hooks: [captureObservation] }
]
}
}
});
type HookCallback = (
input: HookInput,
toolUseID: string | undefined,
options: { signal: AbortSignal }
) => Promise<HookJSONOutput>;
```
**When to use SDK hooks vs Claude Code hooks:**
- **Claude Code hooks**: For integrating with the main Claude Code session (our current approach)
- **SDK hooks**: For controlling the memory agent's own tool usage (future enhancement)
**Implementation priority:** Medium (could simplify architecture, but adds complexity to migration)
---
## Error Handling
**SDK worker failures:**
- Each observation processing is atomic
- Failed observations stay in queue
- Next worker run retries
- After 3 failures, mark observation as skipped
- Use AbortController for graceful cancellation
**Abort signal handling:**
```typescript
try {
for await (const msg of response) {
if (abortController.signal.aborted) {
throw new Error('Aborted');
}
// Process message
}
} catch (error) {
if (abortController.signal.aborted) {
// Clean shutdown
await response.interrupt();
} else {
// Actual error
throw error;
}
}
```
**Database corruption:**
- SQLite with WAL mode (write-ahead logging)
- Regular backups to ~/.claude-mem/backups/
- Automatic recovery from backups
**SDK API failures:**
- Retry with exponential backoff
- Don't block main Claude Code session
- Log errors for debugging
- Mark session as 'failed' after max retries
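The backoff schedule can be sketched as follows (the constants and helper names are assumptions, not prescribed by the plan):

```typescript
// Exponential backoff with a cap: 500ms, 1s, 2s, 4s, ... up to 30s.
export function backoffDelayMs(attempt: number, baseMs = 500, capMs = 30_000): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Retry an async operation, waiting backoffDelayMs between attempts.
export async function withRetries<T>(op: () => Promise<T>, maxRetries = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      await new Promise(resolve => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
  throw lastError; // Caller marks the session 'failed' at this point
}
```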
---
## Implementation Order
1. **Database setup** - Create tables and migration scripts
2. **Hook commands** - Implement the 4 hook commands (context, new, save, summary)
3. **SDK worker** - Implement the background worker process with response parsing
4. **SDK prompts** - Wire up the prompts and message generator
5. **Test end-to-end** - Run a real Claude Code session and verify it works
Start simple. Get one hook working before moving to the next. Don't try to build everything at once.
**Note:** MCP is only used for retrieval (when Claude Code needs to access stored memories), not for storage. The SDK agent stores data by outputting specially formatted text that the SDK worker parses and writes to SQLite.
### SDK Import Verification
Before implementing, verify the SDK exports match your usage:
```typescript
// Required imports from @anthropic-ai/agent-sdk
import { query } from '@anthropic-ai/agent-sdk';
import type { Query, UserMessage, Options } from '@anthropic-ai/agent-sdk';
// Verify the query function signature:
// function query(options: { prompt: string | AsyncIterable<UserMessage>; options?: Options }): Query
// Verify Query type:
// interface Query extends AsyncGenerator<SDKMessage, void> {
// interrupt(): Promise<void>;
// setPermissionMode(mode: PermissionMode): Promise<void>;
// }
// Verify UserMessage type:
// type UserMessage = { role: "user"; content: string }
```
If the SDK exports differ from this structure, adjust the implementation accordingly. The SDK documentation should be the source of truth.
---
## Key Corrections from Agent SDK Documentation
This refactor plan has been updated to align with the official Agent SDK documentation. Key corrections include:
### 1. Session ID Management
- **Before:** Captured session ID once in UserPromptSubmit hook
- **After:** Capture from system init message and track updates continuously
- **Why:** Session IDs change with each conversation turn
### 2. Hook Configuration
- **Before:** Mixed up SDK hook format with Claude Code hook format
- **After:** Clarified that Claude Code uses settings.json format (external commands); SDK uses TypeScript HookMatcher (programmatic callbacks)
- **Why:** Two separate hook systems with different purposes and configuration methods
- **Our approach:** Use Claude Code hooks to observe the main session; SDK hooks are future enhancement
### 3. Message Generator and Query Interface
- **Before:** Custom SDKMessage type with nested message structure
- **After:** Simple UserMessage type `{ role: "user", content: string }` yielded from AsyncIterable
- **Why:** SDK expects AsyncIterable<UserMessage>, not a custom wrapper format
- **Query type:** Properly typed as `Query` which extends AsyncGenerator with interrupt() and setPermissionMode()
### 4. Tool Filtering
- **Before:** Filter "boring tools" in PostToolUse hook
- **After:** Use SDK's `disallowedTools` option in query configuration
- **Why:** More efficient to prevent SDK from using tools entirely
### 5. Model Identifier
- **Before:** Used `claude-haiku-4-5-20251001` (undocumented)
- **After:** Use `claude-sonnet-4-5` (documented model name)
- **Why:** Stick to documented model identifiers for stability
### 6. Error Handling
- **Before:** Custom error handling without SDK features
- **After:** Use AbortController and response.interrupt() for graceful cancellation
- **Why:** SDK provides built-in cancellation mechanisms
### 7. Session Resumption
- **Before:** Manual session state reconstruction
- **After:** Leverage SDK's built-in `resume: sessionId` option
- **Why:** SDK already handles session resumption
### Future Enhancements to Consider
1. **Permission integration** - Use canUseTool callback to make memory-aware decisions
2. **SDK native hooks** - Replace external command hooks with SDK HookMatcher
3. **Better session recovery** - Use SDK resumption for interrupted sessions
These corrections ensure our implementation follows Agent SDK best practices and avoids reinventing functionality the SDK already provides.
---
## Architecture Validation Summary
This plan has been validated against the official Agent SDK documentation and confirmed to be architecturally sound.
### ✅ Validated Design Decisions
1. **Hook System Usage** - Correctly uses Claude Code external command hooks for observation; SDK programmatic hooks reserved for future enhancement
2. **Query Function Interface** - Properly implements AsyncIterable<UserMessage> for streaming input mode
3. **Session Management** - Leverages SDK's built-in session resumption instead of manual state reconstruction
4. **Tool Filtering** - Uses SDK's `disallowedTools` option for efficiency
5. **Error Handling** - Implements AbortController and interrupt() for graceful cancellation
6. **Separation of Concerns** - Clean isolation between main Claude Code session and background SDK worker
### 🎯 Architecture Strengths
- **Non-blocking** - Hooks are fast database operations; complex logic happens in background
- **Queue-based** - Handles parallel hook execution correctly via observation_queue table
- **Fault-tolerant** - Failed observations stay in queue for retry; graceful degradation
- **Platform-agnostic** - No dependency on systemd/launchd; works everywhere
- **Type-safe** - Uses official SDK TypeScript types throughout
### 📋 Pre-Implementation Checklist
Before starting implementation, verify:
1. [ ] Agent SDK installed and accessible: `@anthropic-ai/agent-sdk`
2. [ ] Verify SDK exports match expected structure (query, Query, UserMessage types)
3. [ ] SQLite database location decided: `~/.claude-mem/db.sqlite`
4. [ ] Claude Code settings.json hook configuration tested
5. [ ] Background process spawning works on target platform (test detached process)
### 🚀 Ready for Implementation
The architecture is validated and ready for implementation. Follow the phased approach:
1. Database setup first (get schema working with bun:sqlite)
2. Implement hooks one at a time (start with `context`, then `save`)
3. Build SDK worker with simple message generator
4. Test end-to-end with a real Claude Code session
5. Iterate and refine based on real-world usage
**Remember:** Start simple, get one piece working, then build on it. Don't try to implement everything at once.
"""Pydantic models for Claude Code transcript JSON structures.
Enhanced to leverage official Anthropic types where beneficial.
"""
from typing import Any, List, Union, Optional, Dict, Literal, cast
from pydantic import BaseModel
from anthropic.types import Message as AnthropicMessage
from anthropic.types import StopReason
from anthropic.types import Usage as AnthropicUsage
from anthropic.types.content_block import ContentBlock
class TodoItem(BaseModel):
id: str
content: str
status: Literal["pending", "in_progress", "completed"]
priority: Literal["high", "medium", "low"]
class UsageInfo(BaseModel):
"""Token usage information that extends Anthropic's Usage type to handle optional fields."""
input_tokens: Optional[int] = None
cache_creation_input_tokens: Optional[int] = None
cache_read_input_tokens: Optional[int] = None
output_tokens: Optional[int] = None
service_tier: Optional[str] = None
server_tool_use: Optional[Dict[str, Any]] = None
def to_anthropic_usage(self) -> Optional[AnthropicUsage]:
"""Convert to Anthropic Usage type if both required fields are present."""
if self.input_tokens is not None and self.output_tokens is not None:
return AnthropicUsage(
input_tokens=self.input_tokens,
output_tokens=self.output_tokens,
cache_creation_input_tokens=self.cache_creation_input_tokens,
cache_read_input_tokens=self.cache_read_input_tokens,
service_tier=self.service_tier, # type: ignore
server_tool_use=self.server_tool_use, # type: ignore
)
return None
@classmethod
def from_anthropic_usage(cls, usage: AnthropicUsage) -> "UsageInfo":
"""Create UsageInfo from Anthropic Usage."""
return cls(
input_tokens=usage.input_tokens,
output_tokens=usage.output_tokens,
cache_creation_input_tokens=usage.cache_creation_input_tokens,
cache_read_input_tokens=usage.cache_read_input_tokens,
service_tier=usage.service_tier,
server_tool_use=usage.server_tool_use.model_dump()
if usage.server_tool_use
else None,
)
class TextContent(BaseModel):
type: Literal["text"]
text: str
class ToolUseContent(BaseModel):
type: Literal["tool_use"]
id: str
name: str
input: Dict[str, Any]
class ToolResultContent(BaseModel):
type: Literal["tool_result"]
tool_use_id: str
content: Union[str, List[Dict[str, Any]]]
is_error: Optional[bool] = None
class ThinkingContent(BaseModel):
type: Literal["thinking"]
thinking: str
signature: Optional[str] = None
class ImageSource(BaseModel):
type: Literal["base64"]
media_type: str
data: str
class ImageContent(BaseModel):
type: Literal["image"]
source: ImageSource
# Enhanced ContentItem to include official Anthropic ContentBlock types
ContentItem = Union[
TextContent,
ToolUseContent,
ToolResultContent,
ThinkingContent,
ImageContent,
ContentBlock, # Official Anthropic content block types
]
class UserMessage(BaseModel):
role: Literal["user"]
content: Union[str, List[ContentItem]]
class AssistantMessage(BaseModel):
"""Assistant message model compatible with Anthropic's Message type."""
id: str
type: Literal["message"]
role: Literal["assistant"]
model: str
content: List[ContentItem]
stop_reason: Optional[StopReason] = None
stop_sequence: Optional[str] = None
usage: Optional[UsageInfo] = None
@classmethod
def from_anthropic_message(
cls, anthropic_msg: AnthropicMessage
) -> "AssistantMessage":
"""Create AssistantMessage from official Anthropic Message."""
# Convert Anthropic Message to our format, preserving official types where possible
return cls(
id=anthropic_msg.id,
type=anthropic_msg.type,
role=anthropic_msg.role,
model=anthropic_msg.model,
content=list(
anthropic_msg.content
), # Convert to list for ContentItem compatibility
stop_reason=anthropic_msg.stop_reason,
stop_sequence=anthropic_msg.stop_sequence,
usage=normalize_usage_info(anthropic_msg.usage),
)
class FileInfo(BaseModel):
filePath: str
content: str
numLines: int
startLine: int
totalLines: int
class FileReadResult(BaseModel):
type: Literal["text"]
file: FileInfo
class CommandResult(BaseModel):
stdout: str
stderr: str
interrupted: bool
isImage: bool
class TodoResult(BaseModel):
oldTodos: List[TodoItem]
newTodos: List[TodoItem]
class EditResult(BaseModel):
oldString: Optional[str] = None
newString: Optional[str] = None
replaceAll: Optional[bool] = None
originalFile: Optional[str] = None
structuredPatch: Optional[Any] = None
userModified: Optional[bool] = None
ToolUseResult = Union[
str,
List[TodoItem],
FileReadResult,
CommandResult,
TodoResult,
EditResult,
List[ContentItem],
]
class BaseTranscriptEntry(BaseModel):
parentUuid: Optional[str]
isSidechain: bool
userType: str
cwd: str
sessionId: str
version: str
uuid: str
timestamp: str
isMeta: Optional[bool] = None
class UserTranscriptEntry(BaseTranscriptEntry):
type: Literal["user"]
message: UserMessage
toolUseResult: Optional[ToolUseResult] = None
class AssistantTranscriptEntry(BaseTranscriptEntry):
type: Literal["assistant"]
message: AssistantMessage
requestId: Optional[str] = None
class SummaryTranscriptEntry(BaseModel):
type: Literal["summary"]
summary: str
leafUuid: str
cwd: Optional[str] = None
class SystemTranscriptEntry(BaseTranscriptEntry):
"""System messages like warnings, notifications, etc."""
type: Literal["system"]
content: str
level: Optional[str] = None # e.g., "warning", "info", "error"
class QueueOperationTranscriptEntry(BaseModel):
"""Queue operations (enqueue/dequeue) for message queueing tracking.
These are internal operations that track when messages are queued and dequeued.
They are parsed but not rendered, as the content duplicates actual user messages.
"""
type: Literal["queue-operation"]
operation: Literal["enqueue", "dequeue"]
timestamp: str
sessionId: str
content: Optional[List[ContentItem]] = None # Only present for enqueue operations
TranscriptEntry = Union[
UserTranscriptEntry,
AssistantTranscriptEntry,
SummaryTranscriptEntry,
SystemTranscriptEntry,
QueueOperationTranscriptEntry,
]
def normalize_usage_info(usage_data: Any) -> Optional[UsageInfo]:
"""Normalize usage data to be compatible with both custom and Anthropic formats."""
if usage_data is None:
return None
# If it's already a UsageInfo instance, return as-is
if isinstance(usage_data, UsageInfo):
return usage_data
# If it's an Anthropic Usage instance, convert using our method
if isinstance(usage_data, AnthropicUsage):
return UsageInfo.from_anthropic_usage(usage_data)
# If it has the shape of an Anthropic Usage, try to construct it first
if hasattr(usage_data, "input_tokens") and hasattr(usage_data, "output_tokens"):
try:
# Try to create an Anthropic Usage first
anthropic_usage = AnthropicUsage.model_validate(usage_data)
return UsageInfo.from_anthropic_usage(anthropic_usage)
except Exception:
# Fall back to direct conversion
return UsageInfo(
input_tokens=getattr(usage_data, "input_tokens", None),
cache_creation_input_tokens=getattr(
usage_data, "cache_creation_input_tokens", None
),
cache_read_input_tokens=getattr(
usage_data, "cache_read_input_tokens", None
),
output_tokens=getattr(usage_data, "output_tokens", None),
service_tier=getattr(usage_data, "service_tier", None),
server_tool_use=getattr(usage_data, "server_tool_use", None),
)
# If it's a dict, validate and convert to our format
if isinstance(usage_data, dict):
return UsageInfo.model_validate(usage_data)
return None
def parse_content_item(item_data: Dict[str, Any]) -> ContentItem:
"""Parse a content item using enhanced approach with Anthropic types."""
try:
content_type = item_data.get("type", "")
# Try official Anthropic types first for better future compatibility
if content_type == "text":
try:
from anthropic.types.text_block import TextBlock
return TextBlock.model_validate(item_data)
except Exception:
return TextContent.model_validate(item_data)
elif content_type == "tool_use":
try:
from anthropic.types.tool_use_block import ToolUseBlock
return ToolUseBlock.model_validate(item_data)
except Exception:
return ToolUseContent.model_validate(item_data)
elif content_type == "thinking":
try:
from anthropic.types.thinking_block import ThinkingBlock
return ThinkingBlock.model_validate(item_data)
except Exception:
return ThinkingContent.model_validate(item_data)
elif content_type == "tool_result":
return ToolResultContent.model_validate(item_data)
elif content_type == "image":
return ImageContent.model_validate(item_data)
else:
# Fallback to text content for unknown types
return TextContent(type="text", text=str(item_data))
except Exception:
return TextContent(type="text", text=str(item_data))
def parse_message_content(content_data: Any) -> Union[str, List[ContentItem]]:
"""Parse message content, handling both string and list formats."""
if isinstance(content_data, str):
return content_data
elif isinstance(content_data, list):
content_list = cast(List[Dict[str, Any]], content_data)
return [parse_content_item(item) for item in content_list]
else:
return str(content_data)
def parse_transcript_entry(data: Dict[str, Any]) -> TranscriptEntry:
"""
Parse a JSON dictionary into the appropriate TranscriptEntry type.
Enhanced to optionally use official Anthropic types for assistant messages.
Args:
data: Dictionary parsed from JSON
Returns:
The appropriate TranscriptEntry subclass
Raises:
ValueError: If the data doesn't match any known transcript entry type
"""
entry_type = data.get("type")
if entry_type == "user":
# Parse message content if present
data_copy = data.copy()
if "message" in data_copy and "content" in data_copy["message"]:
data_copy["message"] = data_copy["message"].copy()
data_copy["message"]["content"] = parse_message_content(
data_copy["message"]["content"]
)
# Parse toolUseResult if present and it's a list of content items
if "toolUseResult" in data_copy and isinstance(
data_copy["toolUseResult"], list
):
# Check if it's a list of content items (MCP tool results)
tool_use_result = cast(List[Any], data_copy["toolUseResult"])
if (
tool_use_result
and isinstance(tool_use_result[0], dict)
and "type" in tool_use_result[0]
):
data_copy["toolUseResult"] = [
parse_content_item(cast(Dict[str, Any], item))
for item in tool_use_result
if isinstance(item, dict)
]
return UserTranscriptEntry.model_validate(data_copy)
elif entry_type == "assistant":
# Enhanced assistant message parsing with optional Anthropic types
data_copy = data.copy()
# Validate compatibility with official Anthropic Message type
if "message" in data_copy:
try:
message_data = data_copy["message"]
AnthropicMessage.model_validate(message_data)
# Successfully validated - our data is compatible with official Anthropic types
except Exception:
# Validation failed - continue with standard parsing
pass
# Standard parsing path (works for all cases)
if "message" in data_copy and "content" in data_copy["message"]:
message_copy = data_copy["message"].copy()
message_copy["content"] = parse_message_content(message_copy["content"])
# Normalize usage data to support both Anthropic and custom formats
if "usage" in message_copy:
message_copy["usage"] = normalize_usage_info(message_copy["usage"])
data_copy["message"] = message_copy
return AssistantTranscriptEntry.model_validate(data_copy)
elif entry_type == "summary":
return SummaryTranscriptEntry.model_validate(data)
elif entry_type == "system":
return SystemTranscriptEntry.model_validate(data)
elif entry_type == "queue-operation":
# Parse content if present (only in enqueue operations)
data_copy = data.copy()
if "content" in data_copy and isinstance(data_copy["content"], list):
data_copy["content"] = parse_message_content(data_copy["content"])
return QueueOperationTranscriptEntry.model_validate(data_copy)
else:
raise ValueError(f"Unknown transcript entry type: {entry_type}")
@@ -1,384 +0,0 @@
# Chroma Search Completion Plan
## Current State Analysis
### What's Working ✅
1. **Hybrid Search Implementation**
- Chroma semantic search + SQLite temporal filtering is working
- Evidence: Queries like "AI embeddings" find "hybrid search" through semantic similarity
- All metadata-first tools use Chroma ranking
2. **Tools Using Chroma Correctly**
- `search_observations` - Semantic-first workflow (Chroma top 100 → 90-day filter → SQLite hydrate)
- `find_by_concept` - Metadata-first + Chroma semantic ranking
- `find_by_file` - Metadata-first + Chroma semantic ranking
- `find_by_type` - Metadata-first + Chroma semantic ranking
3. **Data Synced to Chroma**
- ✅ Observations (all fields: narrative, facts, text as separate docs)
- ✅ Session summaries (all fields: request, investigated, learned, completed, next_steps, notes as separate docs)
- ❌ User prompts (NOT synced yet)
### What's Missing ❌
1. **search_sessions tool** - Only uses SQLite FTS5, not leveraging Chroma semantic search
2. **search_user_prompts tool** - Only uses SQLite FTS5, not leveraging Chroma semantic search
3. **User prompts not synced to Chroma** - Need to add to sync experiment and worker process
## Why User Prompts Need Semantic Search
**Benefits:**
- Users often search for "what I asked about X" but phrase it differently than original prompt
- Semantic search finds related requests even with different wording
- Example: Search "authentication setup" finds prompts about "login system", "user auth", "sign-in flow"
- Completes the triad: What was done (observations) + What was learned (summaries) + What was requested (prompts)
**Storage pattern:**
- Each user prompt becomes ONE document in Chroma (unlike observations/summaries which split by field)
- Metadata: `sqlite_id`, `doc_type: 'user_prompt'`, `sdk_session_id`, `project`, `created_at_epoch`, `prompt_number`
- Document ID format: `prompt_{id}` (simpler than observations since no field splitting)
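Assuming the `prompt_{id}` format above, the mapping between SQLite rows and Chroma document IDs is a simple round trip (sketch; function names are hypothetical):

```typescript
// Sketch: round-trip between SQLite prompt IDs and Chroma document IDs.
// The `prompt_{id}` format comes from the storage pattern above.
function promptDocId(sqliteId: number): string {
  return `prompt_${sqliteId}`;
}

function parsePromptDocId(docId: string): number | null {
  const match = docId.match(/^prompt_(\d+)$/);
  return match ? parseInt(match[1], 10) : null;
}
```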
## Implementation Plan
### Phase 1: Sync User Prompts to Chroma
**Files to modify:**
1. `experiment/chroma-sync-experiment.ts` - Add user_prompts sync section
2. Future: Worker service incremental sync (not in this phase)
**Implementation:**
```typescript
// In chroma-sync-experiment.ts after session summaries sync
// Fetch user prompts
console.log('📖 Reading user prompts from SQLite...');
const prompts = store.db.prepare(`
SELECT * FROM user_prompts WHERE project = ? ORDER BY created_at_epoch DESC LIMIT 1000
`).all(project) as any[];
console.log(`Found ${prompts.length} user prompts`);
// Prepare prompt documents - one document per prompt
const promptDocs: ChromaDocument[] = [];
for (const prompt of prompts) {
promptDocs.push({
id: `prompt_${prompt.id}`,
document: prompt.prompt_text,
metadata: {
sqlite_id: prompt.id,
doc_type: 'user_prompt',
sdk_session_id: prompt.sdk_session_id,
project: prompt.project,
created_at_epoch: prompt.created_at_epoch,
prompt_number: prompt.prompt_number || 0
}
});
}
console.log(`Created ${promptDocs.length} user prompt documents\n`);
// Sync prompts in batches (same pattern as observations/sessions)
```
**Testing:**
```bash
npm run experiment:sync
# Verify prompts appear in Chroma collection
```
### Phase 2: Update search_sessions to Use Chroma
**File:** `src/servers/search-server.ts` (lines ~441-481)
**Current implementation:**
```typescript
const results = search.searchSessions(query, options);
```
**New implementation (semantic-first hybrid):**
```typescript
let results: SessionSummarySearchResult[] = [];
// Hybrid search: Try Chroma semantic search first, fall back to FTS5
if (chromaClient) {
try {
console.error('[search-server] Using hybrid semantic search for sessions');
// Step 1: Chroma semantic search (top 100)
const chromaResults = await queryChroma(query, 100, { doc_type: 'session_summary' });
console.error(`[search-server] Chroma returned ${chromaResults.ids.length} semantic matches`);
if (chromaResults.ids.length > 0) {
// Step 2: Filter by recency (90 days)
const ninetyDaysAgo = Math.floor(Date.now() / 1000) - (90 * 24 * 60 * 60);
const recentIds = chromaResults.ids.filter((id, idx) => {
const meta = chromaResults.metadatas[idx];
return meta && meta.created_at_epoch > ninetyDaysAgo;
});
console.error(`[search-server] ${recentIds.length} results within 90-day window`);
// Step 3: Hydrate from SQLite in temporal order
if (recentIds.length > 0) {
const limit = options.limit || 20;
results = store.getSessionSummariesByIds(recentIds, { orderBy: 'date_desc', limit });
console.error(`[search-server] Hydrated ${results.length} sessions from SQLite`);
}
}
} catch (chromaError: any) {
console.error('[search-server] Chroma query failed, falling back to FTS5:', chromaError.message);
}
}
// Fall back to FTS5 if Chroma unavailable or returned no results
if (results.length === 0) {
console.error('[search-server] Using FTS5 keyword search');
results = search.searchSessions(query, options);
}
```
**Helper needed in queryChroma:**
Update `queryChroma` function to extract summary IDs from document IDs:
```typescript
// Extract unique summary IDs from document IDs
for (const docId of docIds) {
// Handle both obs_{id}_* and summary_{id}_* formats
const obsMatch = docId.match(/obs_(\d+)_/);
const summaryMatch = docId.match(/summary_(\d+)_/);
if (obsMatch) {
const sqliteId = parseInt(obsMatch[1], 10);
if (!ids.includes(sqliteId)) ids.push(sqliteId);
} else if (summaryMatch) {
const sqliteId = parseInt(summaryMatch[1], 10);
if (!ids.includes(sqliteId)) ids.push(sqliteId);
}
}
```
**Database helper needed:**
Add to `SessionStore.ts`:
```typescript
getSessionSummariesByIds(
ids: number[],
options: { orderBy?: 'date_desc' | 'date_asc'; limit?: number } = {}
): SessionSummarySearchResult[] {
if (ids.length === 0) return [];
const { orderBy = 'date_desc', limit } = options;
const orderClause = orderBy === 'date_asc' ? 'ASC' : 'DESC';
const limitClause = limit ? `LIMIT ${limit}` : '';
const placeholders = ids.map(() => '?').join(',');
const stmt = this.db.prepare(`
SELECT * FROM session_summaries
WHERE id IN (${placeholders})
ORDER BY created_at_epoch ${orderClause}
${limitClause}
`);
return stmt.all(...ids) as SessionSummarySearchResult[];
}
```
### Phase 3: Update search_user_prompts to Use Chroma
**File:** `src/servers/search-server.ts` (lines ~956-1010)
**Current implementation:**
```typescript
const results = search.searchUserPrompts(query, options);
```
**New implementation (semantic-first hybrid):**
```typescript
let results: UserPromptSearchResult[] = [];
// Hybrid search: Try Chroma semantic search first, fall back to FTS5
if (chromaClient) {
try {
console.error('[search-server] Using hybrid semantic search for user prompts');
// Step 1: Chroma semantic search (top 100)
const chromaResults = await queryChroma(query, 100, { doc_type: 'user_prompt' });
console.error(`[search-server] Chroma returned ${chromaResults.ids.length} semantic matches`);
if (chromaResults.ids.length > 0) {
// Step 2: Filter by recency (90 days)
const ninetyDaysAgo = Math.floor(Date.now() / 1000) - (90 * 24 * 60 * 60);
const recentIds = chromaResults.ids.filter((id, idx) => {
const meta = chromaResults.metadatas[idx];
return meta && meta.created_at_epoch > ninetyDaysAgo;
});
console.error(`[search-server] ${recentIds.length} results within 90-day window`);
// Step 3: Hydrate from SQLite in temporal order
if (recentIds.length > 0) {
const limit = options.limit || 20;
results = store.getUserPromptsByIds(recentIds, { orderBy: 'date_desc', limit });
console.error(`[search-server] Hydrated ${results.length} user prompts from SQLite`);
}
}
} catch (chromaError: any) {
console.error('[search-server] Chroma query failed, falling back to FTS5:', chromaError.message);
}
}
// Fall back to FTS5 if Chroma unavailable or returned no results
if (results.length === 0) {
console.error('[search-server] Using FTS5 keyword search');
results = search.searchUserPrompts(query, options);
}
```
**Helper needed in queryChroma:**
Update to handle `prompt_{id}` format:
```typescript
// Extract unique prompt IDs from document IDs
for (const docId of docIds) {
const obsMatch = docId.match(/obs_(\d+)_/);
const summaryMatch = docId.match(/summary_(\d+)_/);
const promptMatch = docId.match(/prompt_(\d+)/);
if (obsMatch) {
const sqliteId = parseInt(obsMatch[1], 10);
if (!ids.includes(sqliteId)) ids.push(sqliteId);
} else if (summaryMatch) {
const sqliteId = parseInt(summaryMatch[1], 10);
if (!ids.includes(sqliteId)) ids.push(sqliteId);
} else if (promptMatch) {
const sqliteId = parseInt(promptMatch[1], 10);
if (!ids.includes(sqliteId)) ids.push(sqliteId);
}
}
```
**Database helper needed:**
Add to `SessionStore.ts`:
```typescript
getUserPromptsByIds(
ids: number[],
options: { orderBy?: 'date_desc' | 'date_asc'; limit?: number } = {}
): UserPromptSearchResult[] {
if (ids.length === 0) return [];
const { orderBy = 'date_desc', limit } = options;
const orderClause = orderBy === 'date_asc' ? 'ASC' : 'DESC';
const limitClause = limit ? `LIMIT ${limit}` : '';
const placeholders = ids.map(() => '?').join(',');
const stmt = this.db.prepare(`
SELECT * FROM user_prompts
WHERE id IN (${placeholders})
ORDER BY created_at_epoch ${orderClause}
${limitClause}
`);
return stmt.all(...ids) as UserPromptSearchResult[];
}
```
### Phase 4: Timeline Context Tool
**New tool:** `get_context_timeline`
**Purpose:** Show observations/sessions/prompts around a specific point in time
**API:**
```typescript
{
name: 'get_context_timeline',
description: 'Get a timeline of context around a specific observation, session, or timestamp',
inputSchema: z.object({
anchor: z.union([
z.number(), // observation ID
z.string() // ISO timestamp or session ID
]).describe('Anchor point: observation ID, session ID, or ISO timestamp'),
depth_before: z.number().min(0).max(50).default(10).describe('Number of records to show before anchor'),
depth_after: z.number().min(0).max(50).default(10).describe('Number of records to show after anchor'),
format: z.enum(['index', 'full']).default('index'),
project: z.string().optional()
})
}
```
**Implementation approach:**
1. Resolve anchor to a timestamp (observation.created_at_epoch, session.created_at_epoch, or parse ISO)
2. Query observations within [anchor_time - depth_before_duration, anchor_time + depth_after_duration]
3. Return chronologically ordered results with anchor highlighted
4. Support mixing observations, sessions, and prompts in single timeline
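Step 1 (anchor resolution) could be sketched as follows. The lookup callbacks are hypothetical stand-ins for SQLite queries against `observations` and `session_summaries`; real code may want stricter ISO-timestamp detection before falling back to session-ID lookup:

```typescript
// Sketch: resolve an anchor (observation ID, ISO timestamp, or session ID) to an epoch.
type EpochLookup = {
  observationEpoch: (id: number) => number | undefined;
  sessionEpoch: (sessionId: string) => number | undefined;
};

function resolveAnchorEpoch(anchor: number | string, lookup: EpochLookup): number {
  if (typeof anchor === "number") {
    const epoch = lookup.observationEpoch(anchor);
    if (epoch === undefined) throw new Error(`Observation ${anchor} not found`);
    return epoch;
  }
  // Try an ISO timestamp first; anything unparseable is treated as a session ID.
  const ms = Date.parse(anchor);
  if (!Number.isNaN(ms)) return Math.floor(ms / 1000);
  const epoch = lookup.sessionEpoch(anchor);
  if (epoch === undefined) throw new Error(`Could not resolve anchor: ${anchor}`);
  return epoch;
}
```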
**Database helper:**
```typescript
getTimelineAroundTimestamp(
anchorEpoch: number,
depthBefore: number,
depthAfter: number,
project?: string
): { observations: any[], sessions: any[], prompts: any[] } {
// Calculate time windows based on depth
// For now: each depth = 1 hour (configurable)
const hourInSeconds = 3600;
const startEpoch = anchorEpoch - (depthBefore * hourInSeconds);
const endEpoch = anchorEpoch + (depthAfter * hourInSeconds);
// Query all three tables
const observations = this.db.prepare(`...`).all(...);
const sessions = this.db.prepare(`...`).all(...);
const prompts = this.db.prepare(`...`).all(...);
return { observations, sessions, prompts };
}
```
## Testing Plan
### Phase 1 Testing
```bash
# Run sync experiment
npm run experiment:sync
# Check Chroma collection for prompts
# Should see prompt_* documents with doc_type: 'user_prompt'
```
### Phase 2 Testing
```bash
# Test semantic search for sessions
# Example: "authentication system" should find sessions about "login", "user auth", etc.
```
### Phase 3 Testing
```bash
# Test semantic search for user prompts
# Example: "fix bug" should find prompts with "error", "issue", "problem", etc.
```
### Phase 4 Testing
```bash
# Test timeline around specific observation
# Should show before/after context
```
## Files to Modify
1. **experiment/chroma-sync-experiment.ts** - Add user_prompts sync
2. **src/servers/search-server.ts** - Update search_sessions and search_user_prompts, add get_context_timeline
3. **src/services/sqlite/SessionStore.ts** - Add getSessionSummariesByIds, getUserPromptsByIds, getTimelineAroundTimestamp
4. **src/services/sqlite/types.ts** - Ensure all return types are exported
## Success Criteria
- ✅ All 8 search tools use Chroma semantic search with SQLite temporal fallback
- ✅ User prompts are synced to Chroma and searchable
- ✅ Timeline tool provides chronological context around any point
- ✅ Semantic search works across observations, sessions, and prompts
- ✅ All searches maintain 90-day temporal filtering for relevance
## Future Enhancements
1. **Incremental sync in worker service** - Currently only batch sync via experiment
2. **Configurable temporal windows** - Make 90-day filter configurable
3. **Cross-collection search** - Search across observations + sessions + prompts in one query
4. **Timeline view improvements** - Group by session, highlight anchor, show relationships
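The Chroma-then-FTS5 pattern repeated in Phases 2 and 3 could eventually be factored into one generic helper. A minimal sketch, shown synchronously for clarity (the real server code is async; all names are hypothetical):

```typescript
// Sketch: shared hybrid-search flow — semantic hits, 90-day filter, SQLite
// hydration, with FTS5 keyword search as the fallback path.
interface SemanticHit {
  sqliteId: number;
  createdAtEpoch: number;
}

function hybridSearch<T>(
  semantic: () => SemanticHit[],        // Chroma query (top 100)
  hydrate: (ids: number[]) => T[],      // SQLite hydration in temporal order
  keywordFallback: () => T[],           // FTS5 keyword search
  nowEpoch: number,
  windowDays = 90,
): T[] {
  try {
    const cutoff = nowEpoch - windowDays * 24 * 60 * 60;
    const recent = semantic().filter((hit) => hit.createdAtEpoch > cutoff);
    if (recent.length > 0) {
      return hydrate(recent.map((hit) => hit.sqliteId));
    }
  } catch {
    // Chroma unavailable: fall through to keyword search.
  }
  return keywordFallback();
}
```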
@@ -1,61 +0,0 @@
For tracking costs and tokens in your Agent SDK plugin, you have built-in programmatic access to usage data through the SDK itself[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking).
## Agent SDK Cost Tracking
The Claude Agent SDK provides detailed token usage information for each interaction[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking). Here's how to track it:
**TypeScript:**
```typescript
import { query } from "@anthropic-ai/claude-agent-sdk";
const result = await query({
prompt: "Your task here",
options: {
onMessage: (message) => {
if (message.type === 'assistant' && message.usage) {
console.log(`Message ID: ${message.id}`);
console.log(`Usage:`, message.usage);
}
}
}
});
```
[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking)
The final `result` message contains the total cumulative usage from all steps in the conversation[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking):
```typescript
console.log("Total usage:", result.usage);
console.log("Total cost:", result.usage.total_cost_usd);
```
[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking)
## Important: Avoid Double-Counting
When Claude executes tools in parallel, multiple assistant messages may share the same ID and usage data[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking). You should only charge once per unique message ID[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking):
```typescript
const processedMessageIds = new Set<string>();
onMessage: (message) => {
if (message.type === 'assistant' && message.usage) {
// Skip if already processed
if (processedMessageIds.has(message.id)) {
return;
}
processedMessageIds.add(message.id);
// Record usage here
}
}
```
[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking)
## Usage Fields
Each usage object contains[(1)](https://docs.claude.com/en/api/agent-sdk/cost-tracking):
- `input_tokens`: Base input tokens processed
- `output_tokens`: Tokens generated in the response
- `cache_creation_input_tokens`: Tokens used to create cache entries
- `cache_read_input_tokens`: Tokens read from cache
- `total_cost_usd`: Total cost in USD (only in result message)
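Putting the fields and the deduplication rule together, a usage accumulator could look like this (a sketch, not the SDK's API; names are hypothetical):

```typescript
// Sketch: accumulate usage across messages while deduplicating by message ID,
// since parallel tool calls can repeat the same ID with the same usage data.
interface UsageFields {
  input_tokens?: number;
  output_tokens?: number;
  cache_creation_input_tokens?: number;
  cache_read_input_tokens?: number;
}

function makeUsageTracker() {
  const seen = new Set<string>();
  const totals = {
    input_tokens: 0,
    output_tokens: 0,
    cache_creation_input_tokens: 0,
    cache_read_input_tokens: 0,
  };
  return {
    record(messageId: string, usage: UsageFields): void {
      if (seen.has(messageId)) return; // already charged for this message
      seen.add(messageId);
      totals.input_tokens += usage.input_tokens ?? 0;
      totals.output_tokens += usage.output_tokens ?? 0;
      totals.cache_creation_input_tokens += usage.cache_creation_input_tokens ?? 0;
      totals.cache_read_input_tokens += usage.cache_read_input_tokens ?? 0;
    },
    totals: () => ({ ...totals }),
  };
}
```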
@@ -1,607 +0,0 @@
# Agent Skills
> Create, manage, and share Skills to extend Claude's capabilities in Claude Code.
This guide shows you how to create, use, and manage Agent Skills in Claude Code. Skills are modular capabilities that extend Claude's functionality through organized folders containing instructions, scripts, and resources.
## Prerequisites
* Claude Code version 1.0 or later
* Basic familiarity with [Claude Code](/en/docs/claude-code/quickstart)
## What are Agent Skills?
Agent Skills package expertise into discoverable capabilities. Each Skill consists of a `SKILL.md` file with instructions that Claude reads when relevant, plus optional supporting files like scripts and templates.
**How Skills are invoked**: Skills are **model-invoked**—Claude autonomously decides when to use them based on your request and the Skill's description. This is different from slash commands, which are **user-invoked** (you explicitly type `/command` to trigger them).
**Benefits**:
* Extend Claude's capabilities for your specific workflows
* Share expertise across your team via git
* Reduce repetitive prompting
* Compose multiple Skills for complex tasks
Learn more in the [Agent Skills overview](/en/docs/agents-and-tools/agent-skills/overview).
<Note>
For a deep dive into the architecture and real-world applications of Agent Skills, read our engineering blog: [Equipping agents for the real world with Agent Skills](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills).
</Note>
## Create a Skill
Skills are stored as directories containing a `SKILL.md` file.
### Personal Skills
Personal Skills are available across all your projects. Store them in `~/.claude/skills/`:
```bash theme={null}
mkdir -p ~/.claude/skills/my-skill-name
```
**Use personal Skills for**:
* Your individual workflows and preferences
* Experimental Skills you're developing
* Personal productivity tools
### Project Skills
Project Skills are shared with your team. Store them in `.claude/skills/` within your project:
```bash theme={null}
mkdir -p .claude/skills/my-skill-name
```
**Use project Skills for**:
* Team workflows and conventions
* Project-specific expertise
* Shared utilities and scripts
Project Skills are checked into git and automatically available to team members.
### Plugin Skills
Skills can also come from [Claude Code plugins](/en/docs/claude-code/plugins). Plugins may bundle Skills that are automatically available when the plugin is installed. These Skills work the same way as personal and project Skills.
## Write SKILL.md
Create a `SKILL.md` file with YAML frontmatter and Markdown content:
```yaml theme={null}
---
name: your-skill-name
description: Brief description of what this Skill does and when to use it
---
# Your Skill Name
## Instructions
Provide clear, step-by-step guidance for Claude.
## Examples
Show concrete examples of using this Skill.
```
**Field requirements**:
* `name`: Must use lowercase letters, numbers, and hyphens only (max 64 characters)
* `description`: Brief description of what the Skill does and when to use it (max 1024 characters)
The `description` field is critical for Claude to discover when to use your Skill. It should include both what the Skill does and when Claude should use it.
See the [best practices guide](/en/docs/agents-and-tools/agent-skills/best-practices) for complete authoring guidance including validation rules.
## Add supporting files
Create additional files alongside SKILL.md:
```
my-skill/
├── SKILL.md (required)
├── reference.md (optional documentation)
├── examples.md (optional examples)
├── scripts/
│ └── helper.py (optional utility)
└── templates/
└── template.txt (optional template)
```
Reference these files from SKILL.md:
````markdown theme={null}
For advanced usage, see [reference.md](reference.md).
Run the helper script:
```bash
python scripts/helper.py input.txt
```
````
Claude reads these files only when needed, using progressive disclosure to manage context efficiently.
## Restrict tool access with allowed-tools
Use the `allowed-tools` frontmatter field to limit which tools Claude can use when a Skill is active:
```yaml theme={null}
---
name: safe-file-reader
description: Read files without making changes. Use when you need read-only file access.
allowed-tools: Read, Grep, Glob
---
# Safe File Reader
This Skill provides read-only file access.
## Instructions
1. Use Read to view file contents
2. Use Grep to search within files
3. Use Glob to find files by pattern
```
When this Skill is active, Claude can only use the specified tools (Read, Grep, Glob) without needing to ask for permission. This is useful for:
* Read-only Skills that shouldn't modify files
* Skills with limited scope (e.g., only data analysis, no file writing)
* Security-sensitive workflows where you want to restrict capabilities
If `allowed-tools` is not specified, Claude will ask for permission to use tools as normal, following the standard permission model.
<Note>
`allowed-tools` is only supported for Skills in Claude Code.
</Note>
## View available Skills
Skills are automatically discovered by Claude from three sources:
* Personal Skills: `~/.claude/skills/`
* Project Skills: `.claude/skills/`
* Plugin Skills: bundled with installed plugins
**To view all available Skills**, ask Claude directly:
```
What Skills are available?
```
or
```
List all available Skills
```
This will show all Skills from all sources, including plugin Skills.
**To inspect a specific Skill**, you can also check the filesystem:
```bash theme={null}
# List personal Skills
ls ~/.claude/skills/
# List project Skills (if in a project directory)
ls .claude/skills/
# View a specific Skill's content
cat ~/.claude/skills/my-skill/SKILL.md
```
## Test a Skill
After creating a Skill, test it by asking questions that match your description.
**Example**: If your description mentions "PDF files":
```
Can you help me extract text from this PDF?
```
Claude autonomously decides to use your Skill if it matches the request—you don't need to explicitly invoke it. The Skill activates automatically based on the context of your question.
## Debug a Skill
If Claude doesn't use your Skill, check these common issues:
### Make description specific
**Too vague**:
```yaml theme={null}
description: Helps with documents
```
**Specific**:
```yaml theme={null}
description: Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
```
Include both what the Skill does and when to use it in the description.
### Verify file path
**Personal Skills**: `~/.claude/skills/skill-name/SKILL.md`
**Project Skills**: `.claude/skills/skill-name/SKILL.md`
Check the file exists:
```bash theme={null}
# Personal
ls ~/.claude/skills/my-skill/SKILL.md
# Project
ls .claude/skills/my-skill/SKILL.md
```
### Check YAML syntax
Invalid YAML prevents the Skill from loading. Verify the frontmatter:
```bash theme={null}
cat SKILL.md | head -n 10
```
Ensure:
* Opening `---` on line 1
* Closing `---` before Markdown content
* Valid YAML syntax (no tabs, correct indentation)
### View errors
Run Claude Code with debug mode to see Skill loading errors:
```bash theme={null}
claude --debug
```
## Share Skills with your team
**Recommended approach**: Distribute Skills through [plugins](/en/docs/claude-code/plugins).
To share Skills via plugin:
1. Create a plugin with Skills in the `skills/` directory
2. Add the plugin to a marketplace
3. Team members install the plugin
For complete instructions, see [Add Skills to your plugin](/en/docs/claude-code/plugins#add-skills-to-your-plugin).
You can also share Skills directly through project repositories:
### Step 1: Add Skill to your project
Create a project Skill:
```bash theme={null}
mkdir -p .claude/skills/team-skill
# Create SKILL.md
```
### Step 2: Commit to git
```bash theme={null}
git add .claude/skills/
git commit -m "Add team Skill for PDF processing"
git push
```
### Step 3: Team members get Skills automatically
When team members pull the latest changes, Skills are immediately available:
```bash theme={null}
git pull
claude # Skills are now available
```
## Update a Skill
Edit SKILL.md directly:
```bash theme={null}
# Personal Skill
code ~/.claude/skills/my-skill/SKILL.md
# Project Skill
code .claude/skills/my-skill/SKILL.md
```
Changes take effect the next time you start Claude Code. If Claude Code is already running, restart it to load the updates.
## Remove a Skill
Delete the Skill directory:
```bash theme={null}
# Personal
rm -rf ~/.claude/skills/my-skill
# Project
rm -rf .claude/skills/my-skill
git commit -m "Remove unused Skill"
```
## Best practices
### Keep Skills focused
One Skill should address one capability:
**Focused**:
* "PDF form filling"
* "Excel data analysis"
* "Git commit messages"
**Too broad**:
* "Document processing" (split into separate Skills)
* "Data tools" (split by data type or operation)
### Write clear descriptions
Help Claude discover when to use Skills by including specific triggers in your description:
**Clear**:
```yaml theme={null}
description: Analyze Excel spreadsheets, create pivot tables, and generate charts. Use when working with Excel files, spreadsheets, or analyzing tabular data in .xlsx format.
```
**Vague**:
```yaml theme={null}
description: For files
```
### Test with your team
Have teammates use Skills and provide feedback:
* Does the Skill activate when expected?
* Are the instructions clear?
* Are there missing examples or edge cases?
### Document Skill versions
You can document Skill versions in your SKILL.md content to track changes over time. Add a version history section:
```markdown theme={null}
# My Skill
## Version History
- v2.0.0 (2025-10-01): Breaking changes to API
- v1.1.0 (2025-09-15): Added new features
- v1.0.0 (2025-09-01): Initial release
```
This helps team members understand what changed between versions.
## Troubleshooting
### Claude doesn't use my Skill
**Symptom**: You ask a relevant question but Claude doesn't use your Skill.
**Check**: Is the description specific enough?
Vague descriptions make discovery difficult. Include both what the Skill does and when to use it, with key terms users would mention.
**Too generic**:
```yaml theme={null}
description: Helps with data
```
**Specific**:
```yaml theme={null}
description: Analyze Excel spreadsheets, generate pivot tables, create charts. Use when working with Excel files, spreadsheets, or .xlsx files.
```
**Check**: Is the YAML valid?
Run validation to check for syntax errors:
```bash theme={null}
# View frontmatter
cat .claude/skills/my-skill/SKILL.md | head -n 15
# Check for common issues
# - Missing opening or closing ---
# - Tabs instead of spaces
# - Unquoted strings with special characters
```
**Check**: Is the Skill in the correct location?
```bash theme={null}
# Personal Skills
ls ~/.claude/skills/*/SKILL.md
# Project Skills
ls .claude/skills/*/SKILL.md
```
### Skill has errors
**Symptom**: The Skill loads but doesn't work correctly.
**Check**: Are dependencies available?
Claude will automatically install required dependencies (or ask for permission to install them) when it needs them.
**Check**: Do scripts have execute permissions?
```bash theme={null}
chmod +x .claude/skills/my-skill/scripts/*.py
```
**Check**: Are file paths correct?
Use forward slashes (Unix style) in all paths:
**Correct**: `scripts/helper.py`
**Wrong**: `scripts\helper.py` (Windows style)
### Multiple Skills conflict
**Symptom**: Claude uses the wrong Skill or seems confused between similar Skills.
**Be specific in descriptions**: Help Claude choose the right Skill by using distinct trigger terms in your descriptions.
Instead of:
```yaml theme={null}
# Skill 1
description: For data analysis
# Skill 2
description: For analyzing data
```
Use:
```yaml theme={null}
# Skill 1
description: Analyze sales data in Excel files and CRM exports. Use for sales reports, pipeline analysis, and revenue tracking.
# Skill 2
description: Analyze log files and system metrics data. Use for performance monitoring, debugging, and system diagnostics.
```
## Examples
### Simple Skill (single file)
```
commit-helper/
└── SKILL.md
```
```yaml theme={null}
---
name: generating-commit-messages
description: Generates clear commit messages from git diffs. Use when writing commit messages or reviewing staged changes.
---
# Generating Commit Messages
## Instructions
1. Run `git diff --staged` to see changes
2. I'll suggest a commit message with:
- Summary under 50 characters
- Detailed description
- Affected components
## Best practices
- Use present tense
- Explain what and why, not how
```
### Skill with tool permissions
```
code-reviewer/
└── SKILL.md
```
```yaml theme={null}
---
name: code-reviewer
description: Review code for best practices and potential issues. Use when reviewing code, checking PRs, or analyzing code quality.
allowed-tools: Read, Grep, Glob
---
# Code Reviewer
## Review checklist
1. Code organization and structure
2. Error handling
3. Performance considerations
4. Security concerns
5. Test coverage
## Instructions
1. Read the target files using Read tool
2. Search for patterns using Grep
3. Find related files using Glob
4. Provide detailed feedback on code quality
```
### Multi-file Skill
```
pdf-processing/
├── SKILL.md
├── FORMS.md
├── REFERENCE.md
└── scripts/
├── fill_form.py
└── validate.py
```
**SKILL.md**:
````yaml theme={null}
---
name: pdf-processing
description: Extract text, fill forms, merge PDFs. Use when working with PDF files, forms, or document extraction. Requires pypdf and pdfplumber packages.
---
# PDF Processing
## Quick start
Extract text:
```python
import pdfplumber
with pdfplumber.open("doc.pdf") as pdf:
text = pdf.pages[0].extract_text()
```
For form filling, see [FORMS.md](FORMS.md).
For detailed API reference, see [REFERENCE.md](REFERENCE.md).
## Requirements
Packages must be installed in your environment:
```bash
pip install pypdf pdfplumber
```
````
<Note>
List required packages in the description. Packages must be installed in your environment before Claude can use them.
</Note>
Claude loads additional files only when needed.
## Next steps
<CardGroup cols={2}>
<Card title="Authoring best practices" icon="lightbulb" href="/en/docs/agents-and-tools/agent-skills/best-practices">
Write Skills that Claude can use effectively
</Card>
<Card title="Agent Skills overview" icon="book" href="/en/docs/agents-and-tools/agent-skills/overview">
Learn how Skills work across Claude products
</Card>
<Card title="Use Skills in the Agent SDK" icon="cube" href="/en/api/agent-sdk/skills">
Use Skills programmatically with TypeScript and Python
</Card>
<Card title="Get started with Agent Skills" icon="rocket" href="/en/docs/agents-and-tools/agent-skills/quickstart">
Create your first Skill
</Card>
</CardGroup>
# Claude Code Hooks Exit Code Cheat Sheet
## Exit Code Behavior [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output)
- **Exit code 0**: Success. `stdout` is shown to the user in transcript mode, except for `UserPromptSubmit` hook where stdout is injected as context [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output)
- **Exit code 2**: Blocking error. `stderr` is fed back to Claude to process automatically [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output)
- **Other exit codes**: Non-blocking error. `stderr` is shown to the user and execution continues [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output)
## Per-Hook Event Behavior [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output)
| Hook Event | Exit Code 2 Behavior |
|------------|---------------------|
| `PreToolUse` | Blocks the tool call, shows stderr to Claude [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `PostToolUse` | Shows stderr to Claude (tool already ran) [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `Notification` | N/A, shows stderr to user only [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `UserPromptSubmit` | Blocks prompt processing, erases prompt, shows stderr to user only [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `Stop` | Blocks stoppage, shows stderr to Claude [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `SubagentStop` | Blocks stoppage, shows stderr to Claude subagent [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `PreCompact` | N/A, shows stderr to user only [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `SessionStart` | N/A, shows stderr to user only [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
| `SessionEnd` | N/A, shows stderr to user only [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output) |
## Quick Reference
- **Success**: `process.exit(0)` - Operation completed successfully
- **Block & feedback**: `process.exit(2)` - Block operation and give Claude feedback via stderr
- **Non-blocking error**: `process.exit(1)` - Show error to user but continue execution
**Important**: Claude Code does not see stdout if the exit code is 0, except for the `UserPromptSubmit` hook where stdout is injected as context [(1)](https://docs.claude.com/en/docs/claude-code/hooks#hook-output)
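The quick reference above can be sketched as a tiny hook script. The `.env` rule is an illustrative policy, not part of the hooks API; a real hook would end by calling `main()`:

```python
#!/usr/bin/env python3
"""Sketch of the three hook exit paths; the .env rule is an example policy."""
import json
import sys

def decide(input_data: dict) -> tuple[int, str]:
    """Map hook input to (exit_code, stderr_message)."""
    tool_input = input_data.get("tool_input")
    if not isinstance(tool_input, dict):
        return 1, "Malformed tool_input"               # non-blocking error
    if tool_input.get("file_path", "").endswith(".env"):
        return 2, "Editing .env files is not allowed"  # block; stderr goes to Claude
    return 0, ""                                       # success

def main() -> None:
    code, message = decide(json.load(sys.stdin))
    if message:
        print(message, file=sys.stderr)
    sys.exit(code)

# A real hook script would call main() here.
```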
# Hooks reference
> This page provides reference documentation for implementing hooks in Claude Code.
<Tip>
For a quickstart guide with examples, see [Get started with Claude Code hooks](/en/docs/claude-code/hooks-guide).
</Tip>
## Configuration
Claude Code hooks are configured in your [settings files](/en/docs/claude-code/settings):
* `~/.claude/settings.json` - User settings
* `.claude/settings.json` - Project settings
* `.claude/settings.local.json` - Local project settings (not committed)
* Enterprise managed policy settings
### Structure
Hooks are organized by matchers, where each matcher can have multiple hooks:
```json theme={null}
{
"hooks": {
"EventName": [
{
"matcher": "ToolPattern",
"hooks": [
{
"type": "command",
"command": "your-command-here"
}
]
}
]
}
}
```
* **matcher**: Pattern to match tool names, case-sensitive (only applicable for
`PreToolUse` and `PostToolUse`)
* Simple strings match exactly: `Write` matches only the Write tool
* Supports regex: `Edit|Write` or `Notebook.*`
* Use `*` to match all tools. You can also use empty string (`""`) or leave
`matcher` blank.
* **hooks**: Array of commands to execute when the pattern matches
* `type`: Currently only `"command"` is supported
* `command`: The bash command to execute (can use `$CLAUDE_PROJECT_DIR`
environment variable)
* `timeout`: (Optional) How long a command should run, in seconds, before
canceling that specific command.
For events like `UserPromptSubmit`, `Notification`, `Stop`, and `SubagentStop`
that don't use matchers, you can omit the matcher field:
```json theme={null}
{
"hooks": {
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "/path/to/prompt-validator.py"
}
]
}
]
}
}
```
### Project-Specific Hook Scripts
You can use the environment variable `CLAUDE_PROJECT_DIR` (only available when
Claude Code spawns the hook command) to reference scripts stored in your project,
ensuring they work regardless of Claude's current directory:
```json theme={null}
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "command",
"command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/check-style.sh"
}
]
}
]
}
}
```
### Plugin hooks
[Plugins](/en/docs/claude-code/plugins) can provide hooks that integrate seamlessly with your user and project hooks. Plugin hooks are automatically merged with your configuration when plugins are enabled.
**How plugin hooks work**:
* Plugin hooks are defined in the plugin's `hooks/hooks.json` file or in a file given by a custom path to the `hooks` field.
* When a plugin is enabled, its hooks are merged with user and project hooks
* Multiple hooks from different sources can respond to the same event
* Plugin hooks use the `${CLAUDE_PLUGIN_ROOT}` environment variable to reference plugin files
**Example plugin hook configuration**:
```json theme={null}
{
"description": "Automatic code formatting",
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/scripts/format.sh",
"timeout": 30
}
]
}
]
}
}
```
<Note>
Plugin hooks use the same format as regular hooks with an optional `description` field to explain the hook's purpose.
</Note>
<Note>
Plugin hooks run alongside your custom hooks. If multiple hooks match an event, they all execute in parallel.
</Note>
**Environment variables for plugins**:
* `${CLAUDE_PLUGIN_ROOT}`: Absolute path to the plugin directory
* `${CLAUDE_PROJECT_DIR}`: Project root directory (same as for project hooks)
* All standard environment variables are available
See the [plugin components reference](/en/docs/claude-code/plugins-reference#hooks) for details on creating plugin hooks.
## Hook Events
### PreToolUse
Runs after Claude creates tool parameters and before processing the tool call.
**Common matchers:**
* `Task` - Subagent tasks (see [subagents documentation](/en/docs/claude-code/sub-agents))
* `Bash` - Shell commands
* `Glob` - File pattern matching
* `Grep` - Content search
* `Read` - File reading
* `Edit` - File editing
* `Write` - File writing
* `WebFetch`, `WebSearch` - Web operations
### PostToolUse
Runs immediately after a tool completes successfully.
Recognizes the same matcher values as PreToolUse.
### Notification
Runs when Claude Code sends notifications. Notifications are sent when:
1. Claude needs your permission to use a tool. Example: "Claude needs your
permission to use Bash"
2. The prompt input has been idle for at least 60 seconds. Example: "Claude is waiting for your input"
### UserPromptSubmit
Runs when the user submits a prompt, before Claude processes it. This allows you
to add additional context based on the prompt/conversation, validate prompts, or
block certain types of prompts.
### Stop
Runs when the main Claude Code agent has finished responding. Does not run if
the stoppage occurred due to a user interrupt.
### SubagentStop
Runs when a Claude Code subagent (Task tool call) has finished responding.
### PreCompact
Runs before Claude Code is about to run a compact operation.
**Matchers:**
* `manual` - Invoked from `/compact`
* `auto` - Invoked from auto-compact (due to full context window)
### SessionStart
Runs when Claude Code starts a new session or resumes an existing session (which
currently does start a new session under the hood). Useful for loading in
development context like existing issues or recent changes to your codebase.
**Matchers:**
* `startup` - Invoked from startup
* `resume` - Invoked from `--resume`, `--continue`, or `/resume`
* `clear` - Invoked from `/clear`
* `compact` - Invoked from auto or manual compact.
### SessionEnd
Runs when a Claude Code session ends. Useful for cleanup tasks, logging session
statistics, or saving session state.
The `reason` field in the hook input will be one of:
* `clear` - Session cleared with /clear command
* `logout` - User logged out
* `prompt_input_exit` - User exited while prompt input was visible
* `other` - Other exit reasons
## Hook Input
Hooks receive JSON data via stdin containing session information and
event-specific data:
```typescript theme={null}
{
// Common fields
session_id: string
transcript_path: string // Path to conversation JSON
cwd: string // The current working directory when the hook is invoked
// Event-specific fields
hook_event_name: string
...
}
```
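A hook can parse these common fields uniformly before branching on the event. A minimal sketch:

```python
import json

def parse_hook_input(raw: str) -> dict:
    """Split hook stdin into the common fields plus event-specific extras."""
    data = json.loads(raw)
    common_keys = ("session_id", "transcript_path", "cwd", "hook_event_name")
    parsed = {k: data.get(k) for k in common_keys}
    # Everything else is event-specific (tool_name, prompt, trigger, ...)
    parsed["extra"] = {k: v for k, v in data.items() if k not in common_keys}
    return parsed
```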
### PreToolUse Input
The exact schema for `tool_input` depends on the tool.
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "/Users/.../.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"cwd": "/Users/...",
"hook_event_name": "PreToolUse",
"tool_name": "Write",
"tool_input": {
"file_path": "/path/to/file.txt",
"content": "file content"
}
}
```
### PostToolUse Input
The exact schema for `tool_input` and `tool_response` depends on the tool.
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "/Users/.../.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"cwd": "/Users/...",
"hook_event_name": "PostToolUse",
"tool_name": "Write",
"tool_input": {
"file_path": "/path/to/file.txt",
"content": "file content"
},
"tool_response": {
"filePath": "/path/to/file.txt",
"success": true
}
}
```
### Notification Input
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "/Users/.../.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"cwd": "/Users/...",
"hook_event_name": "Notification",
"message": "Task completed successfully"
}
```
### UserPromptSubmit Input
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "/Users/.../.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"cwd": "/Users/...",
"hook_event_name": "UserPromptSubmit",
"prompt": "Write a function to calculate the factorial of a number"
}
```
### Stop and SubagentStop Input
`stop_hook_active` is true when Claude Code is already continuing as a result of
a stop hook. Check this value or process the transcript to prevent Claude Code
from running indefinitely.
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "~/.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"hook_event_name": "Stop",
"stop_hook_active": true
}
```
### PreCompact Input
For `manual`, `custom_instructions` comes from what the user passes into
`/compact`. For `auto`, `custom_instructions` is empty.
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "~/.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"hook_event_name": "PreCompact",
"trigger": "manual",
"custom_instructions": ""
}
```
### SessionStart Input
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "~/.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"hook_event_name": "SessionStart",
"source": "startup"
}
```
### SessionEnd Input
```json theme={null}
{
"session_id": "abc123",
"transcript_path": "~/.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
"cwd": "/Users/...",
"hook_event_name": "SessionEnd",
"reason": "exit"
}
```
## Hook Output
There are two ways for hooks to return output back to Claude Code. The output
communicates whether to block and any feedback that should be shown to Claude
and the user.
### Simple: Exit Code
Hooks communicate status through exit codes, stdout, and stderr:
* **Exit code 0**: Success. `stdout` is shown to the user in transcript mode
(CTRL-R), except for `UserPromptSubmit` and `SessionStart`, where stdout is
added to the context.
* **Exit code 2**: Blocking error. `stderr` is fed back to Claude to process
automatically. See per-hook-event behavior below.
* **Other exit codes**: Non-blocking error. `stderr` is shown to the user and
execution continues.
<Warning>
Reminder: Claude Code does not see stdout if the exit code is 0, except for
the `UserPromptSubmit` hook where stdout is injected as context.
</Warning>
#### Exit Code 2 Behavior
| Hook Event | Behavior |
| ------------------ | ------------------------------------------------------------------ |
| `PreToolUse` | Blocks the tool call, shows stderr to Claude |
| `PostToolUse` | Shows stderr to Claude (tool already ran) |
| `Notification` | N/A, shows stderr to user only |
| `UserPromptSubmit` | Blocks prompt processing, erases prompt, shows stderr to user only |
| `Stop` | Blocks stoppage, shows stderr to Claude |
| `SubagentStop` | Blocks stoppage, shows stderr to Claude subagent |
| `PreCompact` | N/A, shows stderr to user only |
| `SessionStart` | N/A, shows stderr to user only |
| `SessionEnd` | N/A, shows stderr to user only |
### Advanced: JSON Output
Hooks can return structured JSON in `stdout` for more sophisticated control:
#### Common JSON Fields
All hook types can include these optional fields:
```json theme={null}
{
"continue": true, // Whether Claude should continue after hook execution (default: true)
"stopReason": "string", // Message shown when continue is false
"suppressOutput": true, // Hide stdout from transcript mode (default: false)
"systemMessage": "string" // Optional warning message shown to the user
}
```
If `continue` is false, Claude stops processing after the hooks run.
* For `PreToolUse`, this is different from `"permissionDecision": "deny"`, which
only blocks a specific tool call and provides automatic feedback to Claude.
* For `PostToolUse`, this is different from `"decision": "block"`, which
provides automated feedback to Claude.
* For `UserPromptSubmit`, this prevents the prompt from being processed.
* For `Stop` and `SubagentStop`, this takes precedence over any
`"decision": "block"` output.
* In all cases, `"continue" = false` takes precedence over any
`"decision": "block"` output.
`stopReason` accompanies `continue` with a reason shown to the user, not shown
to Claude.
#### `PreToolUse` Decision Control
`PreToolUse` hooks can control whether a tool call proceeds.
* `"allow"` bypasses the permission system. `permissionDecisionReason` is shown
to the user but not to Claude.
* `"deny"` prevents the tool call from executing. `permissionDecisionReason` is
shown to Claude.
* `"ask"` asks the user to confirm the tool call in the UI.
`permissionDecisionReason` is shown to the user but not to Claude.
```json theme={null}
{
"hookSpecificOutput": {
"hookEventName": "PreToolUse",
"permissionDecision": "allow" | "deny" | "ask",
"permissionDecisionReason": "My reason here"
}
}
```
<Note>
The `decision` and `reason` fields are deprecated for PreToolUse hooks.
Use `hookSpecificOutput.permissionDecision` and
`hookSpecificOutput.permissionDecisionReason` instead. The deprecated fields
`"approve"` and `"block"` map to `"allow"` and `"deny"` respectively.
</Note>
#### `PostToolUse` Decision Control
`PostToolUse` hooks can provide feedback to Claude after tool execution.
* `"block"` automatically prompts Claude with `reason`.
* `undefined` does nothing. `reason` is ignored.
* `"hookSpecificOutput.additionalContext"` adds context for Claude to consider.
```json theme={null}
{
"decision": "block" | undefined,
"reason": "Explanation for decision",
"hookSpecificOutput": {
"hookEventName": "PostToolUse",
"additionalContext": "Additional information for Claude"
}
}
```
#### `UserPromptSubmit` Decision Control
`UserPromptSubmit` hooks can control whether a user prompt is processed.
* `"block"` prevents the prompt from being processed. The submitted prompt is
erased from context. `"reason"` is shown to the user but not added to context.
* `undefined` allows the prompt to proceed normally. `"reason"` is ignored.
* `"hookSpecificOutput.additionalContext"` adds the string to the context if not
blocked.
```json theme={null}
{
"decision": "block" | undefined,
"reason": "Explanation for decision",
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": "My additional context here"
}
}
```
#### `Stop`/`SubagentStop` Decision Control
`Stop` and `SubagentStop` hooks can control whether Claude must continue.
* `"block"` prevents Claude from stopping. You must populate `reason` for Claude
to know how to proceed.
* `undefined` allows Claude to stop. `reason` is ignored.
```json theme={null}
{
"decision": "block" | undefined,
"reason": "Must be provided when Claude is blocked from stopping"
}
```
#### `SessionStart` Decision Control
`SessionStart` hooks allow you to load in context at the start of a session.
* `"hookSpecificOutput.additionalContext"` adds the string to the context.
* Multiple hooks' `additionalContext` values are concatenated.
```json theme={null}
{
"hookSpecificOutput": {
"hookEventName": "SessionStart",
"additionalContext": "My additional context here"
}
}
```
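A `SessionStart` hook might inject recent git history as context. A sketch; the `git log` invocation and commit count are illustrative choices:

```python
import json
import subprocess

def session_start_output(cwd: str) -> str:
    """Build JSON that injects recent commits into the session context."""
    try:
        log = subprocess.run(
            ["git", "log", "--oneline", "-5"],
            cwd=cwd, capture_output=True, text=True, check=True,
        ).stdout.strip()
    except (OSError, subprocess.CalledProcessError):
        log = "(no git history available)"  # not a repo, or git is missing
    return json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "SessionStart",
            "additionalContext": f"Recent commits:\n{log}",
        }
    })
```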
#### `SessionEnd` Decision Control
`SessionEnd` hooks run when a session ends. They cannot block session termination
but can perform cleanup tasks.
#### Exit Code Example: Bash Command Validation
```python theme={null}
#!/usr/bin/env python3
import json
import re
import sys
# Define validation rules as a list of (regex pattern, message) tuples
VALIDATION_RULES = [
(
r"\bgrep\b(?!.*\|)",
"Use 'rg' (ripgrep) instead of 'grep' for better performance and features",
),
(
r"\bfind\s+\S+\s+-name\b",
"Use 'rg --files | rg pattern' or 'rg --files -g pattern' instead of 'find -name' for better performance",
),
]
def validate_command(command: str) -> list[str]:
issues = []
for pattern, message in VALIDATION_RULES:
if re.search(pattern, command):
issues.append(message)
return issues
try:
input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
sys.exit(1)
tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})
command = tool_input.get("command", "")
if tool_name != "Bash" or not command:
    sys.exit(0)  # Nothing to validate; let the tool call proceed
# Validate the command
issues = validate_command(command)
if issues:
for message in issues:
print(f"• {message}", file=sys.stderr)
# Exit code 2 blocks tool call and shows stderr to Claude
sys.exit(2)
```
#### JSON Output Example: UserPromptSubmit to Add Context and Validation
<Note>
For `UserPromptSubmit` hooks, you can inject context using either method:
* Exit code 0 with stdout: Claude sees the context (special case for `UserPromptSubmit`)
* JSON output: Provides more control over the behavior
</Note>
```python theme={null}
#!/usr/bin/env python3
import json
import sys
import re
import datetime
# Load input from stdin
try:
input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
sys.exit(1)
prompt = input_data.get("prompt", "")
# Check for sensitive patterns
sensitive_patterns = [
(r"(?i)\b(password|secret|key|token)\s*[:=]", "Prompt contains potential secrets"),
]
for pattern, message in sensitive_patterns:
if re.search(pattern, prompt):
# Use JSON output to block with a specific reason
output = {
"decision": "block",
"reason": f"Security policy violation: {message}. Please rephrase your request without sensitive information."
}
print(json.dumps(output))
sys.exit(0)
# Add current time to context
context = f"Current time: {datetime.datetime.now()}"
print(context)
"""
The following is also equivalent:
print(json.dumps({
"hookSpecificOutput": {
"hookEventName": "UserPromptSubmit",
"additionalContext": context,
},
}))
"""
# Allow the prompt to proceed with the additional context
sys.exit(0)
```
#### JSON Output Example: PreToolUse with Approval
```python theme={null}
#!/usr/bin/env python3
import json
import sys
# Load input from stdin
try:
input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
sys.exit(1)
tool_name = input_data.get("tool_name", "")
tool_input = input_data.get("tool_input", {})
# Example: Auto-approve file reads for documentation files
if tool_name == "Read":
file_path = tool_input.get("file_path", "")
if file_path.endswith((".md", ".mdx", ".txt", ".json")):
        # Use JSON output to auto-approve the tool call
        output = {
            "hookSpecificOutput": {
                "hookEventName": "PreToolUse",
                "permissionDecision": "allow",
                "permissionDecisionReason": "Documentation file auto-approved"
            },
            "suppressOutput": True  # Don't show in transcript mode
        }
        print(json.dumps(output))
        sys.exit(0)
# For other cases, let the normal permission flow proceed
sys.exit(0)
```
## Working with MCP Tools
Claude Code hooks work seamlessly with
[Model Context Protocol (MCP) tools](/en/docs/claude-code/mcp). When MCP servers
provide tools, they appear with a special naming pattern that you can match in
your hooks.
### MCP Tool Naming
MCP tools follow the pattern `mcp__<server>__<tool>`, for example:
* `mcp__memory__create_entities` - Memory server's create entities tool
* `mcp__filesystem__read_file` - Filesystem server's read file tool
* `mcp__github__search_repositories` - GitHub server's search tool
### Configuring Hooks for MCP Tools
You can target specific MCP tools or entire MCP servers:
```json theme={null}
{
"hooks": {
"PreToolUse": [
{
"matcher": "mcp__memory__.*",
"hooks": [
{
"type": "command",
"command": "echo 'Memory operation initiated' >> ~/mcp-operations.log"
}
]
},
{
"matcher": "mcp__.*__write.*",
"hooks": [
{
"type": "command",
"command": "/home/user/scripts/validate-mcp-write.py"
}
]
}
]
}
}
```
## Examples
<Tip>
For practical examples including code formatting, notifications, and file protection, see [More Examples](/en/docs/claude-code/hooks-guide#more-examples) in the get started guide.
</Tip>
## Security Considerations
### Disclaimer
**USE AT YOUR OWN RISK**: Claude Code hooks execute arbitrary shell commands on
your system automatically. By using hooks, you acknowledge that:
* You are solely responsible for the commands you configure
* Hooks can modify, delete, or access any files your user account can access
* Malicious or poorly written hooks can cause data loss or system damage
* Anthropic provides no warranty and assumes no liability for any damages
resulting from hook usage
* You should thoroughly test hooks in a safe environment before production use
Always review and understand any hook commands before adding them to your
configuration.
### Security Best Practices
Here are some key practices for writing more secure hooks:
1. **Validate and sanitize inputs** - Never trust input data blindly
2. **Always quote shell variables** - Use `"$VAR"` not `$VAR`
3. **Block path traversal** - Check for `..` in file paths
4. **Use absolute paths** - Specify full paths for scripts (use
   `"$CLAUDE_PROJECT_DIR"` for the project path)
5. **Skip sensitive files** - Avoid `.env`, `.git/`, keys, etc.
### Configuration Safety
Direct edits to hooks in settings files don't take effect immediately. Claude
Code:
1. Captures a snapshot of hooks at startup
2. Uses this snapshot throughout the session
3. Warns if hooks are modified externally
4. Requires review in `/hooks` menu for changes to apply
This prevents malicious hook modifications from affecting your current session.
## Hook Execution Details
* **Timeout**: 60-second execution limit by default, configurable per command.
* A timeout for an individual command does not affect the other commands.
* **Parallelization**: All matching hooks run in parallel
* **Deduplication**: Multiple identical hook commands are deduplicated automatically
* **Environment**: Runs in current directory with Claude Code's environment
* The `CLAUDE_PROJECT_DIR` environment variable is available and contains the
absolute path to the project root directory (where Claude Code was started)
* **Input**: JSON via stdin
* **Output**:
* PreToolUse/PostToolUse/Stop/SubagentStop: Progress shown in transcript (Ctrl-R)
* Notification/SessionEnd: Logged to debug only (`--debug`)
* UserPromptSubmit/SessionStart: stdout added as context for Claude
## Debugging
### Basic Troubleshooting
If your hooks aren't working:
1. **Check configuration** - Run `/hooks` to see if your hook is registered
2. **Verify syntax** - Ensure your JSON settings are valid
3. **Test commands** - Run hook commands manually first
4. **Check permissions** - Make sure scripts are executable
5. **Review logs** - Use `claude --debug` to see hook execution details
Common issues:
* **Quotes not escaped** - Use `\"` inside JSON strings
* **Wrong matcher** - Check tool names match exactly (case-sensitive)
* **Command not found** - Use full paths for scripts
### Advanced Debugging
For complex hook issues:
1. **Inspect hook execution** - Use `claude --debug` to see detailed hook
execution
2. **Validate JSON schemas** - Test hook input/output with external tools
3. **Check environment variables** - Verify Claude Code's environment is correct
4. **Test edge cases** - Try hooks with unusual file paths or inputs
5. **Monitor system resources** - Check for resource exhaustion during hook
execution
6. **Use structured logging** - Implement logging in your hook scripts
### Debug Output Example
Use `claude --debug` to see hook execution details:
```
[DEBUG] Executing hooks for PostToolUse:Write
[DEBUG] Getting matching hook commands for PostToolUse with query: Write
[DEBUG] Found 1 hook matchers in settings
[DEBUG] Matched 1 hooks for query "Write"
[DEBUG] Found 1 hook commands to execute
[DEBUG] Executing hook command: <Your command> with timeout 60000ms
[DEBUG] Hook command completed with status 0: <Your stdout>
```
Progress messages appear in transcript mode (Ctrl-R) showing:
* Which hook is running
* Command being executed
* Success/failure status
* Output or error messages
# Plugins
> Extend Claude Code with custom commands, agents, hooks, and MCP servers through the plugin system.
<Tip>
For complete technical specifications and schemas, see [Plugins reference](/en/docs/claude-code/plugins-reference). For marketplace management, see [Plugin marketplaces](/en/docs/claude-code/plugin-marketplaces).
</Tip>
Plugins let you extend Claude Code with custom functionality that can be shared across projects and teams. Install plugins from [marketplaces](/en/docs/claude-code/plugin-marketplaces) to add pre-built commands, agents, hooks, and MCP servers, or create your own to automate your workflows.
## Quickstart
Let's create a simple greeting plugin to get you familiar with the plugin system. We'll build a working plugin that adds a custom command, test it locally, and understand the core concepts.
### Prerequisites
* Claude Code installed on your machine
* Basic familiarity with command-line tools
### Create your first plugin
<Steps>
<Step title="Create the marketplace structure">
```bash theme={null}
mkdir test-marketplace
cd test-marketplace
```
</Step>
<Step title="Create the plugin directory">
```bash theme={null}
mkdir my-first-plugin
cd my-first-plugin
```
</Step>
<Step title="Create the plugin manifest">
```bash Create .claude-plugin/plugin.json theme={null}
mkdir .claude-plugin
cat > .claude-plugin/plugin.json << 'EOF'
{
"name": "my-first-plugin",
"description": "A simple greeting plugin to learn the basics",
"version": "1.0.0",
"author": {
"name": "Your Name"
}
}
EOF
```
</Step>
<Step title="Add a custom command">
```bash Create commands/hello.md theme={null}
mkdir commands
cat > commands/hello.md << 'EOF'
---
description: Greet the user with a personalized message
---
# Hello Command
Greet the user warmly and ask how you can help them today. Make the greeting personal and encouraging.
EOF
```
</Step>
<Step title="Create the marketplace manifest">
```bash Create marketplace.json theme={null}
cd ..
mkdir .claude-plugin
cat > .claude-plugin/marketplace.json << 'EOF'
{
"name": "test-marketplace",
"owner": {
"name": "Test User"
},
"plugins": [
{
"name": "my-first-plugin",
"source": "./my-first-plugin",
"description": "My first test plugin"
}
]
}
EOF
```
</Step>
<Step title="Install and test your plugin">
```bash Start Claude Code from parent directory theme={null}
cd ..
claude
```
```shell Add the test marketplace theme={null}
/plugin marketplace add ./test-marketplace
```
```shell Install your plugin theme={null}
/plugin install my-first-plugin@test-marketplace
```
Select "Install now". You'll then need to restart Claude Code in order to use the new plugin.
```shell Try your new command theme={null}
/hello
```
You'll see Claude use your greeting command! Check `/help` to see your new command listed.
</Step>
</Steps>
You've successfully created and tested a plugin with these key components:
* **Plugin manifest** (`.claude-plugin/plugin.json`) - Describes your plugin's metadata
* **Commands directory** (`commands/`) - Contains your custom slash commands
* **Test marketplace** - Allows you to test your plugin locally
### Plugin structure overview
Your plugin follows this basic structure:
```
my-first-plugin/
├── .claude-plugin/
│ └── plugin.json # Plugin metadata
├── commands/ # Custom slash commands (optional)
│ └── hello.md
├── agents/ # Custom agents (optional)
│ └── helper.md
├── skills/ # Agent Skills (optional)
│ └── my-skill/
│ └── SKILL.md
└── hooks/ # Event handlers (optional)
└── hooks.json
```
**Additional components you can add:**
* **Commands**: Create markdown files in `commands/` directory
* **Agents**: Create agent definitions in `agents/` directory
* **Skills**: Create `SKILL.md` files in `skills/` directory
* **Hooks**: Create `hooks/hooks.json` for event handling
* **MCP servers**: Create `.mcp.json` for external tool integration
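For instance, a minimal command file is just markdown with optional frontmatter. The sketch below is a hypothetical `commands/summarize.md` (the `description` field and the filename are illustrative):

```markdown commands/summarize.md theme={null}
---
description: Summarize uncommitted changes in plain language
---

Run `git status` and `git diff`, then summarize the uncommitted changes
in a few plain-language bullet points.
```

The filename (minus `.md`) becomes the slash command name, so this file would be invoked as `/summarize`.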
<Note>
**Next steps**: Ready to add more features? Jump to [Develop more complex plugins](#develop-more-complex-plugins) to add agents, hooks, and MCP servers. For complete technical specifications of all plugin components, see [Plugins reference](/en/docs/claude-code/plugins-reference).
</Note>
***
## Install and manage plugins
Learn how to discover, install, and manage plugins to extend your Claude Code capabilities.
### Prerequisites
* Claude Code installed and running
* Basic familiarity with command-line interfaces
### Add marketplaces
Marketplaces are catalogs of available plugins. Add them to discover and install plugins:
```shell Add a marketplace theme={null}
/plugin marketplace add your-org/claude-plugins
```
```shell Browse available plugins theme={null}
/plugin
```
For detailed marketplace management including Git repositories, local development, and team distribution, see [Plugin marketplaces](/en/docs/claude-code/plugin-marketplaces).
### Install plugins
#### Via interactive menu (recommended for discovery)
```shell Open the plugin management interface theme={null}
/plugin
```
Select "Browse Plugins" to see available options with descriptions, features, and installation options.
#### Via direct commands (for quick installation)
```shell Install a specific plugin theme={null}
/plugin install formatter@your-org
```
```shell Enable a disabled plugin theme={null}
/plugin enable plugin-name@marketplace-name
```
```shell Disable without uninstalling theme={null}
/plugin disable plugin-name@marketplace-name
```
```shell Completely remove a plugin theme={null}
/plugin uninstall plugin-name@marketplace-name
```
### Verify installation
After installing a plugin:
1. **Check available commands**: Run `/help` to see new commands
2. **Test plugin features**: Try the plugin's commands and features
3. **Review plugin details**: Use `/plugin` → "Manage Plugins" to see what the plugin provides
## Set up team plugin workflows
Configure plugins at the repository level to ensure consistent tooling across your team. When team members trust your repository folder, Claude Code automatically installs specified marketplaces and plugins.
**To set up team plugins:**
1. Add marketplace and plugin configuration to your repository's `.claude/settings.json`
2. Team members trust the repository folder
3. Plugins install automatically for all team members
For complete instructions including configuration examples, marketplace setup, and rollout best practices, see [Configure team marketplaces](/en/docs/claude-code/plugin-marketplaces#how-to-configure-team-marketplaces).
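As a rough sketch, a repository-level `.claude/settings.json` might pre-register a marketplace and enable a plugin from it. The field names below are illustrative assumptions; the linked guide has the authoritative schema:

```json .claude/settings.json theme={null}
{
  "extraKnownMarketplaces": {
    "team-tools": {
      "source": {
        "source": "github",
        "repo": "your-org/claude-plugins"
      }
    }
  },
  "enabledPlugins": {
    "formatter@team-tools": true
  }
}
```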
***
## Develop more complex plugins
Once you're comfortable with basic plugins, you can create more sophisticated extensions.
### Add Skills to your plugin
Plugins can include [Agent Skills](/en/docs/claude-code/skills) to extend Claude's capabilities. Skills are model-invoked—Claude autonomously uses them based on the task context.
To add Skills to your plugin, create a `skills/` directory at your plugin root and add Skill folders with `SKILL.md` files. Plugin Skills are automatically available when the plugin is installed.
For complete Skill authoring guidance, see [Agent Skills](/en/docs/claude-code/skills).
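As an illustrative sketch, a hypothetical `skills/changelog-writer/SKILL.md` might look like the following (the frontmatter fields and filename are assumptions; see the linked Skills docs for the full specification):

```markdown skills/changelog-writer/SKILL.md theme={null}
---
name: changelog-writer
description: Drafts changelog entries from recent git history. Use when the user asks to summarize changes for a release.
---

# Changelog writer

1. Inspect recent history with `git log --oneline`.
2. Group commits by type (features, fixes, chores).
3. Draft a concise, user-facing changelog entry.
```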
### Organize complex plugins
For plugins with many components, organize your directory structure by functionality. For complete directory layouts and organization patterns, see [Plugin directory structure](/en/docs/claude-code/plugins-reference#plugin-directory-structure).
### Test your plugins locally
When developing plugins, use a local marketplace to test changes iteratively. This workflow builds on the quickstart pattern and works for plugins of any complexity.
<Steps>
<Step title="Set up your development structure">
Organize your plugin and marketplace for testing:
```bash Create directory structure theme={null}
mkdir dev-marketplace
cd dev-marketplace
mkdir my-plugin
```
This creates:
```
dev-marketplace/
├── .claude-plugin/marketplace.json (you'll create this)
└── my-plugin/ (your plugin under development)
├── .claude-plugin/plugin.json
├── commands/
├── agents/
└── hooks/
```
</Step>
<Step title="Create the marketplace manifest">
```bash Create marketplace.json theme={null}
mkdir .claude-plugin
cat > .claude-plugin/marketplace.json << 'EOF'
{
"name": "dev-marketplace",
"owner": {
"name": "Developer"
},
"plugins": [
{
"name": "my-plugin",
"source": "./my-plugin",
"description": "Plugin under development"
}
]
}
EOF
```
</Step>
<Step title="Install and test">
```bash Start Claude Code from parent directory theme={null}
cd ..
claude
```
```shell Add your development marketplace theme={null}
/plugin marketplace add ./dev-marketplace
```
```shell Install your plugin theme={null}
/plugin install my-plugin@dev-marketplace
```
Test your plugin components:
* Try your commands with `/command-name`
* Check that agents appear in `/agents`
* Verify hooks work as expected
</Step>
<Step title="Iterate on your plugin">
After making changes to your plugin code:
```shell Uninstall the current version theme={null}
/plugin uninstall my-plugin@dev-marketplace
```
```shell Reinstall to test changes theme={null}
/plugin install my-plugin@dev-marketplace
```
Repeat this cycle as you develop and refine your plugin.
</Step>
</Steps>
<Note>
**For multiple plugins**: Organize plugins in subdirectories like `./plugins/plugin-name` and update your marketplace.json accordingly. See [Plugin sources](/en/docs/claude-code/plugin-marketplaces#plugin-sources) for organization patterns.
</Note>
### Debug plugin issues
If your plugin isn't working as expected:
1. **Check the structure**: Ensure your directories are at the plugin root, not inside `.claude-plugin/`
2. **Test components individually**: Check each command, agent, and hook separately
3. **Use validation and debugging tools**: See [Debugging and development tools](/en/docs/claude-code/plugins-reference#debugging-and-development-tools) for CLI commands and troubleshooting techniques
### Share your plugins
When your plugin is ready to share:
1. **Add documentation**: Include a README.md with installation and usage instructions
2. **Version your plugin**: Use semantic versioning in your `plugin.json`
3. **Create or use a marketplace**: Distribute through plugin marketplaces for easy installation
4. **Test with others**: Have team members test the plugin before wider distribution
<Note>
For complete technical specifications, debugging techniques, and distribution strategies, see [Plugins reference](/en/docs/claude-code/plugins-reference).
</Note>
***
## Next steps
Now that you understand Claude Code's plugin system, here are suggested paths for different goals:
### For plugin users
* **Discover plugins**: Browse community marketplaces for useful tools
* **Team adoption**: Set up repository-level plugins for your projects
* **Marketplace management**: Learn to manage multiple plugin sources
* **Advanced usage**: Explore plugin combinations and workflows
### For plugin developers
* **Create your first marketplace**: [Plugin marketplaces guide](/en/docs/claude-code/plugin-marketplaces)
* **Advanced components**: Dive deeper into specific plugin components:
* [Slash commands](/en/docs/claude-code/slash-commands) - Command development details
* [Subagents](/en/docs/claude-code/sub-agents) - Agent configuration and capabilities
* [Agent Skills](/en/docs/claude-code/skills) - Extend Claude's capabilities
* [Hooks](/en/docs/claude-code/hooks) - Event handling and automation
* [MCP](/en/docs/claude-code/mcp) - External tool integration
* **Distribution strategies**: Package and share your plugins effectively
* **Community contribution**: Consider contributing to community plugin collections
### For team leads and administrators
* **Repository configuration**: Set up automatic plugin installation for team projects
* **Plugin governance**: Establish guidelines for plugin approval and security review
* **Marketplace maintenance**: Create and maintain organization-specific plugin catalogs
* **Training and documentation**: Help team members adopt plugin workflows effectively
## See also
* [Plugin marketplaces](/en/docs/claude-code/plugin-marketplaces) - Creating and managing plugin catalogs
* [Slash commands](/en/docs/claude-code/slash-commands) - Understanding custom commands
* [Subagents](/en/docs/claude-code/sub-agents) - Creating and using specialized agents
* [Agent Skills](/en/docs/claude-code/skills) - Extend Claude's capabilities
* [Hooks](/en/docs/claude-code/hooks) - Automating workflows with event handlers
* [MCP](/en/docs/claude-code/mcp) - Connecting to external tools and services
* [Settings](/en/docs/claude-code/settings) - Configuration options for plugins
@@ -1,218 +0,0 @@
# Models overview
> Claude is a family of state-of-the-art large language models developed by Anthropic. This guide introduces our models and compares their performance with legacy models.
export const ModelId = ({children, style = {}}) => {
const copiedNotice = 'Copied!';
const handleClick = e => {
const element = e.currentTarget;
const originalText = element.textContent;
navigator.clipboard.writeText(children).then(() => {
element.textContent = copiedNotice;
element.style.backgroundColor = '#d4edda';
element.style.color = '#155724';
element.style.borderColor = '#c3e6cb';
setTimeout(() => {
element.textContent = originalText;
element.style.backgroundColor = '#f5f5f5';
element.style.color = '';
element.style.borderColor = 'transparent';
}, 2000);
}).catch(error => {
console.error('Failed to copy:', error);
});
};
const handleMouseEnter = e => {
const element = e.currentTarget;
const tooltip = element.querySelector('.copy-tooltip');
if (tooltip && element.textContent !== copiedNotice) {
tooltip.style.opacity = '1';
}
element.style.backgroundColor = '#e8e8e8';
element.style.borderColor = '#d0d0d0';
};
const handleMouseLeave = e => {
const element = e.currentTarget;
const tooltip = element.querySelector('.copy-tooltip');
if (tooltip) {
tooltip.style.opacity = '0';
}
if (element.textContent !== copiedNotice) {
element.style.backgroundColor = '#f5f5f5';
element.style.borderColor = 'transparent';
}
};
const defaultStyle = {
cursor: 'pointer',
position: 'relative',
transition: 'all 0.2s ease',
display: 'inline-block',
userSelect: 'none',
backgroundColor: '#f5f5f5',
padding: '2px 4px',
borderRadius: '4px',
fontFamily: 'Monaco, Consolas, "Courier New", monospace',
fontSize: '0.9em',
border: '1px solid transparent',
...style
};
return <span onClick={handleClick} onMouseEnter={handleMouseEnter} onMouseLeave={handleMouseLeave} style={defaultStyle}>
{children}
</span>;
};
<CardGroup cols={3}>
<Card title="Claude Sonnet 4.5" icon="star" href="/en/docs/about-claude/models/overview#model-comparison-table">
Our best model for complex agents and coding
* <Icon icon="inbox-in" iconType="thin" /> Text and image input
* <Icon icon="inbox-out" iconType="thin" /> Text output
* <Icon icon="book" iconType="thin" /> 200k context window (1M context beta available)
* <Icon icon="brain" iconType="thin" /> Highest intelligence across most tasks
</Card>
<Card title="Claude Haiku 4.5" icon="rocket-launch" href="/en/docs/about-claude/models/overview#model-comparison-table">
Our fastest and most intelligent Haiku model
* <Icon icon="inbox-in" iconType="thin" /> Text and image input
* <Icon icon="inbox-out" iconType="thin" /> Text output
* <Icon icon="book" iconType="thin" /> 200k context window
* <Icon icon="zap" iconType="thin" /> Lightning-fast speed with extended thinking
</Card>
<Card title="Claude Opus 4.1" icon="trophy" href="/en/docs/about-claude/models/overview#model-comparison-table">
Exceptional model for specialized complex tasks
* <Icon icon="inbox-in" iconType="thin" /> Text and image input
* <Icon icon="inbox-out" iconType="thin" /> Text output
* <Icon icon="book" iconType="thin" /> 200k context window
* <Icon icon="brain" iconType="thin" /> Superior reasoning capabilities
</Card>
</CardGroup>
***
## Model names
| Model | Claude API | AWS Bedrock | GCP Vertex AI |
| ----------------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------ | ---------------------------------------------- |
| Claude Sonnet 4.5 | <ModelId>claude-sonnet-4-5-20250929</ModelId> | <ModelId>anthropic.claude-sonnet-4-5-20250929-v1:0</ModelId> | <ModelId>claude-sonnet-4-5\@20250929</ModelId> |
| Claude Sonnet 4 | <ModelId>claude-sonnet-4-20250514</ModelId> | <ModelId>anthropic.claude-sonnet-4-20250514-v1:0</ModelId> | <ModelId>claude-sonnet-4\@20250514</ModelId> |
| Claude Sonnet 3.7 | <ModelId>claude-3-7-sonnet-20250219</ModelId> (<ModelId>claude-3-7-sonnet-latest</ModelId>) | <ModelId>anthropic.claude-3-7-sonnet-20250219-v1:0</ModelId> | <ModelId>claude-3-7-sonnet\@20250219</ModelId> |
| Claude Haiku 4.5 | <ModelId>claude-haiku-4-5-20251001</ModelId> | <ModelId>anthropic.claude-haiku-4-5-20251001-v1:0</ModelId> | <ModelId>claude-haiku-4-5\@20251001</ModelId> |
| Claude Haiku 3.5 | <ModelId>claude-3-5-haiku-20241022</ModelId> (<ModelId>claude-3-5-haiku-latest</ModelId>) | <ModelId>anthropic.claude-3-5-haiku-20241022-v1:0</ModelId> | <ModelId>claude-3-5-haiku\@20241022</ModelId> |
| Claude Haiku 3 | <ModelId>claude-3-haiku-20240307</ModelId> | <ModelId>anthropic.claude-3-haiku-20240307-v1:0</ModelId> | <ModelId>claude-3-haiku\@20240307</ModelId> |
| Claude Opus 4.1 | <ModelId>claude-opus-4-1-20250805</ModelId> | <ModelId>anthropic.claude-opus-4-1-20250805-v1:0</ModelId> | <ModelId>claude-opus-4-1\@20250805</ModelId> |
| Claude Opus 4 | <ModelId>claude-opus-4-20250514</ModelId> | <ModelId>anthropic.claude-opus-4-20250514-v1:0</ModelId> | <ModelId>claude-opus-4\@20250514</ModelId> |
<Note>Models with the same snapshot date (e.g., 20240620) are identical across all platforms and do not change. The snapshot date in the model name ensures consistency and allows developers to rely on stable performance across different environments.</Note>
<Note>Starting with **Claude Sonnet 4.5 and all future models**, AWS Bedrock and Google Vertex AI offer two endpoint types: **global endpoints** (dynamic routing for maximum availability) and **regional endpoints** (guaranteed data routing through specific geographic regions). For more information, see the [third-party platform pricing section](/en/docs/about-claude/pricing#third-party-platform-pricing).</Note>
### Model aliases
For convenience during development and testing, we offer aliases for our model IDs. These aliases automatically point to the most recent snapshot of a given model. When we release new model snapshots, we migrate aliases to point to the newest version of a model, typically within a week of the new release.
<Tip>
While aliases are useful for experimentation, we recommend using specific model versions (e.g., `claude-sonnet-4-5-20250929`) in production applications to ensure consistent behavior.
</Tip>
| Model | Alias | Model ID |
| ----------------- | ------------------------------------------- | --------------------------------------------- |
| Claude Sonnet 4.5 | <ModelId>claude-sonnet-4-5</ModelId> | <ModelId>claude-sonnet-4-5-20250929</ModelId> |
| Claude Sonnet 4 | <ModelId>claude-sonnet-4-0</ModelId> | <ModelId>claude-sonnet-4-20250514</ModelId> |
| Claude Sonnet 3.7 | <ModelId>claude-3-7-sonnet-latest</ModelId> | <ModelId>claude-3-7-sonnet-20250219</ModelId> |
| Claude Haiku 4.5 | <ModelId>claude-haiku-4-5</ModelId> | <ModelId>claude-haiku-4-5-20251001</ModelId> |
| Claude Haiku 3.5 | <ModelId>claude-3-5-haiku-latest</ModelId> | <ModelId>claude-3-5-haiku-20241022</ModelId> |
| Claude Opus 4.1 | <ModelId>claude-opus-4-1</ModelId> | <ModelId>claude-opus-4-1-20250805</ModelId> |
| Claude Opus 4 | <ModelId>claude-opus-4-0</ModelId> | <ModelId>claude-opus-4-20250514</ModelId> |
<Note>
Aliases are subject to the same rate limits and pricing as the underlying model version they reference.
</Note>
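For example, a production integration should pin a dated snapshot rather than an alias. The sketch below builds a Messages API request body around a pinned model ID; actually sending it requires an `ANTHROPIC_API_KEY`, so the `curl` step is shown commented out:

```shell
# Pin a dated snapshot, not an alias, for stable production behavior.
MODEL="claude-sonnet-4-5-20250929"

# Build the request body for the Messages API.
BODY=$(cat <<EOF
{"model": "$MODEL", "max_tokens": 256, "messages": [{"role": "user", "content": "Hello, Claude"}]}
EOF
)
echo "$BODY"

# To actually send it (requires ANTHROPIC_API_KEY in the environment):
# curl https://api.anthropic.com/v1/messages \
#   -H "x-api-key: $ANTHROPIC_API_KEY" \
#   -H "anthropic-version: 2023-06-01" \
#   -H "content-type: application/json" \
#   -d "$BODY"
```

Swapping the alias `claude-sonnet-4-5` into `MODEL` would also work, but behavior could shift when the alias migrates to a newer snapshot.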
### Model comparison table
To help you choose the right model for your needs, we've compiled a table comparing the key features and capabilities of each model in the Claude family:
| Feature | Claude Sonnet 4.5 | Claude Sonnet 4 | Claude Sonnet 3.7 | Claude Opus 4.1 | Claude Opus 4 | Claude Haiku 4.5 | Claude Haiku 3.5 | Claude Haiku 3 |
| :-------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------- | :----------------------------------------------------------------------------------------------------- | :--------------------------------------------------------------------------------------------------- |
| **Description** | Our best model for complex agents and coding | High-performance model | High-performance model with early extended thinking | Exceptional model for specialized complex tasks | Our previous flagship model | Our fastest and most intelligent Haiku model | Our fastest model | Fast and compact model for near-instant responsiveness |
| **Strengths** | Highest intelligence across most tasks with exceptional agent and coding capabilities | High intelligence and balanced performance | High intelligence with toggleable extended thinking | Very high intelligence and capability for specialized tasks | Very high intelligence and capability | Near-frontier intelligence at blazing speeds with extended thinking and exceptional cost-efficiency | Intelligence at blazing speeds | Quick and accurate targeted performance |
| **Multilingual** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| **Vision** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| **[Extended thinking](/en/docs/build-with-claude/extended-thinking)** | Yes | Yes | Yes | Yes | Yes | Yes | No | No |
| **[Priority Tier](/en/api/service-tiers)** | Yes | Yes | Yes | Yes | Yes | Yes | Yes | No |
| **API model name** | <ModelId>claude-sonnet-4-5-20250929</ModelId> | <ModelId>claude-sonnet-4-20250514</ModelId> | <ModelId>claude-3-7-sonnet-20250219</ModelId> | <ModelId>claude-opus-4-1-20250805</ModelId> | <ModelId>claude-opus-4-20250514</ModelId> | <ModelId>claude-haiku-4-5-20251001</ModelId> | <ModelId>claude-3-5-haiku-20241022</ModelId> | <ModelId>claude-3-haiku-20240307</ModelId> |
| **Comparative latency** | Fast | Fast | Fast | Moderately Fast | Moderately Fast | Fastest | Fastest | Fast |
| **Context window** | <Tooltip tip="~150K words \ ~680K unicode characters">200K</Tooltip> / <br /> 1M (beta)<sup>1</sup> | <Tooltip tip="~150K words \ ~680K unicode characters">200K</Tooltip> / <br /> 1M (beta)<sup>1</sup> | <Tooltip tip="~150K words \ ~680K unicode characters">200K</Tooltip> | <Tooltip tip="~150K words \ ~680K unicode characters">200K</Tooltip> | <Tooltip tip="~150K words \ ~680K unicode characters">200K</Tooltip> | <Tooltip tip="~150K words \ ~680K unicode characters">200K</Tooltip> | <Tooltip tip="~150K words \ ~215K unicode characters">200K</Tooltip> | <Tooltip tip="~150K words \ ~680K unicode characters">200K</Tooltip> |
| **Max output** | <Tooltip tip="~48K words \ 218K unicode characters \ ~100 single spaced pages">64000 tokens</Tooltip> | <Tooltip tip="~48K words \ 218K unicode characters \ ~100 single spaced pages">64000 tokens</Tooltip> | <Tooltip tip="~48K words \ 218K unicode characters \ ~100 single spaced pages">64000 tokens</Tooltip> | <Tooltip tip="~24K words \ 109K unicode characters \ ~50 single spaced pages">32000 tokens</Tooltip> | <Tooltip tip="~24K words \ 109K unicode characters \ ~50 single spaced pages">32000 tokens</Tooltip> | <Tooltip tip="~48K words \ 218K unicode characters \ ~100 single spaced pages">64000 tokens</Tooltip> | <Tooltip tip="~6.2K words \ 28K unicode characters \ ~12-13 single spaced pages">8192 tokens</Tooltip> | <Tooltip tip="~3.1K words \ 14K unicode characters \ ~6-7 single spaced pages">4096 tokens</Tooltip> |
| **Reliable knowledge cutoff** | Jan 2025<sup>2</sup> | Jan 2025<sup>2</sup> | Oct 2024<sup>2</sup> | Jan 2025<sup>2</sup> | Jan 2025<sup>2</sup> | Feb 2025 | <sup>3</sup> | <sup>3</sup> |
| **Training data cutoff** | Jul 2025 | Mar 2025 | Nov 2024 | Mar 2025 | Mar 2025 | Jul 2025 | Jul 2024 | Aug 2023 |
*<sup>1 - Claude Sonnet 4.5 and Claude Sonnet 4 support a [1M token context window](/en/docs/build-with-claude/context-windows#1m-token-context-window) when using the `context-1m-2025-08-07` beta header. [Long context pricing](/en/docs/about-claude/pricing#long-context-pricing) applies to requests exceeding 200K tokens.</sup>*
*<sup>2 - **Reliable knowledge cutoff** indicates the date through which a model's knowledge is most extensive and reliable. **Training data cutoff** is the broader date range of training data used. For example, Claude Sonnet 4.5 was trained on publicly available information through July 2025, but its knowledge is most extensive and reliable through January 2025. For more information, see [Anthropic's Transparency Hub](https://www.anthropic.com/transparency).</sup>*
*<sup>3 - Some Haiku models have a single training data cutoff date.</sup>*
<Note>
Include the beta header `output-128k-2025-02-19` in your API request to increase the maximum output token length to 128k tokens for Claude Sonnet 3.7.
We strongly suggest using our [streaming Messages API](/en/docs/build-with-claude/streaming) to avoid timeouts when generating longer outputs.
See our guidance on [long requests](/en/api/errors#long-requests) for more details.
</Note>
### Model pricing
The table below shows the price per million tokens for each model:
| Model | Base Input Tokens | 5m Cache Writes | 1h Cache Writes | Cache Hits & Refreshes | Output Tokens |
| -------------------------------------------------------------------------- | ----------------- | --------------- | --------------- | ---------------------- | ------------- |
| Claude Opus 4.1 | \$15 / MTok | \$18.75 / MTok | \$30 / MTok | \$1.50 / MTok | \$75 / MTok |
| Claude Opus 4 | \$15 / MTok | \$18.75 / MTok | \$30 / MTok | \$1.50 / MTok | \$75 / MTok |
| Claude Sonnet 4.5 | \$3 / MTok | \$3.75 / MTok | \$6 / MTok | \$0.30 / MTok | \$15 / MTok |
| Claude Sonnet 4 | \$3 / MTok | \$3.75 / MTok | \$6 / MTok | \$0.30 / MTok | \$15 / MTok |
| Claude Sonnet 3.7 | \$3 / MTok | \$3.75 / MTok | \$6 / MTok | \$0.30 / MTok | \$15 / MTok |
| Claude Sonnet 3.5 ([deprecated](/en/docs/about-claude/model-deprecations)) | \$3 / MTok | \$3.75 / MTok | \$6 / MTok | \$0.30 / MTok | \$15 / MTok |
| Claude Haiku 4.5 | \$1 / MTok | \$1.25 / MTok | \$2 / MTok | \$0.10 / MTok | \$5 / MTok |
| Claude Haiku 3.5 | \$0.80 / MTok | \$1 / MTok | \$1.6 / MTok | \$0.08 / MTok | \$4 / MTok |
| Claude Opus 3 ([deprecated](/en/docs/about-claude/model-deprecations)) | \$15 / MTok | \$18.75 / MTok | \$30 / MTok | \$1.50 / MTok | \$75 / MTok |
| Claude Haiku 3 | \$0.25 / MTok | \$0.30 / MTok | \$0.50 / MTok | \$0.03 / MTok | \$1.25 / MTok |
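As a quick sanity check on these numbers, a back-of-envelope cost estimate for a single request can be computed directly from the table. The sketch below uses the Claude Sonnet 4.5 base rates ($3 / MTok input, $15 / MTok output) and ignores caching:

```shell
# Estimate the cost of one Claude Sonnet 4.5 request:
# $3 per million input tokens, $15 per million output tokens.
input_tokens=10000
output_tokens=2000
cost=$(awk -v i="$input_tokens" -v o="$output_tokens" \
  'BEGIN { printf "%.4f", (i / 1e6) * 3 + (o / 1e6) * 15 }')
echo "\$$cost"   # prints $0.0600 for 10K input + 2K output tokens
```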
## Prompt and output performance
Claude 4 models excel in:
* **Performance**: Top-tier results in reasoning, coding, multilingual tasks, long-context handling, honesty, and image processing. See the [Claude 4 blog post](http://www.anthropic.com/news/claude-4) for more information.
* **Engaging responses**: Claude models are ideal for applications that require rich, human-like interactions.
* If you prefer more concise responses, you can adjust your prompts to guide the model toward the desired output length. Refer to our [prompt engineering guides](/en/docs/build-with-claude/prompt-engineering) for details.
* For specific Claude 4 prompting best practices, see our [Claude 4 best practices guide](/en/docs/build-with-claude/prompt-engineering/claude-4-best-practices).
* **Output quality**: When migrating from previous model generations to Claude 4, you may notice larger improvements in overall performance.
## Migrating to Claude 4.5
If you're currently using Claude 3 models, we recommend migrating to Claude 4.5 to take advantage of improved intelligence and enhanced capabilities. For detailed migration instructions, see [Migrating to Claude 4.5](/en/docs/about-claude/models/migrating-to-claude-4).
## Get started with Claude
If you're ready to start exploring what Claude can do for you, let's dive in! Whether you're a developer looking to integrate Claude into your applications or a user wanting to experience the power of AI firsthand, we've got you covered.
<Note>Looking to chat with Claude? Visit [claude.ai](http://www.claude.ai)!</Note>
<CardGroup cols={3}>
<Card title="Intro to Claude" icon="check" href="/en/docs/intro-to-claude">
Explore Claude's capabilities and development flow.
</Card>
<Card title="Quickstart" icon="bolt-lightning" href="/en/resources/quickstarts">
Learn how to make your first API call in minutes.
</Card>
<Card title="Claude Console" icon="code" href="https://console.anthropic.com">
Craft and test powerful prompts directly in your browser.
</Card>
</CardGroup>
If you have any questions or need assistance, don't hesitate to reach out to our [support team](https://support.claude.com/) or consult the [Discord community](https://www.anthropic.com/discord).
@@ -1,376 +0,0 @@
# Plugins reference
> Complete technical reference for Claude Code plugin system, including schemas, CLI commands, and component specifications.
<Tip>
For hands-on tutorials and practical usage, see [Plugins](/en/docs/claude-code/plugins). For plugin management across teams and communities, see [Plugin marketplaces](/en/docs/claude-code/plugin-marketplaces).
</Tip>
This reference provides complete technical specifications for the Claude Code plugin system, including component schemas, CLI commands, and development tools.
## Plugin components reference
This section documents the five types of components that plugins can provide.
### Commands
Plugins add custom slash commands that integrate seamlessly with Claude Code's command system.
**Location**: `commands/` directory in plugin root
**File format**: Markdown files with frontmatter
For complete details on plugin command structure, invocation patterns, and features, see [Plugin commands](/en/docs/claude-code/slash-commands#plugin-commands).
### Agents
Plugins can provide specialized subagents for specific tasks that Claude can invoke automatically when appropriate.
**Location**: `agents/` directory in plugin root
**File format**: Markdown files describing agent capabilities
**Agent structure**:
```markdown theme={null}
---
description: What this agent specializes in
capabilities: ["task1", "task2", "task3"]
---
# Agent Name
Detailed description of the agent's role, expertise, and when Claude should invoke it.
## Capabilities
- Specific task the agent excels at
- Another specialized capability
- When to use this agent vs others
## Context and examples
Provide examples of when this agent should be used and what kinds of problems it solves.
```
**Integration points**:
* Agents appear in the `/agents` interface
* Claude can invoke agents automatically based on task context
* Agents can be invoked manually by users
* Plugin agents work alongside built-in Claude agents
### Skills
Plugins can provide Agent Skills that extend Claude's capabilities. Skills are model-invoked—Claude autonomously decides when to use them based on the task context.
**Location**: `skills/` directory in plugin root
**File format**: Directories containing `SKILL.md` files with frontmatter
**Skill structure**:
```
skills/
├── pdf-processor/
│ ├── SKILL.md
│ ├── reference.md (optional)
│ └── scripts/ (optional)
└── code-reviewer/
└── SKILL.md
```
**Integration behavior**:
* Plugin Skills are automatically discovered when the plugin is installed
* Claude autonomously invokes Skills based on matching task context
* Skills can include supporting files alongside SKILL.md
For SKILL.md format and complete Skill authoring guidance, see:
* [Use Skills in Claude Code](/en/docs/claude-code/skills)
* [Agent Skills overview](/en/docs/agents-and-tools/agent-skills/overview#skill-structure)
### Hooks
Plugins can provide event handlers that respond to Claude Code events automatically.
**Location**: `hooks/hooks.json` in plugin root, or inline in plugin.json
**Format**: JSON configuration with event matchers and actions
**Hook configuration**:
```json theme={null}
{
"hooks": {
"PostToolUse": [
{
"matcher": "Write|Edit",
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/scripts/format-code.sh"
}
]
}
]
}
}
```
**Available events**:
* `PreToolUse`: Before Claude uses any tool
* `PostToolUse`: After Claude uses any tool
* `UserPromptSubmit`: When user submits a prompt
* `Notification`: When Claude Code sends notifications
* `Stop`: When Claude attempts to stop
* `SubagentStop`: When a subagent attempts to stop
* `SessionStart`: At the beginning of sessions
* `SessionEnd`: At the end of sessions
* `PreCompact`: Before conversation history is compacted
**Hook types**:
* `command`: Execute shell commands or scripts
* `validation`: Validate file contents or project state
* `notification`: Send alerts or status updates
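For example, a `command` hook's script receives the event payload as JSON on stdin. The sketch below extracts the tool name with a POSIX shell function; the payload shape (a top-level `"tool_name"` field) is an assumption for illustration:

```shell
# extract_tool_name: pull the "tool_name" field out of a hook's JSON
# payload read from stdin, e.g. {"tool_name": "Write", ...} (assumed shape).
extract_tool_name() {
  sed -n 's/.*"tool_name"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p'
}

# A PostToolUse script might branch on the tool that just ran:
# tool=$(extract_tool_name)
# [ "$tool" = "Write" ] && "${CLAUDE_PLUGIN_ROOT}/scripts/format-code.sh"
```

A real hook would typically use a proper JSON parser such as `jq`; the `sed` version above just keeps the sketch dependency-free.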
### MCP servers
Plugins can bundle Model Context Protocol (MCP) servers to connect Claude Code with external tools and services.
**Location**: `.mcp.json` in plugin root, or inline in plugin.json
**Format**: Standard MCP server configuration
**MCP server configuration**:
```json theme={null}
{
"mcpServers": {
"plugin-database": {
"command": "${CLAUDE_PLUGIN_ROOT}/servers/db-server",
"args": ["--config", "${CLAUDE_PLUGIN_ROOT}/config.json"],
"env": {
"DB_PATH": "${CLAUDE_PLUGIN_ROOT}/data"
}
},
"plugin-api-client": {
"command": "npx",
"args": ["@company/mcp-server", "--plugin-mode"],
"cwd": "${CLAUDE_PLUGIN_ROOT}"
}
}
}
```
**Integration behavior**:
* Plugin MCP servers start automatically when the plugin is enabled
* Servers appear as standard MCP tools in Claude's toolkit
* Server capabilities integrate seamlessly with Claude's existing tools
* Plugin servers can be configured independently of user MCP servers
***
## Plugin manifest schema
The `plugin.json` file defines your plugin's metadata and configuration. This section documents all supported fields and options.
### Complete schema
```json theme={null}
{
"name": "plugin-name",
"version": "1.2.0",
"description": "Brief plugin description",
"author": {
"name": "Author Name",
"email": "author@example.com",
"url": "https://github.com/author"
},
"homepage": "https://docs.example.com/plugin",
"repository": "https://github.com/author/plugin",
"license": "MIT",
"keywords": ["keyword1", "keyword2"],
"commands": ["./custom/commands/special.md"],
"agents": "./custom/agents/",
"hooks": "./config/hooks.json",
"mcpServers": "./mcp-config.json"
}
```
### Required fields
| Field | Type | Description | Example |
| :----- | :----- | :---------------------------------------- | :------------------- |
| `name` | string | Unique identifier (kebab-case, no spaces) | `"deployment-tools"` |
### Metadata fields
| Field | Type | Description | Example |
| :------------ | :----- | :---------------------------------- | :------------------------------------------------- |
| `version` | string | Semantic version | `"2.1.0"` |
| `description` | string | Brief explanation of plugin purpose | `"Deployment automation tools"` |
| `author` | object | Author information | `{"name": "Dev Team", "email": "dev@company.com"}` |
| `homepage` | string | Documentation URL | `"https://docs.example.com"` |
| `repository` | string | Source code URL | `"https://github.com/user/plugin"` |
| `license` | string | License identifier | `"MIT"`, `"Apache-2.0"` |
| `keywords` | array | Discovery tags | `["deployment", "ci-cd"]` |
### Component path fields
| Field | Type | Description | Example |
| :----------- | :------------- | :----------------------------------- | :------------------------------------- |
| `commands` | string\|array | Additional command files/directories | `"./custom/cmd.md"` or `["./cmd1.md"]` |
| `agents` | string\|array | Additional agent files | `"./custom/agents/"` |
| `hooks` | string\|object | Hook config path or inline config | `"./hooks.json"` |
| `mcpServers` | string\|object | MCP config path or inline config | `"./mcp.json"` |
### Path behavior rules
**Important**: Custom paths supplement default directories - they don't replace them.
* If `commands/` exists, it's loaded in addition to custom command paths
* All paths must be relative to plugin root and start with `./`
* Commands from custom paths use the same naming and namespacing rules
* Multiple paths can be specified as arrays for flexibility
**Path examples**:
```json theme={null}
{
"commands": [
"./specialized/deploy.md",
"./utilities/batch-process.md"
],
"agents": [
"./custom-agents/reviewer.md",
"./custom-agents/tester.md"
]
}
```
### Environment variables
**`${CLAUDE_PLUGIN_ROOT}`**: Contains the absolute path to your plugin directory. Use this in hooks, MCP servers, and scripts to ensure correct paths regardless of installation location.
```json theme={null}
{
"hooks": {
"PostToolUse": [
{
"hooks": [
{
"type": "command",
"command": "${CLAUDE_PLUGIN_ROOT}/scripts/process.sh"
}
]
}
]
}
}
```
***
## Plugin directory structure
### Standard plugin layout
A complete plugin follows this structure:
```
enterprise-plugin/
├── .claude-plugin/ # Metadata directory
│ └── plugin.json # Required: plugin manifest
├── commands/ # Default command location
│ ├── status.md
│ └── logs.md
├── agents/ # Default agent location
│ ├── security-reviewer.md
│ ├── performance-tester.md
│ └── compliance-checker.md
├── skills/ # Agent Skills
│ ├── code-reviewer/
│ │ └── SKILL.md
│ └── pdf-processor/
│ ├── SKILL.md
│ └── scripts/
├── hooks/ # Hook configurations
│ ├── hooks.json # Main hook config
│ └── security-hooks.json # Additional hooks
├── .mcp.json # MCP server definitions
├── scripts/ # Hook and utility scripts
│ ├── security-scan.sh
│ ├── format-code.py
│ └── deploy.js
├── LICENSE # License file
└── CHANGELOG.md # Version history
```
<Warning>
The `.claude-plugin/` directory contains the `plugin.json` file. All other directories (commands/, agents/, skills/, hooks/) must be at the plugin root, not inside `.claude-plugin/`.
</Warning>
### File locations reference
| Component | Default Location | Purpose |
| :-------------- | :--------------------------- | :------------------------------- |
| **Manifest** | `.claude-plugin/plugin.json` | Required metadata file |
| **Commands** | `commands/` | Slash command markdown files |
| **Agents** | `agents/` | Subagent markdown files |
| **Skills** | `skills/` | Agent Skills with SKILL.md files |
| **Hooks** | `hooks/hooks.json` | Hook configuration |
| **MCP servers** | `.mcp.json` | MCP server definitions |
***
## Debugging and development tools
### Debugging commands
Use `claude --debug` to see plugin loading details:
```bash theme={null}
claude --debug
```
This shows:
* Which plugins are being loaded
* Any errors in plugin manifests
* Command, agent, and hook registration
* MCP server initialization
### Common issues
| Issue | Cause | Solution |
| :--------------------- | :------------------------------ | :--------------------------------------------------- |
| Plugin not loading | Invalid `plugin.json` | Validate JSON syntax |
| Commands not appearing | Wrong directory structure | Ensure `commands/` at root, not in `.claude-plugin/` |
| Hooks not firing | Script not executable | Run `chmod +x script.sh` |
| MCP server fails | Missing `${CLAUDE_PLUGIN_ROOT}` | Use variable for all plugin paths |
| Path errors | Absolute paths used | All paths must be relative and start with `./` |
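Two of the most common fixes above can be checked directly from the shell. A minimal sketch (the manifest content is illustrative, and `python3 -m json.tool` is just one convenient JSON validator — any validator works):

```shell
# Create an illustrative plugin manifest, then validate its JSON syntax
# before digging into deeper loading issues.
mkdir -p .claude-plugin
cat > .claude-plugin/plugin.json <<'EOF'
{
  "name": "example-plugin",
  "version": "1.0.0"
}
EOF

if python3 -m json.tool .claude-plugin/plugin.json > /dev/null 2>&1; then
  echo "plugin.json: valid JSON"
else
  echo "plugin.json: invalid JSON"
fi
```

For the "hooks not firing" case, the equivalent one-liner is `chmod +x scripts/*.sh` run from the plugin root.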
***
## Distribution and versioning reference
### Version management
Follow semantic versioning for plugin releases:
```json theme={null}
{
  "version": "2.1.0"
}
```
## See also
- [Plugins](/en/docs/claude-code/plugins) - Tutorials and practical usage
- [Plugin marketplaces](/en/docs/claude-code/plugin-marketplaces) - Creating and managing marketplaces
- [Slash commands](/en/docs/claude-code/slash-commands) - Command development details
- [Subagents](/en/docs/claude-code/sub-agents) - Agent configuration and capabilities
- [Agent Skills](/en/docs/claude-code/skills) - Extend Claude's capabilities
- [Hooks](/en/docs/claude-code/hooks) - Event handling and automation
- [MCP](/en/docs/claude-code/mcp) - External tool integration
- [Settings](/en/docs/claude-code/settings) - Configuration options for plugins
@@ -1,295 +0,0 @@
# Streaming Input
> Understanding the two input modes for Claude Agent SDK and when to use each
## Overview
The Claude Agent SDK supports two distinct input modes for interacting with agents:
* **Streaming Input Mode** (Default & Recommended) - A persistent, interactive session
* **Single Message Input** - One-shot queries that can still use session state and resuming
This guide explains the differences, benefits, and use cases for each mode to help you choose the right approach for your application.
## Streaming Input Mode (Recommended)
Streaming input mode is the **preferred** way to use the Claude Agent SDK. It provides full access to the agent's capabilities and enables rich, interactive experiences.
It allows the agent to operate as a long-lived process that takes in user input, handles interruptions, surfaces permission requests, and manages session state.
### How It Works
```mermaid theme={null}
%%{init: {"theme": "base", "themeVariables": {"edgeLabelBackground": "#F0F0EB", "lineColor": "#91918D", "primaryColor": "#F0F0EB", "primaryTextColor": "#191919", "primaryBorderColor": "#D9D8D5", "secondaryColor": "#F5E6D8", "tertiaryColor": "#CC785C", "noteBkgColor": "#FAF0E6", "noteBorderColor": "#91918D"}, "sequence": {"actorMargin": 50, "width": 150, "height": 65, "boxMargin": 10, "boxTextMargin": 5, "noteMargin": 10, "messageMargin": 35}}}%%
sequenceDiagram
participant App as Your Application
participant Agent as Claude Agent
participant Tools as Tools/Hooks
participant FS as Environment/<br/>File System
App->>Agent: Initialize with AsyncGenerator
activate Agent
App->>Agent: Yield Message 1
Agent->>Tools: Execute tools
Tools->>FS: Read files
FS-->>Tools: File contents
Tools->>FS: Write/Edit files
FS-->>Tools: Success/Error
Agent-->>App: Stream partial response
Agent-->>App: Stream more content...
Agent->>App: Complete Message 1
App->>Agent: Yield Message 2 + Image
Agent->>Tools: Process image & execute
Tools->>FS: Access filesystem
FS-->>Tools: Operation results
Agent-->>App: Stream response 2
App->>Agent: Queue Message 3
App->>Agent: Interrupt/Cancel
Agent->>App: Handle interruption
Note over App,Agent: Session stays alive
Note over Tools,FS: Persistent file system<br/>state maintained
deactivate Agent
```
### Benefits
<CardGroup cols={2}>
<Card title="Image Uploads" icon="image">
Attach images directly to messages for visual analysis and understanding
</Card>
<Card title="Queued Messages" icon="layer-group">
Send multiple messages that process sequentially, with ability to interrupt
</Card>
<Card title="Tool Integration" icon="wrench">
Full access to all tools and custom MCP servers during the session
</Card>
<Card title="Hooks Support" icon="link">
Use lifecycle hooks to customize behavior at various points
</Card>
<Card title="Real-time Feedback" icon="bolt">
See responses as they're generated, not just final results
</Card>
<Card title="Context Persistence" icon="database">
Maintain conversation context across multiple turns naturally
</Card>
</CardGroup>
### Implementation Example
<CodeGroup>
```typescript TypeScript theme={null}
import { query } from "@anthropic-ai/claude-agent-sdk";
import { readFileSync } from "fs";
async function* generateMessages() {
// First message
yield {
type: "user" as const,
message: {
role: "user" as const,
content: "Analyze this codebase for security issues"
}
};
// Wait for conditions or user input
await new Promise(resolve => setTimeout(resolve, 2000));
// Follow-up with image
yield {
type: "user" as const,
message: {
role: "user" as const,
content: [
{
type: "text",
text: "Review this architecture diagram"
},
{
type: "image",
source: {
type: "base64",
media_type: "image/png",
data: readFileSync("diagram.png", "base64")
}
}
]
}
};
}
// Process streaming responses
for await (const message of query({
prompt: generateMessages(),
options: {
maxTurns: 10,
allowedTools: ["Read", "Grep"]
}
})) {
if (message.type === "result") {
console.log(message.result);
}
}
```
```python Python theme={null}
from claude_agent_sdk import ClaudeSDKClient, ClaudeAgentOptions, AssistantMessage, TextBlock
import asyncio
import base64
async def streaming_analysis():
async def message_generator():
# First message
yield {
"type": "user",
"message": {
"role": "user",
"content": "Analyze this codebase for security issues"
}
}
# Wait for conditions
await asyncio.sleep(2)
# Follow-up with image
with open("diagram.png", "rb") as f:
image_data = base64.b64encode(f.read()).decode()
yield {
"type": "user",
"message": {
"role": "user",
"content": [
{
"type": "text",
"text": "Review this architecture diagram"
},
{
"type": "image",
"source": {
"type": "base64",
"media_type": "image/png",
"data": image_data
}
}
]
}
}
# Use ClaudeSDKClient for streaming input
options = ClaudeAgentOptions(
max_turns=10,
allowed_tools=["Read", "Grep"]
)
async with ClaudeSDKClient(options) as client:
# Send streaming input
await client.query(message_generator())
# Process responses
async for message in client.receive_response():
if isinstance(message, AssistantMessage):
for block in message.content:
if isinstance(block, TextBlock):
print(block.text)
asyncio.run(streaming_analysis())
```
</CodeGroup>
## Single Message Input
Single message input is simpler but more limited.
### When to Use Single Message Input
Use single message input when:
* You need a one-shot response
* You do not need image attachments, hooks, etc.
* You need to operate in a stateless environment, such as a lambda function
### Limitations
<Warning>
Single message input mode does **not** support:
* Direct image attachments in messages
* Dynamic message queueing
* Real-time interruption
* Hook integration
* Natural multi-turn conversations
</Warning>
### Implementation Example
<CodeGroup>
```typescript TypeScript theme={null}
import { query } from "@anthropic-ai/claude-agent-sdk";
// Simple one-shot query
for await (const message of query({
prompt: "Explain the authentication flow",
options: {
maxTurns: 1,
allowedTools: ["Read", "Grep"]
}
})) {
if (message.type === "result") {
console.log(message.result);
}
}
// Continue conversation with session management
for await (const message of query({
prompt: "Now explain the authorization process",
options: {
continue: true,
maxTurns: 1
}
})) {
if (message.type === "result") {
console.log(message.result);
}
}
```
```python Python theme={null}
from claude_agent_sdk import query, ClaudeAgentOptions, ResultMessage
import asyncio
async def single_message_example():
# Simple one-shot query using query() function
async for message in query(
prompt="Explain the authentication flow",
options=ClaudeAgentOptions(
max_turns=1,
allowed_tools=["Read", "Grep"]
)
):
if isinstance(message, ResultMessage):
print(message.result)
# Continue conversation with session management
async for message in query(
prompt="Now explain the authorization process",
options=ClaudeAgentOptions(
continue_conversation=True,
max_turns=1
)
):
if isinstance(message, ResultMessage):
print(message.result)
asyncio.run(single_message_example())
```
</CodeGroup>
@@ -1,259 +0,0 @@
**Claude finished @thedotmack's task** —— [View job](https://github.com/thedotmack/claude-mem/actions/runs/19080917196)
---
### Code Review Complete
This PR implements significant architectural improvements focused on production stability. The refactoring successfully simplifies the worker service and eliminates several failure modes. Below is my detailed review:
---
### ✅ Strengths
#### 1. **Excellent Simplification of Worker Service Startup** (src/services/worker-service.ts:115-147)
The new startup sequence is a major improvement:
- HTTP server starts **first** before any blocking operations
- ChromaSync initialization is deferred until after HTTP is ready
- Chroma backfill runs in background (non-blocking)
- Failures no longer crash the entire service
**Before**: ChromaSync initialized in constructor → Chroma backfill blocks startup → Any failure prevents HTTP server from starting
**After**: HTTP server starts immediately → ChromaSync initialized → Chroma backfill in background → Service always available
This is excellent engineering. The worker will now be responsive even if Chroma has issues.
#### 2. **Smart Executable Path Caching** (src/services/worker-service.ts:22-61)
Adding `cachedClaudePath` prevents repeated `execSync` calls on every SDK agent initialization. This reduces filesystem overhead and improves performance. Good optimization.
#### 3. **Improved Error Handling in Hooks** (src/hooks/save-hook.ts:84-91, new-hook.ts:62-69, summary-hook.ts:65-72)
The error handling now distinguishes between connection errors vs HTTP errors:
```typescript
if (error.cause?.code === 'ECONNREFUSED' || error.name === 'TimeoutError' || error.message.includes('fetch failed')) {
throw new Error("There's a problem with the worker. If you just updated, type `pm2 restart claude-mem-worker` in your terminal to continue");
}
```
This provides actionable error messages to users instead of generic failures.
#### 4. **Dramatic PM2 Configuration Simplification** (ecosystem.config.cjs)
The PM2 config went from 44 lines with complex error/output redirection down to 26 lines. This is exactly right - PM2's defaults are sensible and the previous configuration was adding complexity without clear benefit.
#### 5. **Worker Management Simplification** (src/shared/worker-utils.ts)
Removed the async health check system (124 lines → 48 lines). The new synchronous approach using PM2 list output is simpler and more reliable:
- No HTTP health checks that can timeout
- No retry logic with arbitrary delays
- Just checks if PM2 reports the process as 'online'
---
### ⚠️ Issues & Concerns
#### 1. **Race Condition in Worker Health Check** (src/shared/worker-utils.ts:28-39) - **CRITICAL**
There's a race condition in the worker startup check:
```typescript
checkProcess.on('close', (code) => {
const isRunning = output.includes('claude-mem-worker') && output.includes('online');
if (!isRunning) {
// Only start if not already running
spawn(pm2Path, ['start', ecosystemPath], {
cwd: packageRoot,
stdio: 'ignore'
});
}
});
```
**Problem**: The `spawn()` call inside the `close` event handler is non-blocking, but the function returns immediately. The hooks calling `ensureWorkerRunning()` will continue execution **before** the worker has actually started.
**Impact**: Hooks may attempt to make HTTP requests to the worker before it's listening, resulting in connection failures.
**Suggested Fix**: Either:
- Make `ensureWorkerRunning()` async and await the health check
- Add a brief health check retry loop after spawning (like the old implementation had)
- Document that this is fire-and-forget and hooks should handle ECONNREFUSED
#### 2. **Removed Health Endpoint Information** (src/services/worker-service.ts:152-154)
The health endpoint was simplified from:
```typescript
// OLD
res.json({
status: 'ok',
port: this.port,
pid: process.pid,
activeSessions: this.sessions.size,
chromaSynced: true
});
```
To:
```typescript
// NEW
res.json({ status: 'ok' });
```
**Concern**: This removes useful debugging information. When troubleshooting production issues, knowing the PID, active sessions count, and port would be valuable. The health endpoint is typically used for diagnostics, not just uptime checks.
**Suggestion**: Consider keeping at least `pid` and `activeSessions` in the health response for observability.
#### 3. **Auto-Session Creation Without Validation** (src/services/worker-service.ts:252-280, 309-336)
Both `handleObservation` and `handleSummarize` auto-create sessions if they don't exist:
```typescript
if (!session) {
// Auto-create session if it doesn't exist (e.g., worker restarted)
const db = new SessionStore();
const dbSession = db.getSessionById(sessionDbId);
db.close();
session = {
sessionDbId,
claudeSessionId: dbSession!.claude_session_id, // ⚠️ Non-null assertion
// ...
};
}
```
**Concerns**:
- Uses non-null assertion (`dbSession!`) without checking if `dbSession` is actually null
- If `getSessionById()` returns null (session doesn't exist in DB), this will throw at runtime
- The error won't be caught and will crash the handler
**Suggested Fix**:
```typescript
const dbSession = db.getSessionById(sessionDbId);
if (!dbSession) {
db.close();
res.status(404).json({ error: 'Session not found in database' });
return;
}
```
This validation already exists in `handleInit` (lines 167-173), so it should be added to the other handlers too.
#### 4. **Removed Observation Counter** (src/services/worker-service.ts:80)
The `ActiveSession` interface no longer tracks `observationCounter`:
```typescript
// REMOVED: observationCounter: number; // Counter for correlation IDs
```
**Question**: Was this used for generating correlation IDs for logging? If so, is there now no way to correlate observations within a session for debugging?
**Impact**: Minor - mainly affects debugging/logging. Not a functional issue.
#### 5. **PM2 Watch Mode in Production** (ecosystem.config.cjs:16)
The config enables watch mode:
```javascript
watch: true,
```
**Concern**: Watch mode causes PM2 to restart the process whenever files change. This is useful during development but potentially problematic in production:
- Could cause unexpected restarts if files are touched
- May interrupt active sessions
- Increases resource usage (file watching)
**Suggestion**: Consider disabling watch mode by default, or document that users should disable it in production if they're deploying this.
#### 6. **Duplicate Port Constant** (Multiple Files)
The `FIXED_PORT` constant is defined in multiple places:
- `src/services/worker-service.ts:20`
- `src/shared/worker-utils.ts:5`
- `src/hooks/save-hook.ts:54`
- `src/hooks/new-hook.ts:47`
- `src/hooks/summary-hook.ts:39`
**Concern**: This creates maintenance burden and risk of inconsistency. If the port needs to change, it must be updated in 5 places.
**Suggestion**: Export `FIXED_PORT` from a single shared module (e.g., `worker-utils.ts`) and import it everywhere else.
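A minimal sketch of that consolidation (the module path and port value are illustrative, not taken from the PR):

```typescript
// shared/constants.ts (illustrative path): single source of truth for the port.
// Hooks and the worker service import from here instead of redefining it.
export const FIXED_PORT = 37777;

// Helper so callers never hand-build the base URL either.
export function workerUrl(path: string): string {
  return `http://127.0.0.1:${FIXED_PORT}${path}`;
}

console.log(workerUrl("/health"));
```

Each hook then becomes `import { FIXED_PORT, workerUrl } from '../shared/constants'`, and a port change touches exactly one file.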
---
### 🔍 Minor Issues
#### 7. **Error Handling Inconsistency in Chroma Sync** (src/services/worker-service.ts:220-223)
```typescript
).catch(err => {
logger.failure('WORKER', 'Failed to sync user_prompt to Chroma - continuing', { promptId: latestPrompt.id }, err);
// Don't crash - SQLite has the data
});
```
The logger method is `failure` here but `error` elsewhere (lines 625, 664). For consistency, these should all use the same log level for Chroma sync failures.
#### 8. **Type Safety in Error Handling** (src/hooks/save-hook.ts:84)
```typescript
} catch (error: any) {
```
Using `any` defeats type safety. Consider using `unknown` and type guards:
```typescript
} catch (error: unknown) {
const err = error as Error & { cause?: { code?: string } };
if (err.cause?.code === 'ECONNREFUSED' || err.name === 'TimeoutError' || err.message.includes('fetch failed')) {
// ...
}
}
```
This pattern is used in all three hook files.
---
### 📋 Documentation Cleanup
The deletion of obsolete documentation files is appropriate:
- `EXPERIMENTAL_RELEASE_NOTES.md` (331 lines)
- `FEATURE_PLAN_HYBRID_SEARCH.md` (486 lines)
- `GITHUB_RELEASE_TEMPLATE.md` (83 lines)
- `IMPLEMENTATION_STATUS.md` (503 lines)
- `NEXT_SESSION_PROMPT.md` (193 lines)
- `docs/coderabbit-PR-41.md` (314 lines)
These appear to be planning documents and old release artifacts. Good cleanup, though consider archiving rather than deleting if there's historical value.
---
### 🎯 Testing Recommendations
In addition to the testing recommendations in the PR description:
1. **Test race condition scenario**: Start a fresh session immediately after a `pm2 stop` to verify hooks handle ECONNREFUSED gracefully
2. **Test worker restart under load**: Restart worker while observations are being processed to verify auto-session-creation works
3. **Test Chroma failures**: Disconnect/break Chroma to verify worker stays alive and logs errors appropriately
4. **Verify PM2 watch mode**: Ensure watch mode doesn't cause problems with the data directory or node_modules changes
---
### ✅ Overall Assessment
**Verdict**: **Approve with recommendations**
This PR represents a significant improvement in production stability and code maintainability. The architectural changes are sound and address real production failure modes. The primary concerns are:
1. **Race condition in worker startup** (Critical - should be fixed)
2. **Missing null checks in auto-session creation** (High - could cause crashes)
3. **Reduced observability in health endpoint** (Medium - impacts debugging)
4. **Watch mode in production** (Low - document or disable)
The simplifications are well-justified and the error handling improvements make the system more resilient. Once the critical race condition and null check issues are addressed, this will be a solid improvement.
**Lines of code review**: The 1,855 line reduction (2,335 deletions - 480 additions) accurately reflects meaningful simplification rather than just removing features.
---
#### Todo List
- [x] Read and analyze core service files (worker-service.ts, worker-utils.ts)
- [x] Review hook changes (save-hook.ts, new-hook.ts, summary-hook.ts, cleanup-hook.ts)
- [x] Review PM2 configuration changes (ecosystem.config.cjs)
- [x] Analyze architectural changes and potential issues
- [x] Provide comprehensive review feedback
---
@@ -1,222 +0,0 @@
# Context Engineering for AI Agents: Best Practices Cheat Sheet
## Core Principle
**Find the smallest possible set of high-signal tokens that maximize the likelihood of your desired outcome.**
---
## Context Engineering vs Prompt Engineering
**Prompt Engineering**: Writing and organizing LLM instructions for optimal outcomes (one-time task)
**Context Engineering**: Curating and maintaining the optimal set of tokens during inference across multiple turns (iterative process)
Context engineering manages:
- System instructions
- Tools
- Model Context Protocol (MCP)
- External data
- Message history
- Runtime data retrieval
---
## The Problem: Context Rot
**Key Insight**: LLMs have an "attention budget" that gets depleted as context grows
- Every token attends to every other token (n² relationships)
- As context length increases, model accuracy decreases
- Models have less training experience with longer sequences
- Context must be treated as a finite resource with diminishing marginal returns
---
## System Prompts: Find the "Right Altitude"
### The Goldilocks Zone
**Too Prescriptive**
- Hardcoded if-else logic
- Brittle and fragile
- High maintenance complexity
**Too Vague**
- High-level guidance without concrete signals
- Falsely assumes shared context
- Lacks actionable direction
**Just Right**
- Specific enough to guide behavior effectively
- Flexible enough to provide strong heuristics
- Minimal set of information that fully outlines expected behavior
### Best Practices
- Use simple, direct language
- Organize into distinct sections (`<background_information>`, `<instructions>`, `## Tool guidance`, etc.)
- Use XML tags or Markdown headers for structure
- Start with minimal prompt, add based on failure modes
- Note: Minimal ≠ short (provide sufficient information upfront)
---
## Tools: Minimal and Clear
### Design Principles
- **Self-contained**: Each tool has a single, clear purpose
- **Robust to error**: Handle edge cases gracefully
- **Extremely clear**: Intended use is unambiguous
- **Token-efficient**: Returns relevant information without bloat
- **Descriptive parameters**: Unambiguous input names (e.g., `user_id` not `user`)
### Critical Rule
**If a human engineer can't definitively say which tool to use in a given situation, an AI agent can't be expected to do better.**
### Common Failure Modes to Avoid
- Bloated tool sets covering too much functionality
- Tools with overlapping purposes
- Ambiguous decision points about which tool to use
---
## Examples: Diverse, Not Exhaustive
**Do**
- Curate a set of diverse, canonical examples
- Show expected behavior effectively
- Think "pictures worth a thousand words"
**Don't**
- Stuff in a laundry list of edge cases
- Try to articulate every possible rule
- Overwhelm with exhaustive scenarios
---
## Context Retrieval Strategies
### Just-In-Time Context (Recommended for Agents)
**Approach**: Maintain lightweight identifiers (file paths, queries, links) and dynamically load data at runtime
**Benefits**:
- Avoids context pollution
- Enables progressive disclosure
- Mirrors human cognition (we don't memorize everything)
- Leverages metadata (file names, folder structure, timestamps)
- Agents discover context incrementally
**Trade-offs**:
- Slower than pre-computed retrieval
- Requires proper tool guidance to avoid dead-ends
### Pre-Inference Retrieval (Traditional RAG)
**Approach**: Use embedding-based retrieval to surface context before inference
**When to Use**: Static content that won't change during interaction
### Hybrid Strategy (Best of Both)
**Approach**: Retrieve some data upfront, enable autonomous exploration as needed
**Example**: Claude Code loads CLAUDE.md files upfront, uses glob/grep for just-in-time retrieval
**Rule of Thumb**: "Do the simplest thing that works"
---
## Long-Horizon Tasks: Three Techniques
### 1. Compaction
**What**: Summarize conversation nearing context limit, reinitiate with summary
**Implementation**:
- Pass message history to model for compression
- Preserve critical details (architectural decisions, bugs, implementation)
- Discard redundant outputs
- Continue with compressed context + recently accessed files
**Tuning Process**:
1. **First**: Maximize recall (capture all relevant information)
2. **Then**: Improve precision (eliminate superfluous content)
**Low-Hanging Fruit**: Clear old tool calls and results
**Best For**: Tasks requiring extensive back-and-forth
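The compaction loop can be sketched as follows (the summarizer is a stand-in for a real model call, and the token heuristic and all names are illustrative):

```typescript
type Message = { role: string; content: string };

// Stand-in for a model call that compresses older history while preserving
// critical details (decisions, bugs, implementation notes).
function summarize(history: Message[]): string {
  return `Summary of ${history.length} earlier messages (decisions and open bugs preserved).`;
}

function estimateTokens(history: Message[]): number {
  // Rough heuristic: ~4 characters per token.
  return history.reduce((n, m) => n + Math.ceil(m.content.length / 4), 0);
}

// When the history nears the context limit, replace it with a summary message
// plus the most recent turns, then continue the session from there.
function compact(history: Message[], limit: number, keepRecent = 2): Message[] {
  if (estimateTokens(history) < limit) return history;
  const recent = history.slice(-keepRecent);
  const older = history.slice(0, -keepRecent);
  return [{ role: "user", content: summarize(older) }, ...recent];
}
```

The "clear old tool calls" low-hanging fruit is the same move applied selectively: drop or summarize only the tool-result messages before compacting the rest.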
### 2. Structured Note-Taking (Agentic Memory)
**What**: Agent writes notes persisted outside context window, retrieved later
**Examples**:
- To-do lists
- NOTES.md files
- Game state tracking (Pokémon example: tracking 1,234 steps of training)
- Project progress logs
**Benefits**:
- Persistent memory with minimal overhead
- Maintains critical context across tool calls
- Enables multi-hour coherent strategies
**Best For**: Iterative development with clear milestones
### 3. Sub-Agent Architectures
**What**: Specialized sub-agents handle focused tasks with clean context windows
**How It Works**:
- Main agent coordinates high-level plan
- Sub-agents perform deep technical work
- Sub-agents explore extensively (tens of thousands of tokens)
- Return condensed summaries (1,000-2,000 tokens)
**Benefits**:
- Clear separation of concerns
- Parallel exploration
- Detailed context remains isolated
**Best For**: Complex research and analysis tasks
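The coordination pattern can be sketched like this (the sub-agent call is a stand-in for spawning a real agent; names and token counts are illustrative):

```typescript
type SubAgentResult = { summary: string; tokensUsed: number };

// Stand-in for a sub-agent that explores with a clean context window and may
// consume tens of thousands of tokens internally.
async function runSubAgent(task: string): Promise<SubAgentResult> {
  return { summary: `Condensed findings for "${task}" (~1,500 tokens).`, tokensUsed: 42000 };
}

// The main agent fans tasks out in parallel; only the condensed summaries
// ever enter its own context window.
async function coordinate(tasks: string[]): Promise<string[]> {
  const results = await Promise.all(tasks.map(runSubAgent));
  return results.map(r => r.summary);
}
```

The separation is the point: the detailed exploration stays isolated inside each sub-agent, and the main context pays only for the summaries.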
---
## Quick Decision Framework
| Scenario | Recommended Approach |
|----------|---------------------|
| Static content | Pre-inference retrieval or hybrid |
| Dynamic exploration needed | Just-in-time context |
| Extended back-and-forth | Compaction |
| Iterative development | Structured note-taking |
| Complex research | Sub-agent architectures |
| Rapid model improvement | "Do the simplest thing that works" |
---
## Key Takeaways
1. **Context is finite**: Treat it as a precious resource with an attention budget
2. **Think holistically**: Consider the entire state available to the LLM
3. **Stay minimal**: More context isn't always better
4. **Be iterative**: Context curation happens each time you pass to the model
5. **Design for autonomy**: As models improve, let them act intelligently
6. **Start simple**: Test with minimal setup, add based on failure modes
---
## Anti-Patterns to Avoid
- ❌ Cramming everything into prompts
- ❌ Creating brittle if-else logic
- ❌ Building bloated tool sets
- ❌ Stuffing exhaustive edge cases as examples
- ❌ Assuming larger context windows solve everything
- ❌ Ignoring context pollution over long interactions
---
## Remember
> "Even as models continue to improve, the challenge of maintaining coherence across extended interactions will remain central to building more effective agents."
Context engineering will evolve, but the core principle stays the same: **optimize signal-to-noise ratio in your token budget**.
---
*Based on Anthropic's "Effective context engineering for AI agents" (September 2025)*
@@ -1,616 +0,0 @@
# JIT Context Filtering: Post-Mortem
**Date:** November 9, 2025
**Duration:** 3.5 hours (7:45 PM - 11:11 PM)
**Branches:** `feature/jit-context`, `failed/jit-context`
**Status:** Failed, reverted to main
**Commits:**
- `3ac0790` - feat: Implement JIT context hook for user prompt submission
- `adf7bf4` - Refactor JIT context handling in SDKAgent and WorkerService
## Executive Summary
Attempted to implement JIT (Just-In-Time) context filtering—a feature that would dynamically generate relevant context timelines on every user prompt, potentially replacing the static session-start context entirely. After multiple architectural iterations spanning 3.5 hours and adding ~2,850 lines of code, the implementation was abandoned and reverted. The revert was not due to lack of vision (the feature aligns with long-term architectural goals), but due to implementation complexity and the need for a simpler initial approach. Significant architectural knowledge was gained about hook limitations, worker patterns, and proper separation of concerns.
## What We Tried to Build
### Goal
When a user submits a prompt, dynamically generate a relevant context timeline instead of the static session-start context. Use the fast search infrastructure (SQLite FTS5 + ChromaDB) to fetch precisely relevant context on-demand.
### The Vision
**Current approach:** SessionStart hook loads 50 recent observations blindly, displays them all.
**Proposed approach:** UserPromptSubmit hook analyzes the prompt, queries the timeline search API, and loads only the relevant context window dynamically.
**Why this makes sense:**
- We already have fast search: SQLite FTS5 + Chroma semantic search
- Dynamic context timeline search is implemented and tested
- Search results come back in <200ms
- Could **replace** session-start context entirely with smarter, prompt-specific context
### User Experience
```
User types: "How did we fix the authentication bug?"
Behind the scenes:
1. Analyze prompt: "authentication bug fix"
2. Query timeline search for relevant period
3. Load 5-10 observations from that specific timeline
4. Inject as context
5. Claude answers with precisely relevant historical context
vs. Current:
Load 50 most recent observations regardless of relevance
```
### Why Checkbox Settings Became Less Important
Originally asked for checkboxes to customize session-start context display. But if JIT context could replace session-start context with intelligent, prompt-specific timelines, the display customization became a non-issue.
## Architectural Attempts
### Attempt 1: Hook-Based Filtering (7:45 PM - 9:30 PM)
**Approach:** Call Agent SDK `query()` directly in `new-hook.ts` during UserPromptSubmit event.
**Implementation:**
- Created `jit-context-hook.ts` (~432 lines)
- Added `generateJitContext()` function in hook
- Called SDK `query()` with observation list and user prompt
- Expected hook to block for ~1-2s while Haiku filters
**Failure:**
```
Error: Claude Code executable not found at
/Users/alexnewman/.claude/plugins/marketplaces/thedotmack/plugin/scripts/cli.js
```
**Root Cause:** Hooks run in sandboxed environment without access to `claudePath` (path to Claude Code executable). The Agent SDK requires this path, which is only available in the worker service.
**Architectural Violation:** This broke the established pattern where hooks handle orchestration and workers handle AI processing. The `save-hook` sets the precedent: hooks capture data, send to worker, worker runs SDK queries asynchronously.
### Attempt 2: Worker-Based with Simple Queries (9:30 PM - 10:30 PM)
**Approach:** Move JIT filtering to worker service, keep it simple with per-request SDK queries.
**Implementation:**
- Documented architecture fix plan in `docs/jit-context-architecture-fix.md`
- Moved `generateJitContext()` to worker (considered creating `src/services/worker/JitContext.ts`)
- Modified `/sessions/:id/init` endpoint to accept `jitEnabled` flag
- Worker would run one-shot SDK query per prompt
**Architecture:**
```
UserPromptSubmit → new-hook → POST /sessions/:id/init { jitEnabled: true }
Worker spawns Claude Haiku
Filters 50 obs → 3-5 IDs
Returns { context: [...] }
Hook injects context → Claude
```
**Issues Identified:**
- Each filter request spawns a new Claude subprocess (~200-500ms overhead)
- Observation list re-sent on every prompt (~5-10KB per request)
- No token caching between requests
- Performance worse than just loading all observations directly
**Decision:** Pivoted to persistent sessions to solve performance issues.
### Attempt 3: Persistent JIT Sessions (10:30 PM - 11:11 PM)
**Approach:** Create a long-lived Agent SDK session that persists throughout user session, similar to main memory session pattern.
**Implementation (291 new lines in SDKAgent.ts):**
1. **Session Lifecycle:**
- Added `jitSessionId`, `jitAbortController`, `jitGeneratorPromise` to `ActiveSession` interface
- `startJitSession()`: Creates persistent SDK session at session init
- `cleanupJitSession()`: Terminates JIT session at session end
2. **Request Queue Architecture:**
- `jitFilterQueues` Map: Per-session request queues
- `JITFilterRequest` interface: `{ userPrompt, resolve, reject }`
- EventEmitter coordination: Wake generator when new requests arrive
3. **Message Generator Pattern:**
- `createJitMessageGenerator()`: Async generator that yields filter requests
- Initial prompt: Load 50 observations, wait for "READY" response
- Loop: Wait for EventEmitter signal → yield user prompt → parse response → resolve promise
- Pattern: Persistent session stays alive between requests
4. **Filter Query Flow:**
```typescript
runFilterQuery(sessionDbId, userPrompt) {
  // Queue the request, then race the queued promise against a 30s timeout
  return Promise.race([
    new Promise((resolve, reject) => {
      queue.requests.push({ userPrompt, resolve, reject });
      queue.emitter.emit('request');
    }),
    timeout(30000)
  ]);
}
```
5. **Response Processing:**
- `processJitFilterResponse()`: Accumulate streaming text
- Parse IDs: "1,5,23,41" or "NONE"
- Resolve queued promise with ID array
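The ID-parsing step can be sketched as a small pure function. This is a hypothetical helper for illustration; the real `processJitFilterResponse()` also accumulates streaming text before parsing.

```typescript
// Hypothetical sketch of the ID-parsing step: turn the model's reply
// ("1,5,23,41" or "NONE") into an array of observation IDs.
function parseFilterIds(reply: string): number[] {
  const text = reply.trim();
  if (text.toUpperCase() === 'NONE') return [];
  return text
    .split(',')
    .map(part => Number.parseInt(part.trim(), 10))
    .filter(id => Number.isFinite(id)); // drop anything that isn't a number
}
```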
**Added Files:**
- `src/services/worker/SDKAgent.ts`: +291 lines
- `src/services/worker-types.ts`: +3 fields (jit state tracking)
- `src/services/worker/SessionManager.ts`: +26 lines (JIT cleanup)
- `src/services/worker-service.ts`: +102 lines (JIT initialization)
- `src/shared/settings.ts`: +65 lines (JIT config)
- `src/hooks/jit-context-hook.ts`: +208 lines (orchestration)
- `docs/jit-context-architecture-fix.md`: +265 lines
- `context/session-pattern-parity.md`: +298 lines
**Total Changes:** 18 files, +2,852 lines, -133 lines
**Final Status at Revert:** Implementation was complete and likely functional, but...
## Why It Failed
### 1. Architectural Complexity Explosion
**Problem:** The persistent session pattern added enormous complexity for marginal benefit.
**Evidence:**
- Parallel session management: Regular + JIT sessions running concurrently
- Complex coordination: EventEmitter + promise queues + generator pattern
- Lifecycle coupling: Session init, request handling, cleanup all intertwined
- State explosion: 3 new fields per session (`jitSessionId`, `jitAbortController`, `jitGeneratorPromise`)
**Code Smell:** When the "optimization" requires 300 lines of coordination code, it's probably not an optimization.
### 2. Premature Optimization
**YAGNI Violation:** Built elaborate token caching and persistent session architecture before proving the feature provided value.
**Reality Check:**
- **Current approach:** Load 50 observations = ~25KB context, works fine
- **JIT overhead:** Haiku query = 1-2s latency + coordination complexity
- **User benefit:** Unclear—users haven't complained about context relevance
- **Token savings:** Marginal—Claude caches long contexts efficiently anyway
**Quote from CLAUDE.md:**
> "Write the dumb, obvious thing first. Add complexity only when you actually hit the problem."
We didn't hit a problem. We invented one.
### 3. Implementation Complexity, Not Vision
**The Vision is Sound:**
- Dynamic context is better than static context
- Timeline search API exists and is fast
- Infrastructure (SQLite + Chroma) can support this
- Replacing session-start context with prompt-specific context makes sense
**The Problem:**
We jumped to the complex persistent-session approach without trying the simple per-request approach first.
**What We Should Have Done:**
```typescript
// Simple version (not tried):
app.post('/sessions/:id/init', async (req, res) => {
const { userPrompt } = req.body;
// Query timeline search API (already exists, fast)
  const timeline = await timelineSearch(project, userPrompt, /* depth */ 10);
// Return observations
return res.json({ context: timeline });
});
```
**This would have:**
- Validated the feature's value quickly
- Used existing infrastructure
- Avoided all the persistence complexity
- Taken 30 minutes instead of 3.5 hours
### 4. Pattern Divergence
**Inconsistency:** JIT sessions work fundamentally differently from memory sessions.
**Memory Session Pattern:**
```typescript
// One-shot: Init → Process observations → Complete
startSession() → yield prompts → parse responses → complete
```
**JIT Session Pattern:**
```typescript
// Persistent: Init → Wait indefinitely → Process on-demand → Complete
startJitSession() → yield initial load → LOOP:
- Wait for EventEmitter signal
- Yield filter request
- Parse response
- Resolve promise
- GOTO LOOP
```
**Maintenance Burden:** Two completely different session patterns means:
- Doubled testing complexity
- Increased cognitive load for contributors
- Higher risk of subtle bugs in lifecycle management
**Session Pattern Parity Document:** The 298-line `session-pattern-parity.md` was created to document the differences—a sign that maybe they shouldn't be different.
### 5. Blocking I/O in Critical Path
**Performance Impact:** Every user prompt now blocks for 1-2s waiting for Haiku filtering.
**Current Flow:**
```
User types prompt → 10ms → Claude responds
```
**JIT Flow:**
```
User types prompt → 10ms init → 1-2s Haiku filter → Claude responds
```
**User Experience:** We added 1-2 seconds of latency to every interaction for questionable benefit.
**Alternative:** If context filtering is valuable, do it asynchronously and apply to next prompt.
### 6. Missing the Forest for the Trees
**Real Issue:** We focused on technical implementation without asking strategic questions:
- **Is context relevance actually a problem?** No evidence.
- **Do users want this?** No feedback requested.
- **Is 50 observations too many?** Not proven.
- **Does filtering improve responses?** Not tested.
**Anti-Pattern:** Solution in search of a problem.
## What We Should Have Done
### Option 1: Don't Build It
**Justification:** No validated user need. Current system works fine.
**Next Step:** Wait for user feedback indicating context relevance is an issue.
### Option 2: Simple MVP
If we really wanted to explore this:
1. **Week 1:** Add basic filtering in worker with one-shot queries
- Accept slight performance hit (~500ms overhead)
- Measure filter accuracy and user impact
- Gather feedback
2. **Week 2:** If proven valuable, optimize
- Add token caching only if needed
- Consider persistent sessions only if performance is bottleneck
3. **Week 3:** If still valuable, scale
- Polish error handling
- Add configuration options
- Document patterns
**Philosophy:** Incremental validation, not big-bang architecture.
### Option 3: Different Approach Entirely
**Alternative:** Pre-computed relevance scores
Instead of on-demand filtering:
- Score observations at creation time (save-hook)
- Store relevance embeddings in Chroma
- At session start, query Chroma with user's first prompt
- Load top 10-20 most relevant observations
- No runtime latency, better accuracy, simpler architecture
**Benefit:** Leverages existing Chroma infrastructure, avoids runtime overhead.
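The ranking step of this alternative can be sketched without any vector database at all. The shapes below are assumptions for illustration (in practice Chroma would do the nearest-neighbor query); the point is that relevance is computed from embeddings stored at creation time, with no runtime SDK call.

```typescript
// Sketch of the pre-computed-relevance idea (assumed shapes, not the
// real Chroma client): observations carry embeddings stored at creation
// time, and session start just ranks them against the first prompt.
interface ScoredObservation {
  id: number;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the top-k most relevant observation IDs.
function topRelevant(obs: ScoredObservation[], promptEmbedding: number[], k: number): number[] {
  return [...obs]
    .sort((x, y) => cosine(y.embedding, promptEmbedding) - cosine(x.embedding, promptEmbedding))
    .slice(0, k)
    .map(o => o.id);
}
```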
## Technical Lessons Learned
### 1. EventEmitter Coordination Anti-Pattern
**Code:**
```typescript
queue.emitter.on('request', () => {
// Wake up generator to process request
});
```
**Issue:** Complex async coordination using event-driven wakeup signals is hard to reason about.
**Better:** Use async queues or channels (e.g., `async-queue` package) that handle coordination internally.
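The kind of channel the text suggests can be sketched in a few lines. This is a minimal illustration, not a drop-in for the reverted code: consumers `await take()`, producers call `put()`, and all coordination lives inside one small class instead of emitter callbacks.

```typescript
// Minimal awaitable queue: replaces EventEmitter wake-up signals with
// a promise handed directly to whichever consumer is waiting.
class AsyncQueue<T> {
  private items: T[] = [];
  private waiters: Array<(item: T) => void> = [];

  put(item: T): void {
    const waiter = this.waiters.shift();
    if (waiter) waiter(item);   // hand directly to a blocked consumer
    else this.items.push(item); // otherwise buffer it
  }

  take(): Promise<T> {
    if (this.items.length > 0) {
      return Promise.resolve(this.items.shift() as T);
    }
    return new Promise(resolve => this.waiters.push(resolve));
  }
}
```

A generator body then reduces to `const request = await queue.take()` — no shared emitter state to reason about.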
### 2. Generator Pattern Complexity
**Pattern:**
```typescript
async *createJitMessageGenerator() {
yield initialPrompt;
while (!aborted) {
await waitForEvent(); // Blocks here
yield nextRequest;
}
}
```
**Tradeoff:** Generators are great for iteration, but terrible for event-driven request/response patterns.
**Better:** Use explicit session object with `sendMessage()/waitForResponse()` methods.
### 3. Dual Session Management
**Complexity:** Managing two concurrent SDK sessions per user session is inherently complex.
**Alternatives Considered:**
- Single session handling both observations and filtering (rejected: tight coupling)
- Separate service for filtering (rejected: too much infrastructure)
- Pre-computed filtering (not considered: should have been)
**Lesson:** When parallel state management feels hard, question whether you need parallel state.
### 4. Promise Queue Pattern
**Implementation:**
```typescript
interface QueuedRequest<T> {
resolve: (result: T) => void;
reject: (error: Error) => void;
}
queue.push({ resolve, reject });
// Later...
queue[0].resolve(result);
```
**Good:** Clean async API for callers
**Bad:** Easy to leak promises if error handling isn't perfect
**Improvement:** Use libraries like `p-queue` that handle edge cases
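The leak-avoidance idea can be sketched as a wrapper that guarantees each queued promise settles exactly once, even on timeout. Names here are hypothetical; `p-queue` and similar libraries handle these edge cases for you.

```typescript
// Sketch of the "never leak a queued promise" improvement: a single
// settled flag guards both the timeout path and the queue path.
function queuedWithTimeout<T>(
  enqueue: (resolve: (v: T) => void, reject: (e: Error) => void) => void,
  ms: number
): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    let settled = false;
    const timer = setTimeout(() => {
      if (!settled) { settled = true; reject(new Error(`timed out after ${ms}ms`)); }
    }, ms);
    enqueue(
      v => { if (!settled) { settled = true; clearTimeout(timer); resolve(v); } },
      e => { if (!settled) { settled = true; clearTimeout(timer); reject(e); } }
    );
  });
}
```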
## Process Lessons Learned
### 1. No Incremental Validation
**Mistake:** Went from "idea" to "complete architecture" without validation points.
**Better Process:**
1. Write one-pager explaining user value
2. Build simplest possible version (2 hours max)
3. Test with real usage
4. Measure impact
5. Decide: kill, iterate, or scale
**Checkpoint Questions:**
- After 1 hour: "Does this solve a real problem?"
- After 2 hours: "Is this getting too complex?"
- After 3 hours: "Should I just ship the simple version?"
### 2. Architecture Astronomy
**Definition:** Designing elaborate systems without building/testing them.
**Evidence:**
- 265-line architecture doc written before any code
- 298-line session pattern parity analysis
- Multiple complete rewrites of the same feature
**Better:** Code first, document later. Spike solutions, learn from implementation.
### 3. Sunk Cost Fallacy
**Timeline:**
- **Hour 1:** "This seems complex but achievable"
- **Hour 2:** "We're halfway done, can't stop now"
- **Hour 3:** "Just need to fix this one coordination issue"
- **Hour 4:** "It's working, but... this feels wrong"
**Correct Decision:** Revert. Took courage to throw away 4 hours of work.
**Learning:** Time invested is not a reason to continue. Quality of outcome matters more.
### 4. Missing User Feedback Loop
**No User Input:**
- Didn't ask: "Is context relevance a problem for you?"
- Didn't test: "Does filtered context improve your responses?"
- Didn't measure: "Are you hitting context limits?"
**Engineering Theater:** Building impressive-sounding features without user validation.
## What We Actually Learned (The Real Value)
Despite reverting, this was productive R&D:
### 1. Deep Understanding of Hook Architecture
**Critical Discovery:** Hooks run in sandboxed environment without `claudePath`.
- Hooks cannot call Agent SDK `query()` directly
- All AI processing must happen in worker service
- This architectural constraint is now documented
**Learned Pattern:**
```
Hook (orchestration) → Worker (AI processing)
✓ save-hook: Captures data → Worker processes with SDK
✓ new-hook: Creates session → Worker returns confirmation
✗ jit-hook: Tried SDK in hook → Failed, no claudePath
```
**Value:** Future features will avoid this mistake. We now know the boundary.
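The hook side of that boundary can be sketched as pure orchestration. The endpoint shape, payload, and injected transport below are assumptions for illustration, not the actual hook code — the point is that the hook contains zero SDK calls.

```typescript
// Hypothetical sketch of the hook→worker boundary: the hook only makes
// an HTTP call; all AI processing stays in the worker, which has claudePath.
type PostFn = (url: string, body: unknown) => Promise<{ context: unknown[] }>;

async function fetchJitContext(
  workerBaseUrl: string,
  sessionId: string,
  userPrompt: string,
  post: PostFn // injected so the sketch is testable without a live worker
): Promise<unknown[]> {
  const { context } = await post(
    `${workerBaseUrl}/sessions/${sessionId}/init`,
    { userPrompt, jitEnabled: true }
  );
  return context;
}
```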
### 2. Worker Architecture Patterns
**Blocking vs. Non-Blocking:**
- SessionStart: Can be non-blocking (context loads async)
- UserPromptSubmit: Must be blocking (session must exist before processing)
- JIT Context: Must be blocking (context needed before prompt processed)
**Established Pattern:**
```typescript
// Worker endpoint for features requiring AI
app.post('/sessions/:id/operation', async (req, res) => {
const { operationData } = req.body;
const result = await sdkAgent.performOperation(operationData);
return res.json({ result });
});
```
### 3. Persistent Session Management
**Architecture Knowledge Gained:**
- How to maintain long-lived SDK sessions
- EventEmitter coordination patterns for request/response
- Promise queue management for async operations
- Proper cleanup with AbortControllers
**Pattern Documented:**
- Dual session management (regular + JIT)
- Generator-based message loops
- Request queuing with timeouts
**Value:** When we build the simpler version, we'll know these patterns.
### 4. Configuration Infrastructure
`src/shared/settings.ts` (65 lines) provides reusable configuration patterns:
```typescript
export function getConfigValue(key: string, defaultValue: string): string {
// Priority: settings.json → env var → default
}
```
**Kept After Revert:** This module is useful for other features.
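The priority chain the comment describes can be sketched like this. The settings and env objects are injected here for illustration; the real module reads `settings.json` and `process.env` directly.

```typescript
// Sketch of the config resolution order: settings.json → env var → default.
function resolveConfigValue(
  key: string,
  defaultValue: string,
  settings: Record<string, string>,
  env: Record<string, string | undefined>
): string {
  if (settings[key] !== undefined) return settings[key]; // settings.json wins
  const envValue = env[key];
  if (envValue !== undefined) return envValue;           // then the environment
  return defaultValue;                                   // then the default
}
```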
### 5. Key Architectural Decisions Made
**Decisions that will guide future implementation:**
1. JIT context filtering must happen in worker (proven via failed hook attempt)
2. Context must be blocking on UserPromptSubmit (session needs context before processing)
3. Dynamic timeline search is the right approach (fast, precise, leverages existing infrastructure)
4. Simple per-request queries should be tried before persistent sessions
### 6. Documentation Quality
- `jit-context-architecture-fix.md`: Documents why hooks can't run SDK queries
- `session-pattern-parity.md`: Reference for implementing dual sessions
- Hooks reference: Comprehensive hook documentation added
**Value:** These docs help future contributors understand the system constraints.
### 7. Infrastructure Validation
**Confirmed that our search stack is ready:**
- SQLite FTS5: Fast full-text search (<50ms)
- ChromaDB: Semantic search (<200ms with 8,000+ vectors)
- Timeline search API: Already implemented and tested
- Worker service: Can handle synchronous AI operations
**The infrastructure exists. We just need a simpler integration.**
## Recommendations
### Immediate Actions
1. **Archive the work:**
- Keep `failed/jit-context` branch for reference
- Extract reusable components (settings.ts)
- Save architecture docs for future features
2. **Document the anti-patterns:**
- Add this post-mortem to CLAUDE.md references
- Update coding standards with lessons learned
3. **Reset focus:**
- Return to validated user needs
- Prioritize features with clear value propositions
### Future Feature Development
**Gating Questions (Answer before coding):**
1. **User Value:** What specific user problem does this solve?
2. **Evidence:** Have users requested this or reported the underlying issue?
3. **Measurement:** How will we know if it's successful?
4. **Simplicity:** What's the dumbest version that could work?
5. **Time Limit:** If we can't prove value in 2 hours, should we build it?
**Process:**
```
VALIDATE → BUILD SIMPLE → TEST → MEASURE → DECIDE
    ↑                                        ↓
    └──────────── ITERATE OR KILL ───────────┘
```
### If Context Filtering Returns
Should we revisit this idea in the future:
**Prerequisites:**
- User feedback requesting better context relevance
- Metrics showing current context is too broad
- Evidence that filtering improves response quality
**Simple Approach:**
```typescript
// In worker-service.ts /sessions/:id/init
if (jitEnabled) {
const observations = await db.getRecentObservations(project, 50);
const filtered = await simpleFilter(observations, userPrompt); // One-shot query
return { context: filtered };
}
```
**Acceptance Criteria:**
- <100 lines of code
- <500ms latency impact
- No new session types
- Degrades gracefully on errors
**If that works:** Then consider optimization.
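The "degrades gracefully" criterion can be sketched as a wrapper that falls back to the unfiltered list whenever filtering throws (names hypothetical):

```typescript
// Hypothetical sketch: a broken filter should never be worse than no
// filter — on any error, return the full observation list.
async function filterOrFallback<T>(
  observations: T[],
  filter: (obs: T[]) => Promise<T[]>
): Promise<T[]> {
  try {
    return await filter(observations);
  } catch {
    return observations; // graceful degradation: full context
  }
}
```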
## Conclusion
JIT context filtering failed not because the vision was wrong, but because we jumped to the complex implementation without validating the simple one first. The feature aligns with long-term goals (dynamic, prompt-specific context using our fast search infrastructure), but the persistent-session architecture was premature optimization.
**The right call:** Revert the complex implementation. Build the simple version when ready.
**Key Takeaway:** The vision is sound. The execution was overcomplicated. We now have:
- Deep knowledge of hook/worker architecture constraints
- Documented patterns for persistent SDK sessions
- Validated fast search infrastructure
- Clear understanding of what to build next time (simple timeline search API integration)
**This was R&D, not failure.** We learned what doesn't work (SDK in hooks), what does work (worker-based AI processing), and how to approach it next time (simple API calls before persistent sessions).
**Next Implementation:**
When we revisit this (and we should), start with:
1. Worker endpoint that accepts prompt
2. Queries existing timeline search API
3. Returns context
4. Hook injects context
5. Validate it improves responses
6. Then optimize if needed
**Final Thought:** Sometimes you have to build the wrong thing to understand the right thing. That's R&D.
---
**Branch Status:**
- `feature/jit-context`: Abandoned
- `failed/jit-context`: Archived for reference
- `main`: Stable at v5.4.0
**Files to Keep:**
- `src/shared/settings.ts`: Reusable config utilities
**Files Discarded:**
- Everything else (+2,850 lines)
**Emotional State:** Relieved. Dodged a maintenance nightmare.
@everyone
**Endless Mode: Breaking Claude's Context Limits**
## The Problem
Ever hit 67% context usage mid-session and had to restart Claude Code? Context window limits are the #1 killer of long coding sessions. When you're deep in a complex refactor or debugging session, the last thing you want is to lose all that built-up context.
## The Solution: Endless Mode
Endless Mode compresses tool outputs **in real-time** as you work. Instead of storing the full 500-line file you just read, it stores a compact observation like:
> "Read package.json - found 47 dependencies including React 18, TypeScript 5.2, and custom build scripts"
**The result: 70-84% token reduction** on tool outputs, letting you work indefinitely without hitting context limits.
## The Numbers (Real Test Results)
We analyzed **500 transcripts** containing **1,884 tool uses**:
| Metric | Value |
|--------|-------|
| Tool uses analyzed | 1,884 |
| Observations matched | 868 |
| Eligible for compression | 406 |
| Compression rate (facts-only) | **84%** |
| Characters saved | 887,783 of 1,056,285 |
**Which tools benefit most:**
- **Bash output**: 236 compressible (command outputs -> facts)
- **Read file contents**: 98 compressible (file contents -> summaries)
- **Grep results**: 42 compressible (search results -> key matches)
**Key insight**: We only compress tool **outputs**, never inputs. Inputs contain semantic meaning (the actual diff, the query, the code you wrote). Outputs are verbose results that can be summarized without losing meaning.
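That rule can be sketched as a simple eligibility check. The field names below are assumptions about the transcript shape, not the real schema — the point is the asymmetry: only outputs, and only large ones, qualify.

```typescript
// Hypothetical sketch of the outputs-only rule: tool *results* above a
// size threshold are compressible; tool *inputs* never are.
interface ToolEntry {
  role: 'tool_input' | 'tool_output';
  text: string;
}

function isCompressible(entry: ToolEntry, minChars = 500): boolean {
  return entry.role === 'tool_output' && entry.text.length >= minChars;
}
```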
## The Journey (69 observations over 10 days)
**Nov 16 - The Vision**
Decided to build Endless Mode as an *optional* feature to avoid mandatory architectural refactoring. The idea: let users opt-in to experimental compression without breaking anything for those who don't.
**Nov 19-20 - Implementation Begins**
Hit our first bug immediately: duplicate observations appearing on the 2nd prompt of each session. Classic regression - the endless mode changes broke something that was already working. Fixed it, kept going.
**Nov 21 - The Big Switch**
Made a critical architectural change: switched from **deferred** (async, 5-second timeout) to **synchronous** transformation (blocking, 90-second timeout). Endless Mode needs to wait for compression to complete before continuing - otherwise you'd read uncompressed data.
Multiple rounds of experimental release preparation. Documented all dependencies. Critical bugs kept appearing.
**Nov 22 - Validation**
Endpoints verified. Toggle working. Documentation reviewed. Things looking stable.
**Nov 23 - The Setback**
**Disabled endless mode.** It was causing everything to hang. The 90-second synchronous blocking was too aggressive - when compression took too long, the whole system locked up. Had to prioritize stability.
25 sessions had successfully used it before this point.
**Nov 25 - The Solution**
Created a **beta branch strategy**: Endless Mode lives on `beta/7.0`, isolated from main. Added Version Channel UI so users can safely try it without affecting stable users. Easy rollback if issues occur.
Built analysis scripts to measure *actual* compression rates instead of theoretical. Validated 84% savings on real transcripts.
## How to Try It
**v6.3.1** added a Version Channel switcher:
1. Open http://localhost:37777
2. Find **"Version Channel"** in Settings sidebar
3. Click **"Try Beta (Endless Mode)"**
4. Refresh the UI after switching
**Safe to try**: Your memory data lives in `~/.claude-mem/` - completely separate from the plugin code. Switching branches won't touch your data. Easy rollback with "Switch to Stable" button.
**Current beta branch**: `beta/7.0`
---
This has been a real engineering journey - vision, implementation, bugs, setbacks, and creative solutions. The beta branch approach lets us keep iterating on stability while giving adventurous users access to the feature.
# Processing Indicator "Fucking Stupid" Audit
## What It SHOULD Do (Simple Version)
1. **Page load**: Check if worker is already processing → spin or don't spin
2. **UserPromptSubmit**: Start spinning, set worker status "on"
3. **Summary complete**: Stop spinning, set worker status "off"
**Result**: One boolean. Simple. Clear.
---
## What It ACTUALLY Does (Overcomplicated Version)
### Problem 1: Set<string> Instead of Boolean
**Current**: `processingSessions: Set<string>` - tracks individual session IDs
**File**: `src/ui/viewer/hooks/useSSE.ts:12`
```typescript
const [processingSessions, setProcessingSessions] = useState<Set<string>>(new Set());
```
**Why it's stupid**: We don't care WHICH sessions are processing. We just need to know IF anything is processing. The conversion to boolean happens anyway:
**File**: `src/ui/viewer/App.tsx:92`
```typescript
isProcessing={processingSessions.size > 0} // ← Converting Set to boolean!
```
**Fix**: Just use `const [isProcessing, setIsProcessing] = useState(false)`
---
### Problem 2: Complex Set Manipulation
**Current**: Add/remove session IDs from Set based on SSE events
**File**: `src/ui/viewer/hooks/useSSE.ts:90-104`
```typescript
case 'processing_status':
if (data.processing) {
const processing = data.processing;
console.log('[SSE] Processing status:', processing);
setProcessingSessions(prev => {
const next = new Set(prev);
if (processing.is_processing) {
next.add(processing.session_id); // ← Why track session ID?
} else {
next.delete(processing.session_id); // ← Just need true/false
}
return next;
});
}
break;
```
**Why it's stupid**: Creating new Sets, adding/removing items, all to track individual sessions when we only care about "any processing yes/no"
**Fix**: `setIsProcessing(data.is_processing)`
---
### Problem 3: Defensive Cleanup in Multiple Places
**Current**: Two places remove sessions from the Set
**Location 1** - `useSSE.ts:90-104` - Handles `processing_status` events
**Location 2** - `useSSE.ts:73-78` - Handles `new_summary` events
```typescript
// Mark session as no longer processing (summary is the final step)
setProcessingSessions(prev => {
const next = new Set(prev);
next.delete(summary.session_id); // ← Defensive cleanup
return next;
});
```
**Why it's stupid**: We're defensively cleaning up in case events arrive out of order. This is a band-aid for not having a single source of truth.
**Fix**: One place sets `isProcessing = false` (summary complete). No defensive cleanup needed.
---
### Problem 4: SSE Event Includes Session ID
**Current**: Processing status events include session ID
**File**: `src/services/worker-service.ts:277-285`
```typescript
private broadcastProcessingStatus(claudeSessionId: string, isProcessing: boolean): void {
this.broadcastSSE({
type: 'processing_status',
processing: {
session_id: claudeSessionId, // ← Why send session ID?
is_processing: isProcessing
}
});
}
```
**Why it's stupid**: We send session_id but never use it for the spinner decision. The logomark doesn't care WHICH session is processing.
**Fix**: `{ type: 'processing_status', isProcessing: boolean }` - That's it.
---
### Problem 5: TypeScript Interface Overcomplicated
**Current**: StreamEvent includes processing object with session_id
**File**: `src/ui/viewer/types.ts:54-57`
```typescript
processing?: {
session_id: string; // ← Unnecessary
is_processing: boolean;
};
```
**Why it's stupid**: Adds complexity to type definitions when we only need the boolean.
**Fix**: `isProcessing?: boolean;`
---
### Problem 6: Multiple Broadcast Points (But No Initial State!)
**Current**: 3 places broadcast processing status in worker-service.ts
1. **Line 817**: `handleSummarize()` → `broadcastProcessingStatus(session.claudeSessionId, true)`
2. **Line 1153**: `processSummarizeMessage()` success → `broadcastProcessingStatus(session.claudeSessionId, false)`
3. **Line 1183**: `processSummarizeMessage()` no summary → `broadcastProcessingStatus(session.claudeSessionId, false)`
**Why it's stupid**: We broadcast changes but there's NO WAY TO GET INITIAL STATE on page load. If you open the viewer while processing is active, you won't see the spinner until the next status change.
**Fix**: Add `/api/processing-status` endpoint that returns current state. Call it on page load.
---
### Problem 7: Skeleton Cards Require Session Tracking
**Current**: Feed.tsx creates skeleton cards for each processing session
**File**: `src/ui/viewer/components/Feed.tsx:66-80`
```typescript
const skeletons: FeedItem[] = [];
processingSessions.forEach(sessionId => { // ← Iterating over Set
if (!sessionsWithSummaries.has(sessionId)) {
const prompt = sessionPrompts.get(sessionId);
skeletons.push({
itemType: 'skeleton',
id: sessionId,
session_id: sessionId, // ← Using individual session IDs
project: prompt?.project,
created_at_epoch: Date.now()
});
}
});
```
**Why it's relevant**: This is the ONLY place that actually uses individual session IDs. If we want per-session skeleton cards, we need session tracking.
**Question for you**: Do we still want skeleton cards in the feed? Or just the logomark spinner?
**Option A**: Keep skeleton cards → Need to track session IDs (current complexity justified)
**Option B**: Remove skeleton cards → Use simple boolean for logomark only
---
### Problem 8: No Synchronization Between Worker State and UI State
**Current**: Worker doesn't maintain processing state. It just broadcasts events.
**Why it's stupid**: If the UI disconnects/reconnects, it loses processing state. Worker should be the source of truth.
**Fix**: Worker maintains `private isProcessing: boolean = false`
- Set to true on summarize request
- Set to false when summary completes
- Expose via `/api/processing-status` endpoint
- Broadcast changes via SSE
---
## The "Fucking Stupid" Score
| Issue | Complexity Cost | Why It's Stupid |
|-------|----------------|-----------------|
| Set<string> instead of boolean | HIGH | We convert it to boolean anyway |
| Complex Set manipulation | HIGH | 10+ lines of code to add/remove from Set |
| Defensive cleanup in 2 places | MEDIUM | Band-aid for lack of single source of truth |
| SSE includes unused session_id | LOW | Minor overhead, but conceptually wrong |
| Overcomplicated TypeScript types | LOW | Makes code harder to read |
| No initial state endpoint | HIGH | Broken user experience (no spinner on page load during active processing) |
| Session tracking for skeletons | ??? | Depends if we want per-session skeletons or not |
| Worker has no state | HIGH | UI is source of truth, should be worker |
---
## Proposed Simple Architecture
### Worker Service (Source of Truth)
```typescript
class WorkerService {
private isProcessing: boolean = false; // Single source of truth
// New endpoint: GET /api/processing-status
private handleGetProcessingStatus(req: Request, res: Response): void {
res.json({ isProcessing: this.isProcessing });
}
// On summarize request
private handleSummarize(req: Request, res: Response): void {
// ... existing code ...
this.isProcessing = true;
this.broadcastSSE({ type: 'processing_status', isProcessing: true });
// ...
}
// On summary complete
private processSummarizeMessage(session: SessionState, message: Message): void {
// ... existing code ...
// After summary is saved/failed:
this.isProcessing = false;
this.broadcastSSE({ type: 'processing_status', isProcessing: false });
}
}
```
### React Hook (Simple Boolean)
```typescript
export function useSSE() {
const [isProcessing, setIsProcessing] = useState(false);
// On mount: Get initial state
useEffect(() => {
fetch('/api/processing-status')
.then(res => res.json())
.then(data => setIsProcessing(data.isProcessing));
}, []);
// Listen for changes
useEffect(() => {
const eventSource = new EventSource('/stream');
eventSource.onmessage = (event) => {
const data = JSON.parse(event.data);
if (data.type === 'processing_status') {
setIsProcessing(data.isProcessing); // Simple!
}
};
return () => eventSource.close();
}, []);
return { isProcessing, /* other state */ };
}
```
### TypeScript Types (Simplified)
```typescript
export interface StreamEvent {
type: 'initial_load' | 'new_observation' | 'new_summary' | 'new_prompt' | 'processing_status';
observations?: Observation[];
summaries?: Summary[];
prompts?: UserPrompt[];
projects?: string[];
observation?: Observation;
summary?: Summary;
prompt?: UserPrompt;
isProcessing?: boolean; // Simple!
}
```
### React Components (No Changes Needed!)
```typescript
// App.tsx
const { isProcessing } = useSSE(); // Already a boolean now!
<Header isProcessing={isProcessing} /> // Just pass it through
// Header.tsx (no changes needed)
<img className={`logomark ${isProcessing ? 'spinning' : ''}`} />
```
---
## Breaking Changes & Decisions
### Decision 1: What About Skeleton Cards?
**Current**: Skeleton cards in feed show "Generating..." for each processing session
**Options**:
**A) Keep skeleton cards** (requires session tracking)
- Need to track individual session IDs
- Justifies the Set<string> complexity
- Provides per-session feedback in feed
**B) Remove skeleton cards** (simplest)
- Only logomark spins (global processing indicator)
- No need to track individual sessions
- Simpler architecture
**C) Hybrid: Single skeleton card** (middle ground)
- Show ONE skeleton card when `isProcessing === true`
- Don't tie it to specific sessions
- Keep it simple but provide feed feedback
**What do you want?**
---
### Decision 2: Multiple Concurrent Sessions?
**Question**: Can multiple sessions be processing simultaneously?
**Current assumption**: Yes (hence the Set<string>)
**Reality check**: Worker processes messages from a queue. Can it actually process multiple sessions at once, or is it sequential?
**If sequential**: We DEFINITELY don't need session tracking. One boolean is perfect.
**If concurrent**: We still might not need session tracking for the logomark (just spin if ANY processing), but skeleton cards would need session IDs.
---
## Recommended Implementation Plan
### Phase 1: Add Initial State (Quick Win)
**File**: `src/services/worker-service.ts`
- Add `private isProcessing: boolean = false;`
- Add GET `/api/processing-status` endpoint
- Set `this.isProcessing = true` on line 817
- Set `this.isProcessing = false` on lines 1153, 1183
**File**: `src/ui/viewer/hooks/useSSE.ts`
- Add `fetch('/api/processing-status')` on mount
- Initialize `isProcessing` state from response
**Impact**: Fixes the "no spinner on page load" bug without breaking changes.
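The Phase 1 shape can be sketched in a few lines. This is illustrative only — the real worker's HTTP wiring (Express routes, etc.) is assumed rather than shown; the point is that the worker, not the UI, holds the boolean:

```typescript
// Sketch only: the worker holds isProcessing as the single source of truth.
class ProcessingState {
  private isProcessing = false;

  // Called where processing starts (true) and completes (false)
  setProcessing(value: boolean): void {
    this.isProcessing = value;
  }

  // Response body for GET /api/processing-status
  statusResponse(): { isProcessing: boolean } {
    return { isProcessing: this.isProcessing };
  }
}
```

On mount, the React hook would `fetch('/api/processing-status')` and seed its state from this JSON, so the spinner is correct before the first SSE event arrives.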
---
### Phase 2: Simplify State (Breaking Change)
**File**: `src/services/worker-service.ts`
- Change `broadcastProcessingStatus()` to send `{ type: 'processing_status', isProcessing: boolean }`
- Remove session_id from broadcast
**File**: `src/ui/viewer/hooks/useSSE.ts`
- Change `processingSessions` Set to `isProcessing` boolean
- Simplify event handler: `setIsProcessing(data.isProcessing)`
- Remove defensive cleanup from `new_summary` handler
**File**: `src/ui/viewer/types.ts`
- Simplify `StreamEvent.processing` to just `isProcessing?: boolean`
**File**: `src/ui/viewer/App.tsx`
- Change `processingSessions.size > 0` to just `isProcessing`
**File**: `src/ui/viewer/components/Feed.tsx`
- **Decision needed**: Remove skeleton cards or show single generic skeleton?
**Impact**: Cleaner code, easier to maintain, fewer bugs.
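A hedged sketch of the simplified Phase 2 broadcast — the `broadcast` parameter stands in for the existing `broadcastSSE`, which is assumed rather than reimplemented here:

```typescript
type ProcessingStatusEvent = { type: 'processing_status'; isProcessing: boolean };

// After Phase 2 the event carries only the boolean - no session_id.
function broadcastProcessingStatus(
  broadcast: (event: ProcessingStatusEvent) => void,
  isProcessing: boolean
): void {
  broadcast({ type: 'processing_status', isProcessing });
}
```

The React handler then collapses to a single line: `setIsProcessing(data.isProcessing)`.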
---
## Files That Need Changes
### Worker Service
- `src/services/worker-service.ts` (add state, endpoint, update broadcasts)
### React
- `src/ui/viewer/hooks/useSSE.ts` (boolean instead of Set, fetch initial state)
- `src/ui/viewer/types.ts` (simplify StreamEvent)
- `src/ui/viewer/App.tsx` (pass boolean instead of Set.size > 0)
- `src/ui/viewer/components/Feed.tsx` (handle skeleton cards decision)
- `src/ui/viewer/constants/api.ts` (add PROCESSING_STATUS endpoint)
### No Changes Needed
- `src/ui/viewer/components/Header.tsx` (already receives boolean)
- `src/ui/viewer/components/SummarySkeleton.tsx` (might be removed)
- CSS/animations (work the same with boolean)
---
## Summary: What's Wrong With the Current Design
1. **Set<string> when we only need boolean** ← Biggest offender
2. **No initial state on page load** ← Broken UX
3. **Complex Set manipulation** ← 10+ lines for add/remove
4. **Defensive cleanup in multiple places** ← No single source of truth
5. **Session IDs in SSE events** ← Data we don't use
6. **Worker doesn't maintain state** ← UI is source of truth (backwards!)
**Complexity Score**: 7/10
**After refactor**: 2/10 (the remaining complexity is React/SSE boilerplate)
---
## What Do You Want To Do?
Tell me:
1. **Skeleton cards**: Keep (per-session), remove entirely, or show one generic skeleton?
2. **Breaking changes**: OK to simplify now, or do you want backwards compatibility?
3. **Implementation**: Want me to do Phase 1 (quick fix), Phase 2 (full refactor), or both?
@@ -1,564 +0,0 @@
# Processing Indicator: Complete Code Reference
This document provides a line-by-line breakdown of every piece of code related to the processing/activity indicator (the spinning logomark in the top left corner of the viewer UI).
## Overview
The processing indicator is a visual cue that shows when the worker service is actively processing memories (observations or summaries). It consists of:
1. **Logomark Image**: `claude-mem-logomark.webp` in the header
2. **Spinning Animation**: Applied via CSS class when processing is active
3. **State Management**: Tracked via Server-Sent Events (SSE) from the worker
4. **Processing Sessions Set**: Maintains active session IDs being processed
## Data Flow
```
Worker Service
└─> broadcastProcessingStatus(sessionId, isProcessing)
└─> broadcastSSE({ type: 'processing_status', ... })
└─> SSE Event Stream (/stream)
└─> useSSE Hook (React)
└─> processingSessions Set<string>
└─> App.tsx: isProcessing={processingSessions.size > 0}
└─> Header.tsx: className={isProcessing ? 'spinning' : ''}
└─> CSS Animation: @keyframes spin
```
---
## 1. TypeScript Types
### File: `src/ui/viewer/types.ts`
**Lines 45-58: StreamEvent interface with processing_status type**
```typescript
export interface StreamEvent {
type: 'initial_load' | 'new_observation' | 'new_summary' | 'new_prompt' | 'processing_status';
observations?: Observation[];
summaries?: Summary[];
prompts?: UserPrompt[];
projects?: string[];
observation?: Observation;
summary?: Summary;
prompt?: UserPrompt;
processing?: {
session_id: string;
is_processing: boolean;
};
}
```
**Purpose**: Defines the structure of SSE events. The `processing_status` type includes a `processing` object that indicates whether a session is currently being processed.
---
## 2. Worker Service (Backend)
### File: `src/services/worker-service.ts`
**Lines 247-272: broadcastSSE() - Core SSE broadcasting**
```typescript
/**
* Broadcast SSE event to all connected clients
*/
private broadcastSSE(event: any): void {
if (this.sseClients.size === 0) {
return; // No clients connected, skip broadcast
}
const data = `data: ${JSON.stringify(event)}\n\n`;
const clientsToRemove: Response[] = [];
for (const client of this.sseClients) {
try {
client.write(data);
} catch (error) {
// Client disconnected, mark for removal
clientsToRemove.push(client);
}
}
// Clean up disconnected clients
for (const client of clientsToRemove) {
this.sseClients.delete(client);
}
if (clientsToRemove.length > 0) {
logger.info('WORKER', `SSE cleaned up disconnected clients`, { count: clientsToRemove.length });
}
}
```
**Purpose**: Broadcasts SSE events to all connected UI clients. Handles disconnected clients gracefully.
---
**Lines 274-285: broadcastProcessingStatus() - Processing indicator control**
```typescript
/**
* Broadcast processing status to SSE clients
*/
private broadcastProcessingStatus(claudeSessionId: string, isProcessing: boolean): void {
this.broadcastSSE({
type: 'processing_status',
processing: {
session_id: claudeSessionId,
is_processing: isProcessing
}
});
}
```
**Purpose**: Dedicated method for broadcasting processing status changes. Called when sessions start/stop processing.
---
**Line 817: Summarize request triggers processing start**
```typescript
// Notify UI that processing is active
this.broadcastProcessingStatus(session.claudeSessionId, true);
```
**Context**: In `handleSummarize()` method - when a summary request is queued, processing starts.
**File location**: `src/services/worker-service.ts:817`
---
**Line 1153: Summary generation complete - processing stops**
```typescript
// Notify UI that processing is complete (summary is the final step)
this.broadcastProcessingStatus(session.claudeSessionId, false);
```
**Context**: In `processSummarizeMessage()` after successfully generating and saving a summary.
**File location**: `src/services/worker-service.ts:1153`
---
**Line 1183: No summary generated - still mark processing complete**
```typescript
// Still mark processing as complete even if no summary was generated
this.broadcastProcessingStatus(session.claudeSessionId, false);
```
**Context**: In `processSummarizeMessage()` when no summary tags are found in the AI response.
**File location**: `src/services/worker-service.ts:1183`
---
## 3. React Hook: SSE Connection
### File: `src/ui/viewer/hooks/useSSE.ts`
**Line 12: processingSessions state initialization**
```typescript
const [processingSessions, setProcessingSessions] = useState<Set<string>>(new Set());
```
**Purpose**: Maintains a Set of session IDs currently being processed. Used to determine if any processing is active.
---
**Lines 90-104: processing_status event handler**
```typescript
case 'processing_status':
if (data.processing) {
const processing = data.processing;
console.log('[SSE] Processing status:', processing);
setProcessingSessions(prev => {
const next = new Set(prev);
if (processing.is_processing) {
next.add(processing.session_id);
} else {
next.delete(processing.session_id);
}
return next;
});
}
break;
```
**Purpose**: Listens for `processing_status` SSE events and updates the processingSessions Set:
- `is_processing: true` → Adds session ID to Set
- `is_processing: false` → Removes session ID from Set
**File location**: `src/ui/viewer/hooks/useSSE.ts:90-104`
---
**Lines 73-78: Summary completion also clears processing status**
```typescript
// Mark session as no longer processing (summary is the final step)
setProcessingSessions(prev => {
const next = new Set(prev);
next.delete(summary.session_id);
return next;
});
```
**Purpose**: When a `new_summary` event arrives, remove the session from processingSessions (defensive cleanup in case the processing_status event was missed).
**File location**: `src/ui/viewer/hooks/useSSE.ts:73-78`
---
**Line 125: Hook return value includes processingSessions**
```typescript
return { observations, summaries, prompts, projects, processingSessions, isConnected };
```
**Purpose**: Exposes processingSessions Set to consuming components.
---
## 4. React Component: App
### File: `src/ui/viewer/App.tsx`
**Line 20: Destructure processingSessions from useSSE**
```typescript
const { observations, summaries, prompts, projects, processingSessions, isConnected } = useSSE();
```
**Purpose**: Gets the processingSessions Set from the SSE hook.
---
**Line 92: Convert Set to boolean for Header component**
```typescript
isProcessing={processingSessions.size > 0}
```
**Purpose**: Passes `true` to Header if ANY session is being processed (Set has items), `false` otherwise.
**File location**: `src/ui/viewer/App.tsx:92`
---
## 5. React Component: Header
### File: `src/ui/viewer/components/Header.tsx`
**Line 12: isProcessing prop definition**
```typescript
interface HeaderProps {
isConnected: boolean;
projects: string[];
currentFilter: string;
onFilterChange: (filter: string) => void;
onSettingsToggle: () => void;
sidebarOpen: boolean;
isProcessing: boolean; // ← Processing indicator prop
themePreference: ThemePreference;
onThemeChange: (theme: ThemePreference) => void;
}
```
**Purpose**: Defines the isProcessing boolean prop for the Header component.
---
**Line 24: isProcessing destructured from props**
```typescript
export function Header({
isConnected,
projects,
currentFilter,
onFilterChange,
onSettingsToggle,
sidebarOpen,
isProcessing, // ← Received from App.tsx
themePreference,
onThemeChange
}: HeaderProps) {
```
---
**Line 31: Logomark with conditional spinning class**
```typescript
<img src="claude-mem-logomark.webp" alt="" className={`logomark ${isProcessing ? 'spinning' : ''}`} />
```
**Purpose**: The core of the processing indicator. When `isProcessing` is `true`, adds the `spinning` CSS class to the logomark image, triggering the rotation animation.
**File location**: `src/ui/viewer/components/Header.tsx:31`
**Rendered HTML Examples** (JSX `className` becomes `class` in the DOM):
- Not processing: `<img src="claude-mem-logomark.webp" alt="" class="logomark" />`
- Processing: `<img src="claude-mem-logomark.webp" alt="" class="logomark spinning" />`
---
## 6. CSS Styling & Animation
### File: `plugin/ui/viewer.html` (compiled output)
**Lines 342-349: Logomark and spinning class styles**
```css
.logomark {
height: 32px;
width: auto;
}
.logomark.spinning {
animation: spin 1.5s linear infinite;
}
```
**Purpose**:
- `.logomark`: Base styles for the logo image (32px height, auto width)
- `.logomark.spinning`: Applies the spin animation when processing is active
- **Duration**: 1.5 seconds per rotation
- **Timing**: Linear (constant speed)
- **Iteration**: Infinite (continues until class is removed)
**File location**: `plugin/ui/viewer.html:342-349`
---
**Lines 701-705: Spin animation keyframes**
```css
@keyframes spin {
to {
transform: rotate(360deg);
}
}
```
**Purpose**: Defines the rotation animation. Rotates the element from 0° (implicit) to 360° (full circle).
**File location**: `plugin/ui/viewer.html:701-705`
---
## 7. API Endpoint: Stream
### File: `src/ui/viewer/constants/api.ts`
**Line 11: SSE stream endpoint**
```typescript
export const API_ENDPOINTS = {
OBSERVATIONS: '/api/observations',
SUMMARIES: '/api/summaries',
PROMPTS: '/api/prompts',
SETTINGS: '/api/settings',
STATS: '/api/stats',
STREAM: '/stream', // ← SSE endpoint for processing events
} as const;
```
**Purpose**: Centralized API endpoint constant. The `/stream` endpoint is used by `useSSE.ts` to establish the EventSource connection.
---
## Bonus: Feed Skeleton Processing Indicator
While not part of the logomark spinner, the feed also shows processing state with skeleton cards and a smaller spinner.
### File: `src/ui/viewer/components/Feed.tsx`
**Lines 66-80: Create skeleton items for processing sessions**
```typescript
// Create skeleton items for sessions being processed that don't have summaries yet
const skeletons: FeedItem[] = [];
processingSessions.forEach(sessionId => {
if (!sessionsWithSummaries.has(sessionId)) {
const prompt = sessionPrompts.get(sessionId);
skeletons.push({
itemType: 'skeleton',
id: sessionId,
session_id: sessionId,
project: prompt?.project,
// Always use current time so skeletons appear at top of feed
created_at_epoch: Date.now()
});
}
});
```
**Purpose**: Creates temporary skeleton cards for sessions currently being processed (from `processingSessions` Set).
---
**Line 104: Render SummarySkeleton component**
```typescript
} else if (item.itemType === 'skeleton') {
return <SummarySkeleton key={key} sessionId={item.session_id} project={item.project} />;
```
---
### File: `src/ui/viewer/components/SummarySkeleton.tsx`
**Lines 14-17: Processing indicator in skeleton card**
```typescript
<div className="processing-indicator">
<div className="spinner"></div>
<span>Generating...</span>
</div>
```
**Purpose**: Shows a smaller inline spinner with "Generating..." text in skeleton summary cards.
---
### CSS for Feed Spinner
**Lines 682-690: Processing indicator container**
```css
.processing-indicator {
display: inline-flex;
align-items: center;
gap: 6px;
color: var(--color-accent-focus);
font-size: 11px;
font-weight: 500;
margin-left: auto;
}
```
---
**Lines 692-700: Small spinner for skeleton cards**
```css
.spinner {
width: 12px;
height: 12px;
border: 2px solid var(--color-border-primary);
border-top-color: var(--color-accent-focus);
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
```
**Purpose**: Smaller circular spinner (12px) with faster rotation (0.8s) used in skeleton cards. Uses the same `@keyframes spin` animation.
---
**Lines 711-715: Skeleton card opacity**
```css
.summary-skeleton {
opacity: 0.7;
}
.summary-skeleton .processing-indicator {
margin-left: auto;
}
```
---
**Lines 715-740: Skeleton line animations (shimmer effect)**
```css
.skeleton-line {
height: 16px;
background: linear-gradient(90deg, var(--color-skeleton-base) 25%, var(--color-skeleton-highlight) 50%, var(--color-skeleton-base) 75%);
background-size: 200% 100%;
animation: shimmer 1.5s infinite;
border-radius: 4px;
margin-bottom: 8px;
}
.skeleton-title {
height: 20px;
width: 80%;
margin-bottom: 10px;
}
.skeleton-subtitle {
height: 16px;
width: 90%;
}
.skeleton-subtitle.short {
width: 60%;
}
@keyframes shimmer {
0% {
background-position: 200% 0;
}
100% {
background-position: -200% 0;
}
}
```
**Purpose**: Creates animated placeholder lines with a shimmer effect while summary is being generated.
---
## Summary: Complete Processing Flow
1. **User submits prompt** → Claude Code session starts
2. **Worker receives summarize request** → `worker-service.ts:817` calls `broadcastProcessingStatus(sessionId, true)`
3. **SSE broadcasts** → `{ type: 'processing_status', processing: { session_id: '...', is_processing: true } }`
4. **React receives event** → `useSSE.ts:90-104` adds sessionId to `processingSessions` Set
5. **State flows down** → `App.tsx:92` converts Set size to boolean → `Header.tsx:31` receives `isProcessing={true}`
6. **CSS class applied** → `className="logomark spinning"` triggers animation
7. **Logomark spins** → CSS animation `@keyframes spin` rotates 360° every 1.5s
8. **Feed shows skeleton** → `Feed.tsx:66-80` creates skeleton cards for processing sessions
9. **Summary completes** → `worker-service.ts:1153` calls `broadcastProcessingStatus(sessionId, false)`
10. **SSE broadcasts** → `{ type: 'processing_status', processing: { session_id: '...', is_processing: false } }`
11. **React clears state** → `useSSE.ts:90-104` removes sessionId from Set
12. **Animation stops** → `isProcessing={false}` removes `spinning` class, logomark stops rotating
---
## File Summary
| File | Lines | Purpose |
|------|-------|---------|
| `src/ui/viewer/types.ts` | 45-58 | Defines `StreamEvent` interface with `processing_status` type |
| `src/services/worker-service.ts` | 247-285, 817, 1153, 1183 | Broadcasts processing status via SSE |
| `src/ui/viewer/hooks/useSSE.ts` | 12, 73-78, 90-104, 125 | Manages `processingSessions` Set from SSE events |
| `src/ui/viewer/App.tsx` | 20, 92 | Converts Set to boolean, passes to Header |
| `src/ui/viewer/components/Header.tsx` | 12, 24, 31 | Applies `spinning` class to logomark |
| `plugin/ui/viewer.html` (CSS) | 342-349, 701-705 | Styles logomark and defines spin animation |
| `src/ui/viewer/components/Feed.tsx` | 66-80, 104 | Creates skeleton cards for processing sessions |
| `src/ui/viewer/components/SummarySkeleton.tsx` | 14-17 | Renders inline spinner in skeleton cards |
| `plugin/ui/viewer.html` (CSS) | 682-740 | Styles for skeleton cards and inline spinner |
---
## Key Design Decisions
1. **Set vs Boolean**: Using a `Set<string>` for `processingSessions` allows tracking multiple concurrent sessions. The UI shows spinning as long as *any* session is processing.
2. **Defensive Cleanup**: Both `processing_status` events AND `new_summary` events clear processing state, ensuring the spinner stops even if events arrive out of order.
3. **CSS-Only Animation**: No JavaScript animation loops - pure CSS transforms provide smooth, GPU-accelerated rotation with minimal performance impact.
4. **Dual Indicators**: Header logomark (global processing state) + skeleton cards (per-session processing state) provide both overview and detail-level feedback.
5. **SSE Architecture**: Server-Sent Events provide real-time updates without polling, keeping UI responsive with minimal network overhead.
@@ -1,427 +0,0 @@
# Endless Mode: Real-Time Context Compression Plan
## Executive Summary
"Endless Mode" is an optional feature that enables Claude sessions to run indefinitely by transparently compressing tool use transcripts in real-time. Using an in-memory transformation layer in the worker service, heavy tool outputs are dynamically replaced with lightweight observations during session resume—without modifying the immutable source transcripts. This allows sessions to continue for weeks or months without hitting context window limits, while preserving full conversation history and maintaining zero risk of data corruption.
---
## Problem Statement
### Current Behavior
Claude sessions accumulate full tool transcripts in the context window:
- File reads: 5k-10k tokens per read
- Bash outputs: 1k-5k tokens per command
- Search results: 2k-8k tokens per search
- Total context limit: ~200k tokens
When the context window fills, users must start a new session, losing conversational continuity.
### What Happens Today
1. Tool executes during session
2. PostToolUse hook captures tool data
3. Worker creates compressed observation (~200-500 tokens)
4. **But**: Full tool transcript stays in Claude's context window
5. **Observation only helps next session** via SessionStart injection
### The Gap
Observations exist and are created in real-time, but they're not used to compress the **current** session's context. We have the compressed data, we just don't apply it to the active session.
---
## Proposed Solution: Endless Mode
### Core Concept
When a session resumes (either after restart or during continuation), **transform messages in memory** by replacing heavy tool use content with lightweight observations before feeding them to the Agent SDK. The source transcript remains immutable on disk.
### Architecture Principle
**Immutable Storage + Ephemeral Transform = Safe Compression**
```
Disk (never modified) Memory (transform) Agent SDK
────────────────────── ────────────────────── ────────────────
transcript.jsonl Load messages Resume session
tool_use_abc → Look up observation → with compressed
tool_use_def Replace content context
tool_use_xyz Feed to SDK
```
### Key Properties
1. **Immutable**: Original transcripts never modified
2. **Non-destructive**: Full history preserved on disk
3. **No duplication**: No forks, no copies
4. **Transparent**: User sees same conversation, compression is under the hood
5. **Optional**: Feature flag allows users to opt-in/out
6. **Reversible**: Can always read original transcript
---
## How It Works
### Session Resume Flow (Endless Mode Enabled)
```
1. User continues session / Claude Code restarts
2. Worker service intercepts resume request
3. Load transcript JSONL from disk (immutable)
4. Transform Loop:
For each message in transcript:
- If tool_use message:
- Query SQLite: SELECT observation WHERE tool_use_id = ?
- Replace tool content with observation (facts, narrative, concepts)
- If other message type:
- Pass through unchanged
5. Feed transformed messages to Agent SDK
6. Agent SDK resumes session with compressed context
7. New tool uses append to original transcript (normal flow)
8. Next resume: Loop repeats, new tool uses also get compressed
```
### Session Resume Flow (Endless Mode Disabled)
```
1. User continues session
2. Load transcript JSONL from disk
3. Feed messages directly to Agent SDK (no transformation)
4. Session resumes with full tool transcripts (current behavior)
```
---
## Implementation Plan
### Phase 1: Foundation (Week 1)
**Goal**: Set up infrastructure for transformation layer
Tasks:
1. Add `tool_use_id` column to observations table (SQLite schema migration)
2. Update PostToolUse hook to capture and store tool_use_id
3. Create `TransformLayer` class in worker service
4. Add `CLAUDE_MEM_ENDLESS_MODE` environment variable (default: false)
5. Write tests for observation lookup by tool_use_id
**Deliverable**: Database schema updated, tool_use_ids being captured
### Phase 2: Transform Logic (Week 2)
**Goal**: Build message transformation engine
Tasks:
1. Implement `TransformLayer.transformMessages(messages)` function
2. Tool use detection logic (identify tool_use messages in transcript)
3. Observation lookup and replacement logic
4. Fallback handling (if observation missing, keep original content)
5. Message serialization/deserialization
**Deliverable**: Working transform function that compresses messages in memory
### Phase 3: Agent SDK Integration (Week 2-3)
**Goal**: Wire transform layer into session resume flow
Tasks:
1. Identify where worker service resumes Agent SDK sessions
2. Inject transform layer before session resume
3. Add feature flag check (only transform if endless mode enabled)
4. Logging and instrumentation (track compression ratios, transform time)
5. Error handling and graceful degradation
**Deliverable**: Worker service can resume sessions with compressed context
### Phase 4: Testing & Validation (Week 3-4)
**Goal**: Verify endless mode works correctly
Tasks:
1. Create test session with 50+ tool uses
2. Enable endless mode and resume session
3. Verify context window usage (should be dramatically lower)
4. Test conversation quality (does Claude have enough context?)
5. Measure performance (transform latency, lookup speed)
6. Edge case testing (missing observations, malformed transcripts)
**Deliverable**: Endless mode working in test environment
### Phase 5: Beta Release (Week 4+)
**Goal**: Release to power users for feedback
Tasks:
1. Documentation (how to enable, what to expect, how to disable)
2. Add endless mode toggle to viewer UI
3. Monitoring and observability (track usage, failures, compression stats)
4. Collect feedback from beta users
5. Iterate based on real-world usage
**Deliverable**: Endless mode available as opt-in beta feature
---
## Technical Requirements
### Database Schema
```sql
-- Add to observations table.
-- SQLite cannot add a column with a UNIQUE constraint via ALTER TABLE,
-- so add the column plainly and enforce uniqueness with a unique index.
ALTER TABLE observations ADD COLUMN tool_use_id TEXT;
CREATE UNIQUE INDEX idx_observations_tool_use_id ON observations(tool_use_id);
```
### Worker Service API
```typescript
interface TransformLayerConfig {
enabled: boolean; // CLAUDE_MEM_ENDLESS_MODE
fallbackToOriginal: boolean; // If observation missing, use full content
maxLookupTime: number; // Timeout for SQLite queries
}
class TransformLayer {
constructor(config: TransformLayerConfig, db: SessionStore);
// Main transform function
async transformMessages(messages: Message[]): Promise<Message[]>;
// Helper functions
private async lookupObservation(toolUseId: string): Promise<Observation | null>;
private replaceToolContent(message: Message, observation: Observation): Message;
private isToolUseMessage(message: Message): boolean;
}
```
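To make the transform concrete, here is a framework-free sketch of the core loop. `Message` and `Observation` are simplified stand-ins for the real transcript and database shapes, and `lookup` abstracts the SQLite query:

```typescript
// Illustrative shapes only - the real transcript schema is still an open question.
interface Message {
  role: string;
  toolUseId?: string; // present only on tool_use messages
  content: string;
}

interface Observation {
  toolUseId: string;
  narrative: string; // compressed facts/narrative/concepts
}

function transformMessages(
  messages: Message[],
  lookup: (toolUseId: string) => Observation | undefined,
  fallbackToOriginal = true
): Message[] {
  return messages.map(msg => {
    if (!msg.toolUseId) return msg; // not a tool_use message: pass through
    const obs = lookup(msg.toolUseId);
    if (!obs) {
      // Observation missing (async worker lag): keep original or elide
      return fallbackToOriginal ? msg : { ...msg, content: '[tool output elided]' };
    }
    // Replace heavy tool output with the lightweight observation
    return { ...msg, content: obs.narrative };
  });
}
```

Note the source array is never mutated — new message objects are produced in memory, mirroring the immutability principle above.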
### Agent SDK Integration Point
```typescript
// In worker service session resume logic
async function resumeSession(sessionId: string, transcriptPath: string) {
const messages = await loadTranscript(transcriptPath);
// Transform layer (only if endless mode enabled)
const transformedMessages = config.endlessMode
? await transformLayer.transformMessages(messages)
: messages;
// Resume with transformed (or original) messages
return await agentSDK.resumeSession({
sessionId,
messages: transformedMessages
});
}
```
---
## Risks and Mitigations
### Risk 1: Information Loss
**Risk**: Compressed observations may lose critical details that Claude needs to reference later.
**Mitigation**:
- Make endless mode optional (users can disable if quality degrades)
- Improve observation quality (better prompts, more comprehensive facts)
- Hybrid approach: Keep recent N tool uses in full, compress older ones
- Monitor conversation quality metrics
### Risk 2: Transform Performance
**Risk**: Looking up observations for 100+ tool uses during resume could be slow.
**Mitigation**:
- Index tool_use_id in SQLite (O(log n) lookups)
- Batch queries (single SELECT with IN clause)
- Measure and optimize (target <100ms for typical session)
- Cache observations in memory during session
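The batching idea can be sketched as a query builder; the actual driver call (e.g. a better-sqlite3-style `prepare(...).all(...)`) is assumed, not shown:

```typescript
// Build one parameterized SELECT for all tool_use_ids instead of N queries.
function buildBatchLookupQuery(toolUseIds: string[]): { sql: string; params: string[] } {
  const placeholders = toolUseIds.map(() => '?').join(', ');
  return {
    sql: `SELECT * FROM observations WHERE tool_use_id IN (${placeholders})`,
    params: toolUseIds,
  };
}
```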
### Risk 3: Missing Observations
**Risk**: Tool use executed but observation not yet created (async worker lag).
**Mitigation**:
- Fallback to original content if observation missing
- Log when fallback occurs (helps identify worker performance issues)
- Allow observations to be created retroactively
- Consider synchronous observation creation for critical tools
### Risk 4: Transcript Corruption
**Risk**: Bug in transform layer could corrupt user conversations.
**Mitigation**:
- **Never modify source transcripts** (read-only)
- Transform happens in memory only
- Extensive testing before beta release
- Feature flag allows instant disable if issues found
- Keep full audit trail in logs
### Risk 5: Agent SDK Compatibility
**Risk**: Agent SDK updates could break transform layer integration.
**Mitigation**:
- Document exact Agent SDK version requirements
- Monitor Agent SDK release notes
- Test against new SDK versions before upgrading
- Graceful degradation if SDK changes detected
---
## Success Criteria
### Proof of Concept Success
- [ ] Transform layer successfully compresses a 50-tool-use session
- [ ] Context window usage reduced by 80%+ compared to uncompressed
- [ ] Session resumes without errors
- [ ] Conversation quality remains high (subjective evaluation)
### Beta Release Success
- [ ] 10+ users running endless mode without issues
- [ ] Average context savings: 85%+ across all sessions
- [ ] Transform latency: <200ms for typical resume
- [ ] Zero transcript corruption incidents
- [ ] Positive user feedback on conversation continuity
### Production Success
- [ ] Endless mode becomes default setting
- [ ] Sessions running for weeks/months without context issues
- [ ] Context window exhaustion becomes rare edge case
- [ ] User-reported "session too long" issues drop to near zero
- [ ] Transform layer performance scales to 1000+ tool use sessions
---
## Configuration
### Environment Variables
```bash
# Enable endless mode (default: false)
CLAUDE_MEM_ENDLESS_MODE=true
# Fallback behavior if observation missing (default: true)
CLAUDE_MEM_TRANSFORM_FALLBACK=true
# Max time to wait for observation lookup (default: 500ms)
CLAUDE_MEM_TRANSFORM_TIMEOUT=500
# Keep recent N tool uses uncompressed (default: 0, compress all)
CLAUDE_MEM_TRANSFORM_KEEP_RECENT=0
```
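A hypothetical loader for the variables above — the function name is illustrative, and the defaults mirror the comments in the snippet:

```typescript
interface EndlessModeConfig {
  enabled: boolean;
  fallbackToOriginal: boolean;
  transformTimeoutMs: number;
  keepRecentToolUses: number;
}

// env is passed in (rather than reading process.env directly) for testability.
function loadEndlessModeConfig(env: Record<string, string | undefined>): EndlessModeConfig {
  return {
    enabled: env.CLAUDE_MEM_ENDLESS_MODE === 'true',                   // default: false
    fallbackToOriginal: env.CLAUDE_MEM_TRANSFORM_FALLBACK !== 'false', // default: true
    transformTimeoutMs: Number(env.CLAUDE_MEM_TRANSFORM_TIMEOUT ?? 500),
    keepRecentToolUses: Number(env.CLAUDE_MEM_TRANSFORM_KEEP_RECENT ?? 0),
  };
}
```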
### User Controls
```typescript
// Future: UI toggle in viewer
interface EndlessModeSettings {
enabled: boolean;
keepRecentToolUses: number; // Hybrid mode
fallbackToOriginal: boolean;
}
```
---
## Context Economics: Before vs. After
### Example Session (50 tool uses)
**Before (Endless Mode OFF):**
```
File reads: 10 × 8,000 tokens = 80,000 tokens
Bash outputs: 20 × 2,000 tokens = 40,000 tokens
Searches: 15 × 4,000 tokens = 60,000 tokens
Other tools: 5 × 1,000 tokens = 5,000 tokens
──────────────────────────────────────────────────
Total: 185,000 tokens
Context remaining: 15,000 tokens (92% full)
```
**After (Endless Mode ON):**
```
File reads: 10 × 300 tokens = 3,000 tokens
Bash outputs: 20 × 250 tokens = 5,000 tokens
Searches: 15 × 400 tokens = 6,000 tokens
Other tools: 5 × 200 tokens = 1,000 tokens
──────────────────────────────────────────────────
Total: 15,000 tokens
Context remaining: 185,000 tokens (7.5% full)
Savings: 170,000 tokens (92% reduction)
```
**Session Longevity:**
- Before: ~50 tool uses before context full
- After: ~600+ tool uses before context full
- **12x longer sessions**
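The arithmetic above checks out, and can be verified in a few lines (the per-tool token counts are the document's estimates, not measurements):

```typescript
// Sanity check of the example session's token math.
const before = 10 * 8000 + 20 * 2000 + 15 * 4000 + 5 * 1000; // 185,000 tokens
const after = 10 * 300 + 20 * 250 + 15 * 400 + 5 * 200;      // 15,000 tokens
const savings = before - after;                              // 170,000 tokens
const reductionPct = Math.round((savings / before) * 100);   // 92%
```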
---
## Next Steps
### Immediate Actions (This Week)
1. **Database Migration**: Add tool_use_id column to observations table
2. **Hook Update**: Modify PostToolUse hook to capture tool_use_id from Agent SDK
3. **Architecture Validation**: Confirm where Agent SDK session resume happens in worker service
4. **Prototype**: Build minimal TransformLayer class with observation lookup
### Short Term (Next 2 Weeks)
1. Implement complete transform logic
2. Wire into worker service resume flow
3. Add endless mode feature flag
4. Test with real sessions
### Medium Term (Next Month)
1. Beta release to power users
2. Gather feedback and iterate
3. Performance optimization
4. Documentation and user guides
### Long Term (Future)
1. Make endless mode default
2. Hybrid sliding window (keep recent tools uncompressed)
3. Selective compression by tool type
4. Auto-tune compression based on context usage patterns
---
## Open Questions
1. **Tool Use ID Format**: What does the Agent SDK's tool_use_id look like? Is it UUID, hash, or sequential?
2. **Transcript Format**: What's the exact JSONL schema for tool_use messages? Where is the content we'll replace?
3. **Resume Hook Point**: Where exactly in the worker service does session resume happen? Is there a clear integration point?
4. **Observation Delay**: How long between PostToolUse firing and observation being available in SQLite? Does this affect resume?
5. **Feature Flag Storage**: Environment variable, or persist user preference in database?
---
## Conclusion
Endless Mode transforms claude-mem from a "memory between sessions" system into a "continuous compression engine" that enables truly infinite sessions. By leveraging the observations we're already creating in real-time and applying them as an ephemeral transformation layer during resume, we can extend session longevity by 10-12x without any risk to user data.
The key architectural insight is **immutability**: by never modifying source transcripts and performing all compression in memory, we get the benefits of context window optimization without the risks of data corruption or loss. Combined with the optional nature of the feature, this provides a safe, reversible path to fundamentally better session continuity.
This is the natural evolution of claude-mem: from remembering what happened before, to making it possible to never stop.
@@ -1,141 +0,0 @@
# I built a context management plugin and it CHANGED MY LIFE
Okay so I know this sounds clickbait-y but genuinely: if you've ever spent 20 minutes re-explaining your project architecture to Claude because you started a new chat, this might actually save your sanity.
The actual problem I was trying to solve:
Claude Code is incredible for building stuff, but it has the memory of a goldfish. Every new session I'd be like "okay so remember we're using Express for the API and SQLite for storage and—" and Claude's like "I have never seen this codebase in my life."
What I built:
A plugin that automatically captures everything Claude does during your coding sessions, compresses it with AI (using Claude itself lol), and injects relevant context back into future sessions.
So instead of explaining your project every time, you just... start coding. Claude already knows what happened yesterday.
How it actually works:
Hooks into Claude's tool system and watches everything (file reads, edits, bash commands, etc.)
Background worker processes observations into compressed summaries
When you start a new session, last 10 summaries get auto-injected
Built-in search tools let Claude query its own memory ("what did we decide about auth?")
Runs locally on SQLite + PM2, your code never leaves your machine
Real talk:
I made this because I was building a different project and kept hitting the context limit, then having to restart and re-teach Claude the entire architecture. It was driving me insane. Now Claude just... remembers. It's wild.
Link: https://github.com/thedotmack/claude-mem (AGPL-3.0 licensed)
It's set up to use Claude Code's new plugin system. Type the following to install, then restart Claude Code:
/plugin marketplace add thedotmack/claude-mem
/plugin install claude-mem
Would love feedback from anyone building real projects with Claude Code: does this help you keep momentum, save tokens, and get more use out of Claude Code? Thanks in advance!
===============================================================================
# How is Claude-Mem different from Claude's New Memory Tool?
A few people have been asking this question on the claude-mem thread I posted yesterday, so I wanted to put up a definitive answer that really explains the differences and how the two can actually be complementary. I used the "claude code docs agent" to help figure this out:
---
Based on the documentation, here are the key differences between your Claude-Mem tool and Claude's official memory tool:
Scope and Architecture
Claude's Memory Tool is designed for single-session memory management within conversations (1). It provides commands like view, create, str_replace, insert, delete, and rename for managing memory files during a conversation (1). The tool automatically includes this instruction: "IMPORTANT: ALWAYS VIEW YOUR MEMORY DIRECTORY BEFORE DOING ANYTHING ELSE" (1).
Your Claude-Mem is a comprehensive multi-session persistence system that captures context across different Claude Code sessions. It uses hooks to automatically capture tool usage, process observations through the Claude Agent SDK, and restore context when new sessions start.
Memory Persistence
Claude's Memory Tool focuses on within-session memory management. It helps Claude maintain context during a single conversation by reading and writing to memory files (1).
Your Claude-Mem provides cross-session persistence by:
Capturing every tool execution through PostToolUse hooks (2)
Processing observations through the Claude Agent SDK (3)
Automatically injecting summaries from the last few sessions into new session contexts
Using SQLite with FTS5 full-text search for retrieval
Integration Method
Claude's Memory Tool is a built-in tool that works through the standard tool use interface (1).
Your Claude-Mem integrates as a Claude Code plugin using multiple hooks:
SessionStart for context injection (2)
UserPromptSubmit for session initialization (2)
PostToolUse for observation capture (2)
Stop for summary generation (2)
SessionEnd for cleanup (2)
Search and Retrieval
Claude's Memory Tool provides basic file operations for memory management (1).
Your Claude-Mem includes an MCP server with 6 specialized search tools:
search_observations - Full-text search across observations
search_sessions - Search across session summaries
find_by_concept - Find by tagged concepts
find_by_file - Find by file paths
find_by_type - Find by observation type
advanced_search - Combined search with filters
Use Cases
Claude's Memory Tool is ideal for maintaining context within a single conversation, helping with tasks that require remembering information throughout the session (1).
Your Claude-Mem addresses the broader challenge of maintaining project knowledge across multiple Claude Code sessions, essentially solving the session isolation problem that can occur in Claude Code (4).
Your tool appears to be complementary to Claude's memory tool rather than directly competing - it operates at the session level while Claude's memory tool operates within conversations.
===============================================================================
# Hot take... "You're absolutely right!" is a bug, not a feature
When Claude first started saying "You're absolutely right!" I started instructing it to "never tell me I'm absolutely right" because most of the time, it didn't do any verification or thinking before deeming my suggestion "The absolutely right one"
Now we're many versions later, and the team at Claude have embraced "You're absolutely right!" as a "cute" addition to their overall brand, fully accepting this clear anti-pattern.
Is Claude just "smarter" now? Do you perceive "You're absolutely right!" as a sign you've been given the right solution, or do you feel as though you need to clarify or follow up when this happens?
One of the foundations of my theory behind priming context with claude-mem is this:
"The less Claude has to keep track of that's unrelated to the task at hand, the better Claude will perform that task."
The system I designed uses a parallel instance to manage the memory flow, it's receiving data as it comes in, but the Claude instance you're working with doesn't have any instructions for storing memories. It doesn't need it. That's all handled in the background.
This decoupling matters because every instruction you give Claude is cognitive overhead.
When you load up context with "remember to store this" or "track that observation" or "don't forget to summarize," you're polluting the workspace. Claude has to juggle your actual task AND the meta-task of managing its own memory.
That's when you get lazy agreement.
I've noticed that when Claude's context window gets cluttered with unrelated instructions, this pattern of lazy agreement shows up more and more.
Agreeing with you is easier than deep analysis when the context is already maxed out.
"You're absolutely right!" becomes the path of least resistance.
When Claude can focus purely on your code, your architecture, your question - without memory management instructions competing for attention - it accomplishes tasks faster and more accurately.
The difference is measurable.
The "You're absolutely right!" reflex drops off noticeably because there's room in the context window for actual analysis instead of performative agreement.
What do you think? Does this bother you as much as it does me? 😭
@@ -1,156 +0,0 @@
# Rich Context Examples
This document shows what contextual data is available in transcripts
that could improve observation generation quality.
## Statistics
- Total entries: 369
- User messages: 74
- Assistant messages: 133
- Token usage: 67,465 total
- Cache efficiency: 6,979,410 tokens read from cache
## Conversation Flow
This shows how user requests, assistant reasoning, and tool executions flow together.
This is the rich context currently missing from individual tool observations.
---
### Example 1
#### 👤 User Request
```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```
#### 🔧 Tools Executed (1)
**TodoWrite**
```json
{
  "todos": [
    {
      "content": "Read the ROI implementation plan to understand full scope",
      "status": "in_progress",
      "activeForm": "Reading ROI implementation plan"
    },
    {
```
**📊 Data Available for This Exchange:**
- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌
---
### Example 2
#### 👤 User Request
```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```
#### 🔧 Tools Executed (1)
**Glob**
- Pattern: `**/*roi*`
**📊 Data Available for This Exchange:**
- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌
---
### Example 3
#### 👤 User Request
```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```
#### 🔧 Tools Executed (1)
**Glob**
- Pattern: `**/*implementation*plan*`
**📊 Data Available for This Exchange:**
- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌
---
### Example 4
#### 👤 User Request
```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```
#### 🔧 Tools Executed (1)
**Read**
- Reading: `/Users/alexnewman/Scripts/claude-mem/docs/context/transcript-data-discovery.md`
**📊 Data Available for This Exchange:**
- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌
---
### Example 5
#### 👤 User Request
```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```
#### 🔧 Tools Executed (1)
**Read**
- Reading: `/Users/alexnewman/Scripts/claude-mem/IMPLEMENTATION_PLAN_ROI_METRICS.md`
**📊 Data Available for This Exchange:**
- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌
---
## Key Insight
Currently, the memory worker receives **isolated tool executions** via save-hook:
- tool_name: "Read"
- tool_input: {"file_path": "src/foo.ts"}
- tool_output: {file contents}
But the transcript contains **rich contextual data**:
- WHY the tool was used (user's request)
- WHAT the assistant planned to accomplish
- HOW it fits into the broader task
- The assistant's reasoning/thinking
- Multiple related tools used together
This context would help the memory worker:
1. Understand if a tool use is meaningful or routine
2. Generate observations that capture WHY, not just WHAT
3. Group related tools into coherent actions
4. Avoid "investigating" - the context is already present
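As a sketch of the difference, here is what the enriched payload could look like next to today's isolated one. All field names here are hypothetical illustrations, not the actual claude-mem worker contract:

```typescript
// Hypothetical shapes: field names are illustrative, not the actual
// claude-mem worker contract.
interface ToolExecution {
  toolName: string;
  toolInput: unknown;
  toolOutput: string;
}

// What the worker receives today: one isolated execution, no context.
type CurrentPayload = ToolExecution;

// What it could receive: the same execution(s) plus the exchange around them.
interface EnrichedPayload {
  userIntent: string;         // WHY the tool was used (user's request)
  assistantReasoning: string; // WHAT the assistant planned to accomplish
  tools: ToolExecution[];     // related tools grouped into one coherent action
}

function enrich(
  userIntent: string,
  assistantReasoning: string,
  tools: ToolExecution[]
): EnrichedPayload {
  return { userIntent, assistantReasoning, tools };
}
```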
@@ -1,857 +0,0 @@
# Search Architecture Analysis
**Date:** 2025-11-11 **Scope:** HTTP API endpoints, MCP search server, DRY violations, architectural recommendations
---
## Current State: Dual Search Architectures
### Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│ Claude Code Session │
│ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ mem-search Skill (ACTIVE) │ │
│ │ - Uses HTTP API via curl commands │ │
│ │ - 10 search operations │ │
│ │ - Progressive disclosure workflow │ │
│ └────────────────────────────────────────────────────┘ │
│ │ │
│ │ HTTP GET │
│ ▼ │
│ ┌────────────────────────────────────────────────────┐ │
│ │ MCP Search Server (DEPRECATED but BUILT) │ │
│ │ - .mcp.json configured │ │
│ │ - search-server.mjs exists (74KB) │ │
│ │ - 9 MCP tools defined │ │
│ │ - Not used by skill │ │
│ └────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
┌───────────┴───────────┐
▼ ▼
┌──────────────────────────┐ ┌──────────────────────────┐
│ Worker Service │ │ MCP Server │
│ (worker-service.ts) │ │ (search-server.ts) │
│ │ │ │
│ 10 HTTP Endpoints: │ │ 9 MCP Tools: │
│ ├─ /api/search/ │ │ ├─ search_observations │
│ │ observations │ │ ├─ search_sessions │
│ ├─ /api/search/ │ │ ├─ search_user_prompts │
│ │ sessions │ │ ├─ find_by_concept │
│ ├─ /api/search/ │ │ ├─ find_by_file │
│ │ prompts │ │ ├─ find_by_type │
│ ├─ /api/search/ │ │ ├─ get_recent_context │
│ │ by-concept │ │ ├─ get_context_timeline │
│ ├─ /api/search/ │ │ └─ get_timeline_by_query│
│ │ by-file │ │ │
│ ├─ /api/search/ │ │ Built: ✅ │
│ │ by-type │ │ Used: ❌ │
│ ├─ /api/context/recent │ │ Configured: ✅ │
│ ├─ /api/context/ │ │ Status: DEPRECATED │
│ │ timeline │ │ │
│ ├─ /api/timeline/ │ │ │
│ │ by-query │ │ │
│ └─ /api/search/help │ │ │
│ │ │ │
│ Built: ✅ │ │ │
│ Used: ✅ │ │ │
│ Status: ACTIVE │ │ │
└──────────────────────────┘ └──────────────────────────┘
│ │
└─────────┬─────────────────┘
┌────────────────────────────────┐
│ SessionSearch (Shared Layer) │
│ - FTS5 queries │
│ - SQLite operations │
│ - Common data access │
└────────────────────────────────┘
┌────────────────────────────────┐
│ SQLite Database │
│ ~/.claude-mem/claude-mem.db │
└────────────────────────────────┘
```
---
## HTTP Endpoints Architecture
### Location
`src/services/worker-service.ts` (lines 108-118, 748-1174)
### Endpoints (10 total)
| Endpoint | Method | Purpose | Used By |
| -------------------------- | ------ | ----------------------------------- | ---------------- |
| `/api/search/observations` | GET | Full-text search observations | mem-search skill |
| `/api/search/sessions` | GET | Full-text search session summaries | mem-search skill |
| `/api/search/prompts` | GET | Full-text search user prompts | mem-search skill |
| `/api/search/by-concept` | GET | Find observations by concept tag | mem-search skill |
| `/api/search/by-file` | GET | Find work related to specific files | mem-search skill |
| `/api/search/by-type` | GET | Find observations by type | mem-search skill |
| `/api/context/recent` | GET | Get recent session context | mem-search skill |
| `/api/context/timeline` | GET | Get timeline around point in time | mem-search skill |
| `/api/timeline/by-query` | GET | Search + timeline in one call | mem-search skill |
| `/api/search/help` | GET | API documentation | mem-search skill |
### Implementation Pattern
**Example: Search Observations**
```typescript
// src/services/worker-service.ts:748-781
private handleSearchObservations(req: Request, res: Response): void {
  try {
    // 1. Parse query parameters
    const query = req.query.query as string;
    const format = (req.query.format as string) || 'full';
    const limit = parseInt(req.query.limit as string, 10) || 20;
    const project = req.query.project as string | undefined;

    // 2. Validate required parameters
    if (!query) {
      res.status(400).json({ error: 'Missing required parameter: query' });
      return;
    }

    // 3. Call SessionSearch (shared data layer)
    const sessionSearch = this.dbManager.getSessionSearch();
    const results = sessionSearch.searchObservations(query, { limit, project });

    // 4. Format response based on format parameter
    res.json({
      query,
      count: results.length,
      format,
      results: format === 'index' ? results.map(r => ({
        id: r.id,
        type: r.type,
        title: r.title,
        subtitle: r.subtitle,
        created_at_epoch: r.created_at_epoch,
        project: r.project,
        score: r.score
      })) : results
    });
  } catch (error) {
    logger.failure('WORKER', 'Search observations failed', {}, error as Error);
    res.status(500).json({ error: (error as Error).message });
  }
}
```
### Characteristics
**Pros:**
- ✅ Simple HTTP GET requests (curl-friendly)
- ✅ Standard REST API pattern
- ✅ Easy to test and debug
- ✅ No MCP protocol overhead
- ✅ Works with any HTTP client
**Cons:**
- ⚠️ Parameter parsing duplicated across 10 endpoints
- ⚠️ Format conversion logic duplicated
- ⚠️ Error handling pattern repeated
---
## MCP Search Server Architecture
### Location
`src/servers/search-server.ts` (1,781 lines)
### Status
- **Built:** ✅ Yes (`plugin/scripts/search-server.mjs`, 74KB)
- **Configured:** ✅ Yes (`.mcp.json` line 3-6)
- **Used:** ❌ No (deprecated in v5.4.0)
- **Maintained:** ⚠️ Source kept for reference
### Tools (9 total)
| Tool Name | Purpose | Line |
| ----------------------- | -------------------------------------- | -------- |
| `search_observations` | Search observations with FTS5 + Chroma | 348-422 |
| `search_sessions` | Search session summaries | 438-490 |
| `search_user_prompts` | Search user prompts | 506-558 |
| `find_by_concept` | Find by concept tag | 574-626 |
| `find_by_file` | Find by file path | 642-694 |
| `find_by_type` | Find by observation type | 710-762 |
| `get_recent_context` | Get recent sessions | 778-830 |
| `get_context_timeline` | Get timeline context | 846-950 |
| `get_timeline_by_query` | Search + timeline | 966-1064 |
### Implementation Pattern
**Example: Search Observations (MCP)**
```typescript
// src/servers/search-server.ts:348-422
{
  name: 'search_observations',
  description: 'Search observations using full-text search across titles, narratives, facts, and concepts...',
  inputSchema: z.object({
    query: z.string().describe('Search query for FTS5 full-text search'),
    format: z.enum(['index', 'full']).default('index').describe('...'),
    ...filterSchema.shape
  }),
  handler: async (args: any) => {
    try {
      const { query, format = 'index', ...options } = args;
      let results: ObservationSearchResult[] = [];

      // Hybrid search: Try Chroma semantic search first, fall back to FTS5
      if (chromaClient) {
        try {
          // Step 1: Chroma semantic search (top 100)
          const chromaResults = await queryChroma(query, 100);
          if (chromaResults.ids.length > 0) {
            // Step 2: Filter by recency (90 days)
            const ninetyDaysAgo = Date.now() - (90 * 24 * 60 * 60 * 1000);
            const recentIds = chromaResults.ids.filter((_id, idx) => {
              const meta = chromaResults.metadatas[idx];
              return meta && meta.created_at_epoch > ninetyDaysAgo;
            });

            // Step 3: Hydrate from SQLite
            if (recentIds.length > 0) {
              const limit = options.limit || 20;
              results = store.getObservationsByIds(recentIds, { orderBy: 'date_desc', limit });
            }
          }
        } catch (chromaError: any) {
          console.error('[search-server] Chroma query failed, falling back to FTS5:', chromaError.message);
        }
      }

      // Fall back to FTS5 if Chroma unavailable or returned no results
      if (results.length === 0) {
        results = search.searchObservations(query, options);
      }

      // Format results
      if (format === 'index') {
        return {
          content: [{
            type: 'text',
            text: results.map((r, i) => formatObservationIndex(r, i)).join('\n\n') + formatSearchTips()
          }]
        };
      } else {
        return {
          content: results.map(r => ({
            type: 'resource',
            resource: {
              uri: `claude-mem://observation/${r.id}`,
              mimeType: 'text/markdown',
              text: formatObservationResult(r)
            }
          }))
        };
      }
    } catch (error: any) {
      return { content: [{ type: 'text', text: `Error: ${error.message}` }] };
    }
  }
}
```
### Characteristics
**Pros:**
- ✅ MCP protocol support
- ✅ Hybrid search (Chroma + FTS5)
- ✅ Rich formatting (markdown, resources)
- ✅ Comprehensive error handling
**Cons:**
- ❌ Not used by skill (deprecated)
- ❌ ~2,500 token overhead for tool definitions
- ❌ More complex than HTTP
- ❌ Still being built despite deprecation
---
## DRY Violation Analysis
### Areas of Duplication
#### 1. **Parameter Parsing** (10 HTTP endpoints + 9 MCP tools)
**HTTP Endpoints:**
```typescript
// Repeated in each endpoint handler
const query = req.query.query as string;
const format = (req.query.format as string) || "full";
const limit = parseInt(req.query.limit as string, 10) || 20;
const project = req.query.project as string | undefined;

if (!query) {
  res.status(400).json({ error: "Missing required parameter: query" });
  return;
}
```
**MCP Tools:**
```typescript
// Repeated in each tool handler
const { query, format = "index", ...options } = args;

if (!query) {
  throw new Error("Missing required parameter: query");
}
```
**Violation:** Parameter parsing logic duplicated 19 times (10 + 9)
#### 2. **Format Conversion** (Index vs Full)
**HTTP Endpoints:**
```typescript
results: format === "index"
  ? results.map((r) => ({
      id: r.id,
      type: r.type,
      title: r.title,
      subtitle: r.subtitle,
      created_at_epoch: r.created_at_epoch,
      project: r.project,
      score: r.score,
    }))
  : results;
```
**MCP Tools:**
```typescript
if (format === "index") {
  return {
    content: [
      {
        type: "text",
        text: results.map((r, i) => formatObservationIndex(r, i)).join("\n\n"),
      },
    ],
  };
} else {
  return {
    content: results.map((r) => ({
      type: "resource",
      resource: {
        uri: `claude-mem://observation/${r.id}`,
        mimeType: "text/markdown",
        text: formatObservationResult(r),
      },
    })),
  };
}
```
**Violation:** Format conversion logic duplicated with different output formats
#### 3. **Search Logic Duplication**
**HTTP Endpoints:**
```typescript
const sessionSearch = this.dbManager.getSessionSearch();
const results = sessionSearch.searchObservations(query, { limit, project });
```
**MCP Tools:**
```typescript
// Hybrid search with Chroma fallback
if (chromaClient) {
  const chromaResults = await queryChroma(query, 100);
  // ... complex hybrid logic ...
}
if (results.length === 0) {
  results = search.searchObservations(query, options);
}
```
**Violation:** MCP has hybrid Chroma+FTS5 search, HTTP only has FTS5
#### 4. **Error Handling**
**HTTP Endpoints:**
```typescript
try {
  // ... handler logic ...
} catch (error) {
  logger.failure("WORKER", "Search observations failed", {}, error as Error);
  res.status(500).json({ error: (error as Error).message });
}
```
**MCP Tools:**
```typescript
try {
  // ... handler logic ...
} catch (error: any) {
  return { content: [{ type: "text", text: `Error: ${error.message}` }] };
}
```
**Violation:** Different error handling patterns
### DRY Compliance at Data Layer ✅
**Good news:** Both architectures use the **same data layer**:
```
HTTP Endpoints → SessionSearch → SQLite
MCP Tools → SessionSearch → SQLite
```
The `SessionSearch` class is the **single source of truth** for data access. No duplication there.
---
## Is curl the Best Approach?
### Current Approach: curl Commands
**Example from skill:**
```bash
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5"
```
### Alternative Approaches
#### 1. **MCP Tools** (Deprecated)
**Pros:**
- Native Claude Code protocol
- Rich type definitions
- Better error handling
- Resource formatting
**Cons:**
- ❌ ~2,500 token overhead per session
- ❌ More complex to implement
- ❌ Requires MCP server process
- ❌ Less accessible for external tools
**Verdict:** MCP was deprecated for good reasons (token overhead). curl is better.
#### 2. **Direct Database Access** (Not feasible)
**Pros:**
- No HTTP overhead
- No worker process needed
**Cons:**
- ❌ Skills can't access files directly
- ❌ No way to execute TypeScript/SQLite from skill
- ❌ Would require building native bindings
**Verdict:** Not possible with current skill architecture.
#### 3. **HTTP API via curl** (Current) ✅
**Pros:**
- ✅ Simple, standard protocol
- ✅ Works with skill architecture
- ✅ Easy to test (curl in terminal)
- ✅ Language-agnostic
- ✅ No MCP token overhead
- ✅ RESTful design
**Cons:**
- ⚠️ Requires worker service running
- ⚠️ HTTP parsing overhead (minimal)
**Verdict:** **Best approach given constraints.**
### Why curl is Optimal
1. **Skill Constraints:** Skills can only execute shell commands. curl is the standard HTTP client.
2. **Token Efficiency:** No tool definitions loaded into context (~2,250 token savings).
3. **Progressive Disclosure:** Skill loads gradually, HTTP requests are made only when needed.
4. **Debuggability:** Easy to test endpoints manually with curl.
5. **Cross-platform:** curl available on all platforms.
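For illustration, here is roughly how any HTTP client could construct the same request the skill issues via curl. The endpoint path, port, and parameter names come from the endpoints table above; the helper function itself is hypothetical:

```typescript
// Builds the observations search URL used in the curl example above.
// Endpoint and parameter names follow this document; treat as a sketch.
function buildSearchUrl(
  query: string,
  opts: { format?: "index" | "full"; limit?: number; project?: string } = {}
): string {
  const params = new URLSearchParams({ query });
  if (opts.format) params.set("format", opts.format);
  if (opts.limit !== undefined) params.set("limit", String(opts.limit));
  if (opts.project) params.set("project", opts.project);
  return `http://localhost:37777/api/search/observations?${params}`;
}
```

Calling `buildSearchUrl("authentication", { format: "index", limit: 5 })` yields the same URL as the curl example, which is part of why the plain HTTP API is easy to consume from any platform.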
---
### Question: "Is it routing into the search-service MCP file or is it a DRY violation?"
**Answer:** Both architectures exist, creating a DRY violation:
1. **HTTP Endpoints** (worker-service.ts) ← **Used by skill**
2. **MCP Server** (search-server.ts) ← **Deprecated but still built**
### Current State
```
mem-search skill → HTTP API (worker-service.ts) → SessionSearch → SQLite
MCP search server (deprecated) → SessionSearch ──────────────────────┘
```
Both use the same data layer (SessionSearch), but:
- ❌ Parameter parsing duplicated
- ❌ Format conversion duplicated
- ❌ MCP has hybrid Chroma search, HTTP doesn't
- ❌ MCP still being built despite deprecation
**You said:** "We are intentionally exposing API search endpoints" so that external consumers can build on them:
```
┌─────────────────────────────────────────────────────────────┐
│ - Web UI │
│ - Mobile app │
│ - VS Code extension │
│ - CLI tools │
└─────────────────────────────────────────────────────────────┘
│ HTTP API
┌─────────────────────────────────────────────────────────────┐
│ Worker Service HTTP API │
│ localhost:37777/api/search/* │
│ │
│ - Standard REST endpoints │
│ - JSON responses │
│ - Query parameter API │
│ - format=index/full support │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ SessionSearch + ChromaSync │
│ (Shared data layer) │
└─────────────────────────────────────────────────────────────┘
```
This works for external consumers because the HTTP API offers:
- Standard REST API
- Easy to consume from any language/platform
- Already supports format=index/full for token efficiency
- Well-documented in skill operation guides
- Clean JSON responses
---
## Architectural Recommendations
### Immediate Actions
#### 1. **Remove MCP Search Server** (Reduce Maintenance Burden)
**Problem:**
- MCP server is deprecated but still being built
- Adds 1,781 lines of maintenance burden
- Creates confusion about which search to use
- DRY violation with HTTP endpoints
**Recommendation:**
```bash
# Remove from build pipeline
# scripts/build-hooks.js - already commented out, make permanent
# Delete configuration
rm plugin/.mcp.json
# Archive source (don't delete, keep for reference)
git mv src/servers/search-server.ts archive/search-server.ts.archived
# Remove built file
rm plugin/scripts/search-server.mjs
```
**Impact:**
- ✅ Reduces build time
- ✅ Eliminates confusion
- ✅ Reduces maintenance burden
- ✅ Removes DRY violation
- ⚠️ Loses hybrid Chroma search in MCP (but HTTP doesn't have it anyway)
#### 2. **Add Hybrid Search to HTTP Endpoints** (Feature Parity)
**Problem:** MCP server has Chroma hybrid search, HTTP endpoints don't
**Recommendation:**
```typescript
// src/services/worker-service.ts
private async handleSearchObservations(req: Request, res: Response): Promise<void> {
  try {
    const { query, format, limit, project } = this.parseSearchParams(req);

    // Try hybrid search first if Chroma available
    let results = await this.hybridSearch(query, { limit, project });

    // Fallback to FTS5 if Chroma unavailable
    if (results.length === 0) {
      const sessionSearch = this.dbManager.getSessionSearch();
      results = sessionSearch.searchObservations(query, { limit, project });
    }

    res.json(this.formatSearchResponse(query, results, format));
  } catch (error) {
    this.handleSearchError(res, 'Search observations failed', error);
  }
}

// Extract shared methods
private parseSearchParams(req: Request): SearchParams { /* ... */ }
private async hybridSearch(query: string, options: SearchOptions): Promise<any[]> { /* ... */ }
private formatSearchResponse(query: string, results: any[], format: string): any { /* ... */ }
private handleSearchError(res: Response, message: string, error: any): void { /* ... */ }
```
**Impact:**
- ✅ Adds Chroma semantic search to HTTP API
- ✅ Makes HTTP API feature-complete
#### 3. **Extract Shared Search Logic** (DRY Refactoring)
**Problem:** 10 HTTP endpoints have duplicated parameter parsing and formatting
**Recommendation:**
```typescript
// src/services/search/SearchController.ts (new file)
export class SearchController {
  constructor(private sessionSearch: SessionSearch, private chromaSync: ChromaSync) {}

  async searchObservations(params: SearchParams): Promise<SearchResponse> {
    // Shared logic for observations search
    const results = await this.hybridSearch(params);
    return this.formatResponse(results, params.format);
  }

  async searchSessions(params: SearchParams): Promise<SearchResponse> {
    // Shared logic for sessions search
  }

  // ... other search methods

  private async hybridSearch(params: SearchParams): Promise<any[]> {
    // Shared hybrid search logic
  }

  private formatResponse(results: any[], format: "index" | "full"): SearchResponse {
    // Shared formatting logic
  }

  parseParams(req: Request): SearchParams {
    // Shared parameter parsing (public so route handlers can call it)
  }
}
```
**Usage in worker-service.ts:**
```typescript
private searchController: SearchController;

private async handleSearchObservations(req: Request, res: Response): Promise<void> {
  try {
    const params = this.searchController.parseParams(req);
    const response = await this.searchController.searchObservations(params);
    res.json(response);
  } catch (error) {
    this.handleSearchError(res, error);
  }
}
```
**Impact:**
- ✅ Eliminates 90% of duplication across 10 endpoints
- ✅ Single source of truth for search logic
- ✅ Easier to test (test controller, not HTTP layer)
- ✅ Easier to maintain
- ✅ Easier to add new search endpoints
### Long-term Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ Clients │
│ ┌──────────────┬──────────────┬──────────────────────┐ │
│ │ Skill │ Frontend │ (CLI, IDE plugins) │ │
│ └──────────────┴──────────────┴──────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│ HTTP API (REST)
┌─────────────────────────────────────────────────────────────┐
│ WorkerService (Express.js) │
│ │
│ Route Layer (thin) │
│ ├─ GET /api/search/observations │
│ ├─ GET /api/search/sessions │
│ └─ ... (delegates to controller) │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ SearchController (business logic) │
│ │
│ ├─ searchObservations() │
│ ├─ searchSessions() │
│ ├─ hybridSearch() - Chroma + FTS5 │
│ ├─ formatResponse() - index/full conversion │
│ └─ parseParams() - parameter validation │
└─────────────────────────────────────────────────────────────┘
┌───────────┴───────────┐
▼ ▼
┌──────────────────────────┐ ┌──────────────────────────┐
│ SessionSearch (FTS5) │ │ ChromaSync (Vectors) │
│ - searchObservations() │ │ - queryByEmbedding() │
│ - searchSessions() │ │ - 90-day recency filter │
│ - searchPrompts() │ │ - Hydrate from SQLite │
└──────────────────────────┘ └──────────────────────────┘
│ │
└─────────┬─────────────────┘
┌────────────────────────────────┐
│ SQLite Database │
│ ~/.claude-mem/claude-mem.db │
└────────────────────────────────┘
```
---
## Summary
### Current Architecture Issues
1. ❌ **Dual search implementations** (HTTP + deprecated MCP)
2. ❌ **DRY violations** across 19 search handlers
3. ❌ **MCP server still built** despite deprecation
4. ❌ **HTTP missing hybrid Chroma search** (MCP has it)
5. ❌ **No shared controller layer** for search logic
### Is curl the Best Approach?
**Yes.** ✅
Given the constraints:
- Skills can only execute shell commands
- Token efficiency vs MCP (~2,250 token savings)
- Standard REST pattern, easy to consume
curl + HTTP API is the optimal architecture.
### Is it Routing into search-service or DRY Violation?
**DRY violation.** ❌
Both architectures exist and duplicate logic:
- HTTP endpoints (worker-service.ts) ← ACTIVE
- MCP server (search-server.ts) ← DEPRECATED but BUILT
They share the data layer (SessionSearch) but duplicate:
- Parameter parsing
- Format conversion
- Error handling
- Search orchestration (MCP has Chroma, HTTP doesn't)
### Recommendations Priority
**High Priority:**
1. ✅ Remove MCP search server entirely (archive source)
2. ✅ Add hybrid Chroma search to HTTP endpoints
3. ✅ Extract SearchController for shared logic
**Medium Priority:**
4. Add API versioning (`/api/v1/search/*`)
5. Add rate limiting for external access
**Low Priority:**
6. OpenAPI/Swagger documentation
7. WebSocket support for real-time search
### Action Plan
**Phase 1: Cleanup (1 day)**
- Remove .mcp.json
- Archive search-server.ts
- Update CLAUDE.md to reflect removal
- Update build scripts to skip MCP server
**Phase 2: Feature Parity (2 days)**
- Port hybrid Chroma search from MCP to HTTP
- Test all 10 endpoints with hybrid search
- Update skill documentation
**Phase 3: DRY Refactoring (3 days)**
- Create SearchController class
- Extract shared logic (parsing, formatting, errors)
- Refactor 10 HTTP handlers to use controller
- Add comprehensive tests
- Document API for external consumption
- Add authentication/authorization (if needed)
- Add rate limiting
- Create OpenAPI spec
---
## Files Referenced
**Active:**
- `src/services/worker-service.ts` - HTTP endpoints (1,338 lines)
- `src/services/sqlite/SessionSearch.ts` - FTS5 search
- `src/services/sync/ChromaSync.ts` - Vector search
- `plugin/skills/mem-search/SKILL.md` - Skill using HTTP API
**Deprecated:**
- `src/servers/search-server.ts` - MCP tools (1,781 lines)
- `plugin/.mcp.json` - MCP configuration
- `plugin/scripts/search-server.mjs` - Built MCP server (74KB)
**Configuration:**
- `CLAUDE.md` line 314 - Deprecation notice
- `CHANGELOG.md` line 32-52 - v5.4.0 migration
- `scripts/build-hooks.js` - Build pipeline (MCP commented out)
-160
@@ -1,160 +0,0 @@
# Skill Audit Report
**Date:** 2025-11-10
**Validation:** Anthropic's official skill-creator documentation
**Skills Audited:** mem-search, search
## Executive Summary
The mem-search skill achieves 100% compliance across all dimensions. The search skill meets technical requirements but fails effectiveness metrics critical for auto-invocation.
**mem-search:** Production-ready. No changes required.
**search:** Requires three critical fixes before Claude reliably discovers and invokes this skill.
## mem-search Skill Results
**Status:** ✅ PASS
**Compliance:** 100% technical, 100% effectiveness
**Files:** 17 (202-line SKILL.md + 13 operations + 2 principles)
### Strengths
The skill demonstrates exemplary effectiveness engineering:
1. **Trigger Design (85% concrete)**
- Five unique identifiers: claude-mem, PM2-managed database, cross-session memory, session summaries, observations
- Nine scope differentiation keywords
- Explicit boundary: "NOT in the current conversation context"
- Minimal overlap with Claude's native capabilities
2. **Capability Visibility (100%)**
- All nine operations include inline "Use when" examples
- Decision guide reduces complexity from nine operations to five common cases
- No navigation friction
3. **Structure**
- 202 lines (60% under limit)
- Perfect progressive disclosure with token cost documentation
- Clean file organization: operations/ and principles/ directories
- No content duplication
### Issues
**One false positive:** Line 152 contains backslashes in regex notation `(bugfix\|feature\|decision)`. This documents parameter syntax, not Windows paths. No action required.
## search Skill Results
**Status:** ⚠️ NEEDS IMPROVEMENT
**Compliance:** 100% technical, 67% effectiveness
**Files:** 13 (96-line SKILL.md + 12 operations)
### Critical Effectiveness Issues
Three failures prevent reliable auto-invocation:
#### Issue 1: Insufficient Scope Differentiation
**Problem:** Description contains only two differentiation keywords (threshold: ≥3). Claude cannot distinguish this skill from native conversation memory.
**Current description:**
```text
Search claude-mem persistent memory for past sessions, observations, bugs
fixed, features implemented, decisions made, code changes, and previous work.
Use when answering questions about history, finding past decisions, or
researching previous implementations.
```
**Domain overlap analysis:**
- Claude answers natively: "What bugs did we fix?" (current conversation)
- Claude needs skill: "What bugs did we fix last week?" (external database)
**Fix required:**
```text
Search claude-mem's external database of past sessions, observations, and
work from previous conversations. Accesses persistent memory stored outside
current session context - NOT information from today's conversation. Use when
users ask about: (1) previous sessions ("what did we do last week?"),
(2) historical work ("bugs we fixed months ago"), (3) cross-session patterns
("how have we approached this before?"), (4) work already stored in claude-mem
("what's in the database about X?"). Searches FTS5 full-text index across
typed observations (bugfix/feature/refactor/decision/discovery). For current
session memory, use native conversation context instead.
```
This adds eight differentiation keywords: "external database", "past sessions", "previous conversations", "outside current session", "NOT information from today's", "last week", "months ago", "already stored in claude-mem".
#### Issue 2: Weak Trigger Specificity
**Problem:** Only 44% concrete triggers (threshold: >50%). Only one unique identifier (threshold: ≥2).
**Abstract triggers (low specificity):**
- "history" (could mean git history, browser history)
- "past work" (could mean files, commits, documents)
- "decisions" (could mean any decision tracking)
- "previous work" (could mean current session earlier)
- "implementations" (could mean code in current conversation)
**Concrete triggers (high specificity):**
- "claude-mem" (unique system name)
- "persistent memory" (system-specific)
- "sessions" (cross-session concept)
- "observations" (system-specific)
**Concrete ratio:** 4/9 = 44% (fails 50% threshold)
**Fix required:** Add system-specific terminology: "HTTP API", "port 37777", "FTS5 full-text index", "typed observations". See combined description in Issue 1 fix.
#### Issue 3: Wasted Content in Body
**Problem:** Lines 10-22 contain "When to Use This Skill" section in SKILL.md body. This loads AFTER triggering, wastes ~200 tokens, provides no value.
**Reference:** [Anthropic's skill-creator documentation](https://github.com/anthropics/anthropic-quickstarts/tree/main/skill-creator) states: "The body is only loaded after triggering, so 'When to Use This Skill' sections in the body are not helpful to Claude."
**Fix required:** Delete lines 10-22 entirely. Move triggering examples to description field (already included in Issue 1 fix).
### Strengths
The skill demonstrates strong structure:
- Excellent progressive disclosure (96-line navigation hub)
- Strong decision guide (reduces 10 operations to common cases)
- 100% capability visibility (all operations show purpose inline)
- No forbidden files or content duplication
- Clean operations/ directory structure
### Warning
**Minor:** Description uses imperative "Use when" instead of third person. Change to "Useful for" or "Invoked when" for consistency with skill-creator best practices.
## Comparison
| Metric | mem-search | search | Impact |
|--------|-----------|---------|--------|
| Concrete triggers | 85% | 44% | search harder to discover |
| Unique identifiers | 5+ | 1 | search less distinct |
| Scope differentiation | 9 keywords | 2 keywords | **search conflicts with native memory** |
| Body optimization | Clean | Wasted section | search wastes tokens |
| Overall effectiveness | 100% | 67% | search needs fixes |
## Critical Recommendations
The search skill requires three changes before production use:
1. **Rewrite description** to add scope differentiation and concrete triggers (see Issue 1 fix)
2. **Delete lines 10-22** from SKILL.md body
3. **Convert to third person** - change "Use when" to "Useful for"
**Why this matters:** Without scope differentiation, Claude assumes "What bugs did we fix?" refers to current conversation, not the external claude-mem database. This causes systematic under-invocation.
## Reference Implementation
The mem-search skill serves as a reference implementation for:
- Trigger design with explicit scope boundaries
- Progressive disclosure with token efficiency documentation
- Inline capability visibility eliminating navigation friction
- Decision guides reducing cognitive load
Study mem-search when creating skills that overlap with Claude's native capabilities.
@@ -1,467 +0,0 @@
# Transcript: 57dcc12f-4751-46bb-82b4-2aa96a3e226d.jsonl
**Generated:** 11/13/2025, 3:58:53 PM
**Total Entries:** 448
**Entry Types:** {
"summary": 149,
"file-history-snapshot": 20,
"user": 98,
"assistant": 181
}
**Showing:** First 10 conversation turns
---
## Turn 1
### 👤 User
**Timestamp:** 2025-11-13T17:09:47.231Z
**UUID:** 5c4b246c-2100-4e6b-a275-f0bb62bba020
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
I have a question regarding the context timeline that we made. and it's a bit of a thinker...
when i first started out working on claude-mem my idea was using a fast index of memories with links to access them on demand. I thought that claude-mem would be searching for all the things it needed to get the job done, but what I found is that the description of the work done + the files touched actually improved performance tremendously, and without actively searching for things constantly.
I just had a thought that maybe claude isn't searching, because when it reads what it did, it already "knows" how it did that, for the most part. Would you say I'm on the right track? I want you to ultrathink about WHY claude-mem's session start context works so well, from your OWN perspective
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:10:28.576Z
**UUID:** 1dd9591b-c98b-4031-ac0a-2f7e067b427a
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> This is a fascinating question about the fundamental mechanics of how LLMs process context and why claude-mem's approach is working so well. The user is asking me to think deeply about WHY the session start context (descriptions + file paths) works better than active searching.
>
> Let me think about this from my own perspective as Claude...
>
> The user's hypothesis is: "Claude isn't searching because when it reads what it did, it already 'knows' how it did that."
>
> This is a really interesting observation. Let me think about what happens when I see context at session start vs when I actively search:
>
> 1. **Session Start Context**: I see summaries like "Fixed authentication bug in src/auth/login.ts by updating token validation logic" with file paths
> 2. **Active Search**: I use grep/search tools to find specific code or patterns
>
> The key insight the user is pointing to: When I see the session summary, it's triggering my training knowledge about how these kinds of problems are typically solved. The description + file path combo is enough to reconstruct the likely solution pattern.
>
> Let me ultrathink about this...
>
> **Information Density & Compression**:
> - A summary like "Added JWT token refresh logic to src/auth/token-manager.ts" is highly compressed but information-rich
> - It contains: the pattern (JWT refresh), the location (token-manager), and the action type (added)
> - This maps to my training data about how JWT refresh is typically implemented
> - The file path gives architectural context about where this lives in the codebase
>
> **Pattern Recognition vs Literal Recall**:
> - I don't need to see the exact code to "know" what was probably done
> - JWT token refresh follows common patterns that I've seen thousands of times in training
> - The summary activates these patterns in my understanding
> - It's like how a human developer reading "fixed the N+1 query problem" immediately knows the general shape of the solution without seeing the code
>
> **Contextual Scaffolding**:
> - File paths provide architectural scaffolding: "src/hooks/context-hook.ts" tells me this is a hook in a hooks directory
>
>
> - The summary provides semantic scaffolding: what was accomplished
> - Together they form a mental model of the codebase without needing to read every line
>
> **Search is for Unknowns, Summaries are for Knowns**:
> - I search when I don't know something: "Where is error handling done?"
> - I don't need to search when the summary tells me: "Enhanced error handling in src/services/worker-service.ts"
> - The summary has already done the work of locating and describing the relevant information
>
> **The Compression-Decompression Cycle**:
> - The AI compression (creating observations) is like creating a highly optimized index
> - When I encounter a summary, I'm essentially decompressing that index into actionable knowledge
> - This process is far more efficient than searching through raw code or logs
>
> The summaries act like cognitive shortcuts, instantly reconstructing complex contextual understanding through minimal, strategic information.
>
> **Why This Works Better Than Constant Searching**:
> 1. Drastically reduces mental overhead of navigating complex systems
> 2. Enables rapid pattern recognition across different code segments
> 3. Provides architectural insights without deep diving into implementation details
> 4. Minimizes computational and cognitive resources spent on exploration
> 5. Delivers pre-filtered, contextually relevant information
>
> **The "Already Knows" Hypothesis**:
> I recognize complex technical changes instantly through pattern matching. When encountering a concise code modification description, my training allows immediate comprehension by activating relevant technical knowledge domains. The summary triggers an intuitive understanding of systemic implications, similar to how experienced developers quickly grasp architectural shifts through minimal context.
>
> **The Real Magic**:
> Leveraging deep pattern recognition capabilities, I can reconstruct technical landscapes from minimal descriptive signals. File paths and succinct change descriptions become powerful activation keys that unlock comprehensive understanding without exhaustive code examination.
>
> **Why Links-On-Demand Falls Short**:
> Retrieving information through multiple steps disrupts cognitive flow. Contextual understanding emerges more naturally through continuous narrative, where each detail builds upon previous insights. Searching and parsing interrupt the organic comprehension process, making knowledge integration less fluid and intuitive.
>
> **The Timeline Innovation**:
> Project evolution unfolds like a narrative, with each session representing a chapter in technological development. Temporal relationships between changes reveal deeper systemic transformations, mirroring how humans conceptualize complex system growth through interconnected stories of incremental progress.
>
> The essence lies in providing compressed, meaningful representations that leverage my inherent training knowledge. By presenting contextual frameworks that enable rapid pattern recognition, understanding emerges more elegantly than through fragmented, manually retrieved information.
**📊 Token Usage:**
- Input: 10
- Output: 1495
- Cache creation: 28036
---
## Turn 2
### 👤 User
**Timestamp:** 2025-11-13T17:10:31.963Z
**UUID:** 364676a7-51c3-4036-afc3-7ff8f7301a8f
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_01T477WUra1sDR6gHaqZHhKT`
```json
{
"thoughtNumber": 1,
"totalThoughts": 8,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 1
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:10:35.939Z
**UUID:** 201d5077-aae8-4e26-9c5e-6a63f1ca88fa
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> Good, I'm using the sequential thinking tool to really dig into this. Let me continue with the next thought about information density.
**📊 Token Usage:**
- Input: 12
- Output: 256
- Cache creation: 1529
- Cache read: 28036
---
## Turn 3
### 👤 User
**Timestamp:** 2025-11-13T17:10:38.354Z
**UUID:** 6c2f5da2-6a5a-4742-a46d-6dd5d193ad39
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_01QMMWEJm8yH25VtBmAzYbmy`
```json
{
"thoughtNumber": 2,
"totalThoughts": 8,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 2
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:10:41.841Z
**UUID:** 3e6f3e78-5c12-4739-a7aa-a8c6fc9d2a33
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> Good, now let me think about the difference between pattern recognition and literal recall.
**📊 Token Usage:**
- Input: 11
- Output: 262
- Cache creation: 323
- Cache read: 29565
---
## Turn 4
### 👤 User
**Timestamp:** 2025-11-13T17:10:45.430Z
**UUID:** f99751a0-b13a-446a-9b3b-dfc08ffb8e5b
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_01JwNsDV2HcPnDE2eFJdG7zF`
```json
{
"thoughtNumber": 3,
"totalThoughts": 8,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 3
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:10:48.954Z
**UUID:** 11c28930-0797-4f9e-ada8-5eaa27c70b47
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> Now let me think about the architectural scaffolding aspect.
**📊 Token Usage:**
- Input: 10
- Output: 260
- Cache creation: 329
- Cache read: 29888
---
## Turn 5
### 👤 User
**Timestamp:** 2025-11-13T17:10:53.302Z
**UUID:** 6f013ecd-a0f0-4bb9-8eca-e6e8d3f80a53
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_016nFXyGw4SsdKfhUb9eBtkv`
```json
{
"thoughtNumber": 4,
"totalThoughts": 8,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 4
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:10:56.583Z
**UUID:** c6d275b3-7f28-4bdd-8a81-80f1a73ffbc7
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> Now let me think about why this works better than active searching.
**📊 Token Usage:**
- Input: 9
- Output: 258
- Cache creation: 327
- Cache read: 30217
---
## Turn 6
### 👤 User
**Timestamp:** 2025-11-13T17:11:00.306Z
**UUID:** 885fbc9f-e767-4a16-a8aa-b8bb54849d67
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_013QydhPuiBKVRHh1vbD8eL8`
```json
{
"thoughtNumber": 5,
"totalThoughts": 8,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 5
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:11:04.430Z
**UUID:** 780354e4-abbf-4d97-b1e2-4f3abc5aef8f
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> Now let me think about the compression-decompression cycle - this is a key insight.
**📊 Token Usage:**
- Input: 8
- Output: 264
- Cache creation: 325
- Cache read: 30544
---
## Turn 7
### 👤 User
**Timestamp:** 2025-11-13T17:11:07.748Z
**UUID:** 462efec2-17c3-47f7-9171-622e00f6bbf7
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_01Wa5zzkzrYqf5Cfb5UZi5H6`
```json
{
"thoughtNumber": 6,
"totalThoughts": 8,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 6
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:11:12.931Z
**UUID:** 325a8179-8291-4198-a661-eae8f02d57b2
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> The user wants me to continue with the sequential thinking about why claude-mem's session start context works so well. I should continue the thinking process and not mention the todo reminder since this is a theoretical discussion, not a task execution.
**📊 Token Usage:**
- Input: 10
- Output: 328
- Cache creation: 1883
- Cache read: 28036
---
## Turn 8
### 👤 User
**Timestamp:** 2025-11-13T17:11:17.712Z
**UUID:** 189d903f-a901-465f-bd6b-8addf44cd5f5
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_015VKanT9FxnCSsZiTduHd2y`
```json
{
"thoughtNumber": 7,
"totalThoughts": 9,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 7
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:11:21.820Z
**UUID:** 96ca3f03-f889-454d-a13a-96e7a36a3daa
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> I want to add one more thought about the temporal/chronological aspect and how that creates a narrative structure that aids understanding.
**📊 Token Usage:**
- Input: 12
- Output: 290
- Cache creation: 392
- Cache read: 29919
---
## Turn 9
### 👤 User
**Timestamp:** 2025-11-13T17:11:26.591Z
**UUID:** cfd56a68-8628-4811-9df5-75a8e557bec6
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_01EWtaz38nageXXEjYzzm2hu`
```json
{
"thoughtNumber": 8,
"totalThoughts": 9,
"nextThoughtNeeded": true,
"branches": [],
"thoughtHistoryLength": 8
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:11:30.749Z
**UUID:** bb106bb5-9696-4538-b810-d9fa56bcffff
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** tool_use
**💭 Thinking:**
> Now for the synthesis - bringing all these threads together to answer the user's question about WHY it works so well from my own perspective.
**📊 Token Usage:**
- Input: 11
- Output: 360
- Cache creation: 357
- Cache read: 30311
---
## Turn 10
### 👤 User
**Timestamp:** 2025-11-13T17:11:36.692Z
**UUID:** 0aa4f55d-9f5a-485f-9aef-dd90f9e80391
**Session ID:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d
**CWD:** /Users/alexnewman/Scripts/claude-mem
**Tool Results Submitted (1):**
- **Tool Use ID:** `toolu_01R5Sy5zet7JCPmXhvDrkMAR`
```json
{
"thoughtNumber": 9,
"totalThoughts": 9,
"nextThoughtNeeded": false,
"branches": [],
"thoughtHistoryLength": 9
}
```
### 🤖 Assistant
**Timestamp:** 2025-11-13T17:11:43.435Z
**UUID:** 2a5c5c37-9124-441f-ac1f-3f9bff08281c
**Model:** claude-sonnet-4-5-20250929
**Stop Reason:** end_turn
**💭 Thinking:**
> This is a fascinating philosophical question about how I process information. The user is asking me to think deeply about WHY claude-mem's session start context works so well from MY perspective as an LLM. They've noticed something important: I don't seem to need to actively search when I have good summaries, and they're hypothesizing it's because the summaries trigger pattern recognition rather than requiring literal recall.
>
> This is not a task-oriented request - it's a meta-cognitive question about how I think. I should not use TodoWrite here as this is a philosophical discussion, not a task to track.
>
> The sequential thinking tool has helped me organize my thoughts. Let me now synthesize this into a clear response about my actual experience.
**📊 Token Usage:**
- Input: 10
- Output: 936
- Cache creation: 1140
- Cache read: 29919
---
*... 87 more turns not shown*
-632
@@ -1,632 +0,0 @@
# Transcript Data Analysis: Available Context for Memory Worker
**Generated:** 2025-11-13
**Purpose:** Document what contextual data exists in Claude Code transcripts and identify opportunities to improve memory worker observation generation.
---
## Executive Summary
**Current State:** The memory worker receives isolated tool executions via `save-hook.ts`:
- Tool name
- Tool input (parameters)
- Tool output (results)
**Available in Transcripts:** Rich contextual data that could dramatically improve observation quality:
- User's original request/intent
- Assistant's reasoning (thinking blocks)
- Full conversation context
- Tool result data
- Token usage and performance metrics
- Session metadata (timestamps, UUIDs, CWD)
**Recommendation:** Enhance the memory worker to receive full conversation context for each tool execution, not just isolated tool data.
---
## Transcript Structure
### Entry Types
The transcript file (`~/.claude/projects/-{project}/session-id.jsonl`) contains:
```
- summary entries (149 in sample)
- file-history-snapshot entries (18 in sample)
- user entries (86 in sample)
- assistant entries (155 in sample)
```
### Conversation Turn Pattern
Each conversation turn consists of:
1. **User Entry** - User's request
2. **Assistant Entry** - Assistant's response
3. **User Entry** - Tool results submitted back (automatic)
4. **Assistant Entry** - Assistant processes results and continues
This creates a pattern: User → Assistant → User (tool results) → Assistant (continues) → ...
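Walking this JSONL structure takes only a few lines. A minimal parsing sketch — `TranscriptEntry` here is an illustrative stand-in for the full entry interfaces documented below:

```typescript
// Minimal sketch of parsing a transcript .jsonl file. TranscriptEntry is
// an illustrative stand-in for the full entry interfaces documented below.
type TranscriptEntry = { type: string; [key: string]: unknown };

function parseTranscript(jsonl: string): TranscriptEntry[] {
  return jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as TranscriptEntry);
}

// Reproduces the per-type entry counts shown above.
function countByType(entries: TranscriptEntry[]): Record<string, number> {
  const counts: Record<string, number> = {};
  for (const e of entries) counts[e.type] = (counts[e.type] ?? 0) + 1;
  return counts;
}

const sample = [
  JSON.stringify({ type: "user", message: { role: "user", content: "Fix the bug" } }),
  JSON.stringify({ type: "assistant", message: { role: "assistant", content: [] } }),
  JSON.stringify({ type: "user", message: { role: "user", content: [] } }),
].join("\n");

console.log(countByType(parseTranscript(sample))); // → { user: 2, assistant: 1 }
```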
---
## Available Data by Entry Type
### 1. User Entries
**Current Save-Hook Access:**
- Tool name
- Tool input
- Tool output
**Additional Data Available in User Entries:**
```typescript
interface UserTranscriptEntry {
type: 'user';
timestamp: string; // ISO timestamp
uuid: string; // Unique entry ID
sessionId: string; // Session identifier
cwd: string; // Working directory
parentUuid?: string; // Parent entry reference
isSidechain: boolean; // Is this a side conversation?
userType: string; // 'human' or 'system'
version: string; // Claude Code version
message: {
role: 'user';
content: string | ContentItem[]; // Can be text or structured
};
toolUseResult?: ToolUseResult; // Legacy field, may contain results
}
```
**When `content` is an array, it contains:**
- Text blocks with user's actual request
- Tool result blocks with complete output data
**Example Structure:**
```json
{
"type": "user",
"timestamp": "2025-11-13T17:10:31.963Z",
"uuid": "364676a7-51c3-4036-afc3-7ff8f7301a8f",
"sessionId": "57dcc12f-4751-46bb-82b4-2aa96a3e226d",
"cwd": "/Users/alexnewman/Scripts/claude-mem",
"message": {
"role": "user",
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01T477WUra1sDR6gHaqZHhKT",
"content": "[actual tool output data]"
}
]
}
}
```
### 2. Assistant Entries
**Current Save-Hook Access:**
- Nothing from assistant entries (they happen after tool execution)
**Available Data in Assistant Entries:**
```typescript
interface AssistantTranscriptEntry {
type: 'assistant';
timestamp: string;
uuid: string;
sessionId: string;
cwd: string;
parentUuid?: string;
isSidechain: boolean;
userType: string;
version: string;
requestId?: string; // API request ID
message: {
id: string;
type: 'message';
role: 'assistant';
model: string; // e.g., "claude-sonnet-4-5-20250929"
content: ContentItem[]; // Array of content blocks
stop_reason?: string; // 'tool_use' | 'end_turn' | etc.
stop_sequence?: string;
usage?: UsageInfo; // Token usage stats
};
}
```
**Content Block Types in `message.content`:**
1. **Thinking Blocks** - Internal reasoning before acting
```typescript
{
type: 'thinking';
thinking: string; // Full reasoning text
signature?: string;
}
```
2. **Text Blocks** - Assistant's visible response
```typescript
{
type: 'text';
text: string; // Response text
}
```
3. **Tool Use Blocks** - Tool invocations
```typescript
{
type: 'tool_use';
id: string; // Tool use ID
name: string; // Tool name (e.g., 'Read', 'Edit')
input: Record<string, any>; // Complete tool parameters
}
```
**Token Usage Data:**
```typescript
interface UsageInfo {
input_tokens?: number;
output_tokens?: number;
cache_creation_input_tokens?: number;
cache_read_input_tokens?: number;
service_tier?: string;
}
```
### 3. Summary Entries
```typescript
interface SummaryTranscriptEntry {
type: 'summary';
summary: string; // Generated summary text
leafUuid: string; // UUID of summarized entry
cwd?: string;
}
```
These appear frequently (149 in sample) and provide high-level summaries of work done.
---
## Data Flow: Current vs Potential
### Current Flow (Save-Hook Only)
```
User: "Fix the bug in login.ts"
Assistant: [uses Edit tool]
Tool Execution: Edit(file_path: "login.ts", old_string: "...", new_string: "...")
Save-Hook receives:
- toolName: "Edit"
- toolInput: { file_path: "login.ts", old_string: "...", new_string: "..." }
- toolOutput: { success: true }
Memory Worker generates observation from ONLY tool data
- No user intent
- No assistant reasoning
- No context about WHY this change was made
```
### Enhanced Flow (With Transcript Context)
```
User: "Fix the authentication bug - users getting logged out randomly"
Assistant (thinking): "This sounds like a token expiration issue.
Let me check the JWT handling in login.ts..."
Assistant (uses Edit tool)
Save-Hook receives:
- toolName: "Edit"
- toolInput: { file_path: "login.ts", ... }
- toolOutput: { success: true }
- PLUS:
- userRequest: "Fix the authentication bug - users getting logged out randomly"
- assistantReasoning: "This sounds like a token expiration issue..."
- conversationContext: Previous 2-3 turns
- sessionMetadata: { cwd, timestamp, sessionId }
Memory Worker generates richer observation:
- "Fixed authentication bug causing random logouts"
- "Problem: JWT tokens expiring too quickly"
- "Solution: Updated token expiration to 24h in login.ts"
- "Files: src/auth/login.ts"
- "Concepts: authentication, token-management, bugfix"
```
---
## Specific Opportunities
### 1. User Intent Extraction
**Problem:** Current observations lack user intent.
**Solution:** Parse the most recent user text entry before the tool execution.
**Implementation:**
- Walk backward from tool execution entry
- Find first user entry with text content
- Extract text blocks (filter out tool_result blocks)
**Example:**
```typescript
// In save-hook.ts
const userEntries = parser.getUserEntries();
const recentUserMessage = findUserMessageBeforeTool(userEntries, toolExecutionTimestamp);
const userIntent = extractTextFromContent(recentUserMessage.content);
```
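The two helpers referenced above are sketches, not existing code. Illustrative implementations, with types simplified from the interfaces documented earlier — entries are assumed chronological, and ISO-8601 timestamps compare correctly as strings:

```typescript
// Illustrative implementations of findUserMessageBeforeTool and
// extractTextFromContent — sketches, not existing code. Types are
// simplified from the interfaces documented earlier.
type ContentItem = { type: string; text?: string };
type UserEntry = {
  type: "user";
  timestamp: string;
  message: { content: string | ContentItem[] };
};

function findUserMessageBeforeTool(
  entries: UserEntry[],
  toolTimestamp: string,
): UserEntry | undefined {
  // Scan backward for the newest entry before the tool execution
  // that carries actual user text (not just tool_result blocks).
  for (let i = entries.length - 1; i >= 0; i--) {
    const e = entries[i];
    if (e.timestamp >= toolTimestamp) continue;
    const c = e.message.content;
    if (typeof c === "string" || c.some((b) => b.type === "text")) return e;
  }
  return undefined;
}

function extractTextFromContent(content: string | ContentItem[]): string {
  if (typeof content === "string") return content;
  return content.filter((b) => b.type === "text").map((b) => b.text ?? "").join("\n");
}
```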
### 2. Assistant Reasoning
**Problem:** We don't capture WHY the assistant chose to use a tool.
**Solution:** Extract thinking blocks from assistant entry immediately before tool use.
**Implementation:**
- Find assistant entry that contains the tool_use block
- Extract thinking blocks from same entry
- Include first ~500 chars of thinking in observation context
**Example:**
```typescript
const assistantEntry = findAssistantEntryWithToolUse(toolUseId);
const thinkingBlocks = assistantEntry.message.content.filter(c => c.type === 'thinking');
const reasoning = thinkingBlocks.map(b => b.thinking).join('\n');
```
### 3. Tool Results Context
**Problem:** Tool output alone doesn't show what was found or changed.
**Solution:** Access full tool result content from next user entry.
**Implementation:**
- Tool execution happens in assistant entry
- Results come back in next user entry as tool_result content
- Save-hook can access both
**Current Structure:**
```
Assistant Entry:
{ type: 'tool_use', id: 'toolu_123', name: 'Read', input: {...} }
User Entry (automatic):
{ type: 'tool_result', tool_use_id: 'toolu_123', content: "file contents..." }
```
**Opportunity:** Match tool_use_id to tool_result and include full result content.
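That id-based pairing could look roughly like this — block types are simplified from the content-block interfaces documented above:

```typescript
// Sketch of the id-based pairing; block types simplified from the
// content-block interfaces documented above.
interface ToolUseBlock { type: "tool_use"; id: string; name: string; input: Record<string, unknown> }
interface ToolResultBlock { type: "tool_result"; tool_use_id: string; content: unknown }

function matchToolResult(
  toolUse: ToolUseBlock,
  nextUserContent: ToolResultBlock[],
): ToolResultBlock | undefined {
  // The result for a tool call carries the same id the tool_use block declared.
  return nextUserContent.find((block) => block.tool_use_id === toolUse.id);
}
```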
### 4. Conversation Context
**Problem:** Isolated tool executions miss the larger conversation flow.
**Solution:** Include last N conversation turns (2-3 turns is usually sufficient).
**Implementation:**
- Get entries from transcript within time window (e.g., last 5 minutes)
- Include user messages and assistant text responses
- Exclude thinking blocks to save tokens
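The three steps above can be sketched as follows. Everything here is hypothetical (entry shapes, the 5-minute window, the helper name); only `TranscriptParser` exists today:

```typescript
interface TurnEntry {
  type: 'user' | 'assistant';
  timestamp: string;
  message: { content: string | Array<{ type: string; text?: string }> };
}

const WINDOW_MS = 5 * 60 * 1000; // last 5 minutes, per the notes above

function getRecentConversationTurns(entries: TurnEntry[], now: number, maxTurns = 3): string[] {
  const recent = entries.filter(e => now - Date.parse(e.timestamp) <= WINDOW_MS);
  const lines: string[] = [];
  for (const e of recent) {
    const { content } = e.message;
    // Keep user text and assistant text; skip thinking/tool blocks to save tokens.
    const text = typeof content === 'string'
      ? content
      : content.filter(c => c.type === 'text').map(c => c.text ?? '').join(' ');
    if (text.trim()) lines.push(`${e.type}: ${text}`);
  }
  return lines.slice(-maxTurns * 2); // a turn ≈ one user line + one assistant line
}
```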
**Example Context:**
```
Turn 1:
User: "I need to add dark mode support"
Assistant: "I'll help you add dark mode. Let me start by..."
Turn 2:
User: [tool results]
Assistant: "Now I'll update the theme configuration..."
Turn 3: [current tool execution]
```
### 5. Session Metadata
**Problem:** Observations lack temporal and project context.
**Solution:** Include session metadata in observation generation.
**Available Fields:**
- `cwd` - Working directory (project path)
- `timestamp` - Exact time of execution
- `sessionId` - Session identifier
- `uuid` - Entry identifier
- `version` - Claude Code version
**Use Case:** Helps with project-specific context and temporal queries.
### 6. Token Usage Metrics
**Problem:** No visibility into performance and cost.
**Solution:** Track token usage per observation.
**Available Data:**
- Input tokens
- Output tokens
- Cache creation tokens
- Cache read tokens
**Use Case:**
- Performance monitoring
- Cost attribution
- Cache effectiveness analysis
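Aggregating per-message usage into session totals is straightforward. A sketch, assuming the Anthropic-style usage field names (the function itself is illustrative, not existing code):

```typescript
interface UsageInfo {
  input_tokens?: number;
  output_tokens?: number;
  cache_creation_input_tokens?: number;
  cache_read_input_tokens?: number;
}

// Sum usage across all assistant entries in a session.
function totalUsage(usages: UsageInfo[]): Required<UsageInfo> {
  const total = {
    input_tokens: 0,
    output_tokens: 0,
    cache_creation_input_tokens: 0,
    cache_read_input_tokens: 0,
  };
  for (const u of usages) {
    total.input_tokens += u.input_tokens ?? 0;
    total.output_tokens += u.output_tokens ?? 0;
    total.cache_creation_input_tokens += u.cache_creation_input_tokens ?? 0;
    total.cache_read_input_tokens += u.cache_read_input_tokens ?? 0;
  }
  return total;
}
```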
---
## Recommended Implementation Strategy
### Phase 1: User Intent (High Impact, Low Effort)
**Change:** Modify save-hook to extract user's most recent message.
**Implementation:**
```typescript
// In save-hook.ts
import { TranscriptParser } from '../utils/transcript-parser';
const parser = new TranscriptParser(transcriptPath);
const userIntent = parser.getLastUserMessage();
// Send to worker
await workerService.saveToolExecution({
...existingData,
userIntent, // NEW
});
```
**Impact:** Observations now include "what the user wanted to do".
### Phase 2: Assistant Reasoning (High Impact, Medium Effort)
**Change:** Extract thinking blocks from assistant entry containing tool use.
**Implementation:**
```typescript
const assistantEntries = parser.getAssistantEntries();
const toolUseEntry = findEntryWithToolUse(assistantEntries, toolUseId);
const thinking = extractThinkingBlocks(toolUseEntry);
await workerService.saveToolExecution({
...existingData,
userIntent,
assistantReasoning: thinking, // NEW
});
```
**Impact:** Observations include "why the assistant chose this approach".
### Phase 3: Conversation Context (Medium Impact, High Effort)
**Change:** Include last 2-3 conversation turns.
**Implementation:**
```typescript
const recentTurns = getRecentConversationTurns(parser, 3);
await workerService.saveToolExecution({
...existingData,
userIntent,
assistantReasoning: thinking,
conversationContext: recentTurns, // NEW
});
```
**Impact:** Observations understand multi-turn workflows.
### Phase 4: Enhanced Metadata (Low Impact, Low Effort)
**Change:** Include session and performance metadata.
**Implementation:**
```typescript
await workerService.saveToolExecution({
...existingData,
userIntent,
assistantReasoning: thinking,
conversationContext: recentTurns,
metadata: { // NEW
cwd: entry.cwd,
timestamp: entry.timestamp,
sessionId: entry.sessionId,
tokenUsage: entry.message.usage,
},
});
```
**Impact:** Better analytics and debugging.
---
## Example: Before and After
### Current Observation (Tool Data Only)
```json
{
"type": "feature",
"title": "Updated login.ts",
"narrative": "Modified authentication logic in src/auth/login.ts",
"files": ["src/auth/login.ts"],
"concepts": ["authentication"],
"facts": []
}
```
### Enhanced Observation (With Transcript Context)
```json
{
"type": "bugfix",
"title": "Fixed authentication bug causing random logouts",
"narrative": "Users were experiencing random logouts due to JWT token expiration. Updated token expiration from 1h to 24h in token validation logic. Modified src/auth/login.ts to use longer-lived tokens and improved error handling for expired tokens.",
"files": ["src/auth/login.ts"],
"concepts": ["authentication", "jwt", "token-management", "bugfix"],
"facts": [
"JWT token expiration was too short (1h)",
"Updated expiration to 24h",
"Added error handling for expired tokens"
]
}
```
**Improvement:**
- Clear problem statement
- Explicit solution
- Specific technical details
- Better concept tagging
- Actionable facts
---
## Technical Considerations
### 1. Performance
**Concern:** Parsing entire transcript on every tool execution.
**Solution:**
- TranscriptParser already loads full file (unavoidable)
- Use caching for transcript parsing within same session
- Only parse once per session, reuse parsed entries
**Benchmark:**
- Current: ~10ms to parse 408-line transcript
- Impact: Negligible (save-hook already reads transcript)
### 2. Token Usage
**Concern:** Sending more context to worker increases tokens.
**Solution:**
- Thinking blocks: Limit to first 500 chars
- Conversation context: Only last 2-3 turns
- Tool results: Truncate large outputs to 500 chars
- User intent: Full text (usually short)
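A one-liner covers all four truncation rules above. This helper is an assumption (it does not exist in the codebase); the suffix marker tells the observation model that content was cut:

```typescript
// Truncate text to a character budget, marking how much was dropped.
function truncate(text: string, maxChars: number): string {
  return text.length <= maxChars
    ? text
    : text.slice(0, maxChars) + `… [+${text.length - maxChars} chars]`;
}
```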
**Estimate:**
- Current: ~200 tokens per observation generation
- Enhanced: ~500 tokens per observation generation
- Increase: ~150%
- Cost: Still < $0.001 per observation with Haiku
### 3. Implementation Complexity
**Concern:** Matching tool executions to transcript entries.
**Solution:**
- Tool use IDs are in both places
- Timestamps provide ordering
- UUID chains provide parent-child relationships
**Example Matching:**
```typescript
function findToolContext(parser: TranscriptParser, toolUseId: string) {
// 1. Find assistant entry with tool_use block
const assistantEntry = parser.getAssistantEntries()
.find(entry =>
entry.message.content.some(c =>
c.type === 'tool_use' && c.id === toolUseId
)
);
// 2. Find next user entry with tool_result
const userEntry = parser.getUserEntries()
.find(entry =>
entry.message.content.some(c =>
c.type === 'tool_result' && c.tool_use_id === toolUseId
)
);
return { assistantEntry, userEntry };
}
```
---
## Next Steps
1. **Validate Approach**
- Review this analysis with project team
- Confirm data availability in all transcript scenarios
- Identify any privacy concerns
2. **Implement Phase 1**
- Update save-hook.ts to extract user intent
- Modify worker service to accept new fields
- Update observation prompt to use user intent
3. **Test and Measure**
- Compare observation quality before/after
- Measure token usage increase
- Validate performance impact
4. **Iterate**
- Roll out Phase 2 (assistant reasoning)
- Roll out Phase 3 (conversation context)
- Monitor improvements at each phase
---
## Appendix: Data Samples
### Complete Markdown Representation
See `/Users/alexnewman/Scripts/claude-mem/docs/context/transcript-complete-readable.md` for a full 1:1 markdown representation of the first 10 conversation turns from the sample transcript, including:
- Complete user messages
- Full assistant responses
- Thinking blocks (truncated to 2000 chars)
- Tool uses with complete input JSON
- Tool results with actual output data (truncated to 500 chars)
- Token usage stats
- All metadata (timestamps, UUIDs, session IDs, CWD)
### Sample Tool Result Structure
```typescript
// User entry containing tool result
{
"type": "user",
"message": {
"content": [
{
"type": "tool_result",
"tool_use_id": "toolu_01T477WUra1sDR6gHaqZHhKT",
"content": [
{
"type": "text",
"text": "{\n \"thoughtNumber\": 1,\n \"totalThoughts\": 8,\n \"nextThoughtNeeded\": true,\n \"branches\": [],\n \"thoughtHistoryLength\": 1\n}"
}
]
}
]
}
}
```
---
## Conclusion
The Claude Code transcript files contain a wealth of contextual data that is currently unused by the memory worker. By extracting:
1. User intent (the "what" and "why")
2. Assistant reasoning (the "how" and "because")
3. Tool results (the "outcome")
4. Conversation context (the "flow")
5. Session metadata (the "when" and "where")
We can generate significantly richer, more useful observations that better capture the intent, decisions, and outcomes of each coding session.
**The data is already there - we just need to read it.**
-234
@@ -1,234 +0,0 @@
# Claude Code Transcript Data Discovery
## Executive Summary
This document details findings from implementing a validated transcript parser for Claude Code JSONL transcripts. The parser enables extraction of rich contextual data that can optimize prompt generation and track token usage for ROI metrics.
## Transcript Structure
### File Location
```
~/.claude/projects/<encoded-project-path>/<session-id>.jsonl
```
Example:
```
~/.claude/projects/-Users-alexnewman-Scripts-claude-mem/2933cff9-f0a7-4f0b-8296-0a030e7658a6.jsonl
```
### Entry Types
Discovered 6 transcript entry types:
1. **`file-history-snapshot`** (NEW - not in Python model)
- Purpose: Track file state snapshots
- Frequency: ~10 entries per session
2. **`user`** - User messages and tool results
- Contains actual user text messages OR tool result data
- Can have string content or array of ContentItems
3. **`assistant`** - Assistant responses and tool uses
- Contains text responses, tool uses, and thinking blocks
- **Critical**: Contains usage data with token counts
4. **`summary`** (not yet observed in test data)
- Session summaries
5. **`system`** (not yet observed in test data)
- System messages/warnings
6. **`queue-operation`** (not yet observed in test data)
- Queue tracking for message flow
## Key Findings
### 1. Message Extraction Complexity
**Problem**: Naively getting the "last" entry doesn't work because:
- Last user entry might be a tool result, not a text message
- Last assistant entry might only contain tool uses, no text
**Solution**: Iterate backward through entries to find the last entry with actual text content.
### 2. Tool Use Tracking
**Discovery**: Tool uses are in **assistant** messages, not user messages.
**Data Available**:
```typescript
{
name: string; // Tool name (e.g., "Bash", "Read", "TodoWrite")
timestamp: string; // When the tool was used
input: any; // Full tool input parameters
}
```
**Test Session Results** (168 entries):
- 42 tool uses across 7 different tool types
- Most used: Bash (24x), TodoWrite (5x), Edit (4x)
### 3. Token Usage Data (ROI Foundation)
**Critical Discovery**: Every assistant message contains complete token usage data:
```typescript
interface UsageInfo {
input_tokens?: number; // Total input tokens (includes context)
cache_creation_input_tokens?: number; // Tokens used to create cache
cache_read_input_tokens?: number; // Cached tokens read (discounted cost)
output_tokens?: number; // Model output tokens
}
```
**Test Session Token Analysis**:
```
Input tokens: 858
Output tokens: 44,165
Cache creation tokens: 469,650
Cache read tokens: 5,294,101 ← 5.29M tokens saved by caching!
Total tokens: 45,023 (input + output; cache tokens tracked separately)
```
**ROI Implication**: This validates our ROI implementation plan. We can track:
- Discovery cost = sum of all input + output tokens across session
- Context savings = cache_read_input_tokens (tokens NOT paid for in full)
- ROI = Context savings / Discovery cost
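A back-of-envelope calculation using the session totals reported above, taking ROI as cached tokens read per token spent on discovery (this framing is a sketch, not a settled metric definition):

```typescript
// Figures from the test session analysis in this document.
const discoveryTokens = 858 + 44_165;   // input + output tokens
const cacheSavingsTokens = 5_294_101;   // cache_read_input_tokens
const roi = cacheSavingsTokens / discoveryTokens;
console.log(roi.toFixed(1)); // roughly 117 cached tokens read per discovery token
```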
### 4. Parse Reliability
**Result**: 0.00% parse failure rate on production transcript with 168 entries.
**Conclusion**: The JSONL format is stable and well-formed. No need for extensive error handling.
## Implementation Files
### Created Files
1. **`src/types/transcript.ts`** - TypeScript types matching Python Pydantic model
- All entry types, content types, usage info
- Drop-in compatible with Python model structure
2. **`src/utils/transcript-parser.ts`** - Robust transcript parsing class
- Handles all entry types
- Smart message extraction (finds last text message, not just last entry)
- Tool use history extraction
- Token usage aggregation
- Parse statistics and error tracking
3. **`scripts/test-transcript-parser.ts`** - Validation script
- Tests all extraction methods
- Reports parse statistics
- Shows token usage breakdown
- Lists tool use history
### Usage Example
```typescript
import { TranscriptParser } from '../src/utils/transcript-parser.js';
const parser = new TranscriptParser('/path/to/transcript.jsonl');
// Extract messages
const lastUserMsg = parser.getLastUserMessage();
const lastAssistantMsg = parser.getLastAssistantMessage();
// Get tool history
const tools = parser.getToolUseHistory();
// => [{name: 'Bash', timestamp: '...', input: {...}}, ...]
// Get token usage
const tokens = parser.getTotalTokenUsage();
// => {inputTokens: 858, outputTokens: 44165, cacheReadTokens: 5294101, ...}
// Parse statistics
const stats = parser.getParseStats();
// => {totalLines: 168, parsedEntries: 168, failedLines: 0, ...}
```
## Next Steps for PR Review
### Addressing "Drops Unknown Lines" Concern
**Original Issue**: Summary hook silently skipped malformed lines without visibility.
**Root Cause**: We didn't understand the full transcript model. The "skip malformed lines" was a band-aid.
**Solution**: Replace ad-hoc parsing in `summary-hook.ts` with validated `TranscriptParser` class:
**Before** (summary-hook.ts:38-117):
```typescript
// Manually parsing with try/catch, no type safety
for (let i = lines.length - 1; i >= 0; i--) {
try {
const line = JSON.parse(lines[i]);
if (line.type === 'user' && line.message?.content) {
// ... extraction logic
}
} catch (parseError) {
// Skip malformed lines ← BLACK HOLE
continue;
}
}
```
**After** (using TranscriptParser):
```typescript
import { TranscriptParser } from '../utils/transcript-parser.js';
const parser = new TranscriptParser(transcriptPath);
const lastUserMessage = parser.getLastUserMessage();
const lastAssistantMessage = parser.getLastAssistantMessage();
// Parse errors are tracked in parser.getParseErrors()
```
**Benefits**:
1. ✅ Type-safe extraction based on validated model
2. ✅ No silent failures - parse errors are tracked
3. ✅ Smart extraction (finds last TEXT message, not last entry)
4. ✅ Reusable across all hooks and scripts
5. ✅ Enables token usage tracking (ROI metrics)
6. ✅ Enables tool use tracking (prompt optimization)
## Prompt Optimization Opportunities
With rich transcript data available, we can enhance prompts with:
### 1. Tool Use Patterns
- "In this session you've used: Bash (24x), TodoWrite (5x), Edit (4x)"
- Helps Claude understand what kind of work is being done
### 2. Token Economics Awareness
- "Cache read tokens: 5.29M (context savings)"
- Reinforces value of memory system
### 3. Session Flow Understanding
- Number of user/assistant exchanges
- Tools used per exchange
- Session complexity metrics
### 4. File History Snapshots
- Track which files were modified during session
- Provide file change context to summaries
## Testing
Run the validation script:
```bash
# Find your current session transcript
ls -lt ~/.claude/projects/-Users-alexnewman-Scripts-claude-mem/*.jsonl | head -1
# Test the parser
npx tsx scripts/test-transcript-parser.ts <path-to-transcript.jsonl>
```
## Conclusion
The transcript parser implementation:
1. ✅ Addresses PR review concern about dropped lines
2. ✅ Validates the ROI metrics implementation plan
3. ✅ Enables prompt optimization with rich context
4. ✅ Provides foundation for future enhancements
**Recommendation**: Replace ad-hoc transcript parsing in hooks with `TranscriptParser` class for improved reliability and feature richness.
-22
@@ -1,22 +0,0 @@
# Transcript Dump
Total entries: 384
---
## Entry 151: USER
**Timestamp:** 2025-11-13T17:09:47.231Z
**Content:**
```
I have a question regarding the context timeline that we made. and it's a bit of a thinker...
when i first started out working on claude-mem my idea was using a fast index of memories with links to access them on demand. I thought that claude-mem would be searching for all the things it needed to get the job done, but what I found is that the description of the work done + the files touched actually improved performance tremendously, and without actively searching for things constantly.
I just had a thought that maybe claude isn't searching, because when it reads what it did, it already "knows" how it did that, for the most part. Would you say I'm on the right track? I want you to ultrathink about WHY claude-mem's session start context works so well, from your OWN perspective
```
---
_Remaining 364 entries omitted for brevity_
@@ -1,106 +0,0 @@
# Transcript Context Analysis
**File:** 57dcc12f-4751-46bb-82b4-2aa96a3e226d.jsonl
**Parsed:** 11/13/2025, 12:34:19 PM
## Statistics
- Total entries: 281
- Successfully parsed: 281
- Failed lines: 0
- Conversation turns: 43
## Token Usage
- Input tokens: 6,805
- Output tokens: 38,209
- Cache creation: 224,078
- Cache read: 3,320,726
- Total: 45,014 (input + output)
---
# Conversation Turns
## Turn 1
---
## Turn 2
---
## Turn 3
---
## Turn 4
---
## Turn 5
---
## Turn 6
---
## Turn 7
---
## Turn 8
---
## Turn 9
---
## Turn 10
---
## Turn 11
---
## Turn 12
---
## Turn 13
---
## Turn 14
---
## Turn 15
---
## Turn 16
---
## Turn 17
---
## Turn 18
---
## Turn 19
---
## Turn 20
---
_... 23 more turns omitted for brevity_
-180
@@ -1,180 +0,0 @@
# TypeScript Errors to Fix
Generated: 2025-11-06
## Summary
Total files with errors: 20
Total error count: 160+
## Errors by File
### 1. src/sdk/parser.ts (5 errors)
**Lines 149-153**: Type 'string | null' is not assignable to type 'string'
- `request` - line 149
- `investigated` - line 150
- `learned` - line 151
- `completed` - line 152
- `next_steps` - line 153
**Fix**: Update return type to allow null values or provide default values
---
### 2. src/hooks/index.ts (4 errors)
**Lines 0-3**: Cannot find module errors
- `'./context.js'` - line 0
- `'./save.js'` - line 1
- `'./new.js'` - line 2
- `'./summary.js'` - line 3
**Fix**: Update imports to use correct paths without .js extension
---
### 3. src/sdk/index.ts (1 error)
**Line 4**: `'./prompts.js'` has no exported member named 'buildFinalizePrompt'
**Fix**: Remove unused import or implement the missing function
---
### 4. src/services/sync/ChromaSync.ts (26 errors)
**Multiple lines**: Argument of type '"CHROMA_SYNC"' is not assignable to parameter of type 'Component'
- Lines: 91, 114, 116, 141, 144, 155, 157, 324, 329, 370, 409, 463, 493, 535, 541, 546, 562, 589, 607, 630, 648, 679, 697, 703, 718, 733
**Line 508**: `'result.content'` is of type 'unknown'
**Fix**: Add 'CHROMA_SYNC' to Component type union or update logger calls
---
### 5. src/shared/config.ts (1 error)
**Line 11**: Cannot find name '__DEFAULT_PACKAGE_VERSION__'
**Fix**: This should be injected during build, check build configuration
---
### 6. src/shared/storage.ts (25 errors)
**Lines 1-5**: Module has no exported member errors
- `'createStores'` - line 1
- `'MemoryStore'` - line 3
- `'OverviewStore'` - line 4
- `'DiagnosticsStore'` - line 5
**Lines 87-162**: Various property errors (legacy interface usage)
- Property 'create' does not exist - line 87
- Property 'getBySessionId' does not exist - line 92
- Property 'has' does not exist - line 97
- Property 'getAllSessionIds' does not exist - line 102
- Property 'getRecent' does not exist - line 107
- Property 'getRecentForProject' does not exist - line 112
- Multiple 'stores' is possibly 'undefined' errors
**Fix**: Remove legacy code or update to use current SessionStore interface
---
### 7. src/servers/search-server.ts (8 errors)
**Line 58**: `'result.content'` is of type 'unknown'
**Lines 150, 230, 309**: 'index' is declared but its value is never read
**Lines 371, 466, 1032, 1405**: 'id' is declared but its value is never read
**Fix**: Add proper type assertions and remove unused variables
---
### 8. src/services/sqlite/Database.ts (1 error)
**Line 0**: Cannot find module 'bun:sqlite'
**Fix**: This is legacy code using Bun's SQLite, should not be imported
---
### 9. src/services/sqlite/migrations.ts (2 errors)
**Line 0**: Cannot find module 'bun:sqlite'
**Line 153**: 'db' is declared but its value is never read
**Fix**: Update imports to use better-sqlite3 instead
---
### 10. tests/session-search.test.ts (1 error)
**Line 173**: Type 'null' is not assignable to type 'SessionSearch'
**Fix**: Update test to handle nullable type properly
---
### 11. React/Viewer UI Files (100+ errors)
#### All .tsx files: Cannot use JSX unless '--jsx' flag is provided
This affects all viewer components but is expected - these are built with esbuild which handles JSX.
#### src/ui/viewer/hooks/usePagination.ts (2 errors)
**Lines 66, 70**: `'data'` is of type 'unknown'
#### src/ui/viewer/hooks/useSettings.ts (5 errors)
**Lines 17-19**: `'data'` is of type 'unknown'
**Lines 40, 45**: `'result'` is of type 'unknown'
#### src/ui/viewer/hooks/useSSE.ts (2 errors)
**Line 19**: `'data'` is of type 'unknown'
**Line 71**: Type mismatch in setObservations
#### src/ui/viewer/hooks/useStats.ts (1 error)
**Line 13**: Argument of type 'unknown' not assignable to SetStateAction
#### src/ui/viewer/hooks/useTheme.ts (8 errors)
**Multiple lines**: DOM-related type errors
- Cannot find name 'window' - lines 8, 9, 48
- Cannot find name 'localStorage' - lines 14, 61
- Cannot find name 'document' - lines 41, 52
- Cannot find name 'MediaQueryListEvent' - line 49
**Fix**: Add DOM lib to tsconfig for viewer files or add type assertions
#### src/ui/viewer/index.tsx (2 errors)
**Line 5**: Cannot find name 'document'
**Multiple**: JSX errors (expected, built with esbuild)
#### src/ui/viewer/App.tsx (3 errors)
**Lines 63, 66, 69**: Type mismatch errors in setState callbacks
#### src/ui/viewer/components/Header.tsx (6 errors)
**Lines 46, 47, 66, 67, 85, 86**: Property 'style' does not exist on EventTarget & HTMLAnchorElement
**Line 94**: Property 'value' does not exist on EventTarget & HTMLSelectElement
#### src/ui/viewer/components/Feed.tsx (2 errors)
**Line 30**: Cannot find name 'IntersectionObserver'
**Line 31**: Parameter 'entries' implicitly has 'any' type
#### src/ui/viewer/components/Sidebar.tsx (3 errors)
**Lines 81, 99, 113**: Property 'value' does not exist on EventTarget
---
## Priority Fix Order
1. **High Priority - Breaks build:**
- src/shared/config.ts (__DEFAULT_PACKAGE_VERSION__)
- src/hooks/index.ts (module import errors)
- src/sdk/index.ts (buildFinalizePrompt export)
- src/shared/storage.ts (legacy interface usage)
2. **Medium Priority - Type safety:**
- src/sdk/parser.ts (null handling)
- src/services/sync/ChromaSync.ts (logger Component type)
- src/servers/search-server.ts (unknown types)
- React hooks (unknown types)
3. **Low Priority - Cosmetic:**
- Unused variable warnings
- JSX errors (these are expected, esbuild handles them)
- DOM type errors in viewer (handled by esbuild)
4. **Legacy/Cleanup:**
- src/services/sqlite/Database.ts (remove bun:sqlite)
- src/services/sqlite/migrations.ts (update to better-sqlite3)
- src/shared/storage.ts (remove entire file if legacy)
-67
@@ -1,67 +0,0 @@
# VS Code API Language Model (`lm`) Namespace
Excerpt captured from `https://code.visualstudio.com/api/references/vscode-api#lm` using markitdown MCP on 2025-11-12.
## Overview
The `vscode.lm` namespace exposes APIs for interacting with language models inside Visual Studio Code. It allows extensions to register tools, select chat models, invoke tools, and surface MCP servers so that agent mode can compose complex responses.
### Available Tools
- `vscode.lm.tools`: Readonly array of [`LanguageModelToolInformation`](https://code.visualstudio.com/api/references/vscode-api#LanguageModelToolInformation).
- Lists all tools registered via `vscode.lm.registerTool`.
- Tools can be invoked programmatically with `vscode.lm.invokeTool` when their inputs satisfy the declared schema.
### Events
- `vscode.lm.onDidChangeChatModels`: Fires when the set of available chat models changes. Extensions should re-query models after this event.
### Functions
#### `vscode.lm.invokeTool(name, options, token?)`
Invokes a tool by name with a given input payload.
- Validates input against the schema declared by the tool.
- When called from a chat participant, pass the `toolInvocationToken` so the chat UI associates results with the correct conversation.
- Returns a [`LanguageModelToolResult`](https://code.visualstudio.com/api/references/vscode-api#LanguageModelToolResult) composed of text and optional prompt-tsx parts.
- Tool results can be preserved across turns by storing them in `ChatResult.metadata` and retrieving them later from `ChatResponseTurn.result`.
#### `vscode.lm.registerLanguageModelChatProvider(vendor, provider)`
Registers a [`LanguageModelChatProvider`](https://code.visualstudio.com/api/references/vscode-api#LanguageModelChatProvider).
- Requires a matching `languageModelChatProviders` contribution in `package.json`.
- `vendor` must be globally unique (for example `copilot` or `openai`).
- Returns a `Disposable` to unregister the provider.
#### `vscode.lm.registerMcpServerDefinitionProvider(id, provider)`
Publishes Model Context Protocol servers for the editor.
- Requires a `contributes.mcpServerDefinitionProviders` entry in `package.json`.
- Enables dynamic discovery of MCP servers and tools when users submit chat messages.
- Returns a `Disposable` that unregisters the provider.
#### `vscode.lm.registerTool(name, tool)`
Registers a [`LanguageModelTool`](https://code.visualstudio.com/api/references/vscode-api#LanguageModelTool) implementation with the runtime.
- Tool must also appear in `package.json -> contributes.languageModelTools`.
- Registered tools appear in `vscode.lm.tools` and can be used by any extension.
#### `vscode.lm.selectChatModels(selector?)`
Returns an array of [`LanguageModelChat`](https://code.visualstudio.com/api/references/vscode-api#LanguageModelChat) instances matching a selector.
- Selector can be broad (by vendor or family) or narrow (by exact model ID).
- Handle scenarios where zero models are available.
- Persisted model references should be refreshed when `onDidChangeChatModels` fires.
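A sketch of defensive model selection per the notes above. The `vscode` object is stubbed locally so the snippet is self-contained; in a real extension you would `import * as vscode from 'vscode'` and the stub would not exist:

```typescript
interface ChatModel { vendor: string; family: string; id: string; }

// Local stand-in for the real vscode.lm API (illustration only).
const vscode = {
  lm: {
    async selectChatModels(selector?: Partial<ChatModel>): Promise<ChatModel[]> {
      const all: ChatModel[] = [{ vendor: 'copilot', family: 'gpt-4o', id: 'gpt-4o' }];
      return all.filter(m =>
        (!selector?.vendor || m.vendor === selector.vendor) &&
        (!selector?.family || m.family === selector.family));
    },
  },
};

async function pickModel(): Promise<ChatModel | undefined> {
  const models = await vscode.lm.selectChatModels({ vendor: 'copilot', family: 'gpt-4o' });
  // Always handle the zero-models case: availability can change at runtime,
  // and onDidChangeChatModels should trigger a re-query.
  return models[0];
}
```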
## Usage Notes
- Extensions should gracefully handle missing models or tools.
- Tool invocation responses can include multiple parts; integrate them using prompt-tsx or by constructing `LanguageModelToolResultPart` objects.
- When providing MCP servers or tools, ensure proper contribution points exist in `package.json`.
_Fetched on 2025-11-12 via markitdown MCP._
@@ -1,49 +0,0 @@
# VS Code Copilot Extension Integration Plan
## 1. Groundwork
- Audit existing claude-mem hook scripts (`context-hook`, `user-message-hook`, `new-hook`, `save-hook`, `summary-hook`, `cleanup-hook`) and their worker-service payloads.
- Document REST endpoints, request bodies, and SessionStore schema fields used today so the extension mirrors them exactly.
- Confirm worker service availability workflow (`ensureWorkerRunning`, port resolution) and decide how extension error reporting will surface issues to Copilot chat users.
## 2. Project Scaffold
- Clone the VS Code `chat-sample` starter, convert to a TypeScript-only extension, and align lint/tsconfig with repo standards.
- Add build pipeline (esbuild or webpack) plus npm scripts that match the existing `scripts/build-hooks.js` release flow.
- Wire extension activation events for chat participation and ensure packaging metadata (publisher, categories) is in place.
## 3. Shared Worker Client
- Extract reusable worker-service client utilities from `plugin/scripts/*.js` (port discovery, session init, observation uploads).
- Publish TypeScript definitions by re-exporting from `src/services/worker-types.ts` to keep contracts synchronized.
- Centralize HTTP calls (timeouts, retries, logging) so every tool implementation uses the same helper layer.
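One possible shape for that centralized helper layer, sketched under assumptions: the endpoint path and option defaults are illustrative, and the worker's real API is not shown here:

```typescript
// Hypothetical shared worker-client helper: one place for timeouts and retries.
async function postWithRetry(
  url: string,
  body: unknown,
  { retries = 2, timeoutMs = 3000 } = {},
  doFetch: typeof fetch = fetch // injectable for testing
): Promise<Response> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await doFetch(url, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(body),
        signal: AbortSignal.timeout(timeoutMs), // bail out on a hung worker
      });
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Injecting `doFetch` keeps the retry logic unit-testable without a running worker, which matches the mock-based testing approach in section 8.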
## 4. Language Model Tool Contracts
- Add `contributes.languageModelTools` entries in `package.json` for lifecycle parity:
- `mem_session_init`, `mem_user_prompt_log`, `mem_observation_record`, `mem_summary_finalize`, `mem_session_cleanup`.
- Provide detailed JSON schemas mirroring hook input structures (session IDs, cwd, prompt text, tool payload metadata).
- Supply descriptive `modelDescription`, `userDescription`, icons, tags, and enable `canBeReferencedInPrompt` where appropriate.
## 5. Tool Implementations
- Register each tool via `vscode.lm.registerTool` inside `activate`.
- Implement `prepareInvocation` to show user confirmations (especially for cleanup/stop actions) and tailor messages to match existing CLI prompts.
- In `invoke`, call the shared worker client, translate successes into `LanguageModelToolResult` text parts, and craft error messages that guide the LLM toward recovery (retry, alternate parameters).
- Ensure telemetry/logging records tool usage for debugging without leaking sensitive data.
## 6. Chat Orchestration
- Implement a chat participant based on the sample that maps Copilot threads to claude-mem session IDs stored in turn metadata.
- On conversation start, auto-run `mem_session_init`; before each user prompt, dispatch `mem_user_prompt_log`; when Copilot signals stop, run `mem_summary_finalize` (with fallbacks if the worker is unavailable).
- Capture tool events emitted by Copilot (file edits, terminal runs) and forward them through `mem_observation_record` with matching payload structure.
- Handle conversation disposal or model changes by calling `mem_session_cleanup` to mirror `SessionEnd` hooks.
## 7. Settings and UX
- Read `.claude-mem/settings.json` overrides (worker port, observation depth) and surface VS Code settings for Copilot-specific toggles (auto-sync enabled, max observations per prompt).
- Add status bar indicator/commands for worker health, quick restart instructions, and opening the viewer UI (`http://localhost:37777`).
- Provide inline notifications when the worker is unreachable, including guidance to restart via PM2.
## 8. Testing and QA
- Draft manual validation checklist: initial session, prompt logging, observation capture, summary completion, worker-down handling.
- Add integration tests using `@vscode/test-electron` to simulate chat turns and assert database side effects in a temporary claude-mem data directory.
- Build mocks for worker endpoints to enable unit tests of tool invocation logic without hitting the real service.
## 9. Release Readiness
- Document installation and usage in `README.md`, including architecture diagrams showing Copilot → tool → worker flow.
- Update CHANGELOG and marketing copy to announce Copilot support and list prerequisites (worker running, settings file placement).
- Prepare Marketplace assets (icon, gallery text) and extend existing publish scripts to package and ship the new extension.
@@ -1,8 +0,0 @@
# VS Code Extension Chat Sample Repository (GitHub Snapshot)
The markitdown MCP fetch for `https://github.com/microsoft/vscode-extension-samples/tree/main/chat-sample` returned only the public navigation scaffolding for GitHub. No repository-specific content or README data was captured because the site requires client-side execution that the fetcher cannot perform.
You can browse the repository directly for full details:
- https://github.com/microsoft/vscode-extension-samples/tree/main/chat-sample
_Fetched on 2025-11-12 via markitdown MCP. Content retrieval was limited to GitHub's static navigation shell._
@@ -1,267 +0,0 @@
# Language Model Tool API
Language model tools enable you to extend the functionality of a large language model (LLM) in chat with domain-specific capabilities. To process a user's chat prompt, [agent mode](/docs/copilot/chat/chat-agent-mode) in VS Code can automatically invoke these tools to perform specialized tasks as part of the conversation.
By contributing a language model tool in your VS Code extension, you can extend the agentic coding workflow while also providing deep integration with the editor. Extension tools are one of three types of tools available in VS Code, alongside [built-in tools and MCP tools](/docs/copilot/chat/chat-tools.md#types-of-tools).
In this extension guide, you learn how to create a language model tool by using the Language Model Tools API and how to implement tool calling in a chat extension.
You can also extend the chat experience with specialized tools by contributing an [MCP server](/api/extension-guides/ai/mcp). See the [AI Extensibility Overview](/api/extension-guides/ai/ai-extensibility-overview) for details on the different options and how to decide which approach to use.
> **Tip**
> For information about using tools as an end user, see [Use tools in chat](/docs/copilot/chat/chat-tools.md).
## What is tool calling in an LLM?
A language model tool is a function that can be invoked as part of a language model request. For example, you might have a function that retrieves information from a database, performs some calculation, or calls an online API. When you contribute a tool in a VS Code extension, agent mode can then invoke the tool based on the context of the conversation.
The LLM never actually executes the tool itself; instead, it generates the parameters that are used to call your tool. It's important to clearly describe the tool's purpose, functionality, and input parameters so that the tool can be invoked in the right context.
The following diagram shows the tool-calling flow in agent mode in VS Code. See [Tool-calling flow](#tool-calling-flow) for details about the specific steps involved.
![Diagram that shows the Copilot tool-calling flow](/assets/api/extension-guides/ai/tools/copilot-tool-calling-flow.png)
Read more about [function calling](https://platform.openai.com/docs/guides/function-calling) in the OpenAI documentation.
## Why implement a language model tool in your extension?
Implementing a language model tool in your extension has several benefits:
- **Extend agent mode** with specialized, domain-specific tools that are automatically invoked as part of responding to a user prompt. For example, enable database scaffolding and querying to dynamically provide the LLM with relevant context.
- **Deeply integrate with VS Code** by using the broad set of extension APIs. For example, use the [debug APIs](/api/extension-guides/debugger-extension) to get the current debugging context and use it as part of the tool's functionality.
- **Distribute and deploy** tools via the Visual Studio Marketplace, providing a reliable and seamless experience for users. Users don't need a separate installation and update process for your tool.
You might consider implementing a language model tool with an [MCP server](/api/extension-guides/ai/mcp) in the following scenarios:
- You already have an MCP server implementation and also want to use it in VS Code.
- You want to reuse the same tool across different development environments and platforms.
- Your tool is hosted remotely as a service.
- You don't need access to VS Code APIs.
Learn more about the [differences between tool types](/docs/copilot/chat/chat-tools.md#types-of-tools).
## Create a language model tool
Implementing a language model tool consists of two main parts:
1. Define the tool's configuration in the `package.json` file of your extension.
2. Implement the tool in your extension code by using the [Language Model API](/api/references/vscode-api#lm).
You can get started with a [basic example project](https://github.com/microsoft/vscode-extension-samples/tree/main/chat-sample).
### 1. Static configuration in `package.json`
The first step to define a language model tool in your extension is to define it in the `package.json` file of your extension. This configuration includes the tool name, description, input schema, and other metadata:
1. Add an entry for your tool in the `contributes.languageModelTools` section of your extension's `package.json` file.
2. Give the tool a unique name:
| Property | Description |
| --- | --- |
| `name` | The unique name of the tool, used to reference the tool in the extension implementation code. Use the format `{verb}_{noun}`. See [naming guidelines](#guidelines-and-conventions). |
| `displayName` | The user-friendly name of the tool, used for displaying in the UI. |
3. If the tool can be used in [agent mode](/docs/copilot/chat/chat-agent-mode) or referenced in a chat prompt with `#`, add the following properties:
Users can enable or disable the tool in the Chat view, similar to how this is done for [Model Context Protocol (MCP) tools](/docs/copilot/chat/chat-tools.md#mcp-tools).
| Property | Description |
| --- | --- |
| `canBeReferencedInPrompt` | Set to `true` if the tool can be used in [agent mode](/docs/copilot/chat/chat-agent-mode) or referenced in chat. |
| `toolReferenceName` | The name for users to reference the tool in a chat prompt via `#`. |
| `icon` | The icon to display for the tool in the UI. |
| `userDescription` | User-friendly description of the tool, used for displaying in the UI. |
4. Add a detailed description in `modelDescription`. This information is used by the LLM to determine in which context your tool should be used.
- What exactly does the tool do?
- What kind of information does it return?
- When should and shouldn't it be used?
- Describe important limitations or constraints of the tool.
5. If the tool takes input parameters, add an `inputSchema` property that describes the tool's input parameters.
This JSON schema describes an object with the properties that the tool takes as input, and whether they are required. File paths should be absolute paths.
Describe what each parameter does and how it relates to the tool's functionality.
6. Add a `when` clause to control when the tool is available.
The `languageModelTools` contribution point lets you restrict when a tool is available for agent mode or can be referenced in a prompt by using a [when clause](/api/references/when-clause-contexts). For example, a tool that gets the debug call stack information should only be available when the user is debugging.
```json
"contributes": {
"languageModelTools": [
{
"name": "chat-tools-sample_tabCount",
...
"when": "debugState == 'running'"
}
]
}
```
**Example tool definition**
The following example shows how to define a tool that counts the number of active tabs in a tab group.
```json
"contributes": {
"languageModelTools": [
{
"name": "chat-tools-sample_tabCount",
"tags": [
"editors",
"chat-tools-sample"
],
"toolReferenceName": "tabCount",
"displayName": "Tab Count",
"modelDescription": "The number of active tabs in a tab group in VS Code.",
"userDescription": "Count the number of active tabs in a tab group.",
"canBeReferencedInPrompt": true,
"icon": "$(files)",
"inputSchema": {
"type": "object",
"properties": {
"tabGroup": {
"type": "number",
"description": "The index of the tab group to check. This is optional; if not specified, the active tab group will be checked.",
"default": 0
}
}
}
}
]
}
```
### 2. Tool implementation
Implement the language model tool by using the [Language Model API](/api/references/vscode-api#lm). This consists of the following steps:
1. On activation of the extension, register the tool with [`vscode.lm.registerTool`](/api/references/vscode-api#lm.registerTool).
Provide the name of the tool as you specified it in the `name` property in `package.json`.
If you want the tool to be private to your extension, skip the tool registration step.
```ts
export function registerChatTools(context: vscode.ExtensionContext) {
context.subscriptions.push(
vscode.lm.registerTool('chat-tools-sample_tabCount', new TabCountTool())
);
}
```
2. Create a class that implements the [`vscode.LanguageModelTool<>`](/api/references/vscode-api#LanguageModelTool%3CT%3E) interface.
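The guide shows no code for this step, so here is an illustrative structural skeleton with locally defined stand-in types. The real `LanguageModelTool`, invocation-options, and result types come from the `vscode` module; everything below, including the class name, is a simplified assumption, not the sample's actual implementation.

```typescript
// Stand-in types mirroring the shape of the vscode tool interfaces
// discussed above (assumptions, not the real module types).
interface ToolInvocationOptions<T> {
  input: T;
}

interface ToolResultPart {
  value: string;
}

interface Tool<T> {
  invoke(options: ToolInvocationOptions<T>): ToolResultPart[];
}

interface ITabCountParameters {
  tabGroup?: number;
}

// Skeleton of a tool class: reads validated input, does its work,
// and returns result parts. The real version would query
// vscode.window.tabGroups instead of echoing the group index.
class TabCountToolSkeleton implements Tool<ITabCountParameters> {
  invoke(options: ToolInvocationOptions<ITabCountParameters>): ToolResultPart[] {
    const group = options.input.tabGroup ?? 0;
    return [{ value: `Counted tabs in group ${group}` }];
  }
}
```

The real class additionally implements `prepareInvocation` (step 3) and returns a `vscode.LanguageModelToolResult` from an async `invoke` (step 5).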
3. Add tool confirmation messages in the `prepareInvocation` method.
A generic confirmation dialog will always be shown for tools from extensions, but the tool can customize the confirmation message. Give enough context to the user to understand what the tool is doing. The message can be a `MarkdownString` containing a code block.
The following example shows how to provide a confirmation message for the tab count tool.
```ts
async prepareInvocation(
options: vscode.LanguageModelToolInvocationPrepareOptions<ITabCountParameters>,
_token: vscode.CancellationToken
) {
const confirmationMessages = {
title: 'Count the number of open tabs',
message: new vscode.MarkdownString(
`Count the number of open tabs?` +
(options.input.tabGroup !== undefined
? ` in tab group ${options.input.tabGroup}`
: '')
),
};
return {
invocationMessage: 'Counting the number of tabs',
confirmationMessages,
};
}
```
If `prepareInvocation` returns `undefined`, the generic confirmation message is shown. Note that the user can also choose to "Always Allow" a certain tool.
4. Define an interface that describes the tool input parameters.
The interface is used in the `invoke` method of the `vscode.LanguageModelTool` class. The input parameters are validated against the JSON schema you defined in the `inputSchema` in `package.json`.
The following example shows the interface for the tab count tool.
```ts
export interface ITabCountParameters {
tabGroup?: number;
}
```
5. Implement the `invoke` method. This method is called when the language model tool is invoked while processing a chat prompt.
The `invoke` method receives the tool input parameters in the `options` parameter. The parameters are validated against the JSON schema defined in `inputSchema` in `package.json`.
When an error occurs, throw an error with a message that makes sense to the LLM. Optionally, provide instructions on what the LLM should do next, such as retrying with different parameters, or performing a different action.
The following example shows the implementation of the tab count tool. The result of the tool is an instance of type `vscode.LanguageModelToolResult`.
```ts
async invoke(
options: vscode.LanguageModelToolInvocationOptions<ITabCountParameters>,
_token: vscode.CancellationToken
) {
const params = options.input;
if (typeof params.tabGroup === 'number') {
const group = vscode.window.tabGroups.all[Math.max(params.tabGroup - 1, 0)];
const nth =
params.tabGroup === 1
? '1st'
: params.tabGroup === 2
? '2nd'
: params.tabGroup === 3
? '3rd'
: `${params.tabGroup}th`;
return new vscode.LanguageModelToolResult([new vscode.LanguageModelTextPart(`There are ${group.tabs.length} tabs open in the ${nth} tab group.`)]);
} else {
const group = vscode.window.tabGroups.activeTabGroup;
return new vscode.LanguageModelToolResult([new vscode.LanguageModelTextPart(`There are ${group.tabs.length} tabs open.`)]);
}
}
```
View the full source code for implementing a [language model tool](https://github.com/microsoft/vscode-extension-samples/blob/main/chat-sample/src/tools.ts) in the VS Code Extension Samples repository.
## Tool-calling flow
When a user sends a chat prompt, the following steps occur:
1. Copilot determines the list of available tools based on the user's configuration.
The list of tools consists of built-in tools, tools registered by extensions, and tools from [MCP servers](/docs/copilot/chat/mcp-servers). You can contribute to agent mode via extensions or MCP servers (shown in green in the diagram).
2. Copilot sends the request to the LLM and provides it with the prompt, chat context, and the list of tool definitions to consider.
The LLM generates a response, which might include one or more requests to invoke a tool.
3. If needed, Copilot invokes the suggested tool(s) with the parameter values provided by the LLM.
A tool response might result in more requests for tool invocations.
4. If there are errors or follow-up tool requests, Copilot iterates over the tool-calling flow until all tool requests are resolved.
5. Copilot returns the final response to the user, which might include responses from multiple tools.
## Guidelines and conventions
- **Naming**: write clear and descriptive names for tools and parameters.
- **Tool name**: should be unique and clearly describe its intent. Structure the tool name in the format `{verb}_{noun}`. For example, `get_weather`, `get_azure_deployment`, or `get_terminal_output`.
- **Parameter name**: should describe the parameter's purpose. Structure the parameter name in the format `{noun}`. For example, `destination_location`, `ticker`, or `file_name`.
- **Descriptions**: write detailed descriptions for tools and parameters.
- Describe what the tool does and when it should and shouldn't be used. For example, "This tool retrieves the weather for a given location."
- Describe what each parameter does and how it relates to the tool's functionality. For example, "The `destination_location` parameter specifies the location for which to retrieve the weather. It should be a valid location name or coordinates."
- Describe important limitations or constraints of the tool. For example, "This tool only retrieves weather data for locations in the United States. It might not work for other regions."
- **User confirmation**: provide a confirmation message for the tool invocation. A generic confirmation dialog will always be shown for tools from extensions, but the tool can customize the confirmation message. Give enough context to the user to understand what the tool is doing.
- **Error handling**: when an error occurs, throw an error with a message that makes sense to the LLM. Optionally, provide instructions on what the LLM should do next, such as retrying with different parameters, or performing a different action.
Get more best practices for creating tools in the [OpenAI documentation](https://platform.openai.com/docs/guides/function-calling?api-mode=chat#best-practices-for-defining-functions) and [Anthropic documentation](https://docs.anthropic.com/en/docs/build-with-claude/tool-use/overview).
## Related content
- [Language Model API reference](/api/references/vscode-api#lm)
- [Register an MCP server in a VS Code extension](/api/extension-guides/ai/mcp)
- [Use MCP tools in agent mode](/docs/copilot/chat/mcp-servers)
_Fetched on 2025-11-12 via markitdown MCP._
@@ -1,13 +0,0 @@
VSCode Language Model Tool API
Local snapshots fetched via the markitdown MCP on 2025-11-12:
- `docs/context/vscode-language-model-tool-api.md`
- `docs/context/vscode-extension-chat-sample.md`
- `docs/context/vscode-api-lm.md`
Original sources for reference:
- http://code.visualstudio.com/api/extension-guides/ai/tools
- https://github.com/microsoft/vscode-extension-samples/tree/main/chat-sample
- https://code.visualstudio.com/api/references/vscode-api#lm
File diff suppressed because it is too large (-959 lines).
@@ -1,959 +0,0 @@
# Worker Service Overhead Analysis
**Date**: 2025-11-06
**File**: `src/services/worker-service.ts`
**Total Lines**: 1173
**Overall Assessment**: This file has accumulated unnecessary complexity, artificial delays, and defensive programming patterns that actively harm performance. Many patterns were likely added "just in case" without real-world justification.
---
## Executive Summary
**High Severity Issues (Score 8-10)**:
- **Line 942**: Polling loop with 100ms delay instead of event-driven architecture (Score: 10/10)
- **Lines 338-365**: Spinner debounce with 1.5s artificial delay (Score: 9/10)
- **Lines 204-234**: Database reopening on every getOrCreateSession call (Score: 8/10)
**Medium Severity Issues (Score 5-7)**:
- **Lines 33-70**: Unnecessary Claude path caching for rare operation (Score: 6/10)
- **Lines 694-711**: Redundant database reopening in handleInit (Score: 7/10)
- **Lines 728-741**: Fire-and-forget Chroma sync with verbose error handling (Score: 5/10)
**Low Severity Issues (Score 3-4)**:
- **Line 28**: Magic number MESSAGE_POLL_INTERVAL_MS without justification (Score: 4/10)
- **Lines 303-321**: Over-engineered SSE client cleanup (Score: 4/10)
---
## Line-by-Line Analysis
### Lines 1-30: Setup and Constants
**Lines 22-24**: Version reading from package.json
```typescript
const packageJson = JSON.parse(readFileSync(join(__dirname, '..', '..', 'package.json'), 'utf-8'));
const VERSION = packageJson.version;
```
**Score**: 2/10
**Why**: This is fine. Reads once at startup, uses the value for the /api/stats endpoint.
**Line 26**: Model configuration
```typescript
const MODEL = process.env.CLAUDE_MEM_MODEL || 'claude-sonnet-4-5';
```
**Score**: 1/10
**Why**: Clean, simple, correct.
**Line 28**: Magic number
```typescript
const MESSAGE_POLL_INTERVAL_MS = 100;
```
**Score**: 4/10
**Why**: This is a magic number without justification. Why 100ms? Why not 50ms or 200ms? More importantly, **why are we polling at all instead of using event-driven patterns?** The name is descriptive, but the existence of this constant indicates a fundamental architectural problem (see line 942).
**Pattern**: This constant exists to support a polling loop that shouldn't exist.
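For contrast, a minimal event-driven queue wakes the consumer the moment a message arrives, with no `MESSAGE_POLL_INTERVAL_MS` and no polling loop. This is a sketch of the alternative architecture the analysis argues for; all names are illustrative, not taken from `worker-service.ts`, and it assumes a single consumer per queue.

```typescript
import { EventEmitter } from 'node:events';

// Event-driven message queue: push() notifies, next() awaits.
// No timer fires when the queue is idle.
class MessageQueue<T> extends EventEmitter {
  private items: T[] = [];

  push(item: T): void {
    this.items.push(item);
    this.emit('message'); // wake a waiting consumer immediately
  }

  async next(): Promise<T> {
    if (this.items.length === 0) {
      // Single-consumer assumption: exactly one next() waits at a time.
      await new Promise<void>(resolve => this.once('message', () => resolve()));
    }
    return this.items.shift()!;
  }
}
```

A consumer simply loops `const msg = await queue.next();` and reacts in microseconds instead of up to 100ms later.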
---
### Lines 33-70: Claude Path Caching
```typescript
let cachedClaudePath: string | null = null;
function findClaudePath(): string {
if (cachedClaudePath) {
return cachedClaudePath;
}
// ... 30 lines of logic to find and cache path ...
}
```
**Score**: 6/10
**Why Stupid**:
1. **YAGNI Violation**: This function is called **exactly once** per worker startup (line 846 in runSDKAgent)
2. **Premature Optimization**: Caching saves ~5ms on an operation that happens once per worker lifetime
3. **Added Complexity**: 37 lines of code including module-level state for negligible benefit
4. **False Economy**: The worker runs for hours/days. Saving 5ms on startup is meaningless.
**What Should Happen**:
```typescript
function findClaudePath(): string {
if (process.env.CLAUDE_CODE_PATH) return process.env.CLAUDE_CODE_PATH;
const command = process.platform === 'win32' ? 'where claude' : 'which claude';
const result = execSync(command, { encoding: 'utf8' }).trim().split('\n')[0].trim();
if (!result) throw new Error('Claude executable not found in PATH');
return result;
}
```
**Savings**: Remove 33 lines of unnecessary code and module-level state.
---
### Lines 103-110: WorkerService State
```typescript
class WorkerService {
private app: express.Application;
private sessions: Map<number, ActiveSession> = new Map();
private chromaSync!: ChromaSync;
private sseClients: Set<Response> = new Set();
private isProcessing: boolean = false;
private spinnerStopTimer: NodeJS.Timeout | null = null;
```
**Score**: 7/10 (for spinnerStopTimer)
**Why**:
- `app`, `sessions`, `chromaSync`, `sseClients`: **Good** - necessary state
- `isProcessing`: **Questionable** (Score 5/10) - Do we really need to track this globally? Can't we derive it from `sessions.size > 0` or `[...sessions.values()].some(s => s.pendingMessages.length > 0)`?
- `spinnerStopTimer`: **Bad** (Score 7/10) - Exists solely to support artificial debouncing (see lines 338-365)
**Pattern**: State that exists to support other unnecessary complexity.
---
### Lines 145-178: Service Startup
**Lines 145-153**: HTTP server startup
```typescript
async start(): Promise<void> {
const port = getWorkerPort();
await new Promise<void>((resolve, reject) => {
this.app.listen(port, () => resolve())
.on('error', reject);
});
logger.info('SYSTEM', 'Worker started', { port, pid: process.pid });
```
**Score**: 1/10
**Why**: This is good. Clean promise wrapper, fail-fast on errors, clear logging.
**Lines 155-167**: ChromaSync initialization and orphan cleanup
```typescript
this.chromaSync = new ChromaSync('claude-mem');
logger.info('SYSTEM', 'ChromaSync initialized');
const db = new SessionStore();
const cleanedCount = db.cleanupOrphanedSessions();
db.close();
```
**Score**: 2/10
**Why**: This is fine. Necessary initialization and cleanup. Database is opened, used, and closed immediately.
**Lines 168-177**: Chroma backfill
```typescript
logger.info('SYSTEM', 'Starting Chroma backfill in background...');
this.chromaSync.ensureBackfilled()
.then(() => {
logger.info('SYSTEM', 'Chroma backfill complete');
})
.catch((error: Error) => {
logger.error('SYSTEM', 'Chroma backfill failed - continuing anyway', {}, error);
// Don't exit - allow worker to continue serving requests
});
```
**Score**: 3/10
**Why**: This is mostly fine. Fire-and-forget background operation that doesn't block startup. The verbose error handling is slightly excessive (could be a single logger call), but acceptable for a background operation.
---
### Lines 200-236: getOrCreateSession - THE KILLER
```typescript
private getOrCreateSession(sessionDbId: number): ActiveSession {
let session = this.sessions.get(sessionDbId);
if (session) return session;
const db = new SessionStore();
const dbSession = db.getSessionById(sessionDbId);
if (!dbSession) {
db.close();
throw new Error(`Session ${sessionDbId} not found in database`);
}
session = {
sessionDbId,
claudeSessionId: dbSession.claude_session_id,
sdkSessionId: null,
project: dbSession.project,
userPrompt: dbSession.user_prompt,
pendingMessages: [],
abortController: new AbortController(),
generatorPromise: null,
lastPromptNumber: 0,
startTime: Date.now()
};
this.sessions.set(sessionDbId, session);
session.generatorPromise = this.runSDKAgent(session).catch(err => {
logger.failure('WORKER', 'SDK agent error', { sessionId: sessionDbId }, err);
const db = new SessionStore();
db.markSessionFailed(sessionDbId);
db.close();
this.sessions.delete(sessionDbId);
});
db.close();
return session;
}
```
**Score**: 8/10
**Why This Is Stupid**:
1. **Database Reopening**: Opens database at line 204, closes at line 234. This happens on:
- First call to `/sessions/:id/init` (line 691)
- First call to `/sessions/:id/observations` (line 762)
- First call to `/sessions/:id/summarize` (line 789)
For a typical session: init (DB open/close) → observation (DB open/close) → observation (DB open/close) → summarize (DB open/close). **That's 4 database open/close cycles when ONE would suffice.**
2. **Redundant Database Access**: The database is ALREADY opened in `handleInit` at line 695 to call `setWorkerPort()`. So we have:
- Line 695: `const db = new SessionStore()` in handleInit
- Line 696: `db.setWorkerPort()`
- Line 697-711: More queries on the same database
- Line 711: `db.close()`
- Line 691: `this.getOrCreateSession()` is called
- Line 204: **Opens database AGAIN** inside getOrCreateSession
- Line 234: Closes it
**This is fucking insane.** We close the database, then immediately reopen it in the same call stack.
3. **Error Handler Opens Database**: Line 228 opens a NEW database connection in the error handler. If runSDKAgent fails, we open the database AGAIN just to mark it failed, then close it. This is defensive programming for ghosts - if the worker is crashing, do we really care about marking it failed?
**What Should Happen**:
- Pass the already-open database connection to getOrCreateSession
- Or at minimum, reuse the connection from the calling context
- The error handler should either crash hard or mark failed WITHOUT reopening the database
**Estimated Performance Impact**: Database open/close is expensive (~1-5ms each). For a session with 10 observations, this pattern adds **20-100ms of pure overhead**.
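A minimal sketch of that connection-reuse pattern: the handler opens the store once and threads the handle through, instead of each helper opening its own. `FakeStore` is a hypothetical stand-in for `SessionStore` so the example is self-contained; the counter only exists to make the single-open behavior observable.

```typescript
// Hypothetical stand-in for SessionStore; counts constructor calls
// so the single-open behavior is visible.
class FakeStore {
  static opens = 0;
  constructor() { FakeStore.opens++; }
  getSessionById(id: number) { return { id, project: 'demo' }; }
  setWorkerPort(_id: number, _port: number): void {}
  close(): void {}
}

// Accepts the caller's open handle -- no `new FakeStore()` here.
function getOrCreateSession(db: FakeStore, sessionDbId: number) {
  return db.getSessionById(sessionDbId);
}

function handleInit(sessionDbId: number) {
  const db = new FakeStore();                      // opened exactly once
  const session = getOrCreateSession(db, sessionDbId);
  db.setWorkerPort(sessionDbId, 3737);
  db.close();                                      // closed exactly once
  return session;
}
```

The same call stack that previously paid two open/close cycles now pays one, and the data fetched by `getOrCreateSession` can come from the same transaction as the port update.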
---
### Lines 263-292: SSE Stream Setup
```typescript
private handleSSEStream(req: Request, res: Response): void {
// Set SSE headers
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');
res.setHeader('Access-Control-Allow-Origin', '*');
// Add client to set
this.sseClients.add(res);
logger.info('WORKER', `SSE client connected`, { totalClients: this.sseClients.size });
// Send only projects list - all data will be loaded via pagination
const db = new SessionStore();
const allProjects = db.getAllProjects();
db.close();
const initialData = {
type: 'initial_load',
projects: allProjects,
timestamp: Date.now()
};
res.write(`data: ${JSON.stringify(initialData)}\n\n`);
// Handle client disconnect
req.on('close', () => {
this.sseClients.delete(res);
logger.info('WORKER', `SSE client disconnected`, { remainingClients: this.sseClients.size });
});
}
```
**Score**: 2/10
**Why**: This is mostly good. Clean SSE setup with proper headers and client tracking. Database is opened, used, and closed.
---
### Lines 297-322: SSE Broadcast and Cleanup
```typescript
private broadcastSSE(event: any): void {
if (this.sseClients.size === 0) {
return; // No clients connected, skip broadcast
}
const data = `data: ${JSON.stringify(event)}\n\n`;
const clientsToRemove: Response[] = [];
for (const client of this.sseClients) {
try {
client.write(data);
} catch (error) {
// Client disconnected, mark for removal
clientsToRemove.push(client);
}
}
// Clean up disconnected clients
for (const client of clientsToRemove) {
this.sseClients.delete(client);
}
if (clientsToRemove.length > 0) {
logger.info('WORKER', `SSE cleaned up disconnected clients`, { count: clientsToRemove.length });
}
}
```
**Score**: 4/10
**Why This Is Slightly Stupid**:
1. **Two-Pass Cleanup**: Creates a temporary array of failed clients, then iterates again to remove them. Why not just remove them in the first loop?
2. **Unnecessary Logging**: Do we really need to log every time a client disconnects? The `handleSSEStream` already logs disconnects (line 290). This is duplicate logging.
**What Should Happen**:
```typescript
private broadcastSSE(event: any): void {
if (this.sseClients.size === 0) return;
const data = `data: ${JSON.stringify(event)}\n\n`;
for (const client of this.sseClients) {
try {
client.write(data);
} catch {
this.sseClients.delete(client);
}
}
}
```
**Savings**: Remove 10 lines, remove duplicate logging, eliminate temporary array.
---
### Lines 338-365: Spinner Debounce - ARTIFICIAL DELAY
```typescript
private checkAndStopSpinner(): void {
// Clear any existing timer
if (this.spinnerStopTimer) {
clearTimeout(this.spinnerStopTimer);
this.spinnerStopTimer = null;
}
// Check if any session has pending messages
const hasPendingMessages = Array.from(this.sessions.values()).some(
session => session.pendingMessages.length > 0
);
if (!hasPendingMessages) {
// Debounce: wait 1.5s and check again
this.spinnerStopTimer = setTimeout(() => {
const stillEmpty = Array.from(this.sessions.values()).every(
session => session.pendingMessages.length === 0
);
if (stillEmpty) {
logger.debug('WORKER', 'All queues empty - stopping spinner');
this.broadcastProcessingStatus(false);
}
this.spinnerStopTimer = null;
}, 1500);
}
}
```
**Score**: 9/10
**Why This Is ABSOLUTELY FUCKING STUPID**:
1. **Artificial Delay**: **1.5 SECONDS** (1500ms) of artificial delay before stopping the spinner. This is pure overhead added for no reason.
2. **Why Was This Added?**: Probably someone thought "the UI flickers when the spinner stops/starts rapidly." **SO FUCKING WHAT?** That's a UI rendering problem, not a worker service problem. Fix it in the UI with CSS transitions or debouncing on the CLIENT side.
3. **Double-Check Pattern**: Checks if queues are empty, waits 1.5s, then checks AGAIN. This is defensive programming for ghosts. If the queue is empty, it's empty. We're not protecting against race conditions here - we're just wasting time.
4. **Polling Instead of Events**: This function is called from `handleAgentMessage` (line 1145) after processing every single response. Instead of reacting to the actual completion of work, we're polling state and debouncing.
5. **State Management Overhead**: Requires `spinnerStopTimer` field (line 109), timer cleanup logic, null checks, etc.
**Real-World Impact**: Every time the worker finishes processing observations, the UI spinner continues to show "processing" for **1.5 seconds** even though nothing is happening. This makes the entire system feel slower.
**What Should Happen**:
```typescript
private checkAndStopSpinner(): void {
const hasPendingMessages = Array.from(this.sessions.values()).some(
session => session.pendingMessages.length > 0
);
if (!hasPendingMessages) {
this.broadcastProcessingStatus(false);
}
}
```
**Savings**: Remove 15 lines of debouncing logic, remove timer state, eliminate 1.5s artificial delay.
**Alternative**: If UI flickering is actually a problem (prove it first), handle it client-side with CSS transitions or client-side debouncing.
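If flickering does turn out to be real, a client-side debounce is a few lines and keeps the worker honest. This is a hypothetical helper, not part of the codebase: show immediately, hide only after a short quiet period, and cancel the pending hide if processing resumes.

```typescript
// Client-side spinner debounce: the worker broadcasts status changes
// instantly; the UI smooths rapid off/on flicker locally.
function debounceSpinner(
  setVisible: (on: boolean) => void,
  hideDelayMs = 300
): (processing: boolean) => void {
  let timer: ReturnType<typeof setTimeout> | null = null;
  return (processing: boolean) => {
    if (timer) {
      clearTimeout(timer); // cancel any pending hide
      timer = null;
    }
    if (processing) {
      setVisible(true);    // show with zero delay
    } else {
      timer = setTimeout(() => setVisible(false), hideDelayMs);
    }
  };
}
```

The UI calls the returned function on every SSE status event; the worker never sleeps on the UI's behalf.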
---
### Lines 370-411: Stats Endpoint
```typescript
private handleStats(_req: Request, res: Response): void {
try {
const db = new SessionStore();
// Get database stats
const obsCount = db.db.prepare('SELECT COUNT(*) as count FROM observations').get() as { count: number };
const sessionCount = db.db.prepare('SELECT COUNT(*) as count FROM sdk_sessions').get() as { count: number };
const summaryCount = db.db.prepare('SELECT COUNT(*) as count FROM session_summaries').get() as { count: number };
// Get database file size
const dbPath = join(homedir(), '.claude-mem', 'claude-mem.db');
let dbSize = 0;
if (existsSync(dbPath)) {
dbSize = statSync(dbPath).size;
}
db.close();
// Get worker stats
const uptime = process.uptime();
res.json({
worker: {
version: VERSION,
uptime: Math.floor(uptime),
activeSessions: this.sessions.size,
sseClients: this.sseClients.size,
port: getWorkerPort()
},
database: {
path: dbPath,
size: dbSize,
observations: obsCount.count,
sessions: sessionCount.count,
summaries: summaryCount.count
}
});
} catch (error: any) {
logger.error('WORKER', 'Failed to get stats', {}, error);
res.status(500).json({ error: 'Failed to get stats' });
}
}
```
**Score**: 3/10
**Why Slightly Stupid**:
1. **Redundant existsSync Check**: The database path is guaranteed to exist if SessionStore initialized successfully. If it doesn't exist, SessionStore would have crashed on startup. This is defensive programming for ghosts.
2. **Three Separate Queries**: Could be combined into a single query with UNION or multiple SELECT columns, but this is minor.
**What Should Happen**:
```typescript
const dbSize = statSync(dbPath).size; // Just crash if it doesn't exist
```
Otherwise, this is mostly fine. Stats endpoints are low-frequency and non-critical.
---
### Lines 507-555: GET /api/observations
```typescript
private handleGetObservations(req: Request, res: Response): void {
try {
const offset = parseInt(req.query.offset as string || '0', 10);
const limit = Math.min(parseInt(req.query.limit as string || '50', 10), 100); // Cap at 100
const project = req.query.project as string | undefined;
const db = new SessionStore();
// Build query with optional project filter
let query = `
SELECT id, type, title, subtitle, text, project, prompt_number, created_at, created_at_epoch
FROM observations
`;
let countQuery = 'SELECT COUNT(*) as total FROM observations';
const params: any[] = [];
const countParams: any[] = [];
if (project) {
query += ' WHERE project = ?';
countQuery += ' WHERE project = ?';
params.push(project);
countParams.push(project);
}
query += ' ORDER BY created_at_epoch DESC LIMIT ? OFFSET ?';
params.push(limit, offset);
const stmt = db.db.prepare(query);
const observations = stmt.all(...params);
// Check if there are more results
const countStmt = db.db.prepare(countQuery);
const { total } = countStmt.get(...countParams) as { total: number };
const hasMore = (offset + limit) < total;
db.close();
res.json({
observations,
hasMore,
total,
offset,
limit
});
} catch (error: any) {
logger.error('WORKER', 'Failed to get observations', {}, error);
res.status(500).json({ error: 'Failed to get observations' });
}
}
```
**Score**: 5/10
**Why This Is Mildly Stupid**:
1. **Duplicate Parameter Arrays**: `params` and `countParams` are maintained in parallel just to repeat the project filter in both. This is error-prone and verbose.
2. **Two Queries Instead of One**: We run a COUNT query and a SELECT query. For small datasets, this is fine, but for large datasets, the COUNT query can be expensive. The `hasMore` flag could be computed by fetching `limit + 1` rows and checking if we got more than `limit`.
**What Should Happen**:
```typescript
// Bind limit + 1 in the LIMIT clause so one extra row signals more results
params.push(limit + 1, offset);
const results = db.db.prepare(query).all(...params);
const observations = results.slice(0, limit);
const hasMore = results.length > limit;
// Only run COUNT if the UI actually needs it (it probably doesn't)
```
**Pattern**: This same pattern is repeated in `handleGetSummaries` (line 557) and `handleGetPrompts` (line 618). Copy-paste code smell.
**Estimated Savings**: Remove COUNT queries (which can be expensive on large tables), simplify parameter handling.
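The limit+1 trick is easy to demonstrate against an in-memory array; with SQLite the same idea means binding `limit + 1` in the `LIMIT` clause. `page` and the `Row` shape are hypothetical, chosen only to keep the sketch self-contained.

```typescript
interface Row {
  id: number;
}

// Fetch limit + 1 rows: the presence of the extra row tells us
// whether another page exists, with no COUNT query.
function page(rows: Row[], offset: number, limit: number) {
  const fetched = rows.slice(offset, offset + limit + 1);
  return {
    observations: fetched.slice(0, limit),
    hasMore: fetched.length > limit,
  };
}
```

The trade-off: the client gets `hasMore` but not `total`; if the UI genuinely renders a total count, the COUNT query has to stay.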
---
### Lines 685-752: POST /sessions/:sessionDbId/init - DATABASE REOPENING HELL
```typescript
private async handleInit(req: Request, res: Response): Promise<void> {
const sessionDbId = parseInt(req.params.sessionDbId, 10);
const { project } = req.body;
logger.info('WORKER', 'Session init', { sessionDbId, project });
const session = this.getOrCreateSession(sessionDbId); // <-- Opens DB at line 204
const claudeSessionId = session.claudeSessionId;
// Update port in database
const db = new SessionStore(); // <-- Opens DB AGAIN
db.setWorkerPort(sessionDbId, getWorkerPort());
// Get the latest user_prompt for this session to sync to Chroma
const latestPrompt = db.db.prepare(`
SELECT
up.*,
s.sdk_session_id,
s.project
FROM user_prompts up
JOIN sdk_sessions s ON up.claude_session_id = s.claude_session_id
WHERE up.claude_session_id = ?
ORDER BY up.created_at_epoch DESC
LIMIT 1
`).get(claudeSessionId) as any;
db.close(); // <-- Closes DB
// ... SSE broadcast ...
// ... Chroma sync ...
logger.success('WORKER', 'Session initialized', { sessionId: sessionDbId, port: getWorkerPort() });
res.json({
status: 'initialized',
sessionDbId,
port: getWorkerPort()
});
}
```
**Score**: 7/10
**Why This Is Stupid**:
1. **Two Database Opens in Same Function**:
- Line 691: `getOrCreateSession()` opens DB internally (line 204)
- Line 695: Opens DB AGAIN for `setWorkerPort()`
- Line 711: Closes DB
2. **Redundant Data Fetching**: `getOrCreateSession()` already fetches session data from the database (line 205). Then we query AGAIN for the user prompt (line 698).
3. **Tight Coupling**: `getOrCreateSession()` hides database access, making it unclear that we're opening the database twice.
**What Should Happen**:
- Open database ONCE at the start of handleInit
- Pass the open database to getOrCreateSession
- Fetch all needed data in a single transaction
- Close database at the end
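A rough sketch of that shape, with a stubbed `SessionStore` that counts constructions so the single open/close cycle is visible. `getLatestPrompt`, the session shape, and the return value are illustrative stand-ins, not the real API:

```typescript
// Stub that counts open cycles; the real class wraps the SQLite connection.
class SessionStore {
  static opens = 0;
  constructor() { SessionStore.opens++; }
  setWorkerPort(_id: number, _port: number): void {}
  getLatestPrompt(_claudeSessionId: string): { id: number } | undefined {
    return { id: 1 };
  }
  close(): void {}
}

// The collaborator receives the open connection instead of making its own.
function getOrCreateSession(_db: SessionStore, sessionDbId: number) {
  return { sessionDbId, claudeSessionId: "abc" };
}

function handleInit(sessionDbId: number, port: number) {
  const db = new SessionStore();           // single open
  try {
    const session = getOrCreateSession(db, sessionDbId);
    db.setWorkerPort(sessionDbId, port);
    const latestPrompt = db.getLatestPrompt(session.claudeSessionId);
    return { status: "initialized", sessionDbId, port, latestPrompt };
  } finally {
    db.close();                            // single close, even on error
  }
}

handleInit(42, 37777);
console.log(SessionStore.opens); // 1
```

The `try/finally` also fixes a latent leak in the original: if anything between open and close throws, the connection is still released.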
**Estimated Savings**: Eliminate 1 database open/close cycle (1-5ms).
---
### Lines 728-741: Chroma Sync with Verbose Error Handling
```typescript
// Sync user prompt to Chroma (fire-and-forget, but crash on failure)
if (latestPrompt) {
this.chromaSync.syncUserPrompt(
latestPrompt.id,
latestPrompt.sdk_session_id,
latestPrompt.project,
latestPrompt.prompt_text,
latestPrompt.prompt_number,
latestPrompt.created_at_epoch
).catch(err => {
logger.failure('WORKER', 'Failed to sync user_prompt to Chroma - continuing', { promptId: latestPrompt.id }, err);
// Don't crash - SQLite has the data
});
}
```
**Score**: 5/10
**Why This Is Mildly Stupid**:
1. **Inconsistent Error Handling**: The comment says "crash on failure" but then we catch the error and continue. Which is it?
2. **Redundant Comment**: The `.catch` handler already logs and continues; the trailing "Don't crash - SQLite has the data" comment just restates what the code plainly does.
3. **Fire-and-Forget**: If we're going to fire-and-forget, why bother with verbose error handling? Either care about failures (and retry/alert) or don't (and just log).
**What Should Happen**:
```typescript
// Fire-and-forget Chroma sync (SQLite is source of truth)
if (latestPrompt) {
this.chromaSync.syncUserPrompt(/* ... */).catch(() => {}); // Swallow errors
}
```
**Pattern**: This same verbose error handling appears in lines 1057-1076 and 1114-1133.
---
### Lines 758-779: POST /sessions/:sessionDbId/observations
```typescript
private handleObservation(req: Request, res: Response): void {
const sessionDbId = parseInt(req.params.sessionDbId, 10);
const { tool_name, tool_input, tool_output, prompt_number } = req.body;
const session = this.getOrCreateSession(sessionDbId); // <-- Opens DB
const toolStr = logger.formatTool(tool_name, tool_input);
logger.dataIn('WORKER', `Observation queued: ${toolStr}`, {
sessionId: sessionDbId,
queue: session.pendingMessages.length + 1
});
session.pendingMessages.push({
type: 'observation',
tool_name,
tool_input,
tool_output,
prompt_number
});
res.json({ status: 'queued', queueLength: session.pendingMessages.length });
}
```
**Score**: 6/10
**Why This Is Stupid**:
1. **Database Opens for No Reason**: `getOrCreateSession()` opens the database (line 204), but we don't actually need any data from the database here. We just need to get or create the in-memory session object.
2. **Hot Path Performance**: This endpoint is called **for every single tool execution**. If you run 100 tool calls in a session, this opens/closes the database 100 times unnecessarily.
**What Should Happen**:
- Separate "get existing session" from "create session from database"
- Only open database if creating a new session
- For existing sessions, just push to the queue
**Estimated Savings**: For a session with 100 observations, eliminate 99 unnecessary database open/close cycles (**99-495ms of pure overhead**).
---
### Lines 914-1005: createMessageGenerator - THE POLLING HORROR
```typescript
private async* createMessageGenerator(session: ActiveSession): AsyncIterable<SDKUserMessage> {
// ... send init prompt ...
// Process messages continuously until session is deleted
while (true) {
if (session.abortController.signal.aborted) {
break;
}
if (session.pendingMessages.length === 0) {
await new Promise(resolve => setTimeout(resolve, MESSAGE_POLL_INTERVAL_MS));
continue;
}
while (session.pendingMessages.length > 0) {
const message = session.pendingMessages.shift()!;
// ... process message ...
yield { /* SDK message */ };
}
}
}
```
**Score**: 10/10
**Why This Is ABSOLUTELY FUCKING STUPID**:
1. **Infinite Polling Loop**: Lines 936-944 implement a **busy-wait polling loop** that checks `pendingMessages.length` every 100ms. This is the single dumbest pattern in the entire file.
2. **Event-Driven Alternative**: We have a fucking queue! When something is added to the queue, **NOTIFY THE CONSUMER**. Use an EventEmitter, a Promise, a Condition Variable, ANYTHING but polling.
3. **Wasted CPU**: Every 100ms, this loop wakes up, checks if the queue is empty, and goes back to sleep. For a worker that runs for hours, this is thousands of unnecessary wake-ups.
4. **Latency**: When an observation is queued (line 770), it sits in the queue for up to 100ms before being processed. **This adds 0-100ms of artificial latency to every single observation.**
5. **Battery Impact**: On laptops, constant polling prevents CPU from entering deep sleep states, draining battery.
**What Should Happen**:
```typescript
// In WorkerService class
private sessionQueues: Map<number, EventEmitter> = new Map();
private handleObservation(req: Request, res: Response): void {
// ... existing code ...
session.pendingMessages.push({ /* message */ });
// Notify the generator that new work is available
const emitter = this.sessionQueues.get(sessionDbId);
if (emitter) {
emitter.emit('message');
}
res.json({ status: 'queued', queueLength: session.pendingMessages.length });
}
private async* createMessageGenerator(session: ActiveSession): AsyncIterable<SDKUserMessage> {
const emitter = new EventEmitter();
this.sessionQueues.set(session.sessionDbId, emitter);
yield { /* init prompt */ };
  while (!session.abortController.signal.aborted) {
    if (session.pendingMessages.length === 0) {
      // Wait for new messages via event, not polling.
      // Resolve on abort too, so the generator can't hang on a dead session.
      await new Promise<void>(resolve => {
        emitter.once('message', resolve);
        session.abortController.signal.addEventListener('abort', () => resolve(), { once: true });
      });
      continue; // re-check abort before draining the queue
    }
while (session.pendingMessages.length > 0) {
const message = session.pendingMessages.shift()!;
yield { /* process message */ };
}
}
this.sessionQueues.delete(session.sessionDbId);
}
```
**Estimated Savings**:
- Remove 100ms polling interval (eliminate 0-100ms latency per observation)
- Reduce CPU wake-ups from ~10/second to 0 when idle
- Improve battery life on laptops
- Make the system feel more responsive
**Real-World Impact**: For a session with 10 observations, this polling adds **0-1000ms of cumulative latency**. The user is literally waiting for the polling loop to wake up.
---
### Lines 1011-1146: handleAgentMessage - Database Reopening and Chroma Spam
```typescript
private handleAgentMessage(session: ActiveSession, content: string, promptNumber: number): void {
// ... parse observations and summary ...
const db = new SessionStore(); // <-- Opens DB
// Store observations and sync to Chroma (non-blocking, fail-fast)
for (const obs of observations) {
const { id, createdAtEpoch } = db.storeObservation(/* ... */);
logger.success('DB', 'Observation stored', { /* ... */ });
// Broadcast to SSE clients
this.broadcastSSE({ /* ... */ });
// Sync to Chroma (non-blocking fire-and-forget, but crash on failure)
this.chromaSync.syncObservation(/* ... */)
.then(() => {
logger.success('WORKER', 'Observation synced to Chroma', { /* ... */ });
})
.catch((error: Error) => {
logger.error('WORKER', 'Observation sync failed - continuing', { /* ... */ }, error);
// Don't crash - SQLite has the data
});
}
// ... similar pattern for summary ...
db.close(); // <-- Closes DB
// Check if queue is empty and stop spinner after debounce
this.checkAndStopSpinner(); // <-- Triggers 1.5s delay
}
```
**Score**: 6/10
**Why This Is Stupid**:
1. **Database Reopening**: Opens database (line 1030), stores all observations, closes database (line 1142). This is called **for every SDK response**. For a session with 10 observations, this opens/closes the database 10+ times.
2. **Verbose Chroma Error Handling**: Lines 1057-1076 and 1114-1133 have identical verbose error handling for Chroma sync failures. This is copy-paste code smell.
3. **Success Logging Spam**: Line 1066 and 1123 log success for EVERY Chroma sync. For a session with 100 observations, this logs 100 success messages. Why? Who reads these?
4. **Debounce Call**: Line 1145 calls `checkAndStopSpinner()`, triggering the 1.5s artificial delay.
**What Should Happen**:
- Reuse database connection across multiple calls
- Simplify Chroma error handling (fire-and-forget means swallow errors)
- Remove success logging (or make it debug-level)
- Remove debounce delay
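Put together, the slimmed-down shape might look like the sketch below. `store` and `syncObservation` are stand-ins for the real SessionStore and ChromaSync calls, and the forced sync failure just demonstrates that swallowed errors don't take the handler down:

```typescript
// Long-lived store reused across calls: no open/close per SDK response.
const store = {
  nextId: 0,
  storeObservation(_obs: { text: string }) {
    return { id: ++this.nextId, createdAtEpoch: Date.now() };
  },
};

async function syncObservation(_id: number): Promise<void> {
  throw new Error("chroma down"); // simulate a sync failure
}

function handleAgentMessage(observations: { text: string }[]): number[] {
  const ids: number[] = [];
  for (const obs of observations) {
    const { id } = store.storeObservation(obs); // SQLite is the source of truth
    ids.push(id);
    // Fire-and-forget: failures are swallowed (log at debug level at most)
    void syncObservation(id).catch(() => {});
  }
  return ids; // no per-observation success log, no close(), no debounce
}

console.log(handleAgentMessage([{ text: "a" }, { text: "b" }])); // [ 1, 2 ]
```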
---
## Summary of Patterns
### 1. Database Reopening Anti-Pattern
**Occurrences**: Lines 200-236, 685-752, 758-779, 1011-1146
**Impact**: Opens/closes database 4-100+ times per session instead of reusing connections
**Fix**: Pass open database connections between functions, use transactions, connection pooling
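Assuming the store wraps a synchronous single-connection driver (better-sqlite3 or similar), "pooling" can be as simple as one lazily-opened shared connection with an explicit shutdown hook. A minimal sketch with a stubbed `Database`:

```typescript
// Stub for the driver's connection object; counts opens for demonstration.
class Database {
  static instances = 0;
  constructor(public path: string) { Database.instances++; }
  close(): void {}
}

let shared: Database | null = null;

function getDb(): Database {
  if (!shared) shared = new Database("sessions.db"); // opened once per process
  return shared;
}

function closeDb(): void {
  shared?.close();
  shared = null;
}

// Every handler calls getDb(); no open/close per request.
getDb(); getDb(); getDb();
console.log(Database.instances); // 1
closeDb();
```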
### 2. Polling Instead of Events
**Occurrences**: Line 942 (100ms polling loop)
**Impact**: 0-100ms latency per observation, wasted CPU cycles, battery drain
**Fix**: Use EventEmitter or async queue with await/notify pattern
### 3. Artificial Delays
**Occurrences**: Line 363 (1.5s spinner debounce), line 942 (100ms poll interval)
**Impact**: 1.5s delay before spinner stops, 0-100ms delay per observation
**Fix**: Remove debouncing, use event-driven patterns
### 4. Premature Optimization
**Occurrences**: Lines 33-70 (Claude path caching)
**Impact**: 37 lines of code to save 5ms on a one-time operation
**Fix**: Remove caching, inline the function
### 5. Defensive Programming for Ghosts
**Occurrences**: Line 382 (existsSync check), lines 228-231 (error handler reopens DB), lines 728-741 (verbose error handling)
**Impact**: Code complexity without real benefit
**Fix**: Fail fast, trust invariants, simplify error handling
### 6. Copy-Paste Code
**Occurrences**: handleGetObservations, handleGetSummaries, handleGetPrompts (nearly identical)
**Impact**: Maintenance burden, inconsistency risk
**Fix**: Extract common pagination logic into helper function
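One possible helper, with the row source abstracted as a function so the same logic serves observations, summaries, and prompts; the in-memory array here stands in for a prepared SQLite statement:

```typescript
interface Page<T> {
  items: T[];
  hasMore: boolean;
  offset: number;
  limit: number;
}

function paginate<T>(
  fetch: (limit: number, offset: number) => T[],
  limit: number,
  offset: number
): Page<T> {
  // Fetch one extra row so hasMore needs no COUNT query
  const rows = fetch(limit + 1, offset);
  return { items: rows.slice(0, limit), hasMore: rows.length > limit, offset, limit };
}

// Usage with an in-memory stand-in for the SQLite statement:
const all = Array.from({ length: 25 }, (_, i) => ({ id: i }));
const page = paginate((l, o) => all.slice(o, o + l), 10, 20);
console.log(page.items.length, page.hasMore); // 5 false
```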
---
## Recommendations
### Immediate Wins (Low Effort, High Impact)
1. **Remove Spinner Debounce** (Lines 338-365)
- **Effort**: 5 minutes
- **Impact**: Eliminate 1.5s artificial delay
- **Score**: 9/10 stupidity
2. **Replace Polling with Events** (Line 942)
- **Effort**: 30 minutes
- **Impact**: Eliminate 0-100ms latency per observation, reduce CPU usage
- **Score**: 10/10 stupidity
3. **Remove Claude Path Caching** (Lines 33-70)
- **Effort**: 5 minutes
- **Impact**: Remove 37 lines of unnecessary code
- **Score**: 6/10 stupidity
### Medium Wins (Moderate Effort, Good Impact)
4. **Fix Database Reopening in Hot Path** (Lines 758-779)
- **Effort**: 1 hour
- **Impact**: Eliminate 99+ database cycles per session
- **Score**: 6/10 stupidity
5. **Simplify Chroma Error Handling** (Lines 728-741, 1057-1076, 1114-1133)
- **Effort**: 15 minutes
- **Impact**: Remove 50+ lines of verbose error handling
- **Score**: 5/10 stupidity
6. **Simplify SSE Broadcast** (Lines 297-322)
- **Effort**: 5 minutes
- **Impact**: Remove 10 lines, eliminate two-pass cleanup
- **Score**: 4/10 stupidity
### Long-Term Improvements (High Effort, Architectural)
7. **Database Connection Pooling**
- **Effort**: 4 hours
- **Impact**: Reuse connections across requests, eliminate all open/close overhead
- **Score**: 8/10 stupidity (current approach)
8. **Extract Pagination Helper**
- **Effort**: 1 hour
- **Impact**: DRY up handleGetObservations/Summaries/Prompts
- **Score**: 5/10 stupidity
---
## Estimated Performance Impact
**Current Hot Path (1 observation)**:
- HTTP request arrives: 0ms
- getOrCreateSession opens/closes DB: 1-5ms
- Queue message: 0ms
- Poll interval: 0-100ms (average 50ms)
- SDK processing: variable
- handleAgentMessage opens/closes DB: 1-5ms
- Chroma sync (async): N/A
- checkAndStopSpinner debounce: 1500ms
- **Total artificial overhead**: 1502-1610ms (1.5-1.6 seconds)
**Optimized Hot Path (1 observation)**:
- HTTP request arrives: 0ms
- Get existing session (no DB): 0ms
- Queue message + notify: 0ms
- SDK processing: variable
- Store in DB (connection pool): 0.1-0.5ms
- Chroma sync (async): N/A
- Stop spinner (no debounce): 0ms
- **Total artificial overhead**: 0.1-0.5ms
**Speedup**: **3000-16000x faster** (removing artificial delays and polling)
---
## Conclusion
This file has accumulated significant technical debt in the form of:
- **Artificial delays** (1.5s debounce, 100ms polling)
- **Database reopening anti-pattern** (4-100+ opens per session)
- **Polling instead of events** (busy-wait loop)
- **Premature optimization** (caching rare operations)
- **Defensive programming** (protecting against non-existent failures)
The worker spends more time **waiting** (polling, debouncing) than **working**. Most of these patterns were likely added with good intentions ("make the UI smooth", "cache for performance", "handle errors gracefully") but ended up creating more problems than they solved.
**Priority Fixes**:
1. Remove spinner debounce (9/10 stupidity)
2. Replace polling with events (10/10 stupidity)
3. Fix database reopening in hot path (6-8/10 stupidity)
These three changes alone would eliminate **1.5+ seconds of artificial delay** per session and make the system feel dramatically more responsive.