Merge pull request #78 from thedotmack/feature/skill-search

v5.4.0: Skill-Based Search Migration & Progressive Disclosure Pattern
Alex Newman
2025-11-09 18:58:22 -05:00
committed by GitHub
37 changed files with 5497 additions and 618 deletions
+29
@@ -0,0 +1,29 @@
# Project-Level Skills
This directory contains skills **for developing and maintaining the claude-mem project itself**, not skills that are released as part of the plugin.
## Distinction
**Project Skills** (`.claude/skills/`):
- Used by developers working on claude-mem
- Not included in the plugin distribution
- Project-specific workflows (version bumps, release management, etc.)
- Not synced to `~/.claude/plugins/marketplaces/thedotmack/`
**Plugin Skills** (`plugin/skills/`):
- Released as part of the claude-mem plugin
- Available to all users who install the plugin
- General-purpose memory search functionality
- Synced to user installations via `npm run sync-marketplace`
## Skills in This Directory
### version-bump
Manages semantic versioning for the claude-mem project itself. Handles updating all four version files (package.json, marketplace.json, plugin.json, CLAUDE.md), creating git tags, and publishing GitHub releases.
**Usage**: Only for claude-mem maintainers releasing new versions.
## Adding New Skills
**For claude-mem development** → Add to `.claude/skills/`
**For end users** → Add to `plugin/skills/` (gets distributed with plugin)
+52 -169
@@ -5,208 +5,91 @@ description: Manage semantic version updates for claude-mem project. Handles pat
# Version Bump Skill
Manage semantic versioning across the claude-mem project with consistent updates to all version-tracked files.
## Quick Reference
**Files requiring updates (ALL FOUR):**
1. `package.json` (line 3)
2. `.claude-plugin/marketplace.json` (line 13)
3. `plugin/.claude-plugin/plugin.json` (line 3)
4. `CLAUDE.md` (line 9 ONLY - version number, NOT version history)
**Semantic versioning:**
- **PATCH** (x.y.Z): Bugfixes only
- **MINOR** (x.Y.0): New features, backward compatible
- **MAJOR** (X.0.0): Breaking changes
## Quick Decision Guide
**What changed?**
- "Fixed a bug" → PATCH (5.3.0 → 5.3.1)
- "Added new feature" → MINOR (5.3.0 → 5.4.0)
- "Breaking change" → MAJOR (5.3.0 → 6.0.0)
## Standard Workflow
If unclear, ASK THE USER explicitly.
See [operations/workflow.md](operations/workflow.md) for detailed step-by-step process.
**Quick version:**
1. Determine version type (PATCH/MINOR/MAJOR)
2. Calculate new version from current
3. Preview changes to user
4. Update ALL FOUR files
5. Verify consistency
6. Build and test
7. Commit and create git tag
8. Push and create GitHub release
## Common Scenarios
See [operations/scenarios.md](operations/scenarios.md) for examples:
- Bug fix releases
- New feature releases
- Breaking change releases
## Critical Rules
**ALWAYS:**
- Update ALL FOUR files with matching version numbers
- Create git tag with format `vX.Y.Z`
- Create GitHub release from the tag
- Ask user if version type is unclear
**NEVER:**
- Update only one, two, or three files
- Skip the verification step
- Forget to create git tag or GitHub release
- Add version history entries to CLAUDE.md (that's managed separately)
## Verification Checklist
Before considering the task complete:
- [ ] All FOUR files have matching version numbers
- [ ] `npm run build` succeeds
- [ ] Git commit created with all version files
- [ ] Git tag created (format: vX.Y.Z)
- [ ] Commit and tags pushed to remote
- [ ] GitHub release created from the tag
- [ ] CLAUDE.md: ONLY line 9 updated (version number), NOT version history
## Reference Commands
```bash
# View current version
grep '"version"' package.json
# Verify consistency across all version files
grep '"version"' package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json
# View git tags
git tag -l -n1
# Check what will be committed
git status
git diff package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json CLAUDE.md
```
For more commands, see [operations/reference.md](operations/reference.md).
@@ -0,0 +1,275 @@
# Version Bump Reference
Quick reference for version bump commands and file locations.
## File Locations
### Version-Tracked Files (ALL FOUR)
1. **package.json**
- Path: `package.json`
- Line: 3
- Format: `"version": "X.Y.Z",`
2. **marketplace.json**
- Path: `.claude-plugin/marketplace.json`
- Line: 13
- Format: `"version": "X.Y.Z",`
3. **plugin.json**
- Path: `plugin/.claude-plugin/plugin.json`
- Line: 3
- Format: `"version": "X.Y.Z",`
4. **CLAUDE.md**
- Path: `CLAUDE.md`
- Line: 9
- Format: `**Current Version**: X.Y.Z`
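The four edits above can be scripted. A minimal sketch, assuming a hypothetical `set_json_version` helper (not part of the project's tooling); the `-i.bak` form is used so the same command works with both GNU and BSD sed:

```shell
# Hypothetical helper: rewrite the "version" field of a JSON file in place.
set_json_version() {
  local file="$1" new="$2"
  sed -i.bak "s/\"version\": \"[^\"]*\"/\"version\": \"$new\"/" "$file" \
    && rm -f "$file.bak"
}

# The three JSON files share one format; CLAUDE.md line 9 differs:
#   set_json_version package.json 5.4.1
#   set_json_version .claude-plugin/marketplace.json 5.4.1
#   set_json_version plugin/.claude-plugin/plugin.json 5.4.1
#   sed -i.bak 's/\*\*Current Version\*\*: .*/**Current Version**: 5.4.1/' CLAUDE.md
```

Treat this as a sketch only: the skill's preview-and-confirm step still applies before touching any file.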
## Essential Commands
### View Current Version
```bash
# From package.json
grep '"version"' package.json
# Extract just the version number
grep '"version"' package.json | head -1 | sed 's/.*"version": "\(.*\)".*/\1/'
# From all version files
grep '"version"' package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json
grep "Current Version" CLAUDE.md
```
### Verify Version Consistency
```bash
# Check all JSON files match
grep '"version"' package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json
# Should output identical version in all three:
# package.json:3: "version": "5.3.0",
# .claude-plugin/marketplace.json:13: "version": "5.3.0",
# plugin/.claude-plugin/plugin.json:3: "version": "5.3.0",
# Check CLAUDE.md
grep "Current Version" CLAUDE.md
# Should output: **Current Version**: 5.3.0
```
### Git Commands
```bash
# View recent commits
git log --oneline -5
# View changes since last tag
LAST_TAG=$(git describe --tags --abbrev=0)
git log $LAST_TAG..HEAD --oneline
git diff $LAST_TAG..HEAD
# List all tags
git tag -l
# View tag details
git show vX.Y.Z
# List tags with messages
git tag -l -n1
```
### Build and Test
```bash
# Build plugin
npm run build
# Sync to marketplace
npm run sync-marketplace
# Run tests (if available)
npm test
```
### Commit and Tag
```bash
# Stage version files
git add package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json CLAUDE.md plugin/scripts/
# Commit
git commit -m "Release vX.Y.Z: [Description]"
# Create tag
git tag vX.Y.Z -m "Release vX.Y.Z: [Description]"
# Push
git push && git push --tags
```
### GitHub Release
```bash
# Create release
gh release create vX.Y.Z --title "vX.Y.Z" --notes "[Release notes]"
# Create with auto-generated notes
gh release create vX.Y.Z --title "vX.Y.Z" --generate-notes
# View release
gh release view vX.Y.Z
# List all releases
gh release list
# Delete release (if needed)
gh release delete vX.Y.Z
```
## Semantic Versioning Rules
### Version Format: MAJOR.MINOR.PATCH
**MAJOR (X.0.0):**
- Breaking changes
- Incompatible API changes
- Schema changes requiring migration
- Removes features
**MINOR (x.Y.0):**
- New features (backward compatible)
- New functionality
- Deprecations (but not removals)
- Resets PATCH to 0
**PATCH (x.y.Z):**
- Bug fixes
- Performance improvements
- Documentation fixes
- No new features
### Incrementing Rules
```
PATCH: 5.3.2 → 5.3.3
MINOR: 5.3.2 → 5.4.0 (resets patch)
MAJOR: 5.3.2 → 6.0.0 (resets minor and patch)
```
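The incrementing rules above can be expressed as a small shell helper for sanity-checking a calculation (a sketch; `next_version` is not part of the project):

```shell
# Sketch: compute the next version from a current version and a bump kind.
next_version() {
  local cur="$1" kind="$2"
  local major="${cur%%.*}" rest="${cur#*.}"
  local minor="${rest%%.*}" patch="${rest#*.}"
  case "$kind" in
    PATCH) echo "$major.$minor.$((patch + 1))" ;;
    MINOR) echo "$major.$((minor + 1)).0" ;;
    MAJOR) echo "$((major + 1)).0.0" ;;
  esac
}

next_version 5.3.2 PATCH   # 5.3.3
next_version 5.3.2 MINOR   # 5.4.0 (resets patch)
next_version 5.3.2 MAJOR   # 6.0.0 (resets minor and patch)
```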
## Common Patterns
### Bug Fix Release
```bash
# Example: 5.3.0 → 5.3.1
# 1. Update all four files to 5.3.1
# 2. Build and test
npm run build
# 3. Commit and tag
git add package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json CLAUDE.md plugin/scripts/
git commit -m "Release v5.3.1: Fixed observer crash"
git tag v5.3.1 -m "Release v5.3.1: Fixed observer crash"
git push && git push --tags
# 4. Create release
gh release create v5.3.1 --title "v5.3.1" --notes "Fixed observer crash on empty content"
```
### Feature Release
```bash
# Example: 5.3.0 → 5.4.0
# 1. Update all four files to 5.4.0
# 2. Build and test
npm run build
# 3. Commit and tag
git add package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json CLAUDE.md plugin/scripts/
git commit -m "Release v5.4.0: Added dark mode support"
git tag v5.4.0 -m "Release v5.4.0: Added dark mode support"
git push && git push --tags
# 4. Create release
gh release create v5.4.0 --title "v5.4.0" --generate-notes
```
### Breaking Change Release
```bash
# Example: 5.3.0 → 6.0.0
# 1. Update all four files to 6.0.0
# 2. Build and test
npm run build
# 3. Commit and tag
git add package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json CLAUDE.md plugin/scripts/
git commit -m "Release v6.0.0: Storage layer redesign"
git tag v6.0.0 -m "Release v6.0.0: Storage layer redesign"
git push && git push --tags
# 4. Create release with warning
gh release create v6.0.0 --title "v6.0.0" --notes "⚠️ Breaking change: Storage layer redesigned. Migration required."
```
## Rollback Commands
### Delete Tag
```bash
# Delete local tag
git tag -d vX.Y.Z
# Delete remote tag
git push origin :refs/tags/vX.Y.Z
# Or: git push --delete origin vX.Y.Z
```
### Delete Release
```bash
# Delete GitHub release
gh release delete vX.Y.Z
# Confirm deletion
gh release delete vX.Y.Z --yes
```
### Revert Commit
```bash
# Revert last commit (creates new commit)
git revert HEAD
# Reset to previous commit (destructive)
git reset --hard HEAD~1
git push --force # Dangerous! Only if not shared
```
## Error Prevention
### Pre-commit Checks
```bash
# Check all versions match before committing
V1=$(grep '"version"' package.json | head -1 | sed 's/.*"\([^"]*\)".*/\1/')
V2=$(grep '"version"' .claude-plugin/marketplace.json | sed 's/.*"\([^"]*\)".*/\1/')
V3=$(grep '"version"' plugin/.claude-plugin/plugin.json | head -1 | sed 's/.*"\([^"]*\)".*/\1/')
if [ "$V1" = "$V2" ] && [ "$V2" = "$V3" ]; then
echo "✓ All versions match: $V1"
else
echo "✗ Version mismatch!"
echo " package.json: $V1"
echo " marketplace.json: $V2"
echo " plugin.json: $V3"
fi
```
### Pre-push Checks
```bash
# Check tag exists
git tag -l | grep vX.Y.Z || echo "Warning: Tag not created"
# Check build succeeds
npm run build || echo "Error: Build failed"
# Check no uncommitted changes
git status --porcelain | grep -q . && echo "Warning: Uncommitted changes"
```
@@ -0,0 +1,218 @@
# Common Version Bump Scenarios
Real-world examples of version bumps with decision rationale.
## Scenario 1: Bug Fix After Testing
**User request:**
> "Fixed the memory leak in the search function"
**Analysis:**
- What changed: Bug fix
- Breaking changes: No
- New features: No
- **Decision: PATCH**
**Workflow:**
```
Current: 4.2.8
New: 4.2.9 (PATCH)
Steps:
1. Update all four files to 4.2.9
2. npm run build
3. git commit -m "Release v4.2.9: Fixed memory leak in search"
4. git tag v4.2.9 -m "Release v4.2.9: Fixed memory leak in search"
5. git push && git push --tags
6. gh release create v4.2.9 --title "v4.2.9" --notes "Fixed memory leak in search function"
```
## Scenario 2: New Feature Added
**User request:**
> "Added web search MCP integration"
**Analysis:**
- What changed: New feature (MCP integration)
- Breaking changes: No
- Backward compatible: Yes
- **Decision: MINOR**
**Workflow:**
```
Current: 4.2.8
New: 4.3.0 (MINOR - reset patch to 0)
Steps:
1. Update all four files to 4.3.0
2. npm run build
3. git commit -m "Release v4.3.0: Added web search MCP integration"
4. git tag v4.3.0 -m "Release v4.3.0: Added web search MCP integration"
5. git push && git push --tags
6. gh release create v4.3.0 --title "v4.3.0" --generate-notes
```
## Scenario 3: Database Schema Redesign
**User request:**
> "Rewrote storage layer, old data needs migration"
**Analysis:**
- What changed: Storage layer rewrite
- Breaking changes: Yes (requires migration)
- Backward compatible: No
- **Decision: MAJOR**
**Workflow:**
```
Current: 4.2.8
New: 5.0.0 (MAJOR - reset minor and patch to 0)
Steps:
1. Update all four files to 5.0.0
2. npm run build
3. git commit -m "Release v5.0.0: Storage layer redesign with migration required"
4. git tag v5.0.0 -m "Release v5.0.0: Storage layer redesign"
5. git push && git push --tags
6. gh release create v5.0.0 --title "v5.0.0" --notes "⚠️ Breaking change: Storage layer redesigned. Migration required."
```
## Scenario 4: Multiple Small Bug Fixes
**User request:**
> "Fixed three bugs: observer crash, viewer pagination, and date formatting"
**Analysis:**
- What changed: Multiple bug fixes
- Breaking changes: No
- New features: No
- **Decision: PATCH** (one patch covers all fixes)
**Workflow:**
```
Current: 4.2.8
New: 4.2.9 (PATCH)
Steps:
1. Update all four files to 4.2.9
2. npm run build
3. git commit -m "Release v4.2.9: Multiple bug fixes
- Fixed observer crash on empty content
- Fixed viewer pagination edge case
- Fixed date formatting in timeline"
4. git tag v4.2.9 -m "Release v4.2.9: Multiple bug fixes"
5. git push && git push --tags
6. gh release create v4.2.9 --title "v4.2.9" --generate-notes
```
## Scenario 5: Feature + Bug Fix
**User request:**
> "Added dark mode support and fixed the viewer crash bug"
**Analysis:**
- What changed: New feature + bug fix
- Breaking changes: No
- **Decision: MINOR** (feature trumps bug fix)
**Workflow:**
```
Current: 5.1.0
New: 5.2.0 (MINOR)
Steps:
1. Update all four files to 5.2.0
2. npm run build
3. git commit -m "Release v5.2.0: Dark mode support + bug fixes
Features:
- Added dark mode toggle to viewer UI
Bug fixes:
- Fixed viewer crash on empty database"
4. git tag v5.2.0 -m "Release v5.2.0: Dark mode support"
5. git push && git push --tags
6. gh release create v5.2.0 --title "v5.2.0" --generate-notes
```
## Scenario 6: Documentation Only
**User request:**
> "Updated README with new installation instructions"
**Analysis:**
- What changed: Documentation only
- Breaking changes: No
- Code changes: No
- **Decision: PATCH** (or skip version bump if no code changes)
**Workflow:**
```
Option 1: PATCH (if you want to tag doc improvements)
Current: 4.2.8
New: 4.2.9
Option 2: No version bump (documentation-only changes don't require versioning)
Just commit without bumping version
```
**Recommendation:** Skip version bump for documentation-only changes unless it's a significant documentation overhaul.
## Scenario 7: Configuration Change
**User request:**
> "Changed default observation count from 50 to 30"
**Analysis:**
- What changed: Default configuration
- Breaking changes: Possibly (default behavior changes)
- Users might notice different context size
- **Decision: MINOR or MAJOR** (ask user)
**Workflow:**
```
Ask user:
"This changes default behavior (context size). Users will see different results.
Is this:
- MINOR (acceptable behavior change): 4.2.8 → 4.3.0
- MAJOR (breaking change): 4.2.8 → 5.0.0
Which should I use?"
```
## Scenario 8: Dependency Update
**User request:**
> "Updated Claude SDK from 1.2.0 to 1.3.0"
**Analysis:**
- What changed: Dependency version
- Breaking changes: Depends on SDK changes
- **Decision: Ask user or check SDK changelog**
**Workflow:**
```
1. Check SDK changelog for breaking changes
2. If SDK has breaking changes → MAJOR
3. If SDK adds features → MINOR
4. If SDK only fixes bugs → PATCH
Typical: PATCH (unless SDK breaks compatibility)
```
## Decision Tree
```
Is there a breaking change?
├─ Yes → MAJOR (X.0.0)
└─ No
├─ Is there a new feature?
│ ├─ Yes → MINOR (x.Y.0)
│ └─ No
│ ├─ Is there a bug fix?
│ │ ├─ Yes → PATCH (x.y.Z)
│ │ └─ No → Don't bump version (docs only, etc.)
│ └─ Configuration change? → Ask user (MINOR or MAJOR)
└─ Multiple changes? → Use highest level (MAJOR > MINOR > PATCH)
```
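The tree above reduces to a precedence rule (MAJOR > MINOR > PATCH > none). A hedged sketch, with hypothetical yes/no flags standing in for the human judgment; the configuration-change branch still requires asking the user and is not encoded here:

```shell
# Sketch: pick the bump level from three yes/no answers.
decide_bump() {
  local breaking="$1" feature="$2" bugfix="$3"
  if   [ "$breaking" = "yes" ]; then echo "MAJOR"
  elif [ "$feature"  = "yes" ]; then echo "MINOR"
  elif [ "$bugfix"   = "yes" ]; then echo "PATCH"
  else echo "NONE"   # docs-only etc.: don't bump
  fi
}

decide_bump no yes yes   # MINOR (feature trumps bug fix)
```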
@@ -0,0 +1,228 @@
# Detailed Version Bump Workflow
Step-by-step process for bumping versions in the claude-mem project.
## Step 1: Analyze Changes
First, understand what changed:
```bash
# View recent commits
git log --oneline -5
# See what changed in last commit
git diff HEAD~1
# Or see all changes since last tag
LAST_TAG=$(git describe --tags --abbrev=0)
git log $LAST_TAG..HEAD --oneline
git diff $LAST_TAG..HEAD
```
## Step 2: Determine Version Type
Ask yourself:
- **Breaking changes?** → MAJOR
- **New features?** → MINOR
- **Bugfixes only?** → PATCH
**If unclear, ASK THE USER explicitly.**
### Decision Matrix
| Change Type | Version Bump | Example |
|------------|--------------|---------|
| Bug fix | PATCH | 4.2.8 → 4.2.9 |
| New feature (backward compatible) | MINOR | 4.2.8 → 4.3.0 |
| Breaking change | MAJOR | 4.2.8 → 5.0.0 |
| Multiple features | MINOR | 4.2.8 → 4.3.0 |
| Feature + breaking change | MAJOR | 4.2.8 → 5.0.0 |
## Step 3: Calculate New Version
From current version in `package.json`:
```bash
grep '"version"' package.json
```
Apply semantic versioning rules:
- **Patch:** increment Z (4.2.8 → 4.2.9)
- **Minor:** increment Y, reset Z (4.2.8 → 4.3.0)
- **Major:** increment X, reset Y and Z (4.2.8 → 5.0.0)
## Step 4: Preview Changes
**BEFORE making changes, show the user:**
```
Current version: 4.2.8
New version: 4.2.9 (PATCH)
Reason: Fixed database query bug
Files to update:
- package.json: "version": "4.2.9"
- marketplace.json: "version": "4.2.9"
- plugin.json: "version": "4.2.9"
- CLAUDE.md line 9: "**Current Version**: 4.2.9" (version number ONLY)
- Git tag: v4.2.9
Proceed? (yes/no)
```
Wait for user confirmation before proceeding.
## Step 5: Update Files
### Update package.json
File: `package.json`
```json
{
"name": "claude-mem",
"version": "4.2.9",
...
}
```
Update line 3 with new version.
### Update marketplace.json
File: `.claude-plugin/marketplace.json`
```json
{
"name": "claude-mem",
"version": "4.2.9",
...
}
```
Update line 13 with new version.
### Update plugin.json
File: `plugin/.claude-plugin/plugin.json`
```json
{
"name": "claude-mem",
"version": "4.2.9",
...
}
```
Update line 3 with new version.
### Update CLAUDE.md
File: `CLAUDE.md`
**ONLY update line 9 with the version number:**
```markdown
**Current Version**: 4.2.9
```
**CRITICAL:** DO NOT add version history entries to CLAUDE.md. Version history is managed separately outside this skill.
## Step 6: Verify Consistency
```bash
# Check all versions match
grep -n '"version"' package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json
# Should show same version in all three files:
# package.json:3: "version": "4.2.9",
# .claude-plugin/marketplace.json:13: "version": "4.2.9",
# plugin/.claude-plugin/plugin.json:3: "version": "4.2.9",
```
All three must match exactly.
## Step 7: Test
```bash
# Verify the plugin loads correctly
npm run build
```
Build must succeed before proceeding.
## Step 8: Commit and Tag
```bash
# Stage all version files
git add package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json CLAUDE.md plugin/scripts/
# Commit with descriptive message
git commit -m "Release vX.Y.Z: [Brief description]
[Optional detailed description]
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>"
# Create annotated git tag
git tag vX.Y.Z -m "Release vX.Y.Z: [Brief description]"
# Push commit and tags
git push && git push --tags
```
Replace `X.Y.Z` with actual version (e.g., `4.2.9`).
## Step 9: Create GitHub Release
```bash
# Create GitHub release from the tag
gh release create vX.Y.Z --title "vX.Y.Z" --notes "[Brief release notes]"
# Or generate notes automatically from commits
gh release create vX.Y.Z --title "vX.Y.Z" --generate-notes
```
**IMPORTANT:** Always create the GitHub release immediately after pushing the tag. This makes the release discoverable to users and triggers any automated workflows.
## Verification
After completing all steps, verify:
```bash
# Check git tag created
git tag -l | grep vX.Y.Z
# Check remote has tag
git ls-remote --tags origin | grep vX.Y.Z
# Check GitHub release exists
gh release view vX.Y.Z
# Verify versions match
grep '"version"' package.json .claude-plugin/marketplace.json plugin/.claude-plugin/plugin.json
```
All checks should pass.
## Rollback (If Needed)
If you made a mistake:
```bash
# Delete local tag
git tag -d vX.Y.Z
# Delete remote tag (if already pushed)
git push origin :refs/tags/vX.Y.Z
# Delete GitHub release (if created)
gh release delete vX.Y.Z
# Revert commits if needed
git revert HEAD
```
Then restart the workflow with correct version.
+49
@@ -8,6 +8,55 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [Unreleased]
## [5.4.0] - 2025-11-09
### ⚠️ BREAKING CHANGE: MCP Search Tools Removed
**Migration**: None required. Claude automatically uses the search skill when needed.
### Changed
- **Skill-Based Search Architecture**: Replaced MCP search tools with skill-based HTTP API
- **Token Savings**: ~2,250 tokens per session start
- **Progressive Disclosure**: Skill frontmatter (~250 tokens) vs MCP tool definitions (~2,500 tokens)
- Search functionality works identically but with better efficiency
- No user action required - migration is transparent
### Added
- **10 New HTTP Search API Endpoints** in worker service:
- `GET /api/search/observations` - Full-text search observations
- `GET /api/search/sessions` - Full-text search session summaries
- `GET /api/search/prompts` - Full-text search user prompts
- `GET /api/search/by-concept` - Find observations by concept tag
- `GET /api/search/by-file` - Find work related to specific files
- `GET /api/search/by-type` - Find observations by type (bugfix, feature, etc.)
- `GET /api/context/recent` - Get recent session context
- `GET /api/context/timeline` - Get timeline around specific point in time
- `GET /api/timeline/by-query` - Search + timeline in one call
- `GET /api/search/help` - API documentation
- **Search Skill** (`plugin/skills/search/SKILL.md`):
- Auto-invoked when users ask about past work, decisions, or history
- Comprehensive documentation with usage examples and workflows
- Format guidelines for presenting search results
### Removed
- **MCP Search Server** (deprecated):
- Removed `claude-mem-search` from plugin/.mcp.json
- Build script no longer compiles search-server.mjs
- Source file kept for reference: src/servers/search-server.ts
- All 9 MCP tools replaced by equivalent HTTP API endpoints
### Technical Details
- **How It Works**: User asks → Claude recognizes intent → Invokes search skill → Skill uses curl to call HTTP API → Formats results
- **User Experience**: Identical search capabilities with significantly lower context overhead
- **Performance**: Same search speed, better session start performance
### Documentation
- Updated CLAUDE.md with skill-based search explanation
- Removed MCP references throughout documentation
- Added comprehensive search skill documentation
- Updated build scripts to skip search-server compilation
## [5.1.2] - 2025-11-06
### Added
+39 -12
@@ -50,10 +50,11 @@ Claude-mem is a Claude Code plugin providing persistent memory across sessions.
- FTS5 virtual tables for full-text search
- `SessionStore` = CRUD, `SessionSearch` = FTS5 queries
**Search Skill** (`plugin/skills/search/SKILL.md`)
- Provides access to all search functionality via HTTP API + skill
- Auto-invoked when users ask about past work, decisions, or history
- Uses HTTP endpoints instead of MCP tools (~2,250 token savings per session)
- 10 search operations: observations, sessions, prompts, by-type, by-file, by-concept, timelines, etc.
**Chroma Vector Database** (`src/services/sync/ChromaSync.ts`)
- Hybrid semantic + keyword search architecture
@@ -86,12 +87,11 @@ npm run worker:restart
```
Must restart PM2 worker for changes to take effect.
### When You Modify Search Skill
```bash
npm run build
npm run sync-marketplace
```
Skill changes take effect immediately on next Claude Code session. No build or restart needed (skills are markdown).
### When You Modify Viewer UI
```bash
@@ -104,7 +104,7 @@ Changes to React components, styles, or viewer logic require rebuilding and rest
### Build Pipeline
1. `npm run build` → Compiles TypeScript, outputs to `plugin/`
2. `npm run sync-marketplace` → Syncs to `~/.claude/plugins/marketplaces/thedotmack/`
3. Changes are live for next session (hooks/skills) or after restart (worker)
## Coding Standards: DRY, YAGNI, and Anti-Patterns
@@ -263,7 +263,7 @@ pm2 delete claude-mem-worker # Force clean start
2. `npm run build && npm run sync-marketplace`
3. Start new Claude Code session (hooks) or restart worker (worker changes)
4. Check `~/.claude-mem/claude-mem.db` for database state
5. Use search skill to verify behavior (auto-invoked when asking about past work)
### Version Bumps
Use the version-bump skill:
@@ -291,7 +291,7 @@ Choose patch/minor/major. Updates package.json, marketplace.json, plugin.json, a
```
Deploy a general-purpose Task agent to:
1. Read src/hooks/context-hook.ts in full
2. Read src/services/worker-service.ts in full
3. Answer: How do these files work together? What's the current implementation state?
4. Find any bugs or inconsistencies between them
```
@@ -304,6 +304,33 @@ Use this when:
## Recent Changes
### v5.4.0 - Skill-Based Search Migration
**Breaking Change**: MCP search tools replaced with skill-based approach
- **Token Savings**: ~2,250 tokens per session start
- **Progressive Disclosure**: Skill frontmatter (~250 tokens) instead of 9 MCP tool definitions (~2,500 tokens)
- **New HTTP API**: 10 search endpoints in worker service (localhost:37777/api/search/*)
- **Search Skill**: Auto-invoked when users ask about past work, decisions, or history
- **No User Action Required**: Migration is transparent, searches work automatically
- **Deprecated**: MCP search server (source kept for reference: src/servers/search-server.ts)
**Available Search Operations:**
1. Search observations (full-text)
2. Search session summaries (full-text)
3. Search user prompts (full-text)
4. Search by observation type (bugfix, feature, refactor, discovery, decision)
5. Search by concept tag
6. Search by file path
7. Get recent context for a project
8. Get timeline around specific point in time
9. Get timeline by query (search + timeline in one call)
10. Get API help documentation
**How It Works:**
- User asks: "What bug did we fix last session?"
- Claude sees skill description matches → invokes search skill
- Skill loads full instructions → uses curl to call HTTP API → formats results
- User sees formatted answer with past work context
### v5.1.2 - Theme Toggle
**Theme Support**: Light/dark mode for viewer UI
- User-selectable theme with persistent settings
@@ -373,9 +400,9 @@ Use this when:
- Hybrid semantic + keyword search combining ChromaDB with SQLite FTS5
- ChromaSync service for automatic vector embedding synchronization (738 lines)
- 90-day recency filtering for contextually relevant results
- Timeline and context search capabilities (now provided via skill-based HTTP API)
- Performance: Semantic search <200ms with 8,000+ vector documents
- Full-text search across observations, sessions, and prompts
## Configuration Users Can Set
+129
@@ -0,0 +1,129 @@
Plan: Migrate to Skill-Based Search (Deprecate MCP)
Goal
Replace MCP search tools with a skill-based approach, reducing session start context from ~2,500 tokens to ~250 tokens. Clean migration, no toggles.
Implementation Steps
1. Add HTTP API Endpoints to Worker Service
File: src/services/worker-service.ts
Add 10 new routes that wrap existing SessionSearch methods:
- GET /api/search/observations?query=...&format=index&limit=20&project=...
- GET /api/search/sessions?query=...&format=index&limit=20
- GET /api/search/prompts?query=...&format=index&limit=20
- GET /api/search/by-concept?concept=discovery&format=index&limit=5
- GET /api/search/by-file?filePath=...&format=index&limit=10
- GET /api/search/by-type?type=bugfix&format=index&limit=10
- GET /api/context/recent?project=...&limit=3
- GET /api/context/timeline?anchor=123&depth_before=10&depth_after=10
- GET /api/timeline/by-query?query=...&mode=auto&depth_before=10&depth_after=10
- GET /api/search/help - Returns available endpoints and usage docs
All endpoints return JSON. The skill parses the responses and formats them for readability.
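The endpoints above are plain HTTP GETs, so any client works. A minimal sketch of building a request URL for the observation-search endpoint (the worker's host and port are assumptions here; adjust to your local setup):

```python
from urllib.parse import urlencode

# Hypothetical base URL for the local worker service; the real port
# depends on your claude-mem configuration.
BASE = "http://localhost:3737"

def search_url(query: str, limit: int = 20, fmt: str = "index") -> str:
    """Build the URL for GET /api/search/observations."""
    params = urlencode({"query": query, "format": fmt, "limit": limit})
    return f"{BASE}/api/search/observations?{params}"

url = search_url("auth bug", limit=5)
print(url)
# → http://localhost:3737/api/search/observations?query=auth+bug&format=index&limit=5

# A real call would then be:
#   import urllib.request, json
#   data = json.loads(urllib.request.urlopen(url).read())
```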
2. Create Search Skill
File: plugin/skills/search/SKILL.md
Frontmatter:
---
name: search
description: Search claude-mem persistent memory for past sessions,
observations, bugs fixed, features implemented, decisions made, code
changes, and previous work. Use when answering questions about history,
finding past decisions, or researching previous implementations.
---
Content: Instructions for all 9 search types using curl to call HTTP
endpoints, formatting guidelines, common workflows.
3. Remove MCP Search Server
Files to modify:
- Remove plugin/.mcp.json entry for claude-mem-search
- Keep src/servers/search-server.ts for reference but don't build it
- Update scripts/build-plugin.js to skip building search-server.mjs
- Archive search-server implementation (don't delete, for reference)
4. Update Documentation
File: CLAUDE.md
Remove MCP search references, add skill search explanation:
- Token savings: ~2,250 tokens per session
- How skill auto-invokes (model-driven, not user-driven)
- Available search operations
- Examples of triggering searches
5. Add Migration Notice
File: CHANGELOG.md or release notes
Document the breaking change:
## v5.4.0 - Skill-Based Search Migration
**BREAKING CHANGE**: MCP search tools have been replaced with a
skill-based approach.
**What changed**:
- Removed 9 MCP search tools (search_observations, search_sessions, etc.)
- Added `search` skill that provides the same functionality
- Reduced session start context by ~2,250 tokens
**Migration**: None required. Claude automatically uses the search skill
when needed.
The skill provides the same search capabilities with better token
efficiency.
**Why**: Skill-based search uses progressive disclosure (~250 tokens for the skill frontmatter) instead of loading all 9 tool definitions (~2,500 tokens) on every session start.
6. Testing Checklist
- All 10 HTTP endpoints return correct data
- Skill auto-invokes when asking about past work
- Skill successfully calls endpoints via curl
- Skill formats results as readable markdown
- Worker restart updates endpoints
- Skill distributed correctly with plugin
- No MCP search server registered
- Session start context reduced by ~2,250 tokens
Token Impact
- Before: ~2,500 tokens (9 MCP tool definitions)
- After: ~250 tokens (skill frontmatter only)
- Savings: ~2,250 tokens per session start
User Experience
New behavior:
- User: "What bug did we fix last session?"
- Claude sees skill description matches → invokes search skill
- Skill loads full instructions → uses curl to call HTTP API → formats
results
- User sees formatted answer
No user action required: Migration is transparent, searches work
automatically.
Build & Deploy
npm run build # Builds skill, skips MCP server
npm run sync-marketplace # Syncs plugin with skill
npm run worker:restart # Restart worker with new HTTP endpoints
Rollout
1. Ship as breaking change in v5.4.0
2. Update plugin marketplace listing
3. All users get automatic token savings on update
4. Archive MCP search implementation for reference
+302
View File
@@ -0,0 +1,302 @@
# Agent Skills in the SDK
> Extend Claude with specialized capabilities using Agent Skills in the Claude Agent SDK
## Overview
Agent Skills extend Claude with specialized capabilities that Claude autonomously invokes when relevant. Skills are packaged as `SKILL.md` files containing instructions, descriptions, and optional supporting resources.
For comprehensive information about Skills, including benefits, architecture, and authoring guidelines, see the [Agent Skills overview](/en/docs/agents-and-tools/agent-skills/overview).
## How Skills Work with the SDK
When using the Claude Agent SDK, Skills are:
1. **Defined as filesystem artifacts**: Created as `SKILL.md` files in specific directories (`.claude/skills/`)
2. **Loaded from filesystem**: Skills are loaded from configured filesystem locations. You must specify `settingSources` (TypeScript) or `setting_sources` (Python) to load Skills from the filesystem
3. **Automatically discovered**: Once filesystem settings are loaded, Skill metadata is discovered at startup from user and project directories; full content loaded when triggered
4. **Model-invoked**: Claude autonomously chooses when to use them based on context
5. **Enabled via allowed\_tools**: Add `"Skill"` to your `allowed_tools` to enable Skills
Unlike subagents (which can be defined programmatically), Skills must be created as filesystem artifacts. The SDK does not provide a programmatic API for registering Skills.
<Note>
**Default behavior**: By default, the SDK does not load any filesystem settings. To use Skills, you must explicitly configure `settingSources: ['user', 'project']` (TypeScript) or `setting_sources=["user", "project"]` (Python) in your options.
</Note>
## Using Skills with the SDK
To use Skills with the SDK, you need to:
1. Include `"Skill"` in your `allowed_tools` configuration
2. Configure `settingSources`/`setting_sources` to load Skills from the filesystem
Once configured, Claude automatically discovers Skills from the specified directories and invokes them when relevant to the user's request.
<CodeGroup>
```python Python theme={null}
import asyncio
from claude_agent_sdk import query, ClaudeAgentOptions
async def main():
options = ClaudeAgentOptions(
cwd="/path/to/project", # Project with .claude/skills/
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill", "Read", "Write", "Bash"] # Enable Skill tool
)
async for message in query(
prompt="Help me process this PDF document",
options=options
):
print(message)
asyncio.run(main())
```
```typescript TypeScript theme={null}
import { query } from "@anthropic-ai/claude-agent-sdk";
for await (const message of query({
prompt: "Help me process this PDF document",
options: {
cwd: "/path/to/project", // Project with .claude/skills/
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill", "Read", "Write", "Bash"] // Enable Skill tool
}
})) {
console.log(message);
}
```
</CodeGroup>
## Skill Locations
Skills are loaded from filesystem directories based on your `settingSources`/`setting_sources` configuration:
* **Project Skills** (`.claude/skills/`): Shared with your team via git - loaded when `setting_sources` includes `"project"`
* **User Skills** (`~/.claude/skills/`): Personal Skills across all projects - loaded when `setting_sources` includes `"user"`
* **Plugin Skills**: Bundled with installed Claude Code plugins
## Creating Skills
Skills are defined as directories containing a `SKILL.md` file with YAML frontmatter and Markdown content. The `description` field determines when Claude invokes your Skill.
**Example directory structure**:
```bash theme={null}
.claude/skills/processing-pdfs/
└── SKILL.md
```
For complete guidance on creating Skills, including SKILL.md structure, multi-file Skills, and examples, see:
* [Agent Skills in Claude Code](https://code.claude.com/docs/skills): Complete guide with examples
* [Agent Skills Best Practices](/en/docs/agents-and-tools/agent-skills/best-practices): Authoring guidelines and naming conventions
## Tool Restrictions
<Note>
The `allowed-tools` frontmatter field in SKILL.md is only supported when using Claude Code CLI directly. **It does not apply when using Skills through the SDK**.
When using the SDK, control tool access through the main `allowedTools` option in your query configuration.
</Note>
To restrict tools for Skills in SDK applications, use the `allowedTools` option:
<Note>
Import statements from the first example are assumed in the following code snippets.
</Note>
<CodeGroup>
```python Python theme={null}
options = ClaudeAgentOptions(
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill", "Read", "Grep", "Glob"] # Restricted toolset
)
async for message in query(
prompt="Analyze the codebase structure",
options=options
):
print(message)
```
```typescript TypeScript theme={null}
// Skills can only use Read, Grep, and Glob tools
for await (const message of query({
prompt: "Analyze the codebase structure",
options: {
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill", "Read", "Grep", "Glob"] // Restricted toolset
}
})) {
console.log(message);
}
```
</CodeGroup>
## Discovering Available Skills
To see which Skills are available in your SDK application, simply ask Claude:
<CodeGroup>
```python Python theme={null}
options = ClaudeAgentOptions(
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill"]
)
async for message in query(
prompt="What Skills are available?",
options=options
):
print(message)
```
```typescript TypeScript theme={null}
for await (const message of query({
prompt: "What Skills are available?",
options: {
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill"]
}
})) {
console.log(message);
}
```
</CodeGroup>
Claude will list the available Skills based on your current working directory and installed plugins.
## Testing Skills
Test Skills by asking questions that match their descriptions:
<CodeGroup>
```python Python theme={null}
options = ClaudeAgentOptions(
cwd="/path/to/project",
setting_sources=["user", "project"], # Load Skills from filesystem
allowed_tools=["Skill", "Read", "Bash"]
)
async for message in query(
prompt="Extract text from invoice.pdf",
options=options
):
print(message)
```
```typescript TypeScript theme={null}
for await (const message of query({
prompt: "Extract text from invoice.pdf",
options: {
cwd: "/path/to/project",
settingSources: ["user", "project"], // Load Skills from filesystem
allowedTools: ["Skill", "Read", "Bash"]
}
})) {
console.log(message);
}
```
</CodeGroup>
Claude automatically invokes the relevant Skill if the description matches your request.
## Troubleshooting
### Skills Not Found
**Check settingSources configuration**: Skills are only loaded when you explicitly configure `settingSources`/`setting_sources`. This is the most common issue:
<CodeGroup>
```python Python theme={null}
# Wrong - Skills won't be loaded
options = ClaudeAgentOptions(
allowed_tools=["Skill"]
)
# Correct - Skills will be loaded
options = ClaudeAgentOptions(
setting_sources=["user", "project"], # Required to load Skills
allowed_tools=["Skill"]
)
```
```typescript TypeScript theme={null}
// Wrong - Skills won't be loaded
const options = {
allowedTools: ["Skill"]
};
// Correct - Skills will be loaded
const options = {
settingSources: ["user", "project"], // Required to load Skills
allowedTools: ["Skill"]
};
```
</CodeGroup>
For more details on `settingSources`/`setting_sources`, see the [TypeScript SDK reference](/en/docs/agent-sdk/typescript#settingsource) or [Python SDK reference](/en/docs/agent-sdk/python#settingsource).
**Check working directory**: The SDK loads Skills relative to the `cwd` option. Ensure it points to a directory containing `.claude/skills/`:
<CodeGroup>
```python Python theme={null}
# Ensure your cwd points to the directory containing .claude/skills/
options = ClaudeAgentOptions(
cwd="/path/to/project", # Must contain .claude/skills/
setting_sources=["user", "project"], # Required to load Skills
allowed_tools=["Skill"]
)
```
```typescript TypeScript theme={null}
// Ensure your cwd points to the directory containing .claude/skills/
const options = {
cwd: "/path/to/project", // Must contain .claude/skills/
settingSources: ["user", "project"], // Required to load Skills
allowedTools: ["Skill"]
};
```
</CodeGroup>
See the "Using Skills with the SDK" section above for the complete pattern.
**Verify filesystem location**:
```bash theme={null}
# Check project Skills
ls .claude/skills/*/SKILL.md
# Check personal Skills
ls ~/.claude/skills/*/SKILL.md
```
### Skill Not Being Used
**Check the Skill tool is enabled**: Confirm `"Skill"` is in your `allowedTools`.
**Check the description**: Ensure it's specific and includes relevant keywords. See [Agent Skills Best Practices](/en/docs/agents-and-tools/agent-skills/best-practices#writing-effective-descriptions) for guidance on writing effective descriptions.
### Additional Troubleshooting
For general Skills troubleshooting (YAML syntax, debugging, etc.), see the [Claude Code Skills troubleshooting section](https://code.claude.com/docs/skills#troubleshooting).
## Related Documentation
### Skills Guides
* [Agent Skills in Claude Code](https://code.claude.com/docs/skills): Complete Skills guide with creation, examples, and troubleshooting
* [Agent Skills Overview](/en/docs/agents-and-tools/agent-skills/overview): Conceptual overview, benefits, and architecture
* [Agent Skills Best Practices](/en/docs/agents-and-tools/agent-skills/best-practices): Authoring guidelines for effective Skills
* [Agent Skills Cookbook](https://github.com/anthropics/claude-cookbooks/tree/main/skills): Example Skills and templates
### SDK Resources
* [Subagents in the SDK](/en/docs/agent-sdk/subagents): Similar filesystem-based agents with programmatic options
* [Slash Commands in the SDK](/en/docs/agent-sdk/slash-commands): User-invoked commands
* [SDK Overview](/en/docs/agent-sdk/overview): General SDK concepts
* [TypeScript SDK Reference](/en/docs/agent-sdk/typescript): Complete API documentation
* [Python SDK Reference](/en/docs/agent-sdk/python): Complete API documentation
+607
View File
@@ -0,0 +1,607 @@
# Agent Skills
> Create, manage, and share Skills to extend Claude's capabilities in Claude Code.
This guide shows you how to create, use, and manage Agent Skills in Claude Code. Skills are modular capabilities that extend Claude's functionality through organized folders containing instructions, scripts, and resources.
## Prerequisites
* Claude Code version 1.0 or later
* Basic familiarity with [Claude Code](/en/quickstart)
## What are Agent Skills?
Agent Skills package expertise into discoverable capabilities. Each Skill consists of a `SKILL.md` file with instructions that Claude reads when relevant, plus optional supporting files like scripts and templates.
**How Skills are invoked**: Skills are **model-invoked**—Claude autonomously decides when to use them based on your request and the Skill's description. This is different from slash commands, which are **user-invoked** (you explicitly type `/command` to trigger them).
**Benefits**:
* Extend Claude's capabilities for your specific workflows
* Share expertise across your team via git
* Reduce repetitive prompting
* Compose multiple Skills for complex tasks
Learn more in the [Agent Skills overview](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview).
<Note>
For a deep dive into the architecture and real-world applications of Agent Skills, read our engineering blog: [Equipping agents for the real world with Agent Skills](https://www.anthropic.com/engineering/equipping-agents-for-the-real-world-with-agent-skills).
</Note>
## Create a Skill
Skills are stored as directories containing a `SKILL.md` file.
### Personal Skills
Personal Skills are available across all your projects. Store them in `~/.claude/skills/`:
```bash theme={null}
mkdir -p ~/.claude/skills/my-skill-name
```
**Use personal Skills for**:
* Your individual workflows and preferences
* Experimental Skills you're developing
* Personal productivity tools
### Project Skills
Project Skills are shared with your team. Store them in `.claude/skills/` within your project:
```bash theme={null}
mkdir -p .claude/skills/my-skill-name
```
**Use project Skills for**:
* Team workflows and conventions
* Project-specific expertise
* Shared utilities and scripts
Project Skills are checked into git and automatically available to team members.
### Plugin Skills
Skills can also come from [Claude Code plugins](/en/plugins). Plugins may bundle Skills that are automatically available when the plugin is installed. These Skills work the same way as personal and project Skills.
## Write SKILL.md
Create a `SKILL.md` file with YAML frontmatter and Markdown content:
```yaml theme={null}
---
name: your-skill-name
description: Brief description of what this Skill does and when to use it
---
# Your Skill Name
## Instructions
Provide clear, step-by-step guidance for Claude.
## Examples
Show concrete examples of using this Skill.
```
**Field requirements**:
* `name`: Must use lowercase letters, numbers, and hyphens only (max 64 characters)
* `description`: Brief description of what the Skill does and when to use it (max 1024 characters)
The `description` field is critical for Claude to discover when to use your Skill. It should include both what the Skill does and when Claude should use it.
See the [best practices guide](https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices) for complete authoring guidance including validation rules.
## Add supporting files
Create additional files alongside SKILL.md:
```
my-skill/
├── SKILL.md (required)
├── reference.md (optional documentation)
├── examples.md (optional examples)
├── scripts/
│ └── helper.py (optional utility)
└── templates/
└── template.txt (optional template)
```
Reference these files from SKILL.md:
````markdown theme={null}
For advanced usage, see [reference.md](reference.md).
Run the helper script:
```bash
python scripts/helper.py input.txt
```
````
Claude reads these files only when needed, using progressive disclosure to manage context efficiently.
## Restrict tool access with allowed-tools
Use the `allowed-tools` frontmatter field to limit which tools Claude can use when a Skill is active:
```yaml theme={null}
---
name: safe-file-reader
description: Read files without making changes. Use when you need read-only file access.
allowed-tools: Read, Grep, Glob
---
# Safe File Reader
This Skill provides read-only file access.
## Instructions
1. Use Read to view file contents
2. Use Grep to search within files
3. Use Glob to find files by pattern
```
When this Skill is active, Claude can only use the specified tools (Read, Grep, Glob) without needing to ask for permission. This is useful for:
* Read-only Skills that shouldn't modify files
* Skills with limited scope (e.g., only data analysis, no file writing)
* Security-sensitive workflows where you want to restrict capabilities
If `allowed-tools` is not specified, Claude will ask for permission to use tools as normal, following the standard permission model.
<Note>
`allowed-tools` is only supported for Skills in Claude Code.
</Note>
## View available Skills
Skills are automatically discovered by Claude from three sources:
* Personal Skills: `~/.claude/skills/`
* Project Skills: `.claude/skills/`
* Plugin Skills: bundled with installed plugins
**To view all available Skills**, ask Claude directly:
```
What Skills are available?
```
or
```
List all available Skills
```
This will show all Skills from all sources, including plugin Skills.
**To inspect a specific Skill**, you can also check the filesystem:
```bash theme={null}
# List personal Skills
ls ~/.claude/skills/
# List project Skills (if in a project directory)
ls .claude/skills/
# View a specific Skill's content
cat ~/.claude/skills/my-skill/SKILL.md
```
## Test a Skill
After creating a Skill, test it by asking questions that match your description.
**Example**: If your description mentions "PDF files":
```
Can you help me extract text from this PDF?
```
Claude autonomously decides to use your Skill if it matches the request—you don't need to explicitly invoke it. The Skill activates automatically based on the context of your question.
## Debug a Skill
If Claude doesn't use your Skill, check these common issues:
### Make description specific
**Too vague**:
```yaml theme={null}
description: Helps with documents
```
**Specific**:
```yaml theme={null}
description: Extract text and tables from PDF files, fill forms, merge documents. Use when working with PDF files or when the user mentions PDFs, forms, or document extraction.
```
Include both what the Skill does and when to use it in the description.
### Verify file path
**Personal Skills**: `~/.claude/skills/skill-name/SKILL.md`
**Project Skills**: `.claude/skills/skill-name/SKILL.md`
Check the file exists:
```bash theme={null}
# Personal
ls ~/.claude/skills/my-skill/SKILL.md
# Project
ls .claude/skills/my-skill/SKILL.md
```
### Check YAML syntax
Invalid YAML prevents the Skill from loading. Verify the frontmatter:
```bash theme={null}
cat SKILL.md | head -n 10
```
Ensure:
* Opening `---` on line 1
* Closing `---` before Markdown content
* Valid YAML syntax (no tabs, correct indentation)
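The checks above can be automated with a short script. This is a sketch using only the standard library, not an official validator; the exact rules Claude Code applies may differ:

```python
import re

def check_frontmatter(text: str) -> list[str]:
    """Return a list of problems found in a SKILL.md's YAML frontmatter."""
    problems = []
    lines = text.splitlines()
    if not lines or lines[0] != "---":
        problems.append("missing opening --- on line 1")
        return problems
    try:
        end = lines[1:].index("---") + 1  # closing delimiter
    except ValueError:
        problems.append("missing closing ---")
        return problems
    body = "\n".join(lines[1:end])
    if "\t" in body:
        problems.append("tabs in frontmatter (use spaces)")
    if not re.search(r"^name:\s*\S", body, re.M):
        problems.append("missing name field")
    if not re.search(r"^description:\s*\S", body, re.M):
        problems.append("missing description field")
    return problems

good = "---\nname: my-skill\ndescription: Does a thing.\n---\n# My Skill\n"
print(check_frontmatter(good))              # → []
print(check_frontmatter("# No frontmatter"))
```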
### View errors
Run Claude Code with debug mode to see Skill loading errors:
```bash theme={null}
claude --debug
```
## Share Skills with your team
**Recommended approach**: Distribute Skills through [plugins](/en/plugins).
To share Skills via plugin:
1. Create a plugin with Skills in the `skills/` directory
2. Add the plugin to a marketplace
3. Team members install the plugin
For complete instructions, see [Add Skills to your plugin](/en/plugins#add-skills-to-your-plugin).
You can also share Skills directly through project repositories:
### Step 1: Add Skill to your project
Create a project Skill:
```bash theme={null}
mkdir -p .claude/skills/team-skill
# Create SKILL.md
```
### Step 2: Commit to git
```bash theme={null}
git add .claude/skills/
git commit -m "Add team Skill for PDF processing"
git push
```
### Step 3: Team members get Skills automatically
When team members pull the latest changes, Skills are immediately available:
```bash theme={null}
git pull
claude # Skills are now available
```
## Update a Skill
Edit SKILL.md directly:
```bash theme={null}
# Personal Skill
code ~/.claude/skills/my-skill/SKILL.md
# Project Skill
code .claude/skills/my-skill/SKILL.md
```
Changes take effect the next time you start Claude Code. If Claude Code is already running, restart it to load the updates.
## Remove a Skill
Delete the Skill directory:
```bash theme={null}
# Personal
rm -rf ~/.claude/skills/my-skill
# Project
rm -rf .claude/skills/my-skill
git commit -m "Remove unused Skill"
```
## Best practices
### Keep Skills focused
One Skill should address one capability:
**Focused**:
* "PDF form filling"
* "Excel data analysis"
* "Git commit messages"
**Too broad**:
* "Document processing" (split into separate Skills)
* "Data tools" (split by data type or operation)
### Write clear descriptions
Help Claude discover when to use Skills by including specific triggers in your description:
**Clear**:
```yaml theme={null}
description: Analyze Excel spreadsheets, create pivot tables, and generate charts. Use when working with Excel files, spreadsheets, or analyzing tabular data in .xlsx format.
```
**Vague**:
```yaml theme={null}
description: For files
```
### Test with your team
Have teammates use Skills and provide feedback:
* Does the Skill activate when expected?
* Are the instructions clear?
* Are there missing examples or edge cases?
### Document Skill versions
You can document Skill versions in your SKILL.md content to track changes over time. Add a version history section:
```markdown theme={null}
# My Skill
## Version History
- v2.0.0 (2025-10-01): Breaking changes to API
- v1.1.0 (2025-09-15): Added new features
- v1.0.0 (2025-09-01): Initial release
```
This helps team members understand what changed between versions.
## Troubleshooting
### Claude doesn't use my Skill
**Symptom**: You ask a relevant question but Claude doesn't use your Skill.
**Check**: Is the description specific enough?
Vague descriptions make discovery difficult. Include both what the Skill does and when to use it, with key terms users would mention.
**Too generic**:
```yaml theme={null}
description: Helps with data
```
**Specific**:
```yaml theme={null}
description: Analyze Excel spreadsheets, generate pivot tables, create charts. Use when working with Excel files, spreadsheets, or .xlsx files.
```
**Check**: Is the YAML valid?
Run validation to check for syntax errors:
```bash theme={null}
# View frontmatter
cat .claude/skills/my-skill/SKILL.md | head -n 15
# Check for common issues
# - Missing opening or closing ---
# - Tabs instead of spaces
# - Unquoted strings with special characters
```
**Check**: Is the Skill in the correct location?
```bash theme={null}
# Personal Skills
ls ~/.claude/skills/*/SKILL.md
# Project Skills
ls .claude/skills/*/SKILL.md
```
### Skill has errors
**Symptom**: The Skill loads but doesn't work correctly.
**Check**: Are dependencies available?
Claude will automatically install required dependencies (or ask for permission to install them) when it needs them.
**Check**: Do scripts have execute permissions?
```bash theme={null}
chmod +x .claude/skills/my-skill/scripts/*.py
```
**Check**: Are file paths correct?
Use forward slashes (Unix style) in all paths:
**Correct**: `scripts/helper.py`
**Wrong**: `scripts\helper.py` (Windows style)
### Multiple Skills conflict
**Symptom**: Claude uses the wrong Skill or seems confused between similar Skills.
**Be specific in descriptions**: Help Claude choose the right Skill by using distinct trigger terms in your descriptions.
Instead of:
```yaml theme={null}
# Skill 1
description: For data analysis
# Skill 2
description: For analyzing data
```
Use:
```yaml theme={null}
# Skill 1
description: Analyze sales data in Excel files and CRM exports. Use for sales reports, pipeline analysis, and revenue tracking.
# Skill 2
description: Analyze log files and system metrics data. Use for performance monitoring, debugging, and system diagnostics.
```
## Examples
### Simple Skill (single file)
```
commit-helper/
└── SKILL.md
```
```yaml theme={null}
---
name: generating-commit-messages
description: Generates clear commit messages from git diffs. Use when writing commit messages or reviewing staged changes.
---
# Generating Commit Messages
## Instructions
1. Run `git diff --staged` to see changes
2. I'll suggest a commit message with:
- Summary under 50 characters
- Detailed description
- Affected components
## Best practices
- Use present tense
- Explain what and why, not how
```
### Skill with tool permissions
```
code-reviewer/
└── SKILL.md
```
```yaml theme={null}
---
name: code-reviewer
description: Review code for best practices and potential issues. Use when reviewing code, checking PRs, or analyzing code quality.
allowed-tools: Read, Grep, Glob
---
# Code Reviewer
## Review checklist
1. Code organization and structure
2. Error handling
3. Performance considerations
4. Security concerns
5. Test coverage
## Instructions
1. Read the target files using Read tool
2. Search for patterns using Grep
3. Find related files using Glob
4. Provide detailed feedback on code quality
```
### Multi-file Skill
```
pdf-processing/
├── SKILL.md
├── FORMS.md
├── REFERENCE.md
└── scripts/
├── fill_form.py
└── validate.py
```
**SKILL.md**:
````yaml theme={null}
---
name: pdf-processing
description: Extract text, fill forms, merge PDFs. Use when working with PDF files, forms, or document extraction. Requires pypdf and pdfplumber packages.
---
# PDF Processing
## Quick start
Extract text:
```python
import pdfplumber
with pdfplumber.open("doc.pdf") as pdf:
text = pdf.pages[0].extract_text()
```
For form filling, see [FORMS.md](FORMS.md).
For detailed API reference, see [REFERENCE.md](REFERENCE.md).
## Requirements
Packages must be installed in your environment:
```bash
pip install pypdf pdfplumber
```
````
<Note>
List required packages in the description. Packages must be installed in your environment before Claude can use them.
</Note>
Claude loads additional files only when needed.
## Next steps
<CardGroup cols={2}>
<Card title="Authoring best practices" icon="lightbulb" href="https://docs.claude.com/en/docs/agents-and-tools/agent-skills/best-practices">
Write Skills that Claude can use effectively
</Card>
<Card title="Agent Skills overview" icon="book" href="https://docs.claude.com/en/docs/agents-and-tools/agent-skills/overview">
Learn how Skills work across Claude products
</Card>
<Card title="Use Skills in the Agent SDK" icon="cube" href="https://docs.claude.com/en/docs/agent-sdk/skills">
Use Skills programmatically with TypeScript and Python
</Card>
<Card title="Get started with Agent Skills" icon="rocket" href="https://docs.claude.com/en/docs/agents-and-tools/agent-skills/quickstart">
Create your first Skill
</Card>
</CardGroup>
+33 -19
View File
@@ -54,10 +54,21 @@ When Claude finishes responding (triggering the Stop hook), a summary is automat
When you start a new Claude Code session, the SessionStart hook:
1. Queries the database for recent sessions in your project
2. Retrieves the last 10 session summaries
3. Formats them with three-tier verbosity (most recent = most detail)
4. Injects them into Claude's initial context
1. Queries the database for recent observations in your project (default: 50)
2. Retrieves recent session summaries for context
3. Displays observations in a chronological timeline with session markers
4. Shows full summary details (Investigated, Learned, Completed, Next Steps) **only if the summary was generated after the last observation**
5. Injects formatted context into Claude's initial context
**Summary Display Logic:**
The most recent summary's full details appear at the end of the context display **only when** the summary was generated after the most recent observation. This ensures you see summary details when they represent the latest state of your project, but not when new observations have been captured since the last summary.
For example:
- ✅ **Shows summary**: Last observation at 2:00 PM, summary generated at 2:05 PM → Summary details appear
- ❌ **Hides summary**: Summary generated at 2:00 PM, new observation at 2:05 PM → Summary details hidden (outdated)
This prevents showing stale summaries when new work has been captured but not yet summarized.
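The display rule above reduces to a single timestamp comparison. A sketch (claude-mem's actual field names and types may differ):

```python
from datetime import datetime

def should_show_summary(summary_at: datetime, last_observation_at: datetime) -> bool:
    """Show full summary details only if the summary is newer than the last observation."""
    return summary_at > last_observation_at

two_pm = datetime(2025, 11, 9, 14, 0)
two_oh_five = datetime(2025, 11, 9, 14, 5)

print(should_show_summary(two_oh_five, two_pm))  # summary after last observation → True
print(should_show_summary(two_pm, two_oh_five))  # new observation after summary → False
```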
This means Claude "remembers" what happened in previous sessions!
@@ -138,26 +149,29 @@ FROM observations
WHERE session_id = 'YOUR_SESSION_ID';
```
## Understanding Verbosity Levels
## Understanding Progressive Disclosure
Context injection uses three-tier verbosity for efficient token usage:
Context injection uses progressive disclosure for efficient token usage:
### Tier 1 (Most Recent Session)
- Full summary with all details
- Request, investigated, learned, completed, next_steps, notes
- ~500-1000 tokens
### Layer 1: Index Display (Session Start)
- Shows observation titles with token cost estimates
- Displays session markers in chronological timeline
- Groups observations by file for visual clarity
- Shows full summary details **only if** generated after last observation
- Token cost: ~50-200 tokens for index view
### Tier 2 (Sessions 2-5)
- Medium detail
- Request, learned, completed
- ~200-400 tokens
### Layer 2: On-Demand Details (Search Skill)
- Fetch full observation narratives when needed
- Search by concept, file, type, or keyword
- Timeline context around specific observations
- Token cost: ~100-500 tokens per observation fetched
### Tier 3 (Sessions 6-10)
- Brief summary
- Request and completed only
- ~100-200 tokens
### Layer 3: Perfect Recall (Code Access)
- Read source files directly when needed
- Access original transcripts and raw data
- Full context available on-demand
This ensures you get maximum detail for recent work while still having context from older sessions.
This ensures efficient token usage while maintaining access to complete history when needed.
## Multi-Prompt Sessions & `/clear` Behavior
+1 -6
View File
@@ -1,8 +1,3 @@
{
"mcpServers": {
"claude-mem-search": {
"type": "stdio",
"command": "${CLAUDE_PLUGIN_ROOT}/scripts/search-server.mjs"
}
}
"mcpServers": {}
}
+11 -11
View File
@@ -1,7 +1,7 @@
#!/usr/bin/env node
import x from"path";import{homedir as se}from"os";import{existsSync as te,readFileSync as re}from"fs";import{stdin as F}from"process";import ee from"better-sqlite3";import{join as b,dirname as J,basename as Te}from"path";import{homedir as j}from"os";import{existsSync as Se,mkdirSync as Q}from"fs";import{fileURLToPath as z}from"url";function Z(){return typeof __dirname<"u"?__dirname:J(z(import.meta.url))}var fe=Z(),N=process.env.CLAUDE_MEM_DATA_DIR||b(j(),".claude-mem"),$=process.env.CLAUDE_CONFIG_DIR||b(j(),".claude"),Ne=b(N,"archives"),Oe=b(N,"logs"),Ie=b(N,"trash"),Le=b(N,"backups"),ye=b(N,"settings.json"),P=b(N,"claude-mem.db"),Ae=b(N,"vector-db"),ve=b($,"settings.json"),Ce=b($,"commands"),De=b($,"CLAUDE.md");function H(c){Q(c,{recursive:!0})}var M=(i=>(i[i.DEBUG=0]="DEBUG",i[i.INFO=1]="INFO",i[i.WARN=2]="WARN",i[i.ERROR=3]="ERROR",i[i.SILENT=4]="SILENT",i))(M||{}),w=class{level;useColor;constructor(){let e=process.env.CLAUDE_MEM_LOG_LEVEL?.toUpperCase()||"INFO";this.level=M[e]??1,this.useColor=process.stdout.isTTY??!1}correlationId(e,s){return`obs-${e}-${s}`}sessionId(e){return`session-${e}`}formatData(e){if(e==null)return"";if(typeof e=="string")return e;if(typeof e=="number"||typeof e=="boolean")return e.toString();if(typeof e=="object"){if(e instanceof Error)return this.level===0?`${e.message}
${e.stack}`:e.message;if(Array.isArray(e))return`[${e.length} items]`;let s=Object.keys(e);return s.length===0?"{}":s.length<=3?JSON.stringify(e):`{${s.length} keys: ${s.slice(0,3).join(", ")}...}`}return String(e)}formatTool(e,s){if(!s)return e;try{let t=typeof s=="string"?JSON.parse(s):s;if(e==="Bash"&&t.command){let r=t.command.length>50?t.command.substring(0,50)+"...":t.command;return`${e}(${r})`}if(e==="Read"&&t.file_path){let r=t.file_path.split("/").pop()||t.file_path;return`${e}(${r})`}if(e==="Edit"&&t.file_path){let r=t.file_path.split("/").pop()||t.file_path;return`${e}(${r})`}if(e==="Write"&&t.file_path){let r=t.file_path.split("/").pop()||t.file_path;return`${e}(${r})`}return e}catch{return e}}log(e,s,t,r,i){if(e<this.level)return;let d=new Date().toISOString().replace("T"," ").substring(0,23),p=M[e].padEnd(5),_=s.padEnd(6),E="";r?.correlationId?E=`[${r.correlationId}] `:r?.sessionId&&(E=`[session-${r.sessionId}] `);let n="";i!=null&&(this.level===0&&typeof i=="object"?n=`
`+JSON.stringify(i,null,2):n=" "+this.formatData(i));let O="";if(r){let{sessionId:S,sdkSessionId:R,correlationId:l,...a}=r;Object.keys(a).length>0&&(O=` {${Object.entries(a).map(([T,m])=>`${T}=${m}`).join(", ")}}`)}let L=`[${d}] [${p}] [${_}] ${E}${t}${O}${n}`;e===3?console.error(L):console.log(L)}debug(e,s,t,r){this.log(0,e,s,t,r)}info(e,s,t,r){this.log(1,e,s,t,r)}warn(e,s,t,r){this.log(2,e,s,t,r)}error(e,s,t,r){this.log(3,e,s,t,r)}dataIn(e,s,t,r){this.info(e,`\u2192 ${s}`,t,r)}dataOut(e,s,t,r){this.info(e,`\u2190 ${s}`,t,r)}success(e,s,t,r){this.info(e,`\u2713 ${s}`,t,r)}failure(e,s,t,r){this.error(e,`\u2717 ${s}`,t,r)}timing(e,s,t,r){this.info(e,`\u23F1 ${s}`,r,{duration:`${t}ms`})}},G=new w;var C=class{db;constructor(){H(N),this.db=new ee(P),this.db.pragma("journal_mode = WAL"),this.db.pragma("synchronous = NORMAL"),this.db.pragma("foreign_keys = ON"),this.initializeSchema(),this.ensureWorkerPortColumn(),this.ensurePromptTrackingColumns(),this.removeSessionSummariesUniqueConstraint(),this.addObservationHierarchicalFields(),this.makeObservationsTextNullable(),this.createUserPromptsTable()}initializeSchema(){try{this.db.exec(`
import x from"path";import{homedir as re}from"os";import{existsSync as ne,readFileSync as ie}from"fs";import{stdin as X}from"process";import te from"better-sqlite3";import{join as b,dirname as z,basename as he}from"path";import{homedir as H}from"os";import{existsSync as fe,mkdirSync as Z}from"fs";import{fileURLToPath as ee}from"url";function se(){return typeof __dirname<"u"?__dirname:z(ee(import.meta.url))}var Oe=se(),N=process.env.CLAUDE_MEM_DATA_DIR||b(H(),".claude-mem"),M=process.env.CLAUDE_CONFIG_DIR||b(H(),".claude"),Ie=b(N,"archives"),Le=b(N,"logs"),ye=b(N,"trash"),Ae=b(N,"backups"),ve=b(N,"settings.json"),G=b(N,"claude-mem.db"),Ce=b(N,"vector-db"),De=b(M,"settings.json"),xe=b(M,"commands"),ke=b(M,"CLAUDE.md");function W(c){Z(c,{recursive:!0})}var w=(i=>(i[i.DEBUG=0]="DEBUG",i[i.INFO=1]="INFO",i[i.WARN=2]="WARN",i[i.ERROR=3]="ERROR",i[i.SILENT=4]="SILENT",i))(w||{}),F=class{level;useColor;constructor(){let e=process.env.CLAUDE_MEM_LOG_LEVEL?.toUpperCase()||"INFO";this.level=w[e]??1,this.useColor=process.stdout.isTTY??!1}correlationId(e,s){return`obs-${e}-${s}`}sessionId(e){return`session-${e}`}formatData(e){if(e==null)return"";if(typeof e=="string")return e;if(typeof e=="number"||typeof e=="boolean")return e.toString();if(typeof e=="object"){if(e instanceof Error)return this.level===0?`${e.message}
${e.stack}`:e.message;if(Array.isArray(e))return`[${e.length} items]`;let s=Object.keys(e);return s.length===0?"{}":s.length<=3?JSON.stringify(e):`{${s.length} keys: ${s.slice(0,3).join(", ")}...}`}return String(e)}formatTool(e,s){if(!s)return e;try{let t=typeof s=="string"?JSON.parse(s):s;if(e==="Bash"&&t.command){let r=t.command.length>50?t.command.substring(0,50)+"...":t.command;return`${e}(${r})`}if(e==="Read"&&t.file_path){let r=t.file_path.split("/").pop()||t.file_path;return`${e}(${r})`}if(e==="Edit"&&t.file_path){let r=t.file_path.split("/").pop()||t.file_path;return`${e}(${r})`}if(e==="Write"&&t.file_path){let r=t.file_path.split("/").pop()||t.file_path;return`${e}(${r})`}return e}catch{return e}}log(e,s,t,r,i){if(e<this.level)return;let d=new Date().toISOString().replace("T"," ").substring(0,23),p=w[e].padEnd(5),_=s.padEnd(6),E="";r?.correlationId?E=`[${r.correlationId}] `:r?.sessionId&&(E=`[session-${r.sessionId}] `);let n="";i!=null&&(this.level===0&&typeof i=="object"?n=`
`+JSON.stringify(i,null,2):n=" "+this.formatData(i));let O="";if(r){let{sessionId:S,sdkSessionId:R,correlationId:m,...a}=r;Object.keys(a).length>0&&(O=` {${Object.entries(a).map(([B,u])=>`${B}=${u}`).join(", ")}}`)}let L=`[${d}] [${p}] [${_}] ${E}${t}${O}${n}`;e===3?console.error(L):console.log(L)}debug(e,s,t,r){this.log(0,e,s,t,r)}info(e,s,t,r){this.log(1,e,s,t,r)}warn(e,s,t,r){this.log(2,e,s,t,r)}error(e,s,t,r){this.log(3,e,s,t,r)}dataIn(e,s,t,r){this.info(e,`\u2192 ${s}`,t,r)}dataOut(e,s,t,r){this.info(e,`\u2190 ${s}`,t,r)}success(e,s,t,r){this.info(e,`\u2713 ${s}`,t,r)}failure(e,s,t,r){this.error(e,`\u2717 ${s}`,t,r)}timing(e,s,t,r){this.info(e,`\u23F1 ${s}`,r,{duration:`${t}ms`})}},Y=new F;var C=class{db;constructor(){W(N),this.db=new te(G),this.db.pragma("journal_mode = WAL"),this.db.pragma("synchronous = NORMAL"),this.db.pragma("foreign_keys = ON"),this.initializeSchema(),this.ensureWorkerPortColumn(),this.ensurePromptTrackingColumns(),this.removeSessionSummariesUniqueConstraint(),this.addObservationHierarchicalFields(),this.makeObservationsTextNullable(),this.createUserPromptsTable()}initializeSchema(){try{this.db.exec(`
CREATE TABLE IF NOT EXISTS schema_versions (
id INTEGER PRIMARY KEY,
version INTEGER UNIQUE NOT NULL,
@@ -299,7 +299,7 @@ ${e.stack}`:e.message;if(Array.isArray(e))return`[${e.length} items]`;let s=Obje
UPDATE sdk_sessions
SET sdk_session_id = ?
WHERE id = ? AND sdk_session_id IS NULL
`).run(s,e).changes===0?(G.debug("DB","sdk_session_id already set, skipping update",{sessionId:e,sdkSessionId:s}),!1):!0}setWorkerPort(e,s){this.db.prepare(`
`).run(s,e).changes===0?(Y.debug("DB","sdk_session_id already set, skipping update",{sessionId:e,sdkSessionId:s}),!1):!0}setWorkerPort(e,s){this.db.prepare(`
UPDATE sdk_sessions
SET worker_port = ?
WHERE id = ?
@@ -369,7 +369,7 @@ ${e.stack}`:e.message;if(Array.isArray(e))return`[${e.length} items]`;let s=Obje
WHERE id >= ? ${d}
ORDER BY id ASC
LIMIT ?
`;try{let l=this.db.prepare(S).all(e,...p,t+1),a=this.db.prepare(R).all(e,...p,r+1);if(l.length===0&&a.length===0)return{observations:[],sessions:[],prompts:[]};_=l.length>0?l[l.length-1].created_at_epoch:s,E=a.length>0?a[a.length-1].created_at_epoch:s}catch(l){return console.error("[SessionStore] Error getting boundary observations:",l.message),{observations:[],sessions:[],prompts:[]}}}else{let S=`
`;try{let m=this.db.prepare(S).all(e,...p,t+1),a=this.db.prepare(R).all(e,...p,r+1);if(m.length===0&&a.length===0)return{observations:[],sessions:[],prompts:[]};_=m.length>0?m[m.length-1].created_at_epoch:s,E=a.length>0?a[a.length-1].created_at_epoch:s}catch(m){return console.error("[SessionStore] Error getting boundary observations:",m.message),{observations:[],sessions:[],prompts:[]}}}else{let S=`
SELECT created_at_epoch
FROM observations
WHERE created_at_epoch <= ? ${d}
@@ -381,7 +381,7 @@ ${e.stack}`:e.message;if(Array.isArray(e))return`[${e.length} items]`;let s=Obje
WHERE created_at_epoch >= ? ${d}
ORDER BY created_at_epoch ASC
LIMIT ?
`;try{let l=this.db.prepare(S).all(s,...p,t),a=this.db.prepare(R).all(s,...p,r+1);if(l.length===0&&a.length===0)return{observations:[],sessions:[],prompts:[]};_=l.length>0?l[l.length-1].created_at_epoch:s,E=a.length>0?a[a.length-1].created_at_epoch:s}catch(l){return console.error("[SessionStore] Error getting boundary timestamps:",l.message),{observations:[],sessions:[],prompts:[]}}}let n=`
`;try{let m=this.db.prepare(S).all(s,...p,t),a=this.db.prepare(R).all(s,...p,r+1);if(m.length===0&&a.length===0)return{observations:[],sessions:[],prompts:[]};_=m.length>0?m[m.length-1].created_at_epoch:s,E=a.length>0?a[a.length-1].created_at_epoch:s}catch(m){return console.error("[SessionStore] Error getting boundary timestamps:",m.message),{observations:[],sessions:[],prompts:[]}}}let n=`
SELECT *
FROM observations
WHERE created_at_epoch >= ? AND created_at_epoch <= ? ${d}
@@ -397,7 +397,7 @@ ${e.stack}`:e.message;if(Array.isArray(e))return`[${e.length} items]`;let s=Obje
JOIN sdk_sessions s ON up.claude_session_id = s.claude_session_id
WHERE up.created_at_epoch >= ? AND up.created_at_epoch <= ? ${d.replace("project","s.project")}
ORDER BY up.created_at_epoch ASC
`;try{let S=this.db.prepare(n).all(_,E,...p),R=this.db.prepare(O).all(_,E,...p),l=this.db.prepare(L).all(_,E,...p);return{observations:S,sessions:R.map(a=>({id:a.id,sdk_session_id:a.sdk_session_id,project:a.project,request:a.request,completed:a.completed,next_steps:a.next_steps,created_at:a.created_at,created_at_epoch:a.created_at_epoch})),prompts:l.map(a=>({id:a.id,claude_session_id:a.claude_session_id,project:a.project,prompt:a.prompt_text,created_at:a.created_at,created_at_epoch:a.created_at_epoch}))}}catch(S){return console.error("[SessionStore] Error querying timeline records:",S.message),{observations:[],sessions:[],prompts:[]}}}close(){this.db.close()}};function ne(){try{let c=x.join(se(),".claude","settings.json");if(te(c)){let e=JSON.parse(re(c,"utf-8"));if(e.env?.CLAUDE_MEM_CONTEXT_OBSERVATIONS){let s=parseInt(e.env.CLAUDE_MEM_CONTEXT_OBSERVATIONS,10);if(!isNaN(s)&&s>0)return s}}}catch{}return parseInt(process.env.CLAUDE_MEM_CONTEXT_OBSERVATIONS||"50",10)}var ie=ne(),W=10,oe=4,ae=1,o={reset:"\x1B[0m",bright:"\x1B[1m",dim:"\x1B[2m",cyan:"\x1B[36m",green:"\x1B[32m",yellow:"\x1B[33m",blue:"\x1B[34m",magenta:"\x1B[35m",gray:"\x1B[90m",red:"\x1B[31m"};function de(c){if(!c)return[];try{let e=JSON.parse(c);return Array.isArray(e)?e:[]}catch{return[]}}function ce(c){return new Date(c).toLocaleString("en-US",{month:"short",day:"numeric",hour:"numeric",minute:"2-digit",hour12:!0})}function pe(c){return new Date(c).toLocaleString("en-US",{hour:"numeric",minute:"2-digit",hour12:!0})}function _e(c){return new Date(c).toLocaleString("en-US",{month:"short",day:"numeric",year:"numeric"})}function ue(c){return c?Math.ceil(c.length/oe):0}function le(c,e){return x.isAbsolute(c)?x.relative(e,c):c}function D(c,e,s,t){return e?t?[`${s}${c}:${o.reset} ${e}`,""]:[`**${c}**: ${e}`,""]:[]}async function Y(c,e=!1){let s=c?.cwd??process.cwd(),t=s?x.basename(s):"unknown-project",r=new C,i=r.db.prepare(`
`;try{let S=this.db.prepare(n).all(_,E,...p),R=this.db.prepare(O).all(_,E,...p),m=this.db.prepare(L).all(_,E,...p);return{observations:S,sessions:R.map(a=>({id:a.id,sdk_session_id:a.sdk_session_id,project:a.project,request:a.request,completed:a.completed,next_steps:a.next_steps,created_at:a.created_at,created_at_epoch:a.created_at_epoch})),prompts:m.map(a=>({id:a.id,claude_session_id:a.claude_session_id,project:a.project,prompt:a.prompt_text,created_at:a.created_at,created_at_epoch:a.created_at_epoch}))}}catch(S){return console.error("[SessionStore] Error querying timeline records:",S.message),{observations:[],sessions:[],prompts:[]}}}close(){this.db.close()}};function oe(){try{let c=x.join(re(),".claude","settings.json");if(ne(c)){let e=JSON.parse(ie(c,"utf-8"));if(e.env?.CLAUDE_MEM_CONTEXT_OBSERVATIONS){let s=parseInt(e.env.CLAUDE_MEM_CONTEXT_OBSERVATIONS,10);if(!isNaN(s)&&s>0)return s}}}catch{}return parseInt(process.env.CLAUDE_MEM_CONTEXT_OBSERVATIONS||"50",10)}var ae=oe(),V=10,de=4,ce=1,o={reset:"\x1B[0m",bright:"\x1B[1m",dim:"\x1B[2m",cyan:"\x1B[36m",green:"\x1B[32m",yellow:"\x1B[33m",blue:"\x1B[34m",magenta:"\x1B[35m",gray:"\x1B[90m",red:"\x1B[31m"};function pe(c){if(!c)return[];try{let e=JSON.parse(c);return Array.isArray(e)?e:[]}catch{return[]}}function _e(c){return new Date(c).toLocaleString("en-US",{month:"short",day:"numeric",hour:"numeric",minute:"2-digit",hour12:!0})}function ue(c){return new Date(c).toLocaleString("en-US",{hour:"numeric",minute:"2-digit",hour12:!0})}function me(c){return new Date(c).toLocaleString("en-US",{month:"short",day:"numeric",year:"numeric"})}function le(c){return c?Math.ceil(c.length/de):0}function Ee(c,e){return x.isAbsolute(c)?x.relative(e,c):c}function D(c,e,s,t){return e?t?[`${s}${c}:${o.reset} ${e}`,""]:[`**${c}**: ${e}`,""]:[]}async function q(c,e=!1){let s=c?.cwd??process.cwd(),t=s?x.basename(s):"unknown-project",r=new C,i=r.db.prepare(`
SELECT
id, sdk_session_id, type, title, subtitle, narrative,
facts, concepts, files_read, files_modified,
@@ -406,18 +406,18 @@ ${e.stack}`:e.message;if(Array.isArray(e))return`[${e.length} items]`;let s=Obje
WHERE project = ?
ORDER BY created_at_epoch DESC
LIMIT ?
`).all(t,ie),d=r.db.prepare(`
`).all(t,ae),d=r.db.prepare(`
SELECT id, sdk_session_id, request, investigated, learned, completed, next_steps, created_at, created_at_epoch
FROM session_summaries
WHERE project = ?
ORDER BY created_at_epoch DESC
LIMIT ?
`).all(t,W+ae);if(i.length===0&&d.length===0)return r.close(),e?`
`).all(t,V+ce);if(i.length===0&&d.length===0)return r.close(),e?`
${o.bright}${o.cyan}\u{1F4DD} [${t}] recent context${o.reset}
${o.gray}${"\u2500".repeat(60)}${o.reset}
${o.dim}No previous sessions found for this project yet.${o.reset}
`:`# [${t}] recent context
No previous sessions found for this project yet.`;let p=i,_=d.slice(0,W),E=p,n=[];if(e?(n.push(""),n.push(`${o.bright}${o.cyan}\u{1F4DD} [${t}] recent context${o.reset}`),n.push(`${o.gray}${"\u2500".repeat(60)}${o.reset}`),n.push("")):(n.push(`# [${t}] recent context`),n.push("")),E.length>0){e?(n.push(`${o.dim}Legend: \u{1F3AF} session-request | \u{1F534} bugfix | \u{1F7E3} feature | \u{1F504} refactor | \u2705 change | \u{1F535} discovery | \u{1F9E0} decision${o.reset}`),n.push("")):(n.push("**Legend:** \u{1F3AF} session-request | \u{1F534} bugfix | \u{1F7E3} feature | \u{1F504} refactor | \u2705 change | \u{1F535} discovery | \u{1F9E0} decision"),n.push("")),e?(n.push(`${o.dim}\u{1F4A1} Progressive Disclosure: This index shows WHAT exists (titles) and retrieval COST (token counts).${o.reset}`),n.push(`${o.dim} \u2192 Use MCP search tools to fetch full observation details on-demand (Layer 2)${o.reset}`),n.push(`${o.dim} \u2192 Prefer searching observations over re-reading code for past decisions and learnings${o.reset}`),n.push(`${o.dim} \u2192 Critical types (\u{1F534} bugfix, \u{1F9E0} decision) often worth fetching immediately${o.reset}`),n.push("")):(n.push("\u{1F4A1} **Progressive Disclosure:** This index shows WHAT exists (titles) and retrieval COST (token counts)."),n.push("- Use MCP search tools to fetch full observation details on-demand (Layer 2)"),n.push("- Prefer searching observations over re-reading code for past decisions and learnings"),n.push("- Critical types (\u{1F534} bugfix, \u{1F9E0} decision) often worth fetching immediately"),n.push(""));let O=d[0]?.id,L=_.map((u,T)=>{let m=T===0?null:d[T+1];return{...u,displayEpoch:m?m.created_at_epoch:u.created_at_epoch,displayTime:m?m.created_at:u.created_at,shouldShowLink:u.id!==O}}),S=[...E.map(u=>({type:"observation",data:u})),...L.map(u=>({type:"summary",data:u}))];S.sort((u,T)=>{let 
m=u.type==="observation"?u.data.created_at_epoch:u.data.displayEpoch,I=T.type==="observation"?T.data.created_at_epoch:T.data.displayEpoch;return m-I});let R=new Map;for(let u of S){let T=u.type==="observation"?u.data.created_at:u.data.displayTime,m=_e(T);R.has(m)||R.set(m,[]),R.get(m).push(u)}let l=Array.from(R.entries()).sort((u,T)=>{let m=new Date(u[0]).getTime(),I=new Date(T[0]).getTime();return m-I});for(let[u,T]of l){e?(n.push(`${o.bright}${o.cyan}${u}${o.reset}`),n.push("")):(n.push(`### ${u}`),n.push(""));let m=null,I="",y=!1;for(let k of T)if(k.type==="summary"){y&&(n.push(""),y=!1,m=null,I="");let g=k.data,A=`${g.request||"Session started"} (${ce(g.displayTime)})`,f=g.shouldShowLink?`claude-mem://session-summary/${g.id}`:"";if(e){let h=f?`${o.dim}[${f}]${o.reset}`:"";n.push(`\u{1F3AF} ${o.yellow}#S${g.id}${o.reset} ${A} ${h}`)}else{let h=f?` [\u2192](${f})`:"";n.push(`**\u{1F3AF} #S${g.id}** ${A}${h}`)}n.push("")}else{let g=k.data,A=de(g.files_modified),f=A.length>0?le(A[0],s):"General";f!==m&&(y&&n.push(""),e?n.push(`${o.dim}${f}${o.reset}`):n.push(`**${f}**`),e||(n.push("| ID | Time | T | Title | Tokens |"),n.push("|----|------|---|-------|--------|")),m=f,y=!0,I="");let h="\u2022";switch(g.type){case"bugfix":h="\u{1F534}";break;case"feature":h="\u{1F7E3}";break;case"refactor":h="\u{1F504}";break;case"change":h="\u2705";break;case"discovery":h="\u{1F535}";break;case"decision":h="\u{1F9E0}";break;default:h="\u2022"}let v=pe(g.created_at),X=g.title||"Untitled",U=ue(g.narrative),B=v!==I,V=B?v:"";if(I=v,e){let q=B?`${o.dim}${v}${o.reset}`:" ".repeat(v.length),K=U>0?`${o.dim}(~${U}t)${o.reset}`:"";n.push(` ${o.dim}#${g.id}${o.reset} ${q} ${h} ${X} ${K}`)}else n.push(`| #${g.id} | ${V||"\u2033"} | ${h} | ${X} | ~${U} |`)}y&&n.push("")}let 
a=d[0];a&&(a.investigated||a.learned||a.completed||a.next_steps)&&(n.push(...D("Investigated",a.investigated,o.blue,e)),n.push(...D("Learned",a.learned,o.yellow,e)),n.push(...D("Completed",a.completed,o.green,e)),n.push(...D("Next Steps",a.next_steps,o.magenta,e))),e?n.push(`${o.dim}Use claude-mem MCP search to access records with the given ID${o.reset}`):n.push("*Use claude-mem MCP search to access records with the given ID*")}return r.close(),n.join(`
`).trimEnd()}var me=process.argv.includes("--colors");if(F.isTTY||me)Y(void 0,!0).then(c=>{console.log(c),process.exit(0)});else{let c="";F.on("data",e=>c+=e),F.on("end",async()=>{let e=c.trim()?JSON.parse(c):void 0,t={hookSpecificOutput:{hookEventName:"SessionStart",additionalContext:await Y(e,!1)}};console.log(JSON.stringify(t)),process.exit(0)})}
No previous sessions found for this project yet.`;let p=i,_=d.slice(0,V),E=p,n=[];if(e?(n.push(""),n.push(`${o.bright}${o.cyan}\u{1F4DD} [${t}] recent context${o.reset}`),n.push(`${o.gray}${"\u2500".repeat(60)}${o.reset}`),n.push("")):(n.push(`# [${t}] recent context`),n.push("")),E.length>0){e?(n.push(`${o.dim}Legend: \u{1F3AF} session-request | \u{1F534} bugfix | \u{1F7E3} feature | \u{1F504} refactor | \u2705 change | \u{1F535} discovery | \u{1F9E0} decision${o.reset}`),n.push("")):(n.push("**Legend:** \u{1F3AF} session-request | \u{1F534} bugfix | \u{1F7E3} feature | \u{1F504} refactor | \u2705 change | \u{1F535} discovery | \u{1F9E0} decision"),n.push("")),e?(n.push(`${o.dim}\u{1F4A1} Progressive Disclosure: This index shows WHAT exists (titles) and retrieval COST (token counts).${o.reset}`),n.push(`${o.dim} \u2192 Use MCP search tools to fetch full observation details on-demand (Layer 2)${o.reset}`),n.push(`${o.dim} \u2192 Prefer searching observations over re-reading code for past decisions and learnings${o.reset}`),n.push(`${o.dim} \u2192 Critical types (\u{1F534} bugfix, \u{1F9E0} decision) often worth fetching immediately${o.reset}`),n.push("")):(n.push("\u{1F4A1} **Progressive Disclosure:** This index shows WHAT exists (titles) and retrieval COST (token counts)."),n.push("- Use MCP search tools to fetch full observation details on-demand (Layer 2)"),n.push("- Prefer searching observations over re-reading code for past decisions and learnings"),n.push("- Critical types (\u{1F534} bugfix, \u{1F9E0} decision) often worth fetching immediately"),n.push(""));let O=d[0]?.id,L=_.map((u,h)=>{let l=h===0?null:d[h+1];return{...u,displayEpoch:l?l.created_at_epoch:u.created_at_epoch,displayTime:l?l.created_at:u.created_at,shouldShowLink:u.id!==O}}),S=[...E.map(u=>({type:"observation",data:u})),...L.map(u=>({type:"summary",data:u}))];S.sort((u,h)=>{let 
l=u.type==="observation"?u.data.created_at_epoch:u.data.displayEpoch,I=h.type==="observation"?h.data.created_at_epoch:h.data.displayEpoch;return l-I});let R=new Map;for(let u of S){let h=u.type==="observation"?u.data.created_at:u.data.displayTime,l=me(h);R.has(l)||R.set(l,[]),R.get(l).push(u)}let m=Array.from(R.entries()).sort((u,h)=>{let l=new Date(u[0]).getTime(),I=new Date(h[0]).getTime();return l-I});for(let[u,h]of m){e?(n.push(`${o.bright}${o.cyan}${u}${o.reset}`),n.push("")):(n.push(`### ${u}`),n.push(""));let l=null,I="",y=!1;for(let U of h)if(U.type==="summary"){y&&(n.push(""),y=!1,l=null,I="");let T=U.data,A=`${T.request||"Session started"} (${_e(T.displayTime)})`,f=T.shouldShowLink?`claude-mem://session-summary/${T.id}`:"";if(e){let g=f?`${o.dim}[${f}]${o.reset}`:"";n.push(`\u{1F3AF} ${o.yellow}#S${T.id}${o.reset} ${A} ${g}`)}else{let g=f?` [\u2192](${f})`:"";n.push(`**\u{1F3AF} #S${T.id}** ${A}${g}`)}n.push("")}else{let T=U.data,A=pe(T.files_modified),f=A.length>0?Ee(A[0],s):"General";f!==l&&(y&&n.push(""),e?n.push(`${o.dim}${f}${o.reset}`):n.push(`**${f}**`),e||(n.push("| ID | Time | T | Title | Tokens |"),n.push("|----|------|---|-------|--------|")),l=f,y=!0,I="");let g="\u2022";switch(T.type){case"bugfix":g="\u{1F534}";break;case"feature":g="\u{1F7E3}";break;case"refactor":g="\u{1F504}";break;case"change":g="\u2705";break;case"discovery":g="\u{1F535}";break;case"decision":g="\u{1F9E0}";break;default:g="\u2022"}let v=ue(T.created_at),j=T.title||"Untitled",$=le(T.narrative),P=v!==I,K=P?v:"";if(I=v,e){let J=P?`${o.dim}${v}${o.reset}`:" ".repeat(v.length),Q=$>0?`${o.dim}(~${$}t)${o.reset}`:"";n.push(` ${o.dim}#${T.id}${o.reset} ${J} ${g} ${j} ${Q}`)}else n.push(`| #${T.id} | ${K||"\u2033"} | ${g} | ${j} | ~${$} |`)}y&&n.push("")}let 
a=d[0],k=p[0];a&&(a.investigated||a.learned||a.completed||a.next_steps)&&(!k||a.created_at_epoch>k.created_at_epoch)&&(n.push(...D("Investigated",a.investigated,o.blue,e)),n.push(...D("Learned",a.learned,o.yellow,e)),n.push(...D("Completed",a.completed,o.green,e)),n.push(...D("Next Steps",a.next_steps,o.magenta,e))),e?n.push(`${o.dim}Use claude-mem MCP search to access records with the given ID${o.reset}`):n.push("*Use claude-mem MCP search to access records with the given ID*")}return r.close(),n.join(`
`).trimEnd()}var Te=process.argv.includes("--colors");if(X.isTTY||Te)q(void 0,!0).then(c=>{console.log(c),process.exit(0)});else{let c="";X.on("data",e=>c+=e),X.on("end",async()=>{let e=c.trim()?JSON.parse(c):void 0,t={hookSpecificOutput:{hookEventName:"SessionStart",additionalContext:await q(e,!1)}};console.log(JSON.stringify(t)),process.exit(0)})}
File diff suppressed because one or more lines are too long
+96
View File
@@ -0,0 +1,96 @@
---
name: search
description: Search claude-mem persistent memory for past sessions, observations, bugs fixed, features implemented, decisions made, code changes, and previous work. Use when answering questions about history, finding past decisions, or researching previous implementations.
---
# Claude-Mem Search Skill
Access claude-mem's persistent memory through a comprehensive HTTP API. Search for past work, understand context, and learn from previous decisions.
## When to Use This Skill
**Invoke this skill when users ask about:**
- Past work: "What did we do last session?"
- Bug fixes: "Did we fix this before?" or "What bugs did we fix?"
- Features: "How did we implement authentication?"
- Decisions: "Why did we choose this approach?"
- Code changes: "What files were modified in that refactor?"
- File history: "What changes to auth/login.ts?"
- Timeline context: "What was happening around that time?"
- Recent activity: "What have we been working on?"
**Do NOT invoke** for current session work or future planning (use regular tools for that).
## Quick Decision Guide
Once the skill is loaded, choose the appropriate operation:
**What are you looking for?**
- "What did we do last session?" → [operations/recent-context.md](operations/recent-context.md)
- "Did we fix this bug before?" → [operations/by-type.md](operations/by-type.md) (type=bugfix)
- "How did we implement X?" → [operations/observations.md](operations/observations.md)
- "What changes to file.ts?" → [operations/by-file.md](operations/by-file.md)
- "What was happening then?" → [operations/timeline.md](operations/timeline.md)
- "Why did we choose X?" → [operations/observations.md](operations/observations.md) (search for decisions)
## Available Operations
Choose the appropriate operation file for detailed instructions:
### Full-Text Search
1. **[Search Observations](operations/observations.md)** - Find observations by keyword (bugs, features, decisions, etc.)
2. **[Search Sessions](operations/sessions.md)** - Search session summaries to understand what was accomplished
3. **[Search Prompts](operations/prompts.md)** - Find what users have asked about in the past
### Filtered Search
4. **[Search by Type](operations/by-type.md)** - Find bugfix, feature, refactor, decision, or discovery observations
5. **[Search by Concept](operations/by-concept.md)** - Find observations tagged with specific concepts
6. **[Search by File](operations/by-file.md)** - Find all work related to a specific file path
### Context Retrieval
7. **[Get Recent Context](operations/recent-context.md)** - Get recent session summaries and observations for a project
8. **[Get Timeline](operations/timeline.md)** - Get chronological timeline around a specific point in time
9. **[Timeline by Query](operations/timeline-by-query.md)** - Search then get timeline around the best match
### Utilities
10. **[API Help](operations/help.md)** - Get API documentation
## Common Workflows
For step-by-step guides on typical user requests, see [operations/common-workflows.md](operations/common-workflows.md):
- Understanding past work
- Finding specific bug fixes
- Understanding file history
- Timeline investigation
## Response Formatting
For guidelines on how to present search results to users, see [operations/formatting.md](operations/formatting.md):
- Format=index responses (compact lists)
- Format=full responses (complete details)
- Timeline responses (chronologically grouped)
## Technical Notes
- **Port:** Default 37777 (configurable via `CLAUDE_MEM_WORKER_PORT`)
- **Response format:** Always JSON
- **Search type:** FTS5 full-text search + structured filters
- **All operations use HTTP GET** with query parameters
## Performance Tips
1. Use **format=index** first for overviews, then **format=full** for details
2. Start with **limit=5-10**, expand if needed
3. Use **project filtering** when working on one codebase
4. Use **timeline depth** of 5-10 for focused context
5. Be specific in search queries: "authentication JWT" > "auth"
## Error Handling
If the HTTP request fails:
1. Inform the user that the search service isn't available
2. Suggest checking whether the worker is running: `pm2 list`
3. Offer to help troubleshoot
For detailed error handling, see the specific operation files.
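A quick way to distinguish "service down" from "query problem" is to probe the worker before searching. This is a sketch, assuming the `/api/help` path from the operations list above responds when the worker is up:

```shell
# Hedged health-check sketch: probe the search service before querying it.
# The /api/help path is an assumption based on the API Help operation above.
PORT="${CLAUDE_MEM_WORKER_PORT:-37777}"
if curl -s --max-time 2 "http://localhost:${PORT}/api/help" >/dev/null; then
  echo "search service reachable on port ${PORT}"
else
  echo "search service not reachable on port ${PORT}; try: pm2 list"
fi
```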
@@ -0,0 +1,66 @@
# Search by Concept
Find observations tagged with specific concepts.
## When to Use
- Looking for observations about a specific concept
- Understanding patterns across the codebase
- Finding related learnings
## Command
```bash
curl -s "http://localhost:37777/api/search/by-concept?concept=discovery&limit=5&format=index"
```
## Parameters
- **concept** (required): Concept tag to search for
- **format**: "index" or "full" (default: "full")
- **limit**: Number of results (default: 10, max: 100)
- **project**: Filter by project name (optional)
## Common Concepts
- **discovery**: Learnings and findings
- **decision**: Choices and rationale
- **architecture**: System design
- **performance**: Speed and optimization
- **security**: Security considerations
- **testing**: Test-related work
- **how-it-works**: Implementation details
- **why-it-exists**: Rationale and context
- **gotcha**: Tricky issues or edge cases
- **pattern**: Reusable patterns
## Use Case
"What have we learned about the database?" → Search concept=discovery + keyword search for "database"
You can combine concept search with keyword search:
```bash
# First get observations with concept=discovery
curl -s "http://localhost:37777/api/search/by-concept?concept=discovery&limit=20&format=index"
# Then filter the returned titles client-side for "database" mentions
```
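One way to do that client-side filtering is with `jq` (assumed available). The here-doc below stands in for the curl output above, and the field names are illustrative:

```shell
# Sketch: filter index results for "database" mentions with jq.
# The sample JSON stands in for real curl output; field names are illustrative.
cat <<'JSON' | jq '[.[] | select(.title | test("database"; "i"))]'
[
  {"id": 1230, "title": "Database connection pooling best practices"},
  {"id": 1231, "title": "JWT library comparison"}
]
JSON
```

Only #1230 survives the case-insensitive filter; in practice, pipe the `by-concept` curl output into the same `jq` expression.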
## How to Present Results
```markdown
Found 5 discoveries:
1. 🔵 **#1230** Database connection pooling best practices
> Learned that pool size should match CPU cores * 2
> Nov 8, 2024 • api-server
2. 🔵 **#1231** JWT library comparison
> Evaluated 3 libraries: jsonwebtoken, jose, passport-jwt
> Nov 8, 2024 • api-server
```
## Tips
1. Concepts provide semantic grouping beyond full-text search
2. Useful for finding patterns across different parts of work
3. Combine with full-text search for precise results
@@ -0,0 +1,83 @@
# Search by File Path
Find all work related to a specific file.
## When to Use
- User asks: "What changes did we make to auth/login.ts?"
- Understanding the history of a specific file
- Finding all observations that touched a file
## Command
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=auth/login.ts&limit=10&format=index"
```
## Parameters
- **filePath** (required): Full or partial file path
- Examples: "auth/", "login.ts", "src/components/Button.tsx"
- **format**: "index" or "full" (default: "full")
- **limit**: Number of results per type (default: 10, max: 100)
- **project**: Filter by project name (optional)
## Response Structure
Returns both observations and sessions that touched the file:
```json
{
"filePath": "auth/login.ts",
"count": 5,
"format": "index",
"results": {
"observations": [...],
"sessions": [...]
}
}
```
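When scripting against this endpoint, the response can be summarized with `jq` (assumed available). The sample below mirrors the structure shown above; in practice, pipe the curl output straight into the same filter:

```shell
# Sketch: count results per type in a by-file response.
# The sample JSON stands in for real curl output, shaped like the
# response structure documented above.
cat <<'JSON' | jq '{file: .filePath, total: .count,
                    observations: (.results.observations | length),
                    sessions: (.results.sessions | length)}'
{
  "filePath": "auth/login.ts",
  "count": 5,
  "format": "index",
  "results": {
    "observations": [{"id": 1234}, {"id": 1235}],
    "sessions": [{"id": 123}]
  }
}
JSON
```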
## Use Cases
**Full file path:**
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=src/auth/login.ts&limit=10"
```
**Partial path (matches all files in directory):**
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=auth/&limit=10"
```
**Filename only (matches across directories):**
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=login.ts&limit=10"
```
## How to Present Results
```markdown
Found 5 changes to auth/login.ts:
**Observations:**
1. 🟣 **#1234** Implemented JWT authentication
> Added token-based auth with refresh tokens
> Nov 9, 2024
2. 🔴 **#1235** Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024
**Sessions:**
1. **Session #123** (Nov 8, 2024)
> Add user authentication
> Completed: Implemented JWT auth, added middleware
```
## Tips
1. Partial paths are powerful for finding all work in a directory
2. Use this before modifying a file to understand its history
3. Helps identify who/when/why changes were made
4. Combine observations + sessions for complete file history
# Search by Type
Find observations by their classification (bugfix, feature, refactor, decision, discovery, change).
## When to Use
- User asks: "What bugs did we fix?"
- User asks: "What features did we add?"
- User asks: "What decisions did we make?"
- Looking for specific types of work
## Command
```bash
curl -s "http://localhost:37777/api/search/by-type?type=bugfix&limit=10&format=index"
```
## Parameters
- **type** (required): Observation type
- **format**: "index" or "full" (default: "full")
- **limit**: Number of results (default: 10, max: 100)
- **project**: Filter by project name (optional)
## Valid Types
- **bugfix**: Bug fixes and error resolutions 🔴
- **feature**: New features and capabilities 🟣
- **refactor**: Code restructuring and improvements 🔄
- **decision**: Architectural or design decisions 🧠
- **discovery**: Learnings about the codebase 🔵
- **change**: General changes and updates ✅
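When presenting results, these type-to-emoji pairs can be applied mechanically. A minimal Python sketch of the mapping (the plain-bullet fallback for unknown types is an assumption, not part of the API):

```python
# Emoji labels for the observation types listed above.
TYPE_EMOJI = {
    "bugfix": "🔴",
    "feature": "🟣",
    "refactor": "🔄",
    "decision": "🧠",
    "discovery": "🔵",
    "change": "✅",
}

def type_label(obs_type: str) -> str:
    """Return an emoji-prefixed label; unknown types fall back to a plain bullet."""
    return f"{TYPE_EMOJI.get(obs_type, '•')} {obs_type}"
```

For example, `type_label("bugfix")` yields `🔴 bugfix`.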
## Use Cases
**"Show me recent bugs we fixed"**
```bash
curl -s "http://localhost:37777/api/search/by-type?type=bugfix&limit=10&format=index"
```
**"What features did we add this week?"**
```bash
curl -s "http://localhost:37777/api/search/by-type?type=feature&limit=20&format=index"
```
**"What architectural decisions have we made?"**
```bash
curl -s "http://localhost:37777/api/search/by-type?type=decision&limit=10&format=full"
```
## How to Present Results
```markdown
Found 5 recent bugfixes:
1. 🔴 **#1234** Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024 • api-server
2. 🔴 **#1235** Resolved database connection pooling issue
> Fixed connection leak in long-running queries
> Nov 8, 2024 • api-server
```
Use type-specific emojis for visual clarity.
## Tips
1. type=bugfix is great for understanding what issues were resolved
2. type=decision helps understand architectural choices
3. type=discovery reveals learnings about the codebase
4. Combine with project filtering for focused results
# Common Workflows
Step-by-step guides for typical user requests.
## Workflow 1: Understanding Past Work
**User asks:** "What did we do last session?"
**Steps:**
1. Use [recent-context.md](recent-context.md) to get last 3 sessions
2. Parse and format the summary, observations, and outcomes
3. Present in readable markdown
**Example:**
```bash
RESULT=$(curl -s "http://localhost:37777/api/context/recent?limit=3")
# Parse JSON and format for user
```
**Present as:**
- Show session request
- List key accomplishments
- Highlight important observations
- Note any next steps
---
## Workflow 2: Finding a Specific Bug Fix
**User asks:** "Did we fix the login timeout issue?"
**Steps:**
1. Search observations with [by-type.md](by-type.md): `type=bugfix`
2. Or use [observations.md](observations.md): `query=login+timeout`
3. If results found, show title + subtitle + ID
4. Offer to get more details or timeline context
**Example:**
```bash
# Option 1: Search by type
curl -s "http://localhost:37777/api/search/by-type?type=bugfix&limit=20&format=index"
# Option 2: Full-text search
curl -s "http://localhost:37777/api/search/observations?query=login+timeout&format=index&limit=10"
```
**Present as:**
- List matching bugfixes
- Include observation ID for follow-up
- Offer to show full details or timeline
---
## Workflow 3: Understanding File History
**User asks:** "What changes have we made to auth/login.ts?"
**Steps:**
1. Use [by-file.md](by-file.md) to search by file path
2. Get both observations and sessions
3. Sort chronologically and present
**Example:**
```bash
curl -s "http://localhost:37777/api/search/by-file?filePath=auth/login.ts&limit=10&format=index"
```
**Present as:**
- Chronological list of changes
- Separate observations and sessions
- Include what changed and when
- Highlight recent modifications
---
## Workflow 4: Timeline Investigation
**User asks:** "What were we working on around the time of that deployment?"
**Steps:**
1. Use [timeline-by-query.md](timeline-by-query.md) for one-shot query
2. Or two-step: search for "deployment" to get ID, then use [timeline.md](timeline.md)
**Option 1 (Recommended): One request**
```bash
curl -s "http://localhost:37777/api/timeline/by-query?query=deployment&depth_before=10&depth_after=10"
```
**Option 2: Two requests**
```bash
# Step 1: Find the deployment
curl -s "http://localhost:37777/api/search/observations?query=deployment&format=index&limit=5"
# Get observation ID (e.g., #1234)
# Step 2: Get timeline around it
curl -s "http://localhost:37777/api/context/timeline?anchor=1234&depth_before=10&depth_after=10"
```
**Present as:**
- Show the anchor point (deployment observation)
- Chronological timeline grouped by day
- Highlight observations, sessions, and prompts
- Use emojis for visual clarity
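In script form, the glue between the two requests is just pulling the top hit's `id` out of step 1. A hedged Python sketch (response shape taken from the format=index examples in these docs):

```python
# Build the step-2 timeline URL from a parsed step-1 search response.
def timeline_url(search_response: dict, before: int = 10, after: int = 10) -> str:
    """Anchor the timeline request on the top-ranked search hit."""
    results = search_response.get("results", [])
    if not results:
        raise ValueError("no matches to anchor the timeline on")
    anchor = results[0]["id"]  # results are ranked, so [0] is the best match
    return ("http://localhost:37777/api/context/timeline"
            f"?anchor={anchor}&depth_before={before}&depth_after={after}")

# Illustrative response, mirroring the format=index shape documented here.
example = {"query": "deployment", "count": 1,
           "results": [{"id": 1234, "title": "Deployed v2"}]}
```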
---
## Workflow 5: Understanding Decisions
**User asks:** "Why did we choose PostgreSQL over MySQL?"
**Steps:**
1. Search for decisions using [by-type.md](by-type.md): `type=decision`
2. Filter results for "PostgreSQL" or "MySQL"
3. Show the decision observation with full context
**Example:**
```bash
curl -s "http://localhost:37777/api/search/by-type?type=decision&limit=20&format=index"
# Then search results for database-related decisions
```
Or use full-text search:
```bash
curl -s "http://localhost:37777/api/search/observations?query=PostgreSQL+MySQL+decision&format=full&limit=5"
```
**Present as:**
- Show the decision with full narrative
- Include facts and rationale
- Link to related observations if available
---
## Workflow 6: Exploring a Topic
**User asks:** "What have we learned about authentication?"
**Steps:**
1. Use [observations.md](observations.md) for full-text search
2. Filter by type=discovery for learnings
3. Or use [by-concept.md](by-concept.md) with a relevant concept tag (e.g., `concept=authentication`)
**Example:**
```bash
# Full-text search
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=20"
# Or discoveries only
curl -s "http://localhost:37777/api/search/by-type?type=discovery&limit=20&format=index"
# Then filter for "authentication" in results
```
**Present as:**
- Group by type (features, bugs, decisions, discoveries)
- Show progression of work over time
- Highlight key learnings
---
## Tips for All Workflows
1. **Start with format=index** for overviews, then format=full for details
2. **Use limit=5-10** initially, expand if needed
3. **Combine operations** for comprehensive answers
4. **Offer follow-ups**: "Want more details?" "See timeline context?"
5. **Use project filtering** when working on one codebase
# Response Formatting Guidelines
How to present search results to users.
## Format=Index Responses
When using `format=index`, present results as a **compact list**.
### Observations
```markdown
Found 5 results for "authentication":
1. **#1234** [feature] Implemented JWT authentication
> Added token-based auth with refresh tokens
> Nov 9, 2024 • claude-mem
2. **#1235** [bugfix] Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024 • claude-mem
3. **#1236** [refactor] Simplified authentication middleware
> Reduced code complexity by 40%
> Nov 10, 2024 • claude-mem
```
**Include:**
- ID (for follow-up queries)
- Type with emoji (see below)
- Title
- Subtitle (one-line summary)
- Date and project
**Type Emojis:**
- 🔴 **bugfix**: Bug fixes
- 🟣 **feature**: New features
- 🔄 **refactor**: Code restructuring
- 🧠 **decision**: Architectural decisions
- 🔵 **discovery**: Learnings
- ✅ **change**: General changes
### Sessions
```markdown
Found 3 sessions about "deployment":
1. **Session #123** (Nov 8, 2024)
> Deploy Docker container to production
> Completed: Set up CI/CD pipeline, configured secrets
2. **Session #124** (Nov 9, 2024)
> Fix deployment rollback issues
> Completed: Added health checks, fixed rollback script
```
### Prompts
```markdown
Found 3 past prompts about "docker":
1. **Prompt #456** (Nov 8, 2024)
> "Help me set up Docker for this project"
2. **Prompt #457** (Nov 9, 2024)
> "Fix Docker compose networking issues"
```
---
## Format=Full Responses
When using `format=full`, present **complete details**.
### Observations (Full)
```markdown
### Observation #1234: Implemented JWT authentication
**Type:** Feature 🟣
**Project:** claude-mem
**Date:** Nov 9, 2024 3:30 PM
**Summary:** Added token-based auth with refresh tokens
**Details:**
Implemented a complete JWT authentication system for the API. The system uses
short-lived access tokens (15 minutes) combined with longer-lived refresh tokens
(7 days) to balance security and user experience. The implementation includes
middleware for route protection and automatic token refresh handling.
**Facts:**
- Used jsonwebtoken library (v9.0.2)
- Access tokens expire after 15 minutes
- Refresh tokens expire after 7 days
- Tokens include user ID and role claims
- Added rate limiting to auth endpoints
**Files Modified:**
- src/auth/jwt.ts (created, 145 lines)
- src/middleware/auth.ts (created, 78 lines)
- src/routes/auth.ts (created, 92 lines)
- tests/auth.test.ts (created, 234 lines)
**Concepts:** authentication, security, tokens, middleware
```
### Sessions (Full)
```markdown
### Session #123: Add user authentication (Nov 8, 2024)
**Request:** Implement JWT-based authentication for the API
**Completed:**
- Implemented JWT authentication system with access and refresh tokens
- Created authentication middleware for route protection
- Added login, logout, and token refresh endpoints
- Wrote comprehensive tests for auth flows
- Added rate limiting to prevent brute force attacks
**Learned:**
- JWT refresh token rotation is critical for security
- Need to handle token expiration gracefully on client side
- Rate limiting should be IP-based for auth endpoints
- Token blacklisting adds complexity, short expiration is simpler
**Next Steps:**
- Add password reset functionality
- Implement 2FA for admin accounts
- Add OAuth integration for social login
**Files Read:**
- docs/authentication-spec.md
- src/middleware/existing-auth.ts
- tests/integration/auth.test.ts
**Files Edited:**
- src/auth/jwt.ts (created)
- src/middleware/auth.ts (created)
- src/routes/auth.ts (created)
- tests/auth.test.ts (created)
```
---
## Timeline Responses
Present timeline results **chronologically grouped by day**.
```markdown
## Timeline around Observation #1234
**Window:** 10 records before → 10 records after
**Total:** 15 items (8 obs, 5 sessions, 2 prompts)
### Nov 8, 2024
**4:30 PM** - 🎯 **Session Request:** "Add user authentication"
**4:45 PM** - 🔵 **Discovery #1230:** "JWT library options compared"
> Evaluated 3 libraries: jsonwebtoken, jose, passport-jwt
> Chose jsonwebtoken for simplicity and community support
**5:00 PM** - 🧠 **Decision #1231:** "Chose jsonwebtoken for simplicity"
> jsonwebtoken has better TypeScript support and simpler API
**5:15 PM** - 🟣 **Feature #1232:** "Created JWT utility functions"
> Sign, verify, and decode token helpers
### Nov 9, 2024
**3:30 PM** - 🟣 **Feature #1234:** "Implemented JWT authentication" ← ANCHOR
> Complete auth system with access and refresh tokens
**4:00 PM** - 🔴 **Bugfix #1235:** "Fixed token expiration edge case"
> Handled race condition in refresh flow
**4:30 PM** - ✅ **Change #1236:** "Updated API documentation"
> Added auth endpoint docs to README
```
**Legend:**
- 🎯 session-request
- 🔴 bugfix
- 🟣 feature
- 🔄 refactor
- ✅ change
- 🔵 discovery
- 🧠 decision
**Formatting Rules:**
1. Group by day with date headers
2. Show time for each item
3. Use emoji + type + ID/title
4. Indent subtitle/summary with `>`
5. Mark anchor point with `← ANCHOR`
6. Include legend at bottom
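The grouping in rule 1 is simple to implement. A Python sketch, assuming each timeline item carries the millisecond `created_at_epoch` field shown elsewhere in these docs:

```python
from datetime import datetime, timezone

def group_by_day(items: list[dict]) -> dict[str, list[dict]]:
    """Sort timeline items chronologically and bucket them under date headers."""
    days: dict[str, list[dict]] = {}
    for item in sorted(items, key=lambda i: i["created_at_epoch"]):
        # Epoch values are milliseconds; convert to a UTC date header.
        day = datetime.fromtimestamp(
            item["created_at_epoch"] / 1000, tz=timezone.utc
        ).strftime("%b %d, %Y")
        days.setdefault(day, []).append(item)
    return days
```

Insertion order of the dict preserves chronology, so iterating the result yields the day headers in timeline order.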
---
## Error Responses
### No Results
```markdown
No results found for "foobar". Try different search terms or:
- Check spelling
- Use broader terms
- Try synonyms
- Search by type or concept instead
```
### Service Unavailable
```markdown
Search service is not available. The claude-mem worker may not be running.
To check worker status:
\`\`\`bash
pm2 list
\`\`\`
To restart the worker:
\`\`\`bash
pm2 restart claude-mem-worker
\`\`\`
Would you like help troubleshooting?
```
---
## General Formatting Tips
1. **Use markdown formatting**: Bold, headers, code blocks, quotes
2. **Be concise**: Users want quick answers, not walls of text
3. **Highlight key information**: IDs, dates, types
4. **Group related items**: By day, by type, by file
5. **Offer follow-ups**: "Want more details?" "See timeline?"
6. **Use visual hierarchy**: Headers, lists, indentation
7. **Include context**: Project names, dates, related observations
8. **Make IDs clickable-ready**: **#1234** stands out for reference
# API Help
Get comprehensive API documentation from the search service.
## Command
```bash
curl -s "http://localhost:37777/api/search/help"
```
## Response
Returns complete API documentation in JSON format including:
- All 10 endpoint paths
- HTTP methods (all GET)
- Parameter descriptions
- Example curl commands
## Example Response
```json
{
"title": "Claude-Mem Search API",
"description": "HTTP API for searching persistent memory",
"endpoints": [
{
"path": "/api/search/observations",
"method": "GET",
"description": "Search observations using full-text search",
"parameters": {
"query": "Search query (required)",
"format": "Response format: 'index' or 'full' (default: 'full')",
"limit": "Number of results (default: 20)",
"project": "Filter by project name (optional)"
}
},
// ... 9 more endpoints
],
"examples": [
"curl \"http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5\"",
"curl \"http://localhost:37777/api/search/by-type?type=bugfix&limit=10\"",
// ... more examples
]
}
```
## When to Use
- User asks: "How do I use the search API?"
- Need to see all available endpoints
- Reference for parameter names and formats
- Getting started with search
## How to Present
```markdown
## Claude-Mem Search API Documentation
**Base URL:** http://localhost:37777
**Port:** Configurable via `CLAUDE_MEM_WORKER_PORT` (default: 37777)
### Available Endpoints
**Full-Text Search:**
1. `GET /api/search/observations` - Search observations by keyword
2. `GET /api/search/sessions` - Search session summaries
3. `GET /api/search/prompts` - Search user prompts
**Filtered Search:**
4. `GET /api/search/by-type` - Filter by observation type
5. `GET /api/search/by-concept` - Filter by concept tags
6. `GET /api/search/by-file` - Find work by file path
**Context Retrieval:**
7. `GET /api/context/recent` - Get recent sessions
8. `GET /api/context/timeline` - Timeline around a point
9. `GET /api/timeline/by-query` - Search + timeline in one call
**Documentation:**
10. `GET /api/search/help` - This help documentation
### Example Usage
\`\`\`bash
# Search for authentication-related observations
curl "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5"
# Get recent bugfixes
curl "http://localhost:37777/api/search/by-type?type=bugfix&limit=10"
# Get timeline around observation #1234
curl "http://localhost:37777/api/context/timeline?anchor=1234&depth_before=5&depth_after=5"
\`\`\`
For detailed information on each endpoint, see the operation-specific documentation files.
```
## Tips
- This endpoint is useful for quick API reference
- Most users won't need to use this directly
- The router SKILL.md provides better user-facing guidance
- Use this when users specifically ask "how do I use the API"
# Search Observations (Full-Text)
Search all observations using natural language queries.
## When to Use
- User asks: "How did we implement authentication?"
- User asks: "What bugs did we fix?"
- User asks: "What features did we add?"
- Looking for past work by keyword or topic
## Command
```bash
curl -s "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=20"
```
## Parameters
- **query** (required): Search terms (e.g., "authentication", "bug fix", "database migration")
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
## When to Use Each Format
**Use format=index for:**
- Quick overviews
- Finding IDs for deeper investigation
- Listing multiple results
**Use format=full for:**
- Complete details including narrative, facts, files, concepts
- Understanding the full context of specific observations
## Example Response (format=index)
```json
{
"query": "authentication",
"count": 5,
"format": "index",
"results": [
{
"id": 1234,
"type": "feature",
"title": "Implemented JWT authentication",
"subtitle": "Added token-based auth with refresh tokens",
"created_at_epoch": 1699564800000,
"project": "api-server",
"score": 0.95
}
]
}
```
## How to Present Results
For format=index, present as a compact list:
```markdown
Found 5 results for "authentication":
1. **#1234** [feature] Implemented JWT authentication
> Added token-based auth with refresh tokens
> Nov 9, 2024 • api-server
2. **#1235** [bugfix] Fixed token expiration edge case
> Handled race condition in refresh flow
> Nov 9, 2024 • api-server
```
**Include:** ID (for follow-up), type emoji (🔴 bugfix, 🟣 feature, 🔄 refactor, 🔵 discovery, 🧠 decision, ✅ change), title, subtitle, date, project.
For complete formatting guidelines, see [formatting.md](formatting.md).
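Rendering the index response into that list is mechanical. A Python sketch (field names from the example response above; note `created_at_epoch` is in milliseconds):

```python
from datetime import datetime, timezone

# Type-to-emoji mapping as documented in these skill files.
EMOJI = {"bugfix": "🔴", "feature": "🟣", "refactor": "🔄",
         "discovery": "🔵", "decision": "🧠", "change": "✅"}

def render_index(results: list[dict]) -> str:
    """Render format=index observation results as the compact markdown list."""
    lines = []
    for n, obs in enumerate(results, start=1):
        when = datetime.fromtimestamp(
            obs["created_at_epoch"] / 1000, tz=timezone.utc
        ).strftime("%b %d, %Y")
        lines.append(f"{n}. {EMOJI.get(obs['type'], '')} **#{obs['id']}** "
                     f"[{obs['type']}] {obs['title']}")
        lines.append(f"   > {obs['subtitle']}")
        lines.append(f"   > {when} • {obs['project']}")
    return "\n".join(lines)
```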
## Error Handling
**Missing query parameter:**
```json
{"error": "Missing required parameter: query"}
```
Fix: Add the query parameter
**No results found:**
```json
{"query": "foobar", "count": 0, "results": []}
```
Response: "No results found for 'foobar'. Try different search terms."
## Tips
1. Be specific: "authentication JWT" > "auth"
2. Start with format=index and limit=5-10
3. Use project filtering when working on one codebase
4. If no results, try broader terms or check spelling
# Search User Prompts
Find what users have asked about in the past.
## When to Use
- User asks: "Have we worked on Docker before?"
- Looking for patterns in user requests
- Understanding what topics have been explored
## Command
```bash
curl -s "http://localhost:37777/api/search/prompts?query=docker&format=index&limit=10"
```
## Parameters
- **query** (required): Search terms
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
- **project**: Filter by project name (optional)
## Use Case
"Have we worked on Docker before?" → Search prompts to see related user requests
## Example Response
```json
{
"query": "docker",
"count": 3,
"format": "index",
"results": [
{
"id": 456,
"claude_session_id": "abc-123",
"prompt_number": 1,
"prompt_text": "Help me set up Docker for this project",
"created_at_epoch": 1699564800000,
"score": 0.98
}
]
}
```
## How to Present Results
```markdown
Found 3 past prompts about "docker":
1. **Prompt #456** (Nov 8, 2024)
> "Help me set up Docker for this project"
2. **Prompt #457** (Nov 9, 2024)
> "Fix Docker compose networking issues"
```
## Tips
1. Useful for understanding what users have asked about
2. Combine with session search to see both questions and outcomes
3. Helps identify recurring topics or pain points
# Get Recent Context
Get recent session summaries and observations for a project.
## When to Use
- User asks: "What did we do last session?"
- User asks: "What have we been working on?"
- Need to understand recent project activity
## Command
```bash
curl -s "http://localhost:37777/api/context/recent?project=claude-mem&limit=3"
```
## Parameters
- **project**: Project name (default: current directory basename)
- **limit**: Number of recent sessions (default: 3, max: 10)
## Response Structure
Returns complete session data including summaries, observations, and status:
```json
{
"project": "claude-mem",
"limit": 3,
"count": 3,
"sessions": [
{
"sdk_session_id": "abc-123",
"status": "completed",
"has_summary": 1,
"summary": {
"request": "Add authentication",
"completed": "Implemented JWT auth...",
"learned": "...",
"next_steps": "..."
},
"observations": [...]
}
]
}
```
## Use Case: "What did we do last session?"
```bash
# Get last 3 sessions
RESULT=$(curl -s "http://localhost:37777/api/context/recent?limit=3")
# Parse and format:
# - Show session request
# - Show what was completed
# - List key observations
# - Highlight next steps
```
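The parse-and-format step can be sketched in Python (response shape from the example above; treating `summary` as possibly null when `has_summary` is 0 is an assumption):

```python
# Sketch: extract the fields worth presenting from a recent-context response.
def summarize_sessions(recent: dict) -> list[dict]:
    out = []
    for session in recent.get("sessions", []):
        summary = session.get("summary") or {}  # assumed absent when has_summary is 0
        out.append({
            "status": session.get("status", "unknown"),
            "request": summary.get("request", "(no summary)"),
            "completed": summary.get("completed", ""),
            "next_steps": summary.get("next_steps", ""),
            "observation_count": len(session.get("observations", [])),
        })
    return out
```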
## How to Present Results
```markdown
## Recent Work on claude-mem
### Session 1 (Nov 9, 2024 - Completed)
**Request:** Add user authentication
**Completed:**
- Implemented JWT authentication with token-based auth
- Added middleware for route protection
- Created login and refresh token endpoints
**Key Observations:**
1. 🟣 Implemented JWT authentication (#1234)
2. 🔴 Fixed token expiration edge case (#1235)
**Next Steps:**
- Add password reset functionality
- Implement rate limiting
---
### Session 2 (Nov 8, 2024 - Completed)
...
```
## Tips
1. This is the best operation for "what did we do recently" questions
2. Returns complete context including summaries and observations
3. Active sessions show current work in progress
4. Default limit=3 is usually sufficient for recent context
# Search Session Summaries
Search session-level summaries to understand what was accomplished in past sessions.
## When to Use
- User asks: "What did we accomplish in previous sessions?"
- Looking for sessions about a specific topic
- Understanding the scope of past work
## Command
```bash
curl -s "http://localhost:37777/api/search/sessions?query=deployment&format=index&limit=10"
```
## Parameters
- **query** (required): Search terms (e.g., "deployment", "bug fix", "refactor")
- **format**: "index" (summary) or "full" (complete details). Default: "full"
- **limit**: Number of results (default: 20, max: 100)
## Response Fields
- **request**: Original user request
- **completed**: What was accomplished
- **learned**: Technical learnings
- **next_steps**: Planned follow-ups
- **files_read**: Files that were read
- **files_edited**: Files that were modified
## Example Use Case
User asks: "Have we worked on deployment before?"
```bash
RESULT=$(curl -s "http://localhost:37777/api/search/sessions?query=deployment&format=index&limit=5")
# Parse JSON and present matching sessions
```
## How to Present Results
For format=index:
```markdown
Found 3 sessions about "deployment":
1. **Session #123** (Nov 8, 2024)
> Deploy Docker container to production
> Completed: Set up CI/CD pipeline, configured secrets
2. **Session #124** (Nov 9, 2024)
> Fix deployment rollback issues
> Completed: Added health checks, fixed rollback script
```
For format=full, include all fields (request, completed, learned, next_steps, files).
## Tips
1. Use format=index to find relevant sessions quickly
2. Then fetch format=full for complete details
3. Sessions capture high-level accomplishments, while observations record granular facts
# Timeline by Query
Search for something, then automatically get timeline context around the best match.
## When to Use
- User asks: "What led to the authentication refactor?"
- Want to find something AND see surrounding context in one request
- Understand the full story with minimal requests
## Command
```bash
curl -s "http://localhost:37777/api/timeline/by-query?query=authentication+refactor&mode=auto&depth_before=10&depth_after=10"
```
## Parameters
- **query** (required): Search terms
- **mode**: Where to search (default: "auto")
- `"auto"`: Search both observations and sessions, return best match
- `"observations"`: Search only observations
- `"sessions"`: Search only sessions
- **depth_before**: Records before match (default: 10, max: 50)
- **depth_after**: Records after match (default: 10, max: 50)
- **project**: Filter by project name (optional)
## Response Structure
Returns both the best match AND timeline around it:
```json
{
"query": "authentication refactor",
"mode": "auto",
"match": {
"type": "observation",
"id": 1234,
"title": "Refactored authentication middleware",
"score": 0.95,
"created_at_epoch": 1699564800000
},
"depth_before": 10,
"depth_after": 10,
"timeline": {
"observations": [...],
"sessions": [...],
"prompts": [...]
}
}
```
## Use Case: "What led to the authentication refactor?"
One query gets both:
1. The authentication refactor observation (best match)
2. Complete timeline before and after showing what led to it
```bash
curl -s "http://localhost:37777/api/timeline/by-query?query=authentication+refactor&depth_before=10&depth_after=10"
```
## How to Present Results
```markdown
## Found: Refactored authentication middleware (Observation #1234)
**Match score:** 0.95
**Date:** Nov 9, 2024 3:30 PM
### Timeline (10 before → 10 after)
**Total:** 18 items (11 obs, 5 sessions, 2 prompts)
### Nov 8, 2024
**2:00 PM** - 🔴 **Bugfix #1220:** "Fixed token validation bug"
> Tokens weren't properly validated
**3:00 PM** - 🔵 **Discovery #1225:** "Current auth middleware is fragile"
> Multiple edge cases not handled
### Nov 9, 2024
**3:30 PM** - 🔄 **Refactor #1234:** "Refactored authentication middleware" ← MATCH
> Complete rewrite with better error handling
**4:00 PM** - ✅ **Change #1235:** "Updated all routes to use new middleware"
```
## Tips
1. This is the most efficient operation for "what led to X" questions
2. One request instead of two (search + timeline)
3. Use mode="auto" to search both observations and sessions
4. Adjust depth based on how much context you need
5. Great for understanding causality and sequence
@@ -0,0 +1,97 @@
# Get Timeline
Get a chronological timeline around a specific point in time.
## When to Use
- User asks: "What was happening when we fixed that bug?"
- Need context around a specific observation or session
- Understanding the sequence of events
## Command
```bash
# Around an observation ID
curl -s "http://localhost:37777/api/context/timeline?anchor=1234&depth_before=10&depth_after=10"
# Around a session ID
curl -s "http://localhost:37777/api/context/timeline?anchor=S123&depth_before=10&depth_after=10"
# Around a timestamp
curl -s "http://localhost:37777/api/context/timeline?anchor=2024-11-09T15:30:00Z&depth_before=10&depth_after=10"
```
## Parameters
- **anchor** (required): Observation ID (number), Session ID ("S123"), or ISO timestamp
- **depth_before**: Number of records before anchor (default: 10, max: 50)
- **depth_after**: Number of records after anchor (default: 10, max: 50)
- **project**: Filter by project name (optional)
## Response Structure
Returns unified timeline with observations, sessions, and prompts interleaved chronologically:
```json
{
"anchor": "1234",
"depth_before": 10,
"depth_after": 10,
"timeline": {
"observations": [...],
"sessions": [...],
"prompts": [...]
}
}
```
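Since the three arrays come back separately, a consumer typically interleaves them by timestamp. A hedged Python sketch (assuming every record carries `created_at_epoch` in milliseconds, as in the other examples):

```python
def merge_timeline(timeline: dict) -> list[tuple[str, dict]]:
    """Flatten observations, sessions, and prompts into one chronological stream."""
    merged = [(kind, item)
              for kind in ("observations", "sessions", "prompts")
              for item in timeline.get(kind, [])]
    # Stable sort by epoch keeps same-timestamp items in source order.
    merged.sort(key=lambda pair: pair[1]["created_at_epoch"])
    return merged
```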
## Workflow: "What was happening when we fixed that auth bug?"
1. First, find the bug observation:
```bash
curl -s "http://localhost:37777/api/search/observations?query=auth+bug&format=index&limit=5"
# Get observation ID (e.g., #1234)
```
2. Then get timeline around it:
```bash
curl -s "http://localhost:37777/api/context/timeline?anchor=1234&depth_before=5&depth_after=5"
```
## How to Present Results
Present chronologically grouped by day:
```markdown
## Timeline around Observation #1234
**Window:** 5 records before → 5 records after
**Total:** 12 items (7 obs, 3 sessions, 2 prompts)
### Nov 8, 2024
**4:30 PM** - 🎯 **Session Request:** "Add user authentication"
**4:45 PM** - 🔵 **Discovery #1230:** "JWT library options compared"
> Evaluated 3 libraries: jsonwebtoken, jose, passport-jwt
**5:00 PM** - 🧠 **Decision #1231:** "Chose jsonwebtoken for simplicity"
### Nov 9, 2024
**3:30 PM** - 🟣 **Feature #1234:** "Implemented JWT authentication" ← ANCHOR
**4:00 PM** - 🔴 **Bugfix #1235:** "Fixed token expiration edge case"
> Handled race condition in refresh flow
```
**Legend:** 🎯 session-request | 🔴 bugfix | 🟣 feature | 🔄 refactor | ✅ change | 🔵 discovery | 🧠 decision
For complete formatting guidelines, see [formatting.md](formatting.md).
## Tips
1. Use depth_before=5, depth_after=5 for focused context
2. Increase depth for broader investigation
3. Timeline shows the full story around a specific point
4. Helps understand causality and sequence of events
@@ -5,359 +5,85 @@ description: Diagnose and fix claude-mem installation issues. Checks PM2 worker
# Claude-Mem Troubleshooting Skill
This skill diagnoses and resolves common installation and operational issues with the claude-mem plugin.
Diagnose and resolve installation and operational issues with the claude-mem plugin.
## Quick Reference
## When to Use This Skill
**Common Issues:**
**Invoke this skill when:**
- Memory not persisting after `/clear`
- Viewer UI empty or not loading
- Worker service not running
- Database missing or corrupted
- Port conflicts
- Missing dependencies
- "Nothing is remembered" complaints
- Search results empty when they shouldn't be
## Diagnostic Workflow
**Do NOT invoke** for feature requests or usage questions (use regular documentation for that).
When invoked, follow these steps systematically:
## Quick Decision Guide
### 1. Check PM2 Worker Status
Once the skill is loaded, choose the appropriate operation:
First, verify if the worker service is running:
**What's the problem?**
- "Nothing is being remembered" → [operations/common-issues.md](operations/common-issues.md#nothing-remembered)
- "Viewer is empty" → [operations/common-issues.md](operations/common-issues.md#viewer-empty)
- "Worker won't start" → [operations/common-issues.md](operations/common-issues.md#worker-not-starting)
- "Want to run full diagnostics" → [operations/diagnostics.md](operations/diagnostics.md)
- "Need automated fix" → [operations/automated-fixes.md](operations/automated-fixes.md)
## Available Operations
Choose the appropriate operation file for detailed instructions:
### Diagnostic Workflows
1. **[Full System Diagnostics](operations/diagnostics.md)** - Comprehensive step-by-step diagnostic workflow
2. **[Worker Diagnostics](operations/worker.md)** - PM2 worker-specific troubleshooting
3. **[Database Diagnostics](operations/database.md)** - Database integrity and data checks
### Issue Resolution
4. **[Common Issues](operations/common-issues.md)** - Quick fixes for frequently encountered problems
5. **[Automated Fixes](operations/automated-fixes.md)** - One-command fix sequences
### Reference
6. **[Quick Commands](operations/reference.md)** - Essential commands for troubleshooting
## Quick Start
**Fast automated fix (try this first):**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
pm2 delete claude-mem-worker 2>/dev/null; \
npm install && \
node_modules/.bin/pm2 start ecosystem.config.cjs && \
sleep 3 && \
curl -s http://127.0.0.1:37777/health
```
**Expected output:** `{"status":"ok"}`
If that doesn't work, proceed to detailed diagnostics.
## Response Format
When troubleshooting:
1. **Identify the symptom** - What's the user reporting?
2. **Choose operation file** - Use the decision guide above
3. **Follow steps systematically** - Don't skip diagnostic steps
4. **Report findings** - Tell user what you found and what was fixed
5. **Verify resolution** - Confirm the issue is resolved
## Technical Notes
- **Worker port:** Default 37777 (configurable via `CLAUDE_MEM_WORKER_PORT`)
- **Database location:** `~/.claude-mem/claude-mem.db`
- **Plugin location:** `~/.claude/plugins/marketplaces/thedotmack/`
- **PM2 process name:** `claude-mem-worker`
## Error Reporting
If troubleshooting doesn't resolve the issue, collect diagnostic data and direct user to:
https://github.com/thedotmack/claude-mem/issues
See [operations/diagnostics.md](operations/diagnostics.md#reporting-issues) for details on what to collect.
@@ -0,0 +1,151 @@
# Automated Fix Sequences
One-command fix sequences for common claude-mem issues.
## Quick Fix: Complete Reset and Restart
**Use when:** General issues, worker not responding, after updates
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
pm2 delete claude-mem-worker 2>/dev/null; \
npm install && \
node_modules/.bin/pm2 start ecosystem.config.cjs && \
sleep 3 && \
curl -s http://127.0.0.1:37777/health
```
**Expected output:** `{"status":"ok"}`
**What it does:**
1. Stops the worker (if running)
2. Ensures dependencies are installed
3. Starts worker with local PM2
4. Waits for startup
5. Verifies health
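The fixed `sleep 3` can be flaky on slower machines. A minimal sketch of a polling helper that waits until `/health` actually answers (assumes the default port 37777; adjust if you changed it):

```shell
# Sketch: poll /health instead of sleeping a fixed interval.
# Assumes the default port 37777 -- not part of the fix above, just
# a hardening suggestion.
wait_healthy() {
  tries=0
  while [ "$tries" -lt 10 ]; do
    # Succeed as soon as the worker reports ok
    if curl -s http://127.0.0.1:37777/health | grep -q '"status":"ok"'; then
      return 0
    fi
    sleep 1
    tries=$((tries + 1))
  done
  return 1
}
```

Run `wait_healthy && echo up || echo "worker did not come up"` after the `pm2 start` step in place of the fixed sleep.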
## Fix: Worker Not Running
**Use when:** PM2 shows worker as stopped or not listed
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
node_modules/.bin/pm2 start ecosystem.config.cjs && \
sleep 2 && \
pm2 status
```
**Expected output:** Worker shows as "online"
## Fix: Dependencies Missing
**Use when:** Worker won't start due to missing packages
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
npm install && \
pm2 restart claude-mem-worker
```
## Fix: Port Conflict
**Use when:** Error shows port already in use
```bash
# Change to port 37778
mkdir -p ~/.claude-mem && \
echo '{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}' > ~/.claude-mem/settings.json && \
pm2 restart claude-mem-worker && \
sleep 2 && \
curl -s http://127.0.0.1:37778/health
```
**Expected output:** `{"status":"ok"}`
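The override can be read back out of the settings JSON with a small `grep` chain. A sketch of that precedence (sample JSON inline, so it runs without any files) — use the configured port when present, else fall back to 37777:

```shell
# Sketch: extract CLAUDE_MEM_WORKER_PORT from settings JSON,
# falling back to the default 37777 when it is not set.
settings='{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}'
port=$(printf '%s' "$settings" \
  | grep -o '"CLAUDE_MEM_WORKER_PORT":"[0-9][0-9]*"' \
  | grep -o '[0-9][0-9]*')
port=${port:-37777}
echo "$port"
```

With no override in the JSON, the `grep` produces nothing and the `${port:-37777}` fallback applies.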
## Fix: Database Issues
**Use when:** Database appears corrupted or out of sync
```bash
# Backup and test integrity
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup && \
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;" && \
pm2 restart claude-mem-worker
```
**If integrity check fails, recreate database:**
```bash
# WARNING: This deletes all memory data
mv ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.old && \
pm2 restart claude-mem-worker
```
## Fix: Clean Reinstall
**Use when:** All else fails, nuclear option
```bash
# Backup data first
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup 2>/dev/null
# Stop and remove worker
pm2 delete claude-mem-worker 2>/dev/null
# Reinstall dependencies
cd ~/.claude/plugins/marketplaces/thedotmack/ && \
rm -rf node_modules && \
npm install
# Start worker
node_modules/.bin/pm2 start ecosystem.config.cjs && \
sleep 3 && \
curl -s http://127.0.0.1:37777/health
```
## Fix: Clear PM2 Logs
**Use when:** Logs are too large, want fresh start
```bash
pm2 flush claude-mem-worker && \
pm2 restart claude-mem-worker
```
## Verification Commands
**After running any fix, verify with these:**
```bash
# Check worker status
pm2 status | grep claude-mem-worker
# Check health
curl -s http://127.0.0.1:37777/health
# Check database
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
# Check viewer
curl -s http://127.0.0.1:37777/api/stats
# Check logs for errors
pm2 logs claude-mem-worker --lines 20 --nostream | grep -i error
```
**All checks should pass:**
- Worker status: "online"
- Health: `{"status":"ok"}`
- Database: Shows count (may be 0 if new)
- Stats: Returns JSON with counts
- Logs: No recent errors
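The health check in particular can be scripted rather than eyeballed. A minimal sketch of a pass/fail predicate over the response body (sample inputs inline, no running worker required):

```shell
# Sketch: predicate over a /health response body.
is_healthy() {
  printf '%s' "$1" | grep -q '"status":"ok"'
}
# A good response passes; an empty (connection-refused) response fails.
is_healthy '{"status":"ok"}' && echo "health: PASS"
is_healthy '' || echo "health: FAIL (empty response)"
```

In a real check, feed it the live response: `is_healthy "$(curl -s http://127.0.0.1:37777/health)"`.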
## Troubleshooting the Fixes
**If automated fix fails:**
1. Run the diagnostic script from [diagnostics.md](diagnostics.md)
2. Check specific error in PM2 logs
3. Try manual worker start to see detailed error:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
node plugin/scripts/worker-service.cjs
```
@@ -0,0 +1,232 @@
# Common Issue Resolutions
Quick fixes for frequently encountered claude-mem problems.
## Issue: Nothing is Remembered After `/clear` {#nothing-remembered}
**Symptoms:**
- Data doesn't persist across sessions
- Context is empty after `/clear`
- Search returns no results for past work
**Root cause:** `/clear` marks sessions complete but does not delete data, so missing memories suggest:

- Worker not processing observations
- Database not being written to
- Context hook not reading from database
**Fix:**
1. Verify worker is running:
```bash
pm2 jlist | grep claude-mem-worker
```
2. Check database has recent observations:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations WHERE created_at > datetime('now', '-1 day');"
```
3. Restart worker and start new session:
```bash
pm2 restart claude-mem-worker
```
4. Create a test observation: `/skill version-bump` then cancel
5. Check if observation appears in viewer:
```bash
open http://127.0.0.1:37777
# Or manually check database:
sqlite3 ~/.claude-mem/claude-mem.db "SELECT * FROM observations ORDER BY created_at DESC LIMIT 1;"
```
## Issue: Viewer Empty After Every Claude Restart {#viewer-empty}
**Symptoms:**
- Viewer shows no data at http://127.0.0.1:37777
- Stats endpoint returns all zeros
- Database appears empty in UI
**Root cause:**
- Database being recreated on startup (shouldn't happen)
- Worker reading from wrong database location
- Database permissions issue
**Fix:**
1. Check database file exists and has data:
```bash
ls -lh ~/.claude-mem/claude-mem.db
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
```
2. Check file permissions:
```bash
ls -la ~/.claude-mem/claude-mem.db
# Should be readable/writable by your user
```
3. Verify worker is using correct database path in logs:
```bash
pm2 logs claude-mem-worker --lines 50 --nostream | grep "Database"
```
4. Test viewer connection manually:
```bash
curl -s http://127.0.0.1:37777/api/stats
# Should show non-zero counts if data exists
```
## Issue: Old Memory in Claude {#old-memory}
**Symptoms:**
- Context contains outdated observations
- Irrelevant past work appearing in sessions
- Context feels stale
**Root cause:** Context hook injecting stale observations
**Fix:**
1. Check the observation count setting:
```bash
grep CLAUDE_MEM_CONTEXT_OBSERVATIONS ~/.claude/settings.json
```
2. Default is 50 observations - you can adjust this:
```json
{
"env": {
"CLAUDE_MEM_CONTEXT_OBSERVATIONS": "25"
}
}
```
3. Check database for actual observation dates:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, project, title FROM observations ORDER BY created_at DESC LIMIT 10;"
```
4. Consider filtering by project if working on multiple codebases
## Issue: Worker Not Starting {#worker-not-starting}
**Symptoms:**
- PM2 shows worker as "stopped" or "errored"
- Health check fails
- Viewer not accessible
**Root cause:**
- Port already in use
- PM2 not installed or not in PATH
- Missing dependencies
**Fix:**
1. Try manual worker start to see error:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
node plugin/scripts/worker-service.cjs
# Should start server on port 37777 or show error
```
2. If port in use, change it:
```bash
mkdir -p ~/.claude-mem
echo '{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}' > ~/.claude-mem/settings.json
```
3. If dependencies missing:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm install
pm2 start ecosystem.config.cjs
```
## Issue: Search Results Empty
**Symptoms:**
- Search skill returns no results
- API endpoints return empty arrays
- Know there's data but can't find it
**Root cause:**
- FTS5 tables not synchronized
- Wrong project filter
- Database not being queried correctly
**Fix:**
1. Check if observations exist in database:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
```
2. Check FTS5 table sync:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations_fts;"
# Should match observation count
```
3. Try search via API directly:
```bash
curl "http://127.0.0.1:37777/api/search/observations?q=test&format=index"
```
4. If FTS5 out of sync, restart worker (triggers reindex):
```bash
pm2 restart claude-mem-worker
```
## Issue: Port Conflicts
**Symptoms:**
- Worker won't start
- Error: "EADDRINUSE: address already in use"
- Health check fails
**Fix:**
1. Check what's using port 37777:
```bash
lsof -i :37777
```
2. Either kill the conflicting process or change claude-mem port:
```bash
mkdir -p ~/.claude-mem
echo '{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}' > ~/.claude-mem/settings.json
pm2 restart claude-mem-worker
```
## Issue: Database Corrupted
**Symptoms:**
- SQLite errors in logs
- Worker crashes on startup
- Queries fail
**Fix:**
1. Backup the database:
```bash
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup
```
2. Check integrity:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
3. If the check reports errors, recreate the database (loses data):
```bash
rm ~/.claude-mem/claude-mem.db
pm2 restart claude-mem-worker
# Worker will create new database
```
## Prevention Tips
**Keep claude-mem healthy:**
- Regularly check viewer UI to see if observations are being captured
- Monitor database size (shouldn't grow unbounded)
- Update plugin when new versions are released
- Keep Claude Code updated
**Performance tuning:**
- Adjust `CLAUDE_MEM_CONTEXT_OBSERVATIONS` if context is too large/small
- Use `/clear` to mark sessions complete and start fresh
- Use search skill to query specific memories instead of loading everything
@@ -0,0 +1,403 @@
# Database Diagnostics
SQLite database troubleshooting for claude-mem.
## Database Overview
Claude-mem uses SQLite3 for persistent storage:
- **Location:** `~/.claude-mem/claude-mem.db`
- **Library:** better-sqlite3 (synchronous, not bun:sqlite)
- **Features:** FTS5 full-text search, triggers, indexes
- **Tables:** observations, sessions, user_prompts, observations_fts, sessions_fts, prompts_fts
## Basic Database Checks
### Check Database Exists
```bash
# Check file exists
ls -lh ~/.claude-mem/claude-mem.db
# Check file size
du -h ~/.claude-mem/claude-mem.db
# Check permissions
ls -la ~/.claude-mem/claude-mem.db
```
**Expected:**
- File exists
- Size: 100KB - 10MB+ (depends on usage)
- Permissions: Readable/writable by your user
### Check Database Integrity
```bash
# Run integrity check
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
**Expected output:** `ok`
**If errors appear:**
- Database corrupted
- Backup immediately: `cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup`
- Consider recreating (data loss)
## Data Inspection
### Count Records
```bash
# Observation count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
# Session count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM sessions;"
# User prompt count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM user_prompts;"
# FTS5 table counts (should match main tables)
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations_fts;"
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM sessions_fts;"
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM prompts_fts;"
```
### View Recent Records
```bash
# Recent observations
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
created_at,
type,
title,
project
FROM observations
ORDER BY created_at DESC
LIMIT 10;
"
# Recent sessions
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
created_at,
request,
project
FROM sessions
ORDER BY created_at DESC
LIMIT 5;
"
# Recent user prompts
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
created_at,
prompt
FROM user_prompts
ORDER BY created_at DESC
LIMIT 10;
"
```
### Check Projects
```bash
# List all projects
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT DISTINCT project
FROM observations
ORDER BY project;
"
# Count observations per project
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
project,
COUNT(*) as count
FROM observations
GROUP BY project
ORDER BY count DESC;
"
```
## Database Schema
### View Table Structure
```bash
# List all tables
sqlite3 ~/.claude-mem/claude-mem.db ".tables"
# Show observations table schema
sqlite3 ~/.claude-mem/claude-mem.db ".schema observations"
# Show all schemas
sqlite3 ~/.claude-mem/claude-mem.db ".schema"
```
### Expected Tables
- `observations` - Main observation records
- `observations_fts` - FTS5 virtual table for full-text search
- `sessions` - Session summary records
- `sessions_fts` - FTS5 virtual table for session search
- `user_prompts` - User prompt records
- `prompts_fts` - FTS5 virtual table for prompt search
## FTS5 Synchronization
The FTS5 tables should stay synchronized with main tables via triggers.
### Check FTS5 Sync
```bash
# Compare counts
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
(SELECT COUNT(*) FROM observations) as observations,
(SELECT COUNT(*) FROM observations_fts) as observations_fts,
(SELECT COUNT(*) FROM sessions) as sessions,
(SELECT COUNT(*) FROM sessions_fts) as sessions_fts,
(SELECT COUNT(*) FROM user_prompts) as prompts,
(SELECT COUNT(*) FROM prompts_fts) as prompts_fts;
"
```
**Expected:** All pairs should match (observations = observations_fts, etc.)
### Fix FTS5 Desync
If FTS5 counts don't match, triggers may have failed. Restart worker to rebuild:
```bash
pm2 restart claude-mem-worker
```
The worker will rebuild FTS5 indexes on startup if they're out of sync.
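The trigger mechanism can be seen in miniature with an in-memory database. A sketch using a deliberately simplified two-column schema (the real tables have more columns and more triggers): an external-content FTS5 table kept in sync by an `AFTER INSERT` trigger.

```shell
# Sketch: external-content FTS5 table synced by a trigger, on a
# simplified schema in an in-memory database (not the real schema).
sqlite3 :memory: <<'SQL'
CREATE TABLE observations(id INTEGER PRIMARY KEY, title TEXT, content TEXT);
CREATE VIRTUAL TABLE observations_fts USING fts5(
  title, content, content='observations', content_rowid='id'
);
CREATE TRIGGER observations_ai AFTER INSERT ON observations BEGIN
  INSERT INTO observations_fts(rowid, title, content)
  VALUES (new.id, new.title, new.content);
END;
INSERT INTO observations(title, content) VALUES ('auth fix', 'refresh token bug');
-- Counts match (1) and the indexed row is searchable:
SELECT (SELECT COUNT(*) FROM observations) = (SELECT COUNT(*) FROM observations_fts);
SELECT title FROM observations_fts WHERE observations_fts MATCH 'token';
SQL
```

If an insert bypasses the triggers (or a trigger is dropped), the two counts diverge — which is exactly what the sync check above detects.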
## Common Database Issues
### Issue: Database Doesn't Exist
**Cause:** First run, or database was deleted
**Fix:** Database will be created automatically on first observation. No action needed.
### Issue: Database is Empty (0 Records)
**Cause:**
- New installation (normal)
- Data was deleted
- Worker not processing observations
**Fix:**
1. Create test observation (use any skill and cancel)
2. Check worker logs for errors:
```bash
pm2 logs claude-mem-worker --lines 50 --nostream
```
3. Verify observation appears in database
### Issue: Database Permission Denied
**Cause:** File permissions wrong, database owned by different user
**Fix:**
```bash
# Check ownership
ls -la ~/.claude-mem/claude-mem.db
# Fix permissions (if needed)
chmod 644 ~/.claude-mem/claude-mem.db
chown $USER ~/.claude-mem/claude-mem.db
```
### Issue: Database Locked
**Cause:**
- Multiple processes accessing database
- Crash left lock file
- Long-running transaction
**Fix:**
```bash
# Check for lock file
ls -la ~/.claude-mem/claude-mem.db-wal
ls -la ~/.claude-mem/claude-mem.db-shm
# Remove lock files (only if worker is stopped!)
pm2 stop claude-mem-worker
rm ~/.claude-mem/claude-mem.db-wal ~/.claude-mem/claude-mem.db-shm
pm2 start claude-mem-worker
```
### Issue: Database Growing Too Large
**Cause:** Too many observations accumulated
**Check size:**
```bash
du -h ~/.claude-mem/claude-mem.db
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
```
**Options:**
1. Delete old observations (manual cleanup):
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
DELETE FROM observations
WHERE created_at < datetime('now', '-90 days');
"
```
2. Vacuum to reclaim space:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "VACUUM;"
```
3. Archive and start fresh:
```bash
mv ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.archive
pm2 restart claude-mem-worker
```
## Database Recovery
### Backup Database
**Before any destructive operations:**
```bash
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup
```
### Restore from Backup
```bash
pm2 stop claude-mem-worker
cp ~/.claude-mem/claude-mem.db.backup ~/.claude-mem/claude-mem.db
pm2 start claude-mem-worker
```
### Export Data
Export to JSON for safekeeping:
```bash
# Export observations
sqlite3 ~/.claude-mem/claude-mem.db -json "SELECT * FROM observations;" > observations.json
# Export sessions
sqlite3 ~/.claude-mem/claude-mem.db -json "SELECT * FROM sessions;" > sessions.json
# Export prompts
sqlite3 ~/.claude-mem/claude-mem.db -json "SELECT * FROM user_prompts;" > prompts.json
```
### Recreate Database
**WARNING: Data loss. Backup first!**
```bash
# Stop worker
pm2 stop claude-mem-worker
# Backup current database
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.old
# Delete database
rm ~/.claude-mem/claude-mem.db
# Start worker (creates new database)
pm2 start claude-mem-worker
```
## Database Statistics
### Storage Analysis
```bash
# Database file size
du -h ~/.claude-mem/claude-mem.db
# Record counts by type
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
type,
COUNT(*) as count
FROM observations
GROUP BY type
ORDER BY count DESC;
"
# Observations per month
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
strftime('%Y-%m', created_at) as month,
COUNT(*) as count
FROM observations
GROUP BY month
ORDER BY month DESC;
"
# Average observation size (characters)
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
AVG(LENGTH(content)) as avg_content_length,
MAX(LENGTH(content)) as max_content_length
FROM observations;
"
```
## Advanced Queries
### Find Specific Observations
```bash
# Search by keyword (FTS5)
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT title, created_at
FROM observations_fts
WHERE observations_fts MATCH 'authentication'
ORDER BY created_at DESC;
"
# Find by type
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT title, created_at
FROM observations
WHERE type = 'bugfix'
ORDER BY created_at DESC
LIMIT 10;
"
# Find by file path
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT title, created_at
FROM observations
WHERE file_path LIKE '%auth%'
ORDER BY created_at DESC;
"
```
## Database Maintenance
### Regular Maintenance Tasks
```bash
# Analyze for query optimization
sqlite3 ~/.claude-mem/claude-mem.db "ANALYZE;"
# Rebuild FTS5 indexes
sqlite3 ~/.claude-mem/claude-mem.db "
INSERT INTO observations_fts(observations_fts) VALUES('rebuild');
INSERT INTO sessions_fts(sessions_fts) VALUES('rebuild');
INSERT INTO prompts_fts(prompts_fts) VALUES('rebuild');
"
# Vacuum to reclaim space
sqlite3 ~/.claude-mem/claude-mem.db "VACUUM;"
```
**Run monthly to keep database healthy.**
@@ -0,0 +1,219 @@
# Full System Diagnostics
Comprehensive step-by-step diagnostic workflow for claude-mem issues.
## Diagnostic Workflow
Run these checks systematically to identify the root cause:
### 1. Check PM2 Worker Status
First, verify if the worker service is running:
```bash
# Check if PM2 is available
which pm2 || echo "PM2 not found in PATH"
# List PM2 processes
pm2 jlist 2>&1
# If pm2 is not found, try the local installation
~/.claude/plugins/marketplaces/thedotmack/node_modules/.bin/pm2 jlist 2>&1
```
**Expected output:** JSON array with `claude-mem-worker` process showing `"status": "online"`
**If worker not running or status is not "online":**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
pm2 start ecosystem.config.cjs
# Or use local pm2:
node_modules/.bin/pm2 start ecosystem.config.cjs
```
### 2. Check Worker Service Health
Test if the worker service responds to HTTP requests:
```bash
# Default port is 37777
curl -s http://127.0.0.1:37777/health
# Check custom port from settings
PORT=$(cat ~/.claude-mem/settings.json 2>/dev/null | grep CLAUDE_MEM_WORKER_PORT | grep -o '[0-9]\+' || echo "37777")
curl -s http://127.0.0.1:$PORT/health
```
**Expected output:** `{"status":"ok"}`
**If connection refused:**
- Worker not running → Go back to step 1
- Port conflict → Check what's using the port:
```bash
lsof -i :37777 || netstat -tlnp | grep 37777
```
### 3. Check Database
Verify the database exists and contains data:
```bash
# Check if database file exists
ls -lh ~/.claude-mem/claude-mem.db
# Check database size (should be > 0 bytes)
du -h ~/.claude-mem/claude-mem.db
# Query database for observation count (requires sqlite3)
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) as observation_count FROM observations;" 2>&1
# Query for session count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) as session_count FROM sessions;" 2>&1
# Check recent observations
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, type, title FROM observations ORDER BY created_at DESC LIMIT 5;" 2>&1
```
**Expected:**
- Database file exists (typically 100KB - 10MB+)
- Contains observations and sessions
- Recent observations visible
**If database missing or empty:**
- New installation - this is normal, database will populate as you work
- After `/clear` - sessions are marked complete but not deleted, data should persist
- Corrupted database - backup and recreate:
```bash
cp ~/.claude-mem/claude-mem.db ~/.claude-mem/claude-mem.db.backup
# Worker will recreate on next observation
```
### 4. Check Dependencies Installation
Verify all required npm packages are installed:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
# Check for critical packages
ls node_modules/@anthropic-ai/claude-agent-sdk 2>&1 | head -1
ls node_modules/better-sqlite3 2>&1 | head -1
ls node_modules/express 2>&1 | head -1
ls node_modules/pm2 2>&1 | head -1
```
**Expected:** All critical packages present
**If dependencies missing:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm install
```
### 5. Check Worker Logs
Review recent worker logs for errors:
```bash
# View last 50 lines of worker logs
pm2 logs claude-mem-worker --lines 50 --nostream
# Or use local pm2:
cd ~/.claude/plugins/marketplaces/thedotmack/
node_modules/.bin/pm2 logs claude-mem-worker --lines 50 --nostream
# Check for specific errors
pm2 logs claude-mem-worker --lines 100 --nostream | grep -i "error\|exception\|failed"
```
### 6. Test Viewer UI
Check if the web viewer is accessible:
```bash
# Test viewer endpoint
curl -s http://127.0.0.1:37777/ | head -20
# Test stats endpoint
curl -s http://127.0.0.1:37777/api/stats
```
**Expected:**
- `/` returns HTML page with React viewer
- `/api/stats` returns JSON with database counts
### 7. Check Port Configuration
Verify port settings and availability:
```bash
# Check if custom port is configured
cat ~/.claude-mem/settings.json 2>/dev/null
cat ~/.claude/settings.json 2>/dev/null
# Check what's listening on default port
lsof -i :37777 2>&1 || netstat -tlnp 2>&1 | grep 37777
# Test connectivity
nc -zv 127.0.0.1 37777 2>&1
```
## Full System Diagnosis Script
Run this comprehensive diagnostic script to collect all information:
```bash
#!/bin/bash
echo "=== Claude-Mem Troubleshooting Report ==="
echo ""
echo "1. Environment"
echo " OS: $(uname -s)"
echo ""
echo "2. Plugin Installation"
echo " Plugin directory exists: $([ -d ~/.claude/plugins/marketplaces/thedotmack ] && echo 'YES' || echo 'NO')"
echo " Package version: $(grep '"version"' ~/.claude/plugins/marketplaces/thedotmack/package.json 2>/dev/null | head -1)"
echo ""
echo "3. Database"
echo " Database exists: $([ -f ~/.claude-mem/claude-mem.db ] && echo 'YES' || echo 'NO')"
echo " Database size: $(du -h ~/.claude-mem/claude-mem.db 2>/dev/null | cut -f1)"
echo " Observation count: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM observations;' 2>/dev/null || echo 'N/A')"
echo " Session count: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT COUNT(*) FROM sessions;' 2>/dev/null || echo 'N/A')"
echo ""
echo "4. Worker Service"
PM2_PATH=$(which pm2 2>/dev/null || echo "~/.claude/plugins/marketplaces/thedotmack/node_modules/.bin/pm2")
echo " PM2 path: $PM2_PATH"
WORKER_STATUS=$($PM2_PATH jlist 2>/dev/null | grep -o '"name":"claude-mem-worker".*"status":"[^"]*"' | grep -o 'status":"[^"]*"' | cut -d'"' -f3 || echo 'not running')
echo " Worker status: $WORKER_STATUS"
echo " Health check: $(curl -s http://127.0.0.1:37777/health 2>/dev/null || echo 'FAILED')"
echo ""
echo "5. Configuration"
echo " Port setting: $(cat ~/.claude-mem/settings.json 2>/dev/null | grep CLAUDE_MEM_WORKER_PORT || echo 'default (37777)')"
echo "   Context observations: $(cat ~/.claude/settings.json 2>/dev/null | grep CLAUDE_MEM_CONTEXT_OBSERVATIONS || echo 'default (50)')"
echo ""
echo "6. Recent Activity"
echo " Latest observation: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT created_at FROM observations ORDER BY created_at DESC LIMIT 1;' 2>/dev/null || echo 'N/A')"
echo " Latest session: $(sqlite3 ~/.claude-mem/claude-mem.db 'SELECT created_at FROM sessions ORDER BY created_at DESC LIMIT 1;' 2>/dev/null || echo 'N/A')"
echo ""
echo "=== End Report ==="
```
Save this as `/tmp/claude-mem-diagnostics.sh` and run:
```bash
bash /tmp/claude-mem-diagnostics.sh
```
## Reporting Issues
If troubleshooting doesn't resolve the issue, collect this information for a bug report:
1. Full diagnostic report (run script above)
2. Worker logs: `pm2 logs claude-mem-worker --lines 100 --nostream`
3. Your setup:
- Claude Code version: `claude --version`
- OS: `uname -a`
- Node version: `node --version`
- Plugin version: `version` field in the plugin directory's package.json
4. Steps to reproduce the issue
5. Expected vs actual behavior
5. Expected vs actual behavior
Post to: https://github.com/thedotmack/claude-mem/issues
@@ -0,0 +1,190 @@
# Quick Commands Reference
Essential commands for troubleshooting claude-mem.
## Worker Management
```bash
# Check worker status
pm2 status | grep claude-mem-worker
pm2 jlist | grep claude-mem-worker # JSON format
# Start worker
cd ~/.claude/plugins/marketplaces/thedotmack/
pm2 start ecosystem.config.cjs
# Restart worker
pm2 restart claude-mem-worker
# Stop worker
pm2 stop claude-mem-worker
# Delete worker (for clean restart)
pm2 delete claude-mem-worker
# View logs
pm2 logs claude-mem-worker
# View last N lines
pm2 logs claude-mem-worker --lines 50 --nostream
# Clear logs
pm2 flush claude-mem-worker
```
## Health Checks
```bash
# Check worker health (default port)
curl -s http://127.0.0.1:37777/health
# Check viewer stats
curl -s http://127.0.0.1:37777/api/stats
# Open viewer in browser
open http://127.0.0.1:37777
# Test custom port
PORT=37778
curl -s http://127.0.0.1:$PORT/health
```
## Database Queries
```bash
# Observation count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM observations;"
# Session count
sqlite3 ~/.claude-mem/claude-mem.db "SELECT COUNT(*) FROM sessions;"
# Recent observations
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, type, title FROM observations ORDER BY created_at DESC LIMIT 10;"
# Recent sessions
sqlite3 ~/.claude-mem/claude-mem.db "SELECT created_at, request FROM sessions ORDER BY created_at DESC LIMIT 5;"
# Database size
du -h ~/.claude-mem/claude-mem.db
# Database integrity check
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
# Projects in database
sqlite3 ~/.claude-mem/claude-mem.db "SELECT DISTINCT project FROM observations ORDER BY project;"
```
## Configuration
```bash
# View current settings
cat ~/.claude-mem/settings.json
cat ~/.claude/settings.json
# Change worker port
echo '{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}' > ~/.claude-mem/settings.json
# Change context observation count
# Edit ~/.claude/settings.json and add:
{
"env": {
"CLAUDE_MEM_CONTEXT_OBSERVATIONS": "25"
}
}
# Change AI model
{
"env": {
"CLAUDE_MEM_MODEL": "claude-haiku-4-5"
}
}
```
## Plugin Management
```bash
# Navigate to plugin directory
cd ~/.claude/plugins/marketplaces/thedotmack/
# Check plugin version
grep '"version"' package.json
# Reinstall dependencies
npm install
# View package.json
cat package.json
```
## Port Diagnostics
```bash
# Check what's using port 37777
lsof -i :37777
netstat -tlnp | grep 37777
# Test port connectivity
nc -zv 127.0.0.1 37777
curl -v http://127.0.0.1:37777/health
```
## Log Analysis
```bash
# Search logs for errors
pm2 logs claude-mem-worker --lines 100 --nostream | grep -i "error"
# Search for specific keyword
pm2 logs claude-mem-worker --lines 100 --nostream | grep "keyword"
# Follow logs in real-time
pm2 logs claude-mem-worker
# Show only error logs
pm2 logs claude-mem-worker --err
```
## File Locations
```bash
# Plugin directory
~/.claude/plugins/marketplaces/thedotmack/
# Database
~/.claude-mem/claude-mem.db
# Settings
~/.claude-mem/settings.json
~/.claude/settings.json
# Chroma vector database
~/.claude-mem/chroma/
# Usage logs
~/.claude-mem/usage-logs/
# PM2 logs
~/.pm2/logs/
```
## System Information
```bash
# OS version
uname -a
# Node version
node --version
# NPM version
npm --version
# PM2 version
pm2 --version
# SQLite version
sqlite3 --version
# Check disk space
df -h ~/.claude-mem/
```
@@ -0,0 +1,308 @@
# Worker Service Diagnostics
PM2 worker-specific troubleshooting for claude-mem.
## PM2 Worker Overview
The claude-mem worker is a persistent background service managed by PM2. It:
- Runs Express.js server on port 37777 (default)
- Processes observations asynchronously
- Serves the viewer UI
- Provides search API endpoints
## Check Worker Status
### Basic Status Check
```bash
# List all PM2 processes
pm2 list
# JSON format (parseable)
pm2 jlist
# Filter for claude-mem-worker
pm2 status | grep claude-mem-worker
```
**Expected output:**
```
│ claude-mem-worker │ online │ 12345 │ 0 │ 45m │ 0% │ 85.6mb │
```
**Status meanings:**
- `online` - Worker running correctly
- `stopped` - Worker stopped (normal shutdown)
- `errored` - Worker crashed (check logs)
- `stopping` - Worker shutting down
- Not listed - Worker never started
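Scripting against `pm2 status` output is brittle; parsing `pm2 jlist` is steadier. A minimal sketch, where `worker_status` is a hypothetical helper that assumes the default `pm2 jlist` JSON shape:

```shell
# worker_status: read pm2 jlist JSON on stdin, print the status of
# the claude-mem-worker process (e.g. "online", "stopped", "errored").
worker_status() {
  grep -o '"name":"claude-mem-worker"[^}]*"status":"[^"]*"' \
    | grep -o '"status":"[^"]*"' \
    | cut -d'"' -f4
}

# Example against a captured jlist fragment:
sample='[{"name":"claude-mem-worker","pm2_env":{"status":"online"}}]'
echo "$sample" | worker_status   # prints: online
```

With a live worker: `pm2 jlist | worker_status`.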
### Detailed Worker Info
```bash
# Show detailed information
pm2 show claude-mem-worker
# JSON format
pm2 jlist | grep -A 20 '"name":"claude-mem-worker"'
```
## Worker Health Endpoint
The worker exposes a health endpoint at `/health`:
```bash
# Check health (default port)
curl -s http://127.0.0.1:37777/health
# With custom port
PORT=$(grep CLAUDE_MEM_WORKER_PORT ~/.claude-mem/settings.json | grep -o '[0-9]\+' || echo "37777")
curl -s http://127.0.0.1:$PORT/health
```
**Expected response:** `{"status":"ok"}`
**Error responses:**
- Connection refused - Worker not running
- Timeout - Worker hung (restart needed)
- Empty response - Worker crashed mid-request
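Those failure modes map onto curl exit codes (0 = success, 7 = connection refused, 28 = timeout, per curl(1)). A sketch that turns the probe into a one-line diagnosis; `classify_health` is a hypothetical helper:

```shell
# classify_health: map a curl exit code to a human-readable diagnosis.
classify_health() {
  case "$1" in
    0)  echo "ok" ;;
    7)  echo "worker not running (connection refused)" ;;
    28) echo "worker hung (timeout, restart needed)" ;;
    *)  echo "unknown failure (curl exit $1)" ;;
  esac
}

# Probe the default port and classify the result:
if command -v curl > /dev/null; then
  curl -s --max-time 5 -o /dev/null http://127.0.0.1:37777/health
  classify_health $?
fi
```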
## Worker Logs
### View Recent Logs
```bash
# Last 50 lines
pm2 logs claude-mem-worker --lines 50 --nostream
# Last 200 lines
pm2 logs claude-mem-worker --lines 200 --nostream
# Follow logs in real-time
pm2 logs claude-mem-worker
```
### Search Logs for Errors
```bash
# Find errors
pm2 logs claude-mem-worker --lines 500 --nostream | grep -i "error"
# Find exceptions
pm2 logs claude-mem-worker --lines 500 --nostream | grep -i "exception"
# Find failed requests
pm2 logs claude-mem-worker --lines 500 --nostream | grep -i "failed"
# All error patterns
pm2 logs claude-mem-worker --lines 500 --nostream | grep -iE "error|exception|failed|crash"
```
### Common Log Patterns
**Good startup:**
```
Worker service started on port 37777
Database initialized
Express server listening
```
**Database errors:**
```
Error: SQLITE_ERROR
Error initializing database
Database locked
```
**Port conflicts:**
```
Error: listen EADDRINUSE
Port 37777 already in use
```
**Crashes:**
```
PM2 | App [claude-mem-worker] exited with code [1]
PM2 | App [claude-mem-worker] will restart in 100ms
```
## Starting the Worker
### Basic Start
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
pm2 start ecosystem.config.cjs
```
### Start with Local PM2
If the `pm2` command is not in your PATH:
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
node_modules/.bin/pm2 start ecosystem.config.cjs
```
### Force Restart
```bash
# Restart if already running
pm2 restart claude-mem-worker
# Delete and start fresh
pm2 delete claude-mem-worker
pm2 start ecosystem.config.cjs
```
## Stopping the Worker
```bash
# Graceful stop
pm2 stop claude-mem-worker
# Delete completely (also removes from PM2 list)
pm2 delete claude-mem-worker
```
## Worker Not Starting
### Diagnostic Steps
1. **Try manual start to see error:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
node plugin/scripts/worker-service.cjs
```
This runs the worker directly without PM2, showing full error output.
2. **Check PM2 itself:**
```bash
which pm2
pm2 --version
```
If PM2 is not found, the dependencies have not been installed.
3. **Check dependencies:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
ls node_modules/@anthropic-ai/claude-agent-sdk
ls node_modules/better-sqlite3
ls node_modules/express
ls node_modules/pm2
```
4. **Check port availability:**
```bash
lsof -i :37777
```
If the port is in use, either kill that process or change the claude-mem port.
### Common Fixes
**Dependencies missing:**
```bash
cd ~/.claude/plugins/marketplaces/thedotmack/
npm install
pm2 start ecosystem.config.cjs
```
**Port conflict:**
```bash
echo '{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}' > ~/.claude-mem/settings.json
pm2 restart claude-mem-worker
```
**Corrupted PM2:**
```bash
pm2 kill # Stop PM2 daemon
cd ~/.claude/plugins/marketplaces/thedotmack/
pm2 start ecosystem.config.cjs
```
## Worker Crashing Repeatedly
If the worker keeps restarting (`pm2 status` shows a high restart count):
### Find the Cause
1. **Check error logs:**
```bash
pm2 logs claude-mem-worker --err --lines 100 --nostream
```
2. **Look for crash pattern:**
```bash
pm2 logs claude-mem-worker --lines 200 --nostream | grep -A 5 "exited with code"
```
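To quantify the loop, count exit events in a captured log. A sketch, where `count_exits` is a hypothetical helper fed with saved `pm2 logs --nostream` output:

```shell
# count_exits: count "exited with code" events in log text on stdin.
# More than a handful in a short window indicates a crash loop.
count_exits() {
  grep -c "exited with code" || true
}

# Example against a saved log fragment:
printf '%s\n' \
  'PM2 | App [claude-mem-worker] exited with code [1]' \
  'PM2 | App [claude-mem-worker] will restart in 100ms' \
  'PM2 | App [claude-mem-worker] exited with code [1]' \
  | count_exits   # prints: 2
```

Live: `pm2 logs claude-mem-worker --lines 500 --nostream | count_exits`.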
### Common Crash Causes
**Database corruption:**
```bash
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
If the check fails, back up and recreate the database.
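A sketch of that recovery, assuming the worker recreates an empty schema on next start (verify that before deleting anything; `backup_db` is a hypothetical helper):

```shell
# backup_db: keep a timestamped copy of the database before removal.
backup_db() {
  cp "$1" "$1.corrupt.$(date +%s)"
}

DB="$HOME/.claude-mem/claude-mem.db"
if [ -f "$DB" ] && ! sqlite3 "$DB" "PRAGMA integrity_check;" 2>/dev/null | grep -qx ok; then
  backup_db "$DB"   # preserve the corrupt copy for inspection
  rm "$DB"          # ASSUMPTION: worker recreates the schema on next start
  pm2 restart claude-mem-worker
fi
```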
**Out of memory:**
Check whether the database has grown too large or the worker is leaking memory. Restart:
```bash
pm2 restart claude-mem-worker
```
**Port conflict race condition:**
Another process may be grabbing the port intermittently. Change the port:
```bash
echo '{"env":{"CLAUDE_MEM_WORKER_PORT":"37778"}}' > ~/.claude-mem/settings.json
pm2 restart claude-mem-worker
```
## PM2 Management Commands
```bash
# List processes
pm2 list
pm2 jlist # JSON format
# Show detailed info
pm2 show claude-mem-worker
# Monitor resources
pm2 monit
# Clear logs
pm2 flush claude-mem-worker
# Restart PM2 daemon
pm2 kill
pm2 resurrect # Restore saved processes
# Save current process list
pm2 save
# Update PM2
npm install -g pm2
```
## Testing Worker Endpoints
Once worker is running, test all endpoints:
```bash
# Health check
curl -s http://127.0.0.1:37777/health
# Viewer HTML
curl -s http://127.0.0.1:37777/ | head -20
# Stats API
curl -s http://127.0.0.1:37777/api/stats
# Search API
curl -s "http://127.0.0.1:37777/api/search/observations?query=test&format=index"
# Prompts API
curl -s "http://127.0.0.1:37777/api/prompts?limit=5"
```
All should return appropriate responses (HTML for viewer, JSON for APIs).
+12 -26
View File
@@ -26,13 +26,15 @@ const WORKER_SERVICE = {
source: 'src/services/worker-service.ts'
};
const SEARCH_SERVER = {
name: 'search-server',
source: 'src/servers/search-server.ts'
};
// DEPRECATED: MCP search server replaced by skill-based search
// Keeping source file for reference: src/servers/search-server.ts
// const SEARCH_SERVER = {
// name: 'search-server',
// source: 'src/servers/search-server.ts'
// };
async function buildHooks() {
console.log('🔨 Building claude-mem hooks, worker service, and search server...\n');
console.log('🔨 Building claude-mem hooks and worker service...\n');
try {
// Read version from package.json
@@ -124,31 +126,15 @@ async function buildHooks() {
console.log(`${hook.name} built (${sizeInKB} KB)`);
}
// Build search server
console.log(`\n🔧 Building search server...`);
await build({
entryPoints: [SEARCH_SERVER.source],
bundle: true,
format: 'esm',
platform: 'node',
outfile: `${hooksDir}/${SEARCH_SERVER.name}.mjs`,
minify: true,
packages: 'external',
banner: {
js: '#!/usr/bin/env node'
}
});
// DEPRECATED: MCP search server no longer built (replaced by skill-based search)
// Search functionality now provided via HTTP API + search skill
// Source file kept for reference: src/servers/search-server.ts
// Make search server executable
fs.chmodSync(`${hooksDir}/${SEARCH_SERVER.name}.mjs`, 0o755);
const searchStats = fs.statSync(`${hooksDir}/${SEARCH_SERVER.name}.mjs`);
console.log(`✓ search-server built (${(searchStats.size / 1024).toFixed(2)} KB)`);
console.log('\n✅ All hooks, worker service, and search server built successfully!');
console.log('\n✅ All hooks and worker service built successfully!');
console.log(` Output: ${hooksDir}/`);
console.log(` - Hooks: *-hook.js`);
console.log(` - Worker: worker-service.cjs`);
console.log(` - Search: search-server.mjs`);
console.log(` - Skills: plugin/skills/`);
console.log('\n💡 Note: Dependencies will be auto-installed on first hook execution');
} catch (error) {
+8 -1
View File
@@ -441,8 +441,15 @@ async function contextHook(input?: SessionStartInput, useColors: boolean = false
}
// Add full summary details for most recent session
// Only show if summary was generated AFTER the last observation
const mostRecentSummary = recentSummaries[0];
if (mostRecentSummary && (mostRecentSummary.investigated || mostRecentSummary.learned || mostRecentSummary.completed || mostRecentSummary.next_steps)) {
const mostRecentObservation = observations[0]; // observations are DESC by created_at_epoch
const shouldShowSummary = mostRecentSummary &&
(mostRecentSummary.investigated || mostRecentSummary.learned || mostRecentSummary.completed || mostRecentSummary.next_steps) &&
(!mostRecentObservation || mostRecentSummary.created_at_epoch > mostRecentObservation.created_at_epoch);
if (shouldShowSummary) {
output.push(...renderSummaryField('Investigated', mostRecentSummary.investigated, colors.blue, useColors));
output.push(...renderSummaryField('Learned', mostRecentSummary.learned, colors.yellow, useColors));
output.push(...renderSummaryField('Completed', mostRecentSummary.completed, colors.green, useColors));
+563
View File
@@ -100,6 +100,18 @@ export class WorkerService {
// Settings
this.app.get('/api/settings', this.handleGetSettings.bind(this));
this.app.post('/api/settings', this.handleUpdateSettings.bind(this));
// Search API endpoints (for skill-based search)
this.app.get('/api/search/observations', this.handleSearchObservations.bind(this));
this.app.get('/api/search/sessions', this.handleSearchSessions.bind(this));
this.app.get('/api/search/prompts', this.handleSearchPrompts.bind(this));
this.app.get('/api/search/by-concept', this.handleSearchByConcept.bind(this));
this.app.get('/api/search/by-file', this.handleSearchByFile.bind(this));
this.app.get('/api/search/by-type', this.handleSearchByType.bind(this));
this.app.get('/api/context/recent', this.handleGetRecentContext.bind(this));
this.app.get('/api/context/timeline', this.handleGetContextTimeline.bind(this));
this.app.get('/api/timeline/by-query', this.handleGetTimelineByQuery.bind(this));
this.app.get('/api/search/help', this.handleSearchHelp.bind(this));
}
/**
@@ -642,6 +654,557 @@ export class WorkerService {
res.status(500).json({ error: (error as Error).message });
}
}
// ============================================================================
// Search API Handlers (for skill-based search)
// ============================================================================
/**
* Search observations
* GET /api/search/observations?query=...&format=index&limit=20&project=...
*/
private handleSearchObservations(req: Request, res: Response): void {
try {
const query = req.query.query as string;
const format = (req.query.format as string) || 'full';
const limit = parseInt(req.query.limit as string, 10) || 20;
const project = req.query.project as string | undefined;
if (!query) {
res.status(400).json({ error: 'Missing required parameter: query' });
return;
}
const sessionSearch = this.dbManager.getSessionSearch();
const results = sessionSearch.searchObservations(query, { limit, project });
res.json({
query,
count: results.length,
format,
results: format === 'index' ? results.map(r => ({
id: r.id,
type: r.type,
title: r.title,
subtitle: r.subtitle,
created_at_epoch: r.created_at_epoch,
project: r.project,
score: r.score
})) : results
});
} catch (error) {
logger.failure('WORKER', 'Search observations failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Search session summaries
* GET /api/search/sessions?query=...&format=index&limit=20
*/
private handleSearchSessions(req: Request, res: Response): void {
try {
const query = req.query.query as string;
const format = (req.query.format as string) || 'full';
const limit = parseInt(req.query.limit as string, 10) || 20;
if (!query) {
res.status(400).json({ error: 'Missing required parameter: query' });
return;
}
const sessionSearch = this.dbManager.getSessionSearch();
const results = sessionSearch.searchSessions(query, { limit });
res.json({
query,
count: results.length,
format,
results: format === 'index' ? results.map(r => ({
id: r.id,
request: r.request,
completed: r.completed,
created_at_epoch: r.created_at_epoch,
project: r.project,
score: r.score
})) : results
});
} catch (error) {
logger.failure('WORKER', 'Search sessions failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Search user prompts
* GET /api/search/prompts?query=...&format=index&limit=20
*/
private handleSearchPrompts(req: Request, res: Response): void {
try {
const query = req.query.query as string;
const format = (req.query.format as string) || 'full';
const limit = parseInt(req.query.limit as string, 10) || 20;
const project = req.query.project as string | undefined;
if (!query) {
res.status(400).json({ error: 'Missing required parameter: query' });
return;
}
const sessionSearch = this.dbManager.getSessionSearch();
const results = sessionSearch.searchUserPrompts(query, { limit, project });
res.json({
query,
count: results.length,
format,
results: format === 'index' ? results.map(r => ({
id: r.id,
claude_session_id: r.claude_session_id,
prompt_number: r.prompt_number,
prompt_text: r.prompt_text,
created_at_epoch: r.created_at_epoch,
score: r.score
})) : results
});
} catch (error) {
logger.failure('WORKER', 'Search prompts failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Search observations by concept
* GET /api/search/by-concept?concept=discovery&format=index&limit=5
*/
private handleSearchByConcept(req: Request, res: Response): void {
try {
const concept = req.query.concept as string;
const format = (req.query.format as string) || 'full';
const limit = parseInt(req.query.limit as string, 10) || 10;
const project = req.query.project as string | undefined;
if (!concept) {
res.status(400).json({ error: 'Missing required parameter: concept' });
return;
}
const sessionSearch = this.dbManager.getSessionSearch();
const results = sessionSearch.findByConcept(concept, { limit, project });
res.json({
concept,
count: results.length,
format,
results: format === 'index' ? results.map(r => ({
id: r.id,
type: r.type,
title: r.title,
subtitle: r.subtitle,
created_at_epoch: r.created_at_epoch,
project: r.project,
concepts: r.concepts
})) : results
});
} catch (error) {
logger.failure('WORKER', 'Search by concept failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Search by file path
* GET /api/search/by-file?filePath=...&format=index&limit=10
*/
private handleSearchByFile(req: Request, res: Response): void {
try {
const filePath = req.query.filePath as string;
const format = (req.query.format as string) || 'full';
const limit = parseInt(req.query.limit as string, 10) || 10;
const project = req.query.project as string | undefined;
if (!filePath) {
res.status(400).json({ error: 'Missing required parameter: filePath' });
return;
}
const sessionSearch = this.dbManager.getSessionSearch();
const results = sessionSearch.findByFile(filePath, { limit, project });
res.json({
filePath,
count: results.observations.length + results.sessions.length,
format,
results: {
observations: format === 'index' ? results.observations.map(r => ({
id: r.id,
type: r.type,
title: r.title,
subtitle: r.subtitle,
created_at_epoch: r.created_at_epoch,
project: r.project
})) : results.observations,
sessions: format === 'index' ? results.sessions.map(r => ({
id: r.id,
request: r.request,
completed: r.completed,
created_at_epoch: r.created_at_epoch,
project: r.project
})) : results.sessions
}
});
} catch (error) {
logger.failure('WORKER', 'Search by file failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Search observations by type
* GET /api/search/by-type?type=bugfix&format=index&limit=10
*/
private handleSearchByType(req: Request, res: Response): void {
try {
const type = req.query.type as string;
const format = (req.query.format as string) || 'full';
const limit = parseInt(req.query.limit as string, 10) || 10;
const project = req.query.project as string | undefined;
if (!type) {
res.status(400).json({ error: 'Missing required parameter: type' });
return;
}
const sessionSearch = this.dbManager.getSessionSearch();
const results = sessionSearch.findByType(type as any, { limit, project });
res.json({
type,
count: results.length,
format,
results: format === 'index' ? results.map(r => ({
id: r.id,
type: r.type,
title: r.title,
subtitle: r.subtitle,
created_at_epoch: r.created_at_epoch,
project: r.project
})) : results
});
} catch (error) {
logger.failure('WORKER', 'Search by type failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Get recent context (summaries and observations for a project)
* GET /api/context/recent?project=...&limit=3
*/
private handleGetRecentContext(req: Request, res: Response): void {
try {
const project = (req.query.project as string) || path.basename(process.cwd());
const limit = parseInt(req.query.limit as string, 10) || 3;
const sessionStore = this.dbManager.getSessionStore();
const sessions = sessionStore.getRecentSessionsWithStatus(project, limit);
const contextData = sessions.map(session => {
const summary = session.has_summary && session.sdk_session_id
? sessionStore.getSummaryForSession(session.sdk_session_id)
: null;
const observations = session.sdk_session_id
? sessionStore.getObservationsForSession(session.sdk_session_id)
: [];
return {
session_id: session.id,
sdk_session_id: session.sdk_session_id,
project: session.project,
status: session.status,
has_summary: session.has_summary,
summary,
observations: observations.map(o => ({
id: o.id,
type: o.type,
title: o.title,
subtitle: o.subtitle,
created_at_epoch: o.created_at_epoch
})),
created_at_epoch: session.started_at_epoch
};
});
res.json({
project,
limit,
count: contextData.length,
sessions: contextData
});
} catch (error) {
logger.failure('WORKER', 'Get recent context failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Get context timeline around an anchor point
* GET /api/context/timeline?anchor=123&depth_before=10&depth_after=10&project=...
*/
private handleGetContextTimeline(req: Request, res: Response): void {
try {
const anchor = req.query.anchor as string;
const depthBefore = parseInt(req.query.depth_before as string, 10) || 10;
const depthAfter = parseInt(req.query.depth_after as string, 10) || 10;
const project = req.query.project as string | undefined;
if (!anchor) {
res.status(400).json({ error: 'Missing required parameter: anchor' });
return;
}
const sessionStore = this.dbManager.getSessionStore();
let timeline;
// Check if anchor is a number (observation ID)
if (/^\d+$/.test(anchor)) {
const obsId = parseInt(anchor, 10);
const obs = sessionStore.getObservationById(obsId);
if (!obs) {
res.status(404).json({ error: `Observation #${obsId} not found` });
return;
}
timeline = sessionStore.getTimelineAroundObservation(obsId, obs.created_at_epoch, depthBefore, depthAfter, project);
} else if (anchor.startsWith('S') || anchor.startsWith('#S')) {
// Session ID
const sessionId = anchor.replace(/^#?S/, '');
const sessionNum = parseInt(sessionId, 10);
const sessions = sessionStore.getSessionSummariesByIds([sessionNum]);
if (sessions.length === 0) {
res.status(404).json({ error: `Session #${sessionNum} not found` });
return;
}
timeline = sessionStore.getTimelineAroundTimestamp(sessions[0].created_at_epoch, depthBefore, depthAfter, project);
} else {
// ISO timestamp
const date = new Date(anchor);
if (isNaN(date.getTime())) {
res.status(400).json({ error: `Invalid timestamp: ${anchor}` });
return;
}
timeline = sessionStore.getTimelineAroundTimestamp(date.getTime(), depthBefore, depthAfter, project);
}
res.json({
anchor,
depth_before: depthBefore,
depth_after: depthAfter,
project,
timeline
});
} catch (error) {
logger.failure('WORKER', 'Get context timeline failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Get timeline by query (search first, then get timeline around best match)
* GET /api/timeline/by-query?query=...&mode=auto&depth_before=10&depth_after=10
*/
private handleGetTimelineByQuery(req: Request, res: Response): void {
try {
const query = req.query.query as string;
const mode = (req.query.mode as string) || 'auto';
const depthBefore = parseInt(req.query.depth_before as string, 10) || 10;
const depthAfter = parseInt(req.query.depth_after as string, 10) || 10;
const project = req.query.project as string | undefined;
if (!query) {
res.status(400).json({ error: 'Missing required parameter: query' });
return;
}
const sessionSearch = this.dbManager.getSessionSearch();
const sessionStore = this.dbManager.getSessionStore();
// Search based on mode
let bestMatch: any = null;
let searchResults: any = null;
if (mode === 'observations' || mode === 'auto') {
const obsResults = sessionSearch.searchObservations(query, { limit: 1, project });
if (obsResults.length > 0) {
bestMatch = obsResults[0];
searchResults = { type: 'observation', results: obsResults };
}
}
if (!bestMatch && (mode === 'sessions' || mode === 'auto')) {
const sessionResults = sessionSearch.searchSessions(query, { limit: 1 });
if (sessionResults.length > 0) {
bestMatch = sessionResults[0];
searchResults = { type: 'session', results: sessionResults };
}
}
if (!bestMatch) {
res.json({
query,
mode,
match: null,
timeline: null,
message: 'No matches found for query'
});
return;
}
// Get timeline around best match
const timeline = searchResults.type === 'observation'
? sessionStore.getTimelineAroundObservation(bestMatch.id, bestMatch.created_at_epoch, depthBefore, depthAfter, project)
: sessionStore.getTimelineAroundTimestamp(bestMatch.created_at_epoch, depthBefore, depthAfter, project);
res.json({
query,
mode,
match: {
type: searchResults.type,
id: bestMatch.id,
title: bestMatch.title || bestMatch.request,
score: bestMatch.score,
created_at_epoch: bestMatch.created_at_epoch
},
depth_before: depthBefore,
depth_after: depthAfter,
timeline
});
} catch (error) {
logger.failure('WORKER', 'Get timeline by query failed', {}, error as Error);
res.status(500).json({ error: (error as Error).message });
}
}
/**
* Get search help documentation
* GET /api/search/help
*/
private handleSearchHelp(req: Request, res: Response): void {
res.json({
title: 'Claude-Mem Search API',
description: 'HTTP API for searching persistent memory',
endpoints: [
{
path: '/api/search/observations',
method: 'GET',
description: 'Search observations using full-text search',
parameters: {
query: 'Search query (required)',
format: 'Response format: "index" or "full" (default: "full")',
limit: 'Number of results (default: 20)',
project: 'Filter by project name (optional)'
}
},
{
path: '/api/search/sessions',
method: 'GET',
description: 'Search session summaries using full-text search',
parameters: {
query: 'Search query (required)',
format: 'Response format: "index" or "full" (default: "full")',
limit: 'Number of results (default: 20)'
}
},
{
path: '/api/search/prompts',
method: 'GET',
description: 'Search user prompts using full-text search',
parameters: {
query: 'Search query (required)',
format: 'Response format: "index" or "full" (default: "full")',
limit: 'Number of results (default: 20)',
project: 'Filter by project name (optional)'
}
},
{
path: '/api/search/by-concept',
method: 'GET',
description: 'Find observations by concept tag',
parameters: {
concept: 'Concept tag (required): discovery, decision, bugfix, feature, refactor',
format: 'Response format: "index" or "full" (default: "full")',
limit: 'Number of results (default: 10)',
project: 'Filter by project name (optional)'
}
},
{
path: '/api/search/by-file',
method: 'GET',
description: 'Find observations and sessions by file path',
parameters: {
filePath: 'File path or partial path (required)',
format: 'Response format: "index" or "full" (default: "full")',
limit: 'Number of results per type (default: 10)',
project: 'Filter by project name (optional)'
}
},
{
path: '/api/search/by-type',
method: 'GET',
description: 'Find observations by type',
parameters: {
type: 'Observation type (required): discovery, decision, bugfix, feature, refactor',
format: 'Response format: "index" or "full" (default: "full")',
limit: 'Number of results (default: 10)',
project: 'Filter by project name (optional)'
}
},
{
path: '/api/context/recent',
method: 'GET',
description: 'Get recent session context including summaries and observations',
parameters: {
project: 'Project name (default: current directory)',
limit: 'Number of recent sessions (default: 3)'
}
},
{
path: '/api/context/timeline',
method: 'GET',
description: 'Get unified timeline around a specific point in time',
parameters: {
anchor: 'Anchor point: observation ID, session ID (e.g., "S123"), or ISO timestamp (required)',
depth_before: 'Number of records before anchor (default: 10)',
depth_after: 'Number of records after anchor (default: 10)',
project: 'Filter by project name (optional)'
}
},
{
path: '/api/timeline/by-query',
method: 'GET',
description: 'Search for best match, then get timeline around it',
parameters: {
query: 'Search query (required)',
mode: 'Search mode: "auto", "observations", or "sessions" (default: "auto")',
depth_before: 'Number of records before match (default: 10)',
depth_after: 'Number of records after match (default: 10)',
project: 'Filter by project name (optional)'
}
},
{
path: '/api/search/help',
method: 'GET',
description: 'Get this help documentation'
}
],
examples: [
'curl "http://localhost:37777/api/search/observations?query=authentication&format=index&limit=5"',
'curl "http://localhost:37777/api/search/by-type?type=bugfix&limit=10"',
'curl "http://localhost:37777/api/context/recent?project=claude-mem&limit=3"',
'curl "http://localhost:37777/api/context/timeline?anchor=123&depth_before=5&depth_after=5"'
]
});
}
}
// ============================================================================