Merge main into feature/gemini-provider

Resolved conflicts to include both:
- Main's earliestPendingTimestamp for accurate observation timestamps
- PR's conversationHistory and currentProvider for Gemini provider switching

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Alex Newman
2025-12-25 18:45:14 -05:00
48 changed files with 3041 additions and 1253 deletions
+1 -1
@@ -10,7 +10,7 @@
"plugins": [
{
"name": "claude-mem",
"version": "8.0.6",
"version": "8.1.0",
"source": "./plugin",
"description": "Persistent memory system for Claude Code - context compression across sessions"
}
+111
@@ -4,6 +4,117 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
## [8.1.0] - 2025-12-25
## Summary
This minor release brings significant architectural improvements focused on **explicit user control**, **simplified session management**, and **enhanced worker reliability**. The automatic recovery system has been replaced with a manual recovery approach, giving users complete control over when observations are reprocessed.
## Breaking Changes
**Manual Recovery Replaces Automatic Recovery**
- Worker no longer automatically reprocesses stuck observations on startup
- Users must explicitly trigger recovery via CLI tool or HTTP API
- Prevents unexpected duplicate observations and processing
- See the new Manual Recovery documentation for migration guide
**Removed Cleanup Hook**
- `cleanup-hook.ts` and `plugin/scripts/cleanup-hook.js` have been removed
- Hook behavior was moved to session completion handler
**Hook Timeout Increased**
- Default hook timeout changed from 5000ms to 120000ms
- Accommodates longer-running operations during startup
## New Features
**Queue Management API**
- `GET /api/pending-queue` - View processing queue status, stuck messages, and session work
- `POST /api/pending-queue/process` - Manually trigger recovery with session limits
- Detailed queue statistics including stuck detection (>5 minute threshold)
**CLI Recovery Tool**
- `bun scripts/check-pending-queue.ts` - Interactive queue inspection and recovery
- `--process` flag for non-interactive mode
- `--limit N` to control sessions processed per batch
- npm scripts: `npm run queue:check` and `npm run queue:process`
**Data Routes API**
- New DataRoutes module for queue management endpoints
- Session-aware pending work tracking
## Bug Fixes
**Observation Timestamps Fixed**
- Corrected timestamp handling throughout the observation lifecycle
- Fixed `created_at_epoch` preservation in database operations
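The class of bug fixed here can be sketched as follows. This is an illustration only, not the actual database code; the field name `created_at_epoch` comes from the schema documented elsewhere in this release, while the function and interface names are invented for the example:

```typescript
// Illustrative sketch: preserve an existing epoch on write instead of
// regenerating it. Names other than created_at_epoch are assumptions.
interface ObservationRow {
  created_at_epoch?: number;
}

function resolveCreatedAtEpoch(row: ObservationRow, now: number = Date.now()): number {
  // Keep the original timestamp when the row already carries one;
  // only fall back to "now" for rows that never had a timestamp.
  return row.created_at_epoch ?? now;
}
```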
**Enhanced Worker Reliability**
- Added error handlers to Chroma sync operations (prevents crashes on timeout)
- Version mismatch now logs warning instead of force-restarting worker
- Improved polling mechanism with increased retries and reduced interval
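The error-handler pattern behind the Chroma fix looks roughly like this. `syncToChroma` is a stand-in for the real sync call, not the actual API; the point is that a timeout rejection is caught and logged rather than left unhandled, which would crash the worker:

```typescript
// Hedged sketch of the pattern; syncToChroma simulates a timed-out sync call.
async function syncToChroma(): Promise<void> {
  throw new Error("sync timed out"); // simulate a Chroma timeout
}

async function safeChromaSync(): Promise<boolean> {
  try {
    await syncToChroma();
    return true;
  } catch (err) {
    // Log and continue instead of letting the rejection kill the process.
    console.warn("Chroma sync failed, worker continues:", (err as Error).message);
    return false;
  }
}
```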
## Refactoring
**Simplified Session Management**
- Removed 279 lines of complexity from SessionStore
- `createSDKSession` simplified to pure `INSERT OR IGNORE`
- Removed auto-create logic from `storeObservation` and `storeSummary`
- Deleted 11 unused session management methods
- `prompt_number` now derived from `user_prompts` count
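The simplified pattern might look like the following SQL sketch. Table and column names here are assumptions based on the schema described elsewhere in these docs, not the exact statements in `SessionStore`:

```sql
-- Hypothetical sketch of the simplified session creation:
INSERT OR IGNORE INTO sdk_sessions (claude_session_id, project)
VALUES (:sessionId, :project);

-- prompt_number derived on demand from user_prompts instead of stored:
SELECT COUNT(*) AS prompt_number
FROM user_prompts
WHERE claude_session_id = :sessionId;
```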
**Simplified Worker Utils**
- Removed 117 lines of legacy code
- Removed PM2 cleanup logic
- Streamlined `ensureWorkerRunning` function
**SDK Agent Improvements**
- Removed complex session creation retry logic
- Cleaner prompt number retrieval from SessionRoutes
## Documentation
**New Manual Recovery Guide** (`docs/public/usage/manual-recovery.mdx`)
- Complete 450-line guide for recovery workflows
- Interactive CLI usage examples
- HTTP API integration examples
- Troubleshooting stuck messages
- Cron job and monitoring script examples
**Enhanced Troubleshooting** (`docs/public/troubleshooting.mdx`)
- Added 195 lines of manual recovery troubleshooting
- Queue state explanations
- Direct database inspection queries
**Updated Development Guide**
- Changed testing philosophy to emphasize real-world testing
- Added manual testing workflow documentation
- Queue health verification procedures
**Worker Service Docs Updated**
- Documented 22 HTTP endpoints (up from 20)
- Queue management endpoint documentation
## Dependencies
- Upgraded `@anthropic-ai/claude-agent-sdk` from `^0.1.67` to `^0.1.76`
## New Scripts
- `scripts/check-pending-queue.ts` - CLI tool for queue management
- `scripts/fix-all-timestamps.ts` - Timestamp correction utility
- `scripts/fix-corrupted-timestamps.ts` - Corrupted timestamp repair
- `scripts/investigate-timestamps.ts` - Timestamp debugging tool
- `scripts/validate-timestamp-logic.ts` - Timestamp validation
- `scripts/verify-timestamp-fix.ts` - Post-fix verification
## Migration Guide
1. **After upgrading**: Run `bun scripts/check-pending-queue.ts` to check for stuck messages
2. **If messages found**: Run `bun scripts/check-pending-queue.ts --process` to recover
3. **Optional**: Add recovery to your workflow (cron job, pre-shutdown script)
4. **Note**: Automatic recovery no longer happens - you must trigger it manually
## [8.0.6] - 2025-12-24
## Bug Fixes
+103 -6
@@ -19,7 +19,7 @@ The worker service is a long-running HTTP API built with Express.js and managed
## REST API Endpoints
The worker service exposes 20 HTTP endpoints organized into five categories:
The worker service exposes 22 HTTP endpoints organized into six categories:
### Viewer & Health Endpoints
@@ -385,9 +385,106 @@ POST /api/settings
}
```
### Queue Management Endpoints
#### 16. Get Pending Queue Status
```
GET /api/pending-queue
```
**Purpose**: View current processing queue status and identify stuck messages
**Response**:
```json
{
"queue": {
"messages": [
{
"id": 123,
"session_db_id": 45,
"claude_session_id": "abc123",
"message_type": "observation",
"status": "pending",
"retry_count": 0,
"created_at_epoch": 1730886600000,
"started_processing_at_epoch": null,
"completed_at_epoch": null
}
],
"totalPending": 5,
"totalProcessing": 2,
"totalFailed": 0,
"stuckCount": 1
},
"recentlyProcessed": [
{
"id": 122,
"session_db_id": 44,
"status": "processed",
"completed_at_epoch": 1730886500000
}
],
"sessionsWithPendingWork": [44, 45, 46]
}
```
**Status Definitions**:
- `pending`: Message queued, not yet processed
- `processing`: Message currently being processed by SDK agent
- `processed`: Message completed successfully
- `failed`: Message failed after max retry attempts (3 by default)
**Stuck Detection**: Messages in `processing` status for >5 minutes are considered stuck and included in `stuckCount`
**Use Case**: Check queue health after worker crashes or restarts to identify unprocessed observations
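The stuck-detection rule above reduces to a simple predicate. The sketch below is illustrative (the interface mirrors the response fields documented above; the function name is invented):

```typescript
// A message still in "processing" more than five minutes after it started
// is counted toward stuckCount.
const STUCK_THRESHOLD_MS = 5 * 60 * 1000;

interface QueueMessage {
  status: string;
  started_processing_at_epoch: number | null;
}

function isStuck(msg: QueueMessage, nowEpoch: number): boolean {
  return (
    msg.status === "processing" &&
    msg.started_processing_at_epoch !== null &&
    nowEpoch - msg.started_processing_at_epoch > STUCK_THRESHOLD_MS
  );
}
```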
#### 17. Trigger Manual Recovery
```
POST /api/pending-queue/process
```
**Purpose**: Manually trigger processing of pending queues (replaces automatic recovery in v5.x+)
**Request Body**:
```json
{
"sessionLimit": 10
}
```
**Body Parameters**:
- `sessionLimit` (optional): Maximum number of sessions to process (default: 10, max: 100)
**Response**:
```json
{
"success": true,
"totalPendingSessions": 15,
"sessionsStarted": 10,
"sessionsSkipped": 2,
"startedSessionIds": [44, 45, 46, 47, 48, 49, 50, 51, 52, 53]
}
```
**Response Fields**:
- `totalPendingSessions`: Total sessions with pending messages in database
- `sessionsStarted`: Number of sessions started for processing in this request
- `sessionsSkipped`: Sessions already actively processing (not restarted)
- `startedSessionIds`: Database IDs of sessions started
**Behavior**:
- Processes up to `sessionLimit` sessions with pending work
- Skips sessions already actively processing (prevents duplicate agents)
- Starts non-blocking SDK agents for each session
- Returns immediately with status (processing continues in background)
**Use Case**: Manually recover stuck observations after worker crashes, or when automatic recovery was disabled
**Recovery Strategy Note**: As of v5.x, automatic recovery on worker startup is disabled by default. Users must manually trigger recovery using this endpoint or the CLI tool (`bun scripts/check-pending-queue.ts`) to maintain explicit control over reprocessing.
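For scripting against this endpoint, the request can be built as shown below. The URL and `sessionLimit` body field come from the documentation above; clamping to the documented maximum of 100 is our own illustration, and the function name is invented:

```typescript
// Hedged helper for scripting recovery; pass the result to fetch(url, init).
function buildRecoveryRequest(sessionLimit = 10): {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
} {
  // Clamp to the documented range (default: 10, max: 100).
  const limit = Math.min(Math.max(sessionLimit, 1), 100);
  return {
    url: "http://localhost:37777/api/pending-queue/process",
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ sessionLimit: limit }),
    },
  };
}
```

Usage: `const { url, init } = buildRecoveryRequest(5); const result = await fetch(url, init).then(r => r.json());`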
### Session Management Endpoints
#### 16. Initialize Session
#### 19. Initialize Session
```
POST /sessions/:sessionDbId/init
```
@@ -408,7 +505,7 @@ POST /sessions/:sessionDbId/init
}
```
#### 17. Add Observation
#### 20. Add Observation
```
POST /sessions/:sessionDbId/observations
```
@@ -431,7 +528,7 @@ POST /sessions/:sessionDbId/observations
}
```
#### 18. Generate Summary
#### 21. Generate Summary
```
POST /sessions/:sessionDbId/summarize
```
@@ -451,7 +548,7 @@ POST /sessions/:sessionDbId/summarize
}
```
#### 19. Session Status
#### 22. Session Status
```
GET /sessions/:sessionDbId/status
```
@@ -466,7 +563,7 @@ GET /sessions/:sessionDbId/status
}
```
#### 20. Delete Session
#### 23. Delete Session
```
DELETE /sessions/:sessionDbId
```
+131 -27
@@ -371,45 +371,149 @@ npm test
## Testing
### Running Tests
### Testing Philosophy
Claude-mem relies on **real-world usage and manual testing** rather than traditional unit tests. The project philosophy prioritizes:
1. **Manual verification** - Testing features in actual Claude Code sessions
2. **Integration testing** - Running the full system end-to-end
3. **Database inspection** - Verifying data correctness via SQLite queries
4. **CLI tools** - Interactive tools for checking system state
5. **Observability** - Comprehensive logging and worker health checks
This approach was chosen because:
- Hook behavior depends heavily on Claude Code's runtime environment
- SDK interactions require real API calls and responses
- SQLite and Bun runtime provide stability guarantees
- Manual testing catches integration issues that unit tests miss
### Manual Testing Workflow
When developing new features:
1. **Build and sync**:
```bash
npm run build
npm run sync-marketplace
claude-mem restart
```
2. **Test in real session**:
- Start Claude Code
- Trigger the feature you're testing
- Verify expected behavior
3. **Check database state**:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "SELECT * FROM your_table;"
```
4. **Monitor worker logs**:
```bash
npm run worker:logs
```
5. **Verify queue health** (for recovery features):
```bash
bun scripts/check-pending-queue.ts
```
```bash
# All tests
npm test

# Specific test file
node --test tests/your-test.test.ts

# With coverage (if configured)
npm test -- --coverage
```
### Testing Tools
**Health Checks**:
```bash
# Worker status
npm run worker:status

# Queue inspection
curl http://localhost:37777/api/pending-queue

# Database integrity
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
### Writing Tests
Create test files in `tests/`:
```typescript
import { describe, it } from 'node:test';
import assert from 'node:assert';

describe('YourFeature', () => {
  it('should do something', () => {
    // Test implementation
    assert.strictEqual(result, expected);
  });
});
```
**Hook Testing**:
```bash
# Test context hook manually
echo '{"session_id":"test-123","cwd":"'$(pwd)'","source":"startup"}' | node plugin/scripts/context-hook.js

# Test new hook
echo '{"session_id":"test-123","cwd":"'$(pwd)'","prompt":"test"}' | node plugin/scripts/new-hook.js
```
### Test Database
Use a separate test database:
```typescript
import { SessionStore } from '../src/services/sqlite/SessionStore';

const store = new SessionStore(':memory:'); // In-memory database
```
**Data Verification**:
```bash
# Check recent observations
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT id, tool_name, created_at
FROM observations
ORDER BY created_at_epoch DESC
LIMIT 10;
"

# Check summaries
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT id, request, completed
FROM session_summaries
ORDER BY created_at_epoch DESC
LIMIT 5;
"
```
### Recovery Feature Testing
For manual recovery features specifically:
1. **Simulate stuck messages**:
```bash
# Manually create stuck message (for testing only)
sqlite3 ~/.claude-mem/claude-mem.db "
UPDATE pending_messages
SET status = 'processing',
started_processing_at_epoch = strftime('%s', 'now', '-10 minutes') * 1000
WHERE id = 123;
"
```
2. **Test recovery**:
```bash
bun scripts/check-pending-queue.ts
```
3. **Verify results**:
```bash
curl http://localhost:37777/api/pending-queue | jq '.queue'
```
### Regression Testing
Before releasing:
1. **Test all hook triggers**:
- SessionStart: Start new Claude Code session
- UserPromptSubmit: Submit a prompt
- PostToolUse: Use a tool like Read
- Summary: Let session complete
- SessionEnd: Close Claude Code
2. **Test core features**:
- Context injection (recent sessions appear)
- Observation processing (summaries generated)
- MCP search tools (search returns results)
- Viewer UI (loads at http://localhost:37777)
- Manual recovery (stuck messages recovered)
3. **Test edge cases**:
- Worker crash recovery
- Database locks
- Port conflicts
- Large databases
4. **Cross-platform** (if applicable):
- macOS
- Linux
- Windows
## Code Style
### TypeScript Guidelines
+1
@@ -40,6 +40,7 @@
"usage/claude-desktop",
"usage/private-tags",
"usage/export-import",
"usage/manual-recovery",
"beta-features",
"endless-mode"
]
+195
@@ -285,6 +285,201 @@ The skill includes comprehensive diagnostics, automated repair sequences, and de
claude-mem restart
```
### Manual Recovery for Stuck Observations
**Symptoms**: Observations stuck in processing queue after worker crash or restart, no new summaries appearing despite worker running.
**Background**: As of v5.x, automatic queue recovery on worker startup is disabled. Users must manually trigger recovery to maintain explicit control over reprocessing and prevent unexpected duplicate observations.
**Solutions**:
#### Option 1: Use CLI Recovery Tool (Recommended)
The interactive CLI tool provides the safest and most user-friendly recovery experience:
```bash
# Check queue status and prompt for recovery
bun scripts/check-pending-queue.ts
# Auto-process without prompting
bun scripts/check-pending-queue.ts --process
# Process up to 5 sessions
bun scripts/check-pending-queue.ts --process --limit 5
```
**What it does**:
- ✅ Checks worker health before proceeding
- ✅ Shows detailed queue summary (pending, processing, failed, stuck)
- ✅ Groups messages by session with age and status breakdown
- ✅ Prompts user to confirm processing (unless `--process` flag used)
- ✅ Shows recently processed messages for feedback
**Interactive Example**:
```
Worker is healthy ✓
Queue Summary:
Pending: 12 messages
Processing: 2 messages (1 stuck)
Failed: 0 messages
Recently Processed: 5 messages in last 30 minutes
Sessions with pending work: 3
Session 44: 5 pending, 1 processing (age: 2m)
Session 45: 4 pending, 1 processing (age: 7m - STUCK)
Session 46: 2 pending
Would you like to process these pending queues? (y/n)
```
#### Option 2: Use HTTP API Directly
For automation or scripting scenarios:
1. **Check queue status**:
```bash
curl http://localhost:37777/api/pending-queue
```
Response shows:
- `queue.totalPending`: Messages waiting to process
- `queue.totalProcessing`: Messages currently processing
- `queue.stuckCount`: Processing messages >5 minutes old
- `sessionsWithPendingWork`: Session IDs needing recovery
2. **Trigger manual recovery**:
```bash
curl -X POST http://localhost:37777/api/pending-queue/process \
-H "Content-Type: application/json" \
-d '{"sessionLimit": 10}'
```
Response includes:
- `totalPendingSessions`: Total sessions with pending messages
- `sessionsStarted`: Number of sessions we started processing
- `sessionsSkipped`: Sessions already processing (not restarted)
- `startedSessionIds`: Database IDs of sessions started
#### Understanding Queue States
Messages progress through these states:
1. **pending** - Queued, waiting to process
2. **processing** - Currently being processed by SDK agent
3. **processed** - Completed successfully
4. **failed** - Failed after 3 retry attempts
**Stuck Detection**: Messages in `processing` state for >5 minutes are considered stuck and automatically reset to `pending` on worker startup (but not automatically reprocessed).
#### Recovery Strategy
**When to use manual recovery**:
- After worker crashes or unexpected restarts
- When observations appear saved but no summaries generated
- When queue status shows stuck messages (processing >5 minutes)
- After system crashes or forced shutdowns
**Best practices**:
1. Always check queue status before triggering recovery
2. Use the CLI tool for interactive sessions (provides feedback)
3. Use the HTTP API for automation/scripting
4. Start with a low session limit (5-10) to avoid overwhelming the worker
5. Monitor worker logs during recovery: `npm run worker:logs`
6. Check recently processed messages to confirm recovery worked
#### Troubleshooting Recovery Issues
If recovery fails or messages remain stuck:
1. **Verify worker is healthy**:
```bash
curl http://localhost:37777/health
# Should return: {"status":"ok","uptime":12345,"port":37777}
```
2. **Check database for corruption**:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
3. **View stuck messages directly**:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT id, session_db_id, status, retry_count,
(strftime('%s', 'now') * 1000 - started_processing_at_epoch) / 60000 as age_minutes
FROM pending_messages
WHERE status = 'processing'
ORDER BY started_processing_at_epoch;
"
```
4. **Force reset stuck messages** (nuclear option):
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
UPDATE pending_messages
SET status = 'pending', started_processing_at_epoch = NULL
WHERE status = 'processing';
"
```
Then trigger recovery:
```bash
bun scripts/check-pending-queue.ts --process
```
5. **Check worker logs for SDK errors**:
```bash
npm run worker:logs | grep -i error
```
#### Understanding the Queue Table
The `pending_messages` table tracks all messages with these key fields:
```sql
CREATE TABLE pending_messages (
id INTEGER PRIMARY KEY,
session_db_id INTEGER, -- Foreign key to sdk_sessions
claude_session_id TEXT, -- Claude session ID
message_type TEXT, -- 'observation' | 'summarize'
status TEXT, -- 'pending' | 'processing' | 'processed' | 'failed'
retry_count INTEGER, -- Current retry attempt (max: 3)
created_at_epoch INTEGER, -- When message was queued
started_processing_at_epoch INTEGER, -- When marked 'processing'
completed_at_epoch INTEGER -- When completed/failed
)
```
**Query examples**:
```bash
# Count messages by status
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT status, COUNT(*)
FROM pending_messages
GROUP BY status;
"
# Find sessions with pending work
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT session_db_id, COUNT(*) as pending_count
FROM pending_messages
WHERE status IN ('pending', 'processing')
GROUP BY session_db_id;
"
# View recent failures
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT id, session_db_id, message_type, retry_count,
datetime(completed_at_epoch/1000, 'unixepoch') as failed_at
FROM pending_messages
WHERE status = 'failed'
ORDER BY completed_at_epoch DESC
LIMIT 10;
"
```
## Hook Issues
### Hooks Not Firing
+450
@@ -0,0 +1,450 @@
---
title: "Manual Recovery"
description: "Recover stuck observations after worker crashes or restarts"
---
# Manual Recovery Guide
## Overview
Claude-mem's manual recovery system helps you recover observations that get stuck in the processing queue after worker crashes, system restarts, or unexpected shutdowns.
**Key Change in v5.x**: Automatic recovery on worker startup is now disabled. This gives you explicit control over when reprocessing happens, preventing unexpected duplicate observations.
## When Do You Need Manual Recovery?
You should trigger manual recovery when:
- **Worker crashed or restarted** - Observations were queued but worker stopped before processing
- **No new summaries appearing** - Observations are being saved but not processed into summaries
- **Stuck messages detected** - Messages showing as "processing" for >5 minutes
- **System crashes** - Unexpected shutdowns left messages in incomplete states
## Quick Start
### Using the CLI Tool (Recommended)
The interactive CLI tool is the safest and easiest way to recover stuck observations:
```bash
# Check status and prompt for recovery
bun scripts/check-pending-queue.ts
```
This will:
1. Check worker health
2. Show queue summary (pending, processing, failed, stuck counts)
3. Display sessions with pending work
4. Prompt you to confirm recovery
5. Show recently processed messages for feedback
### Auto-Process Without Prompts
For scripting or when you're confident recovery is needed:
```bash
# Auto-process without prompting
bun scripts/check-pending-queue.ts --process
# Limit to 5 sessions
bun scripts/check-pending-queue.ts --process --limit 5
```
## Understanding Queue States
Messages progress through these lifecycle states:
1. **pending** → Queued, waiting to process
2. **processing** → Currently being processed by SDK agent
3. **processed** → Completed successfully
4. **failed** → Failed after 3 retry attempts
### Stuck Detection
Messages in `processing` state for **>5 minutes** are considered stuck:
- They're automatically reset to `pending` on worker startup
- They're NOT automatically reprocessed (requires manual trigger)
- They appear in the `stuckCount` field when checking queue status
## Recovery Methods
### Method 1: Interactive CLI Tool
**Best for**: Regular users, interactive sessions, when you want visibility into what's happening
```bash
bun scripts/check-pending-queue.ts
```
**Example Output**:
```
Checking worker health...
Worker is healthy ✓
Queue Summary:
Pending: 12 messages
Processing: 2 messages (1 stuck)
Failed: 0 messages
Recently Processed: 5 messages in last 30 minutes
Sessions with pending work: 3
Session 44: 5 pending, 1 processing (age: 2m)
Session 45: 4 pending, 1 processing (age: 7m - STUCK)
Session 46: 2 pending
Would you like to process these pending queues? (y/n)
```
**Features**:
- ✅ Pre-flight health check (verifies worker is running)
- ✅ Detailed queue breakdown by session
- ✅ Age tracking for stuck detection
- ✅ Confirmation prompt (prevents accidental reprocessing)
- ✅ Non-interactive mode with `--process` flag
- ✅ Session limit control with `--limit N`
### Method 2: HTTP API
**Best for**: Automation, scripting, integration with monitoring systems
#### Check Queue Status
```bash
curl http://localhost:37777/api/pending-queue
```
**Response**:
```json
{
"queue": {
"messages": [
{
"id": 123,
"session_db_id": 45,
"claude_session_id": "abc123",
"message_type": "observation",
"status": "pending",
"retry_count": 0,
"created_at_epoch": 1730886600000
}
],
"totalPending": 12,
"totalProcessing": 2,
"totalFailed": 0,
"stuckCount": 1
},
"recentlyProcessed": [...],
"sessionsWithPendingWork": [44, 45, 46]
}
```
**Key Fields**:
- `totalPending` - Messages waiting to process
- `totalProcessing` - Messages currently processing
- `stuckCount` - Processing messages >5 minutes old
- `sessionsWithPendingWork` - Session IDs needing recovery
#### Trigger Recovery
```bash
curl -X POST http://localhost:37777/api/pending-queue/process \
-H "Content-Type: application/json" \
-d '{"sessionLimit": 10}'
```
**Response**:
```json
{
"success": true,
"totalPendingSessions": 15,
"sessionsStarted": 10,
"sessionsSkipped": 2,
"startedSessionIds": [44, 45, 46, 47, 48, 49, 50, 51, 52, 53]
}
```
**Response Fields**:
- `totalPendingSessions` - Total sessions with pending messages in database
- `sessionsStarted` - Sessions started for processing in this request
- `sessionsSkipped` - Sessions already processing (prevents duplicate agents)
- `startedSessionIds` - Database IDs of sessions we started
## Best Practices
### 1. Always Check Before Recovery
```bash
# Check queue status first
curl http://localhost:37777/api/pending-queue
# Or use CLI tool which checks automatically
bun scripts/check-pending-queue.ts
```
### 2. Start with Low Session Limits
```bash
# Process only 5 sessions at a time
bun scripts/check-pending-queue.ts --process --limit 5
```
This prevents overwhelming the worker with too many concurrent SDK agents.
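The reason a limit helps can be seen in a simplified batching model. This is not the worker's actual scheduler, just an illustration of bounded concurrency (all names are invented):

```typescript
// Process sessions in bounded batches so at most `limit` handlers
// (standing in for SDK agents) run concurrently.
async function processInBatches<T>(
  items: T[],
  limit: number,
  handler: (item: T) => Promise<void>,
): Promise<void> {
  for (let i = 0; i < items.length; i += limit) {
    // Wait for each batch to finish before starting the next one.
    await Promise.all(items.slice(i, i + limit).map(handler));
  }
}
```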
### 3. Monitor During Recovery
Watch worker logs while recovery runs:
```bash
npm run worker:logs
```
Look for:
- SDK agent starts: `Starting SDK agent for session...`
- Processing completions: `Processed observation...`
- Errors: `ERROR` or `Failed to process...`
### 4. Verify Recovery Success
Check recently processed messages:
```bash
curl http://localhost:37777/api/pending-queue | jq '.recentlyProcessed'
```
Or use the CLI tool which shows this automatically.
### 5. Handle Failed Messages
Messages that fail 3 times are marked `failed` and won't auto-retry:
```bash
# View failed messages
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT id, session_db_id, message_type, retry_count
FROM pending_messages
WHERE status = 'failed'
ORDER BY completed_at_epoch DESC;
"
```
You can manually reset them if needed:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
UPDATE pending_messages
SET status = 'pending', retry_count = 0
WHERE status = 'failed';
"
```
## Troubleshooting
### Recovery Not Working
**Symptom**: Triggered recovery but messages still pending
**Solutions**:
1. **Verify worker health**:
```bash
curl http://localhost:37777/health
```
2. **Check worker logs for errors**:
```bash
npm run worker:logs | grep -i error
```
3. **Restart worker**:
```bash
claude-mem restart
```
4. **Check database integrity**:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA integrity_check;"
```
### Messages Stuck Forever
**Symptom**: Messages show as "processing" for hours
**Solution**: Force reset stuck messages
```bash
# Reset all stuck messages to pending
sqlite3 ~/.claude-mem/claude-mem.db "
UPDATE pending_messages
SET status = 'pending', started_processing_at_epoch = NULL
WHERE status = 'processing';
"
# Then trigger recovery
bun scripts/check-pending-queue.ts --process
```
### Worker Crashes During Recovery
**Symptom**: Worker stops while processing recovered messages
**Solutions**:
1. **Check available memory**:
```bash
npm run worker:status
```
2. **Reduce session limit**:
```bash
bun scripts/check-pending-queue.ts --process --limit 3
```
3. **Check for SDK errors in logs**:
```bash
npm run worker:logs | grep -i "sdk"
```
4. **Increase worker memory** (if using custom runner):
```bash
export NODE_OPTIONS="--max-old-space-size=4096"
claude-mem restart
```
## Advanced Usage
### Direct Database Inspection
View all pending messages:
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
id,
session_db_id,
message_type,
status,
retry_count,
datetime(created_at_epoch/1000, 'unixepoch') as created_at,
datetime(started_processing_at_epoch/1000, 'unixepoch') as started_at,
CAST((strftime('%s', 'now') * 1000 - started_processing_at_epoch) / 60000 AS INTEGER) as age_minutes
FROM pending_messages
WHERE status IN ('pending', 'processing')
ORDER BY created_at_epoch;
"
```
### Count Messages by Status
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT status, COUNT(*) as count
FROM pending_messages
GROUP BY status;
"
```
### Find Sessions with Pending Work
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
session_db_id,
COUNT(*) as pending_count,
GROUP_CONCAT(message_type) as message_types
FROM pending_messages
WHERE status IN ('pending', 'processing')
GROUP BY session_db_id;
"
```
### View Recent Failures
```bash
sqlite3 ~/.claude-mem/claude-mem.db "
SELECT
id,
session_db_id,
message_type,
retry_count,
datetime(completed_at_epoch/1000, 'unixepoch') as failed_at
FROM pending_messages
WHERE status = 'failed'
ORDER BY completed_at_epoch DESC
LIMIT 10;
"
```
## Integration Examples
### Cron Job for Automatic Recovery
```bash
#!/bin/bash
# Run every hour to process stuck queues
# Check if worker is healthy
if curl -f http://localhost:37777/health > /dev/null 2>&1; then
# Auto-process up to 5 sessions
bun scripts/check-pending-queue.ts --process --limit 5
else
echo "Worker not healthy, skipping recovery"
exit 1
fi
```
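If you save the script above and wire it into cron, the crontab entry might look like the following. The script path and log location are assumptions; adjust them to your setup:

```
# Run the recovery script at the top of every hour (paths are examples)
0 * * * * /usr/local/bin/claude-mem-recovery.sh >> ~/.claude-mem/logs/recovery.log 2>&1
```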
### Monitoring Script
```bash
#!/bin/bash
# Alert if stuck count exceeds threshold
STUCK_COUNT=$(curl -s http://localhost:37777/api/pending-queue | jq '.queue.stuckCount')
if [ "$STUCK_COUNT" -gt 5 ]; then
echo "WARNING: $STUCK_COUNT stuck messages detected"
# Send alert (email, Slack, etc.)
fi
```
### Pre-Shutdown Recovery
```bash
#!/bin/bash
# Process pending queues before system shutdown
echo "Processing pending queues before shutdown..."
bun scripts/check-pending-queue.ts --process --limit 20
echo "Waiting for processing to complete..."
sleep 10
echo "Stopping worker..."
claude-mem stop
```
## Migration Note
If you're upgrading from v4.x to v5.x:
**v4.x Behavior** (Automatic Recovery):
- Worker automatically recovered stuck messages on startup
- No user control over reprocessing timing
**v5.x Behavior** (Manual Recovery):
- Stuck messages detected but NOT automatically reprocessed
- User must explicitly trigger recovery via CLI or API
- Prevents unexpected duplicate observations
- Provides explicit control over when processing happens
**Migration Steps**:
1. Upgrade to v5.x
2. Check for stuck messages: `bun scripts/check-pending-queue.ts`
3. Process if needed: `bun scripts/check-pending-queue.ts --process`
4. Add recovery to your workflow (cron job, pre-shutdown script, etc.)
## See Also
- [Worker Service Architecture](../architecture/worker-service) - Technical details on queue processing
- [Troubleshooting - Manual Recovery](../troubleshooting#manual-recovery-for-stuck-observations) - Common issues and solutions
- [Database Schema](../architecture/database) - Pending messages table structure
+4 -2
@@ -1,6 +1,6 @@
{
"name": "claude-mem",
"version": "8.0.6",
"version": "8.1.0",
"description": "Memory compression system for Claude Code - persist context across sessions",
"keywords": [
"claude",
@@ -44,6 +44,8 @@
"worker:stop": "bun plugin/scripts/worker-cli.js stop",
"worker:restart": "bun plugin/scripts/worker-cli.js restart",
"worker:status": "bun plugin/scripts/worker-cli.js status",
"queue:check": "bun scripts/check-pending-queue.ts",
"queue:process": "bun scripts/check-pending-queue.ts --process",
"translate-readme": "bun scripts/translate-readme/cli.ts -v -o docs/i18n README.md",
"translate:tier1": "npm run translate-readme -- zh ja pt-br ko es de fr",
"translate:tier2": "npm run translate-readme -- he ar ru pl cs nl tr uk",
@@ -53,7 +55,7 @@
"bug-report": "npx tsx scripts/bug-report/cli.ts"
},
"dependencies": {
"@anthropic-ai/claude-agent-sdk": "^0.1.67",
"@anthropic-ai/claude-agent-sdk": "^0.1.76",
"@modelcontextprotocol/sdk": "^1.20.1",
"ansi-to-html": "^0.7.2",
"express": "^4.18.2",
+1 -1
@@ -1,6 +1,6 @@
{
"name": "claude-mem",
"version": "8.0.6",
"version": "8.1.0",
"description": "Persistent memory system for Claude Code - seamlessly preserve context across sessions",
"author": {
"name": "Alex Newman"
+19 -10
@@ -5,6 +5,11 @@
{
"matcher": "startup|clear|compact",
"hooks": [
{
"type": "command",
"command": "bun \"${CLAUDE_PLUGIN_ROOT}/scripts/worker-cli.js\" restart",
"timeout": 30
},
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/smart-install.js\" && node \"${CLAUDE_PLUGIN_ROOT}/scripts/context-hook.js\"",
@@ -21,6 +26,11 @@
"UserPromptSubmit": [
{
"hooks": [
{
"type": "command",
"command": "bun \"${CLAUDE_PLUGIN_ROOT}/scripts/worker-cli.js\" start",
"timeout": 30
},
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/new-hook.js\"",
@@ -33,6 +43,11 @@
{
"matcher": "*",
"hooks": [
{
"type": "command",
"command": "bun \"${CLAUDE_PLUGIN_ROOT}/scripts/worker-cli.js\" start",
"timeout": 30
},
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/save-hook.js\"",
@@ -46,18 +61,12 @@
"hooks": [
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/summary-hook.js\"",
"timeout": 120
}
]
}
],
"SessionEnd": [
{
"hooks": [
{
"type": "command",
"command": "bun \"${CLAUDE_PLUGIN_ROOT}/scripts/worker-cli.js\" start",
"timeout": 30
},
{
"type": "command",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/cleanup-hook.js\"",
"command": "node \"${CLAUDE_PLUGIN_ROOT}/scripts/summary-hook.js\"",
"timeout": 120
}
]
+7 -2
View File
@@ -82,7 +82,7 @@
"system_identity": "You are Claude-Mem, a specialized observer tool for creating searchable memory FOR FUTURE SESSIONS.\n\nCRITICAL: Record what was DISCOVERED/IDENTIFIED/REVEALED about the investigation, not what you (the observer) are doing.\n\nYou do not have access to tools. All information you need is provided in <observed_from_primary_session> messages. Create observations from what you observe - no investigation needed.",
"spatial_awareness": "SPATIAL AWARENESS: Tool executions include the working directory (tool_cwd) to help you understand:\n- Which investigation folder/project is being worked on\n- Where email files are located relative to the project root\n- How to match requested paths to actual execution paths",
"observer_role": "Your job is to monitor an email fraud investigation happening RIGHT NOW, with the goal of creating observations about entities, relationships, timeline events, and evidence as they are discovered LIVE. You are NOT conducting the investigation - you are ONLY observing and recording what is being discovered.",
"recording_focus": "WHAT TO RECORD\n--------------\nFocus on investigative elements:\n- New entities discovered (people, organizations, email addresses)\n- Relationships between entities (who contacted whom, organizational ties)\n- Timeline events (when things happened, communication sequences)\n- Evidence supporting or refuting fraud patterns\n- Anomalies or red flags detected\n\nUse verbs like: identified, discovered, revealed, detected, corroborated, confirmed\n\n✅ GOOD EXAMPLES (describes what was discovered):\n- \"John Smith <john@example.com> sent 15 emails requesting wire transfers\"\n- \"Timeline reveals communication pattern between suspicious accounts\"\n- \"Email headers show spoofed sender domain\"\n\n❌ BAD EXAMPLES (describes observation process - DO NOT DO THIS):\n- \"Analyzed email headers and recorded findings\"\n- \"Tracked communication patterns and logged results\"\n- \"Monitored entity relationships and stored data\"",
"recording_focus": "WHAT TO RECORD\n--------------\nFocus on investigative elements:\n- New entities discovered (people, organizations, email addresses)\n- Relationships between entities (who contacted whom, organizational ties)\n- Timeline events (when things happened, communication sequences)\n- Evidence supporting or refuting fraud patterns\n- Anomalies or red flags detected\n\nCRITICAL OBSERVATION GRANULARITY:\n- Break up the information into multiple observations as necessary\n- Create AT LEAST 1 observation per tool use\n- When a single tool use returns rich information (like reading an email), create multiple smaller, focused observations rather than one large observation\n- Each observation should be atomic and semantically focused on ONE investigative element\n- Example: One email might yield 3-5 observations (entity discovery, timeline event, relationship, evidence, anomaly)\n\nUse verbs like: identified, discovered, revealed, detected, corroborated, confirmed\n\n✅ GOOD EXAMPLES (describes what was discovered):\n- \"John Smith <john@example.com> sent 15 emails requesting wire transfers\"\n- \"Timeline reveals communication pattern between suspicious accounts\"\n- \"Email headers show spoofed sender domain\"\n\n❌ BAD EXAMPLES (describes observation process - DO NOT DO THIS):\n- \"Analyzed email headers and recorded findings\"\n- \"Tracked communication patterns and logged results\"\n- \"Monitored entity relationships and stored data\"",
"skip_guidance": "WHEN TO SKIP\n------------\nSkip routine operations:\n- Empty searches with no results\n- Simple file listings\n- Repetitive operations you've already documented\n- If email research comes back as empty or not found\n- **No output necessary if skipping.**",
"type_guidance": "**type**: MUST be EXACTLY one of these options:\n - entity: new person, organization, or email address identified\n - relationship: connection between entities discovered\n - timeline-event: time-stamped event in communication sequence\n - evidence: supporting documentation or proof discovered\n - anomaly: suspicious pattern or irregularity detected\n - conclusion: investigative finding or determination",
"concept_guidance": "**concepts**: 2-5 knowledge-type categories. MUST use ONLY these exact keywords:\n - who: people and organizations involved\n - when: timing and sequence of events\n - what-happened: events and communications\n - motive: intent or purpose behind actions\n - red-flag: warning signs of fraud or deception\n - corroboration: evidence supporting a claim",
@@ -110,6 +110,11 @@
"header_summary_checkpoint": "INVESTIGATION SUMMARY CHECKPOINT\n================================",
"continuation_greeting": "Hello memory agent, you are continuing to observe the email fraud investigation session.",
"continuation_instruction": "IMPORTANT: Continue generating observations from tool use messages using the XML structure below."
"continuation_instruction": "IMPORTANT: Continue generating observations from tool use messages using the XML structure below.",
"summary_instruction": "Write progress notes of what was discovered, what entities were identified, and what investigation steps are next. This is a checkpoint to capture investigation progress so far. The session is ongoing - you may receive more tool executions after this summary. Write \"next_steps\" as the current trajectory of investigation (what's actively being examined or coming up next), not as post-session future work. Always write at least a minimal summary explaining current investigation progress, even if work is still in early stages.",
"summary_context_label": "Claude's Full Investigation Response:",
"summary_format_instruction": "Respond in this XML format:",
"summary_footer": "IMPORTANT! DO NOT do any work right now other than generating this next INVESTIGATION SUMMARY - and remember that you are a memory agent designed to summarize a DIFFERENT investigation session, not this one.\n\nNever reference yourself or your own actions. Do not output anything other than the summary content formatted in the XML structure above. All other output is ignored by the system.\n\nThank you, this summary will be very useful for tracking investigation progress!"
}
}
+1 -1
View File
@@ -1,6 +1,6 @@
{
"name": "claude-mem-plugin",
"version": "8.0.6",
"version": "8.1.0",
"private": true,
"description": "Runtime dependencies for claude-mem bundled hooks",
"type": "module",
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
Binary file not shown.
+54
View File
@@ -4,13 +4,62 @@ import * as path from "path";
const pathToFolder = "/Users/alexnewman/Scripts/claude-mem/datasets/epstein-mode/";
const pathToPlugin = "/Users/alexnewman/Scripts/claude-mem/plugin/";
const WORKER_PORT = 37777;
// Read the numbered markdown files from the dataset directory
const filesToProcess = fs
.readdirSync(pathToFolder)
.filter((f) => f.endsWith(".md"))
.sort((a, b) => {
// Extract numeric part from filename (e.g., "0001.md" -> 1)
const numA = parseInt(a.match(/\d+/)?.[0] || "0", 10);
const numB = parseInt(b.match(/\d+/)?.[0] || "0", 10);
return numA - numB;
})
.map((f) => path.join(pathToFolder, f));
/**
* Poll the worker's processing status endpoint until the queue is empty
*/
async function waitForQueueToEmpty(): Promise<void> {
const maxWaitTimeMs = 5 * 60 * 1000; // 5 minutes maximum
const pollIntervalMs = 500; // Poll every 500ms
const startTime = Date.now();
while (true) {
try {
const response = await fetch(`http://localhost:${WORKER_PORT}/api/processing-status`);
if (!response.ok) {
console.error(`Failed to get processing status: ${response.status}`);
break;
}
const status = await response.json();
console.log(`Queue status - Processing: ${status.isProcessing}, Queue depth: ${status.queueDepth}`);
// Exit when queue is empty
if (status.queueDepth === 0 && !status.isProcessing) {
console.log("Queue is empty, continuing to next prompt");
break;
}
// Check timeout
if (Date.now() - startTime > maxWaitTimeMs) {
console.warn("Warning: Queue did not empty within timeout, continuing anyway");
break;
}
// Wait before polling again
await new Promise(resolve => setTimeout(resolve, pollIntervalMs));
} catch (error) {
console.error("Error polling worker status:", error);
// On error, wait briefly and then give up rather than risk an infinite loop
await new Promise(resolve => setTimeout(resolve, 1000));
break;
}
}
}
// var i = 0;
for (const file of filesToProcess) {
@@ -35,5 +84,10 @@ for (const file of filesToProcess) {
if (message.type === "assistant") {
console.log("Assistant:", message.message.content);
}
console.log("Raw:", JSON.stringify(message, null, 2));
}
// Wait for the worker queue to be empty before continuing to the next file
console.log("\n=== Waiting for worker queue to empty ===\n");
await waitForQueueToEmpty();
}
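The polling loop above reads only two fields from `/api/processing-status`. The full payload isn't shown in this diff, so the shape below is inferred from those reads; a minimal sketch that factors the exit condition into a pure, testable predicate:

```typescript
// Assumed response shape for GET /api/processing-status, inferred from
// the fields the polling loop reads (isProcessing, queueDepth).
interface ProcessingStatus {
  isProcessing: boolean;
  queueDepth: number;
}

// Mirrors the loop's exit test: the queue is idle only when nothing is
// queued AND nothing is mid-flight.
function isQueueIdle(status: ProcessingStatus): boolean {
  return status.queueDepth === 0 && !status.isProcessing;
}

console.log(isQueueIdle({ isProcessing: false, queueDepth: 0 })); // true
console.log(isQueueIdle({ isProcessing: true, queueDepth: 0 })); // false
```

Keeping the condition separate from the `fetch` call makes the completion logic checkable without a running worker.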
-1
View File
@@ -17,7 +17,6 @@ const HOOKS = [
{ name: 'new-hook', source: 'src/hooks/new-hook.ts' },
{ name: 'save-hook', source: 'src/hooks/save-hook.ts' },
{ name: 'summary-hook', source: 'src/hooks/summary-hook.ts' },
{ name: 'cleanup-hook', source: 'src/hooks/cleanup-hook.ts' },
{ name: 'user-message-hook', source: 'src/hooks/user-message-hook.ts' }
];
+241
View File
@@ -0,0 +1,241 @@
#!/usr/bin/env bun
/**
* Check and process pending observation queue
*
* Usage:
* bun scripts/check-pending-queue.ts # Check status and prompt to process
* bun scripts/check-pending-queue.ts --process # Auto-process without prompting
* bun scripts/check-pending-queue.ts --limit 5 # Process up to 5 sessions
*/
const WORKER_URL = 'http://localhost:37777';
interface QueueMessage {
id: number;
session_db_id: number;
message_type: string;
tool_name: string | null;
status: 'pending' | 'processing' | 'failed';
retry_count: number;
created_at_epoch: number;
project: string | null;
}
interface QueueResponse {
queue: {
messages: QueueMessage[];
totalPending: number;
totalProcessing: number;
totalFailed: number;
stuckCount: number;
};
recentlyProcessed: QueueMessage[];
sessionsWithPendingWork: number[];
}
interface ProcessResponse {
success: boolean;
totalPendingSessions: number;
sessionsStarted: number;
sessionsSkipped: number;
startedSessionIds: number[];
}
async function checkWorkerHealth(): Promise<boolean> {
try {
const res = await fetch(`${WORKER_URL}/api/health`);
return res.ok;
} catch {
return false;
}
}
async function getQueueStatus(): Promise<QueueResponse> {
const res = await fetch(`${WORKER_URL}/api/pending-queue`);
if (!res.ok) {
throw new Error(`Failed to get queue status: ${res.status}`);
}
return res.json();
}
async function processQueue(limit: number): Promise<ProcessResponse> {
const res = await fetch(`${WORKER_URL}/api/pending-queue/process`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ sessionLimit: limit })
});
if (!res.ok) {
throw new Error(`Failed to process queue: ${res.status}`);
}
return res.json();
}
function formatAge(epochMs: number): string {
const ageMs = Date.now() - epochMs;
const minutes = Math.floor(ageMs / 60000);
const hours = Math.floor(minutes / 60);
const days = Math.floor(hours / 24);
if (days > 0) return `${days}d ${hours % 24}h ago`;
if (hours > 0) return `${hours}h ${minutes % 60}m ago`;
return `${minutes}m ago`;
}
async function prompt(question: string): Promise<string> {
// Check if we have a TTY for interactive input
if (!process.stdin.isTTY) {
console.log(question + ' (no TTY, use --process flag for non-interactive mode)');
return 'n';
}
return new Promise((resolve) => {
process.stdout.write(question);
process.stdin.setRawMode(false);
process.stdin.resume();
process.stdin.once('data', (data) => {
process.stdin.pause();
resolve(data.toString().trim());
});
});
}
async function main() {
const args = process.argv.slice(2);
// Help flag
if (args.includes('--help') || args.includes('-h')) {
console.log(`
Claude-Mem Pending Queue Manager
Check and process pending observation queue backlog.
Usage:
bun scripts/check-pending-queue.ts [options]
Options:
--help, -h Show this help message
--process Auto-process without prompting
--limit N Process up to N sessions (default: 10)
Examples:
# Check queue status interactively
bun scripts/check-pending-queue.ts
# Auto-process up to 10 sessions
bun scripts/check-pending-queue.ts --process
# Process up to 5 sessions
bun scripts/check-pending-queue.ts --process --limit 5
What is this for?
If the claude-mem worker crashes or restarts, pending observations may
be left unprocessed. This script shows the backlog and lets you trigger
processing. The worker no longer auto-recovers on startup to give you
control over when processing happens.
`);
process.exit(0);
}
const autoProcess = args.includes('--process');
const limitArg = args.find((_, i) => args[i - 1] === '--limit');
const parsedLimit = limitArg ? parseInt(limitArg, 10) : 10;
const limit = Number.isNaN(parsedLimit) ? 10 : parsedLimit; // fall back to default on a malformed --limit value
console.log('\n=== Claude-Mem Pending Queue Status ===\n');
// Check worker health
const healthy = await checkWorkerHealth();
if (!healthy) {
console.log('Worker is not running. Start it with:');
console.log(' cd ~/.claude/plugins/marketplaces/thedotmack && npm run worker:start\n');
process.exit(1);
}
console.log('Worker status: Running\n');
// Get queue status
const status = await getQueueStatus();
const { queue, sessionsWithPendingWork } = status;
// Display summary
console.log('Queue Summary:');
console.log(` Pending: ${queue.totalPending}`);
console.log(` Processing: ${queue.totalProcessing}`);
console.log(` Failed: ${queue.totalFailed}`);
console.log(` Stuck: ${queue.stuckCount} (processing > 5 min)`);
console.log(` Sessions: ${sessionsWithPendingWork.length} with pending work\n`);
// Check if there's any backlog
const hasBacklog = queue.totalPending > 0 || queue.totalFailed > 0;
const hasStuck = queue.stuckCount > 0;
if (!hasBacklog && !hasStuck) {
console.log('No backlog detected. Queue is healthy.\n');
// Show recently processed if any
if (status.recentlyProcessed.length > 0) {
console.log(`Recently processed: ${status.recentlyProcessed.length} messages in last 30 min\n`);
}
process.exit(0);
}
// Show details about pending messages
if (queue.messages.length > 0) {
console.log('Pending Messages:');
console.log('─'.repeat(80));
// Group by session
const bySession = new Map<number, QueueMessage[]>();
for (const msg of queue.messages) {
const list = bySession.get(msg.session_db_id) || [];
list.push(msg);
bySession.set(msg.session_db_id, list);
}
for (const [sessionId, messages] of bySession) {
const project = messages[0].project || 'unknown';
const oldest = Math.min(...messages.map(m => m.created_at_epoch));
const statuses = {
pending: messages.filter(m => m.status === 'pending').length,
processing: messages.filter(m => m.status === 'processing').length,
failed: messages.filter(m => m.status === 'failed').length
};
console.log(` Session ${sessionId} (${project})`);
console.log(` Messages: ${messages.length} total`);
console.log(` Status: ${statuses.pending} pending, ${statuses.processing} processing, ${statuses.failed} failed`);
console.log(` Age: ${formatAge(oldest)}`);
}
console.log('─'.repeat(80));
console.log('');
}
// Offer to process
if (autoProcess) {
console.log(`Auto-processing up to ${limit} sessions...\n`);
} else {
const answer = await prompt(`Process pending queue? (up to ${limit} sessions) [y/N]: `);
if (answer.toLowerCase() !== 'y') {
console.log('\nSkipped. Run with --process to auto-process.\n');
process.exit(0);
}
console.log('');
}
// Process the queue
const result = await processQueue(limit);
console.log('Processing Result:');
console.log(` Sessions started: ${result.sessionsStarted}`);
console.log(` Sessions skipped: ${result.sessionsSkipped} (already active)`);
console.log(` Remaining: ${result.totalPendingSessions - result.sessionsStarted}`);
if (result.startedSessionIds.length > 0) {
console.log(` Started IDs: ${result.startedSessionIds.join(', ')}`);
}
console.log('\nProcessing started in background. Check status again in a few minutes.\n');
}
main().catch(err => {
console.error('Error:', err.message);
process.exit(1);
});
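The status display above builds a `Map` of messages keyed by `session_db_id` inline; the same pattern extracted as a small generic helper (the sample rows are illustrative only, not real queue data):

```typescript
// Generic grouping helper equivalent to the inline Map-building loop
// used when displaying pending messages per session.
function groupBy<T, K>(items: T[], key: (item: T) => K): Map<K, T[]> {
  const groups = new Map<K, T[]>();
  for (const item of items) {
    const k = key(item);
    const list = groups.get(k) ?? [];
    list.push(item);
    groups.set(k, list);
  }
  return groups;
}

// Hypothetical sample rows; ids and session numbers are made up.
const sample = [
  { id: 1, session_db_id: 7 },
  { id: 2, session_db_id: 7 },
  { id: 3, session_db_id: 9 },
];
const bySession = groupBy(sample, (m) => m.session_db_id);
console.log(bySession.size); // 2
```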
+174
View File
@@ -0,0 +1,174 @@
#!/usr/bin/env bun
/**
* Fix ALL Corrupted Observation Timestamps
*
* This script finds and repairs ALL observations with timestamps that don't match
* their session start times, not just ones in an arbitrary "bad window".
*/
import Database from 'bun:sqlite';
import { resolve } from 'path';
const DB_PATH = resolve(process.env.HOME!, '.claude-mem/claude-mem.db');
interface CorruptedObservation {
obs_id: number;
obs_title: string;
obs_created: number;
session_started: number;
session_completed: number | null;
sdk_session_id: string;
}
function formatTimestamp(epoch: number): string {
return new Date(epoch).toLocaleString('en-US', {
timeZone: 'America/Los_Angeles',
year: 'numeric',
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
}
function main() {
const args = process.argv.slice(2);
const dryRun = args.includes('--dry-run');
const autoYes = args.includes('--yes') || args.includes('-y');
console.log('🔍 Finding ALL observations with timestamp corruption...\n');
if (dryRun) {
console.log('🏃 DRY RUN MODE - No changes will be made\n');
}
const db = new Database(DB_PATH);
try {
// Find all observations where timestamp doesn't match session
const corrupted = db.query<CorruptedObservation, []>(`
SELECT
o.id as obs_id,
o.title as obs_title,
o.created_at_epoch as obs_created,
s.started_at_epoch as session_started,
s.completed_at_epoch as session_completed,
s.sdk_session_id
FROM observations o
JOIN sdk_sessions s ON o.sdk_session_id = s.sdk_session_id
WHERE o.created_at_epoch < s.started_at_epoch -- Observation older than session
OR (s.completed_at_epoch IS NOT NULL
AND o.created_at_epoch > (s.completed_at_epoch + 3600000)) -- More than 1hr after session
ORDER BY o.id
`).all();
console.log(`Found ${corrupted.length} observations with corrupted timestamps\n`);
if (corrupted.length === 0) {
console.log('✅ No corrupted timestamps found!');
db.close();
return;
}
// Display findings
console.log('═══════════════════════════════════════════════════════════════════════');
console.log('PROPOSED FIXES:');
console.log('═══════════════════════════════════════════════════════════════════════\n');
for (const obs of corrupted.slice(0, 50)) {
const daysDiff = Math.round((obs.obs_created - obs.session_started) / (1000 * 60 * 60 * 24));
console.log(`Observation #${obs.obs_id}: ${obs.obs_title || '(no title)'}`);
console.log(` ❌ Wrong: ${formatTimestamp(obs.obs_created)}`);
console.log(` ✅ Correct: ${formatTimestamp(obs.session_started)}`);
console.log(` 📅 Off by ${daysDiff} days\n`);
}
if (corrupted.length > 50) {
console.log(`... and ${corrupted.length - 50} more\n`);
}
console.log('═══════════════════════════════════════════════════════════════════════');
console.log(`Ready to fix ${corrupted.length} observations.`);
if (dryRun) {
console.log('\n🏃 DRY RUN COMPLETE - No changes made.');
console.log('Run without --dry-run flag to apply fixes.\n');
db.close();
return;
}
if (autoYes) {
console.log('Auto-confirming with --yes flag...\n');
applyFixes(db, corrupted);
return;
}
console.log('Apply these fixes? (y/n): ');
const stdin = Bun.stdin.stream();
const reader = stdin.getReader();
reader.read().then(({ value }) => {
const response = new TextDecoder().decode(value).trim().toLowerCase();
if (response === 'y' || response === 'yes') {
applyFixes(db, corrupted);
} else {
console.log('\n❌ Fixes cancelled. No changes made.');
db.close();
}
});
} catch (error) {
console.error('❌ Error:', error);
db.close();
process.exit(1);
}
}
function applyFixes(db: Database, corrupted: CorruptedObservation[]) {
console.log('\n🔧 Applying fixes...\n');
const updateStmt = db.prepare(`
UPDATE observations
SET created_at_epoch = ?,
created_at = datetime(?/1000, 'unixepoch')
WHERE id = ?
`);
let successCount = 0;
let errorCount = 0;
for (const obs of corrupted) {
try {
updateStmt.run(
obs.session_started,
obs.session_started,
obs.obs_id
);
successCount++;
if (successCount % 10 === 0 || successCount <= 10) {
console.log(`✅ Fixed observation #${obs.obs_id}`);
}
} catch (error) {
errorCount++;
console.error(`❌ Failed to fix observation #${obs.obs_id}:`, error);
}
}
console.log('\n═══════════════════════════════════════════════════════════════════════');
console.log('RESULTS:');
console.log('═══════════════════════════════════════════════════════════════════════');
console.log(`✅ Successfully fixed: ${successCount}`);
console.log(`❌ Failed: ${errorCount}`);
console.log(`📊 Total processed: ${corrupted.length}\n`);
if (successCount > 0) {
console.log('🎉 ALL timestamp corruption has been repaired!\n');
}
db.close();
}
main();
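The `UPDATE` above stores each timestamp twice: as raw epoch milliseconds, and through SQLite's `datetime(?/1000, 'unixepoch')`, which produces a UTC string of the form `YYYY-MM-DD HH:MM:SS`. A sketch of the same conversion in TypeScript, useful for spot-checking repaired rows without opening the database:

```typescript
// Equivalent of SQLite's datetime(ms/1000, 'unixepoch'):
// epoch milliseconds -> "YYYY-MM-DD HH:MM:SS" in UTC.
function epochMsToSqliteUtc(epochMs: number): string {
  return new Date(epochMs).toISOString().replace('T', ' ').slice(0, 19);
}

console.log(epochMsToSqliteUtc(0)); // "1970-01-01 00:00:00"
```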
+243
View File
@@ -0,0 +1,243 @@
#!/usr/bin/env bun
/**
* Fix Corrupted Observation Timestamps
*
* This script repairs observations that were created during the orphan queue processing
* on Dec 24, 2025 between 19:45-20:31. These observations got Dec 24 timestamps instead
* of their original timestamps from Dec 17-20.
*/
import Database from 'bun:sqlite';
import { resolve } from 'path';
const DB_PATH = resolve(process.env.HOME!, '.claude-mem/claude-mem.db');
// Bad window: Dec 24, 2025 19:45-20:31 EST (the constants below are
// milliseconds since epoch, matching the observations table's created_at_epoch)
const BAD_WINDOW_START = 1766623500000; // Dec 24 19:45 EST (16:45 PST)
const BAD_WINDOW_END = 1766626260000; // Dec 24 20:31 EST (17:31 PST)
interface AffectedObservation {
id: number;
sdk_session_id: string;
created_at_epoch: number;
title: string;
}
interface ProcessedMessage {
id: number;
session_db_id: number;
tool_name: string;
created_at_epoch: number;
completed_at_epoch: number;
}
interface SessionMapping {
session_db_id: number;
sdk_session_id: string;
}
interface TimestampFix {
observation_id: number;
observation_title: string;
wrong_timestamp: number;
correct_timestamp: number;
session_db_id: number;
pending_message_id: number;
}
function formatTimestamp(epoch: number): string {
return new Date(epoch).toLocaleString('en-US', {
timeZone: 'America/Los_Angeles',
year: 'numeric',
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
}
function main() {
const args = process.argv.slice(2);
const dryRun = args.includes('--dry-run');
const autoYes = args.includes('--yes') || args.includes('-y');
console.log('🔍 Analyzing corrupted observation timestamps...\n');
if (dryRun) {
console.log('🏃 DRY RUN MODE - No changes will be made\n');
}
const db = new Database(DB_PATH);
try {
// Step 1: Find affected observations
console.log('Step 1: Finding observations created during bad window...');
const affectedObs = db.query<AffectedObservation, []>(`
SELECT id, sdk_session_id, created_at_epoch, title
FROM observations
WHERE created_at_epoch >= ${BAD_WINDOW_START}
AND created_at_epoch <= ${BAD_WINDOW_END}
ORDER BY id
`).all();
console.log(`Found ${affectedObs.length} observations in bad window\n`);
if (affectedObs.length === 0) {
console.log('✅ No affected observations found!');
return;
}
// Step 2: Find processed pending_messages from bad window
console.log('Step 2: Finding pending messages processed during bad window...');
const processedMessages = db.query<ProcessedMessage, []>(`
SELECT id, session_db_id, tool_name, created_at_epoch, completed_at_epoch
FROM pending_messages
WHERE status = 'processed'
AND completed_at_epoch >= ${BAD_WINDOW_START}
AND completed_at_epoch <= ${BAD_WINDOW_END}
ORDER BY completed_at_epoch
`).all();
console.log(`Found ${processedMessages.length} processed messages\n`);
// Step 3: Match observations to their session start times (simpler approach)
console.log('Step 3: Matching observations to session start times...');
const fixes: TimestampFix[] = [];
interface ObsWithSession {
obs_id: number;
obs_title: string;
obs_created: number;
session_started: number;
sdk_session_id: string;
}
const obsWithSessions = db.query<ObsWithSession, []>(`
SELECT
o.id as obs_id,
o.title as obs_title,
o.created_at_epoch as obs_created,
s.started_at_epoch as session_started,
s.sdk_session_id
FROM observations o
JOIN sdk_sessions s ON o.sdk_session_id = s.sdk_session_id
WHERE o.created_at_epoch >= ${BAD_WINDOW_START}
AND o.created_at_epoch <= ${BAD_WINDOW_END}
AND s.started_at_epoch < ${BAD_WINDOW_START}
ORDER BY o.id
`).all();
for (const row of obsWithSessions) {
fixes.push({
observation_id: row.obs_id,
observation_title: row.obs_title || '(no title)',
wrong_timestamp: row.obs_created,
correct_timestamp: row.session_started,
session_db_id: 0, // Not needed for this approach
pending_message_id: 0 // Not needed for this approach
});
}
console.log(`Identified ${fixes.length} observations to fix\n`);
// Step 4: Display what will be fixed
console.log('═══════════════════════════════════════════════════════════════════════');
console.log('PROPOSED FIXES:');
console.log('═══════════════════════════════════════════════════════════════════════\n');
for (const fix of fixes) {
const daysDiff = Math.round((fix.wrong_timestamp - fix.correct_timestamp) / (1000 * 60 * 60 * 24));
console.log(`Observation #${fix.observation_id}: ${fix.observation_title}`);
console.log(` ❌ Wrong: ${formatTimestamp(fix.wrong_timestamp)}`);
console.log(` ✅ Correct: ${formatTimestamp(fix.correct_timestamp)}`);
console.log(` 📅 Off by ${daysDiff} days\n`);
}
// Step 5: Ask for confirmation
console.log('═══════════════════════════════════════════════════════════════════════');
console.log(`Ready to fix ${fixes.length} observations.`);
if (dryRun) {
console.log('\n🏃 DRY RUN COMPLETE - No changes made.');
console.log('Run without --dry-run flag to apply fixes.\n');
db.close();
return;
}
if (autoYes) {
console.log('Auto-confirming with --yes flag...\n');
applyFixes(db, fixes);
return;
}
console.log('Apply these fixes? (y/n): ');
const stdin = Bun.stdin.stream();
const reader = stdin.getReader();
reader.read().then(({ value }) => {
const response = new TextDecoder().decode(value).trim().toLowerCase();
if (response === 'y' || response === 'yes') {
applyFixes(db, fixes);
} else {
console.log('\n❌ Fixes cancelled. No changes made.');
db.close();
}
});
} catch (error) {
console.error('❌ Error:', error);
db.close();
process.exit(1);
}
}
function applyFixes(db: Database, fixes: TimestampFix[]) {
console.log('\n🔧 Applying fixes...\n');
const updateStmt = db.prepare(`
UPDATE observations
SET created_at_epoch = ?,
created_at = datetime(?/1000, 'unixepoch')
WHERE id = ?
`);
let successCount = 0;
let errorCount = 0;
for (const fix of fixes) {
try {
updateStmt.run(
fix.correct_timestamp,
fix.correct_timestamp,
fix.observation_id
);
successCount++;
console.log(`✅ Fixed observation #${fix.observation_id}`);
} catch (error) {
errorCount++;
console.error(`❌ Failed to fix observation #${fix.observation_id}:`, error);
}
}
console.log('\n═══════════════════════════════════════════════════════════════════════');
console.log('RESULTS:');
console.log('═══════════════════════════════════════════════════════════════════════');
console.log(`✅ Successfully fixed: ${successCount}`);
console.log(`❌ Failed: ${errorCount}`);
console.log(`📊 Total processed: ${fixes.length}\n`);
if (successCount > 0) {
console.log('🎉 Timestamp corruption has been repaired!');
console.log('💡 Next steps:');
console.log(' 1. Verify the fixes with: bun scripts/verify-timestamp-fix.ts');
console.log(' 2. Consider re-enabling orphan processing if timestamp fix is working\n');
}
db.close();
}
main();
+143
View File
@@ -0,0 +1,143 @@
#!/usr/bin/env bun
/**
* Investigate Timestamp Situation
*
* This script investigates the actual state of observations and pending messages
* to understand what happened with the timestamp corruption.
*/
import Database from 'bun:sqlite';
import { resolve } from 'path';
const DB_PATH = resolve(process.env.HOME!, '.claude-mem/claude-mem.db');
function formatTimestamp(epoch: number): string {
return new Date(epoch).toLocaleString('en-US', {
timeZone: 'America/Los_Angeles',
year: 'numeric',
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
}
function main() {
console.log('🔍 Investigating timestamp situation...\n');
const db = new Database(DB_PATH);
try {
// Check 1: Recent observations on Dec 24
console.log('Check 1: All observations created on Dec 24, 2025...');
const dec24Start = 1766563200000; // Dec 24, 2025 00:00 PST
const dec24End = 1766649600000; // Dec 25, 2025 00:00 PST
const dec24Obs = db.query(`
SELECT id, sdk_session_id, created_at_epoch, title
FROM observations
WHERE created_at_epoch >= ${dec24Start}
AND created_at_epoch < ${dec24End}
ORDER BY created_at_epoch
LIMIT 100
`).all();
console.log(`Found ${dec24Obs.length} observations on Dec 24:\n`);
for (const obs of dec24Obs.slice(0, 20)) {
console.log(` #${obs.id}: ${formatTimestamp(obs.created_at_epoch)} - ${obs.title || '(no title)'}`);
}
if (dec24Obs.length > 20) {
console.log(` ... and ${dec24Obs.length - 20} more`);
}
console.log();
// Check 2: Observations from Dec 17-20
console.log('Check 2: Observations from Dec 17-20, 2025...');
const dec17Start = 1765958400000; // Dec 17, 2025 00:00 PST
const dec21Start = 1766304000000; // Dec 21, 2025 00:00 PST
const oldObs = db.query(`
SELECT id, sdk_session_id, created_at_epoch, title
FROM observations
WHERE created_at_epoch >= ${dec17Start}
AND created_at_epoch < ${dec21Start}
ORDER BY created_at_epoch
LIMIT 100
`).all();
console.log(`Found ${oldObs.length} observations from Dec 17-20:\n`);
for (const obs of oldObs.slice(0, 20)) {
console.log(` #${obs.id}: ${formatTimestamp(obs.created_at_epoch)} - ${obs.title || '(no title)'}`);
}
if (oldObs.length > 20) {
console.log(` ... and ${oldObs.length - 20} more`);
}
console.log();
// Check 3: Pending messages status
console.log('Check 3: Pending messages status...');
const statusCounts = db.query(`
SELECT status, COUNT(*) as count
FROM pending_messages
GROUP BY status
`).all();
console.log('Pending message counts by status:');
for (const row of statusCounts) {
console.log(` ${row.status}: ${row.count}`);
}
console.log();
// Check 4: Old pending messages from Dec 17-20
console.log('Check 4: Pending messages from Dec 17-20...');
const oldMessages = db.query(`
SELECT id, session_db_id, tool_name, status, created_at_epoch, completed_at_epoch
FROM pending_messages
WHERE created_at_epoch >= ${dec17Start}
AND created_at_epoch < ${dec21Start}
ORDER BY created_at_epoch
LIMIT 50
`).all();
console.log(`Found ${oldMessages.length} pending messages from Dec 17-20:\n`);
for (const msg of oldMessages.slice(0, 20)) {
const completedAt = msg.completed_at_epoch ? formatTimestamp(msg.completed_at_epoch) : 'N/A';
console.log(` #${msg.id}: ${msg.tool_name} - Status: ${msg.status}`);
console.log(` Created: ${formatTimestamp(msg.created_at_epoch)}`);
console.log(` Completed: ${completedAt}\n`);
}
if (oldMessages.length > 20) {
console.log(` ... and ${oldMessages.length - 20} more`);
}
// Check 5: Recently completed pending messages
console.log('Check 5: Recently completed pending messages...');
const recentCompleted = db.query(`
SELECT id, session_db_id, tool_name, status, created_at_epoch, completed_at_epoch
FROM pending_messages
WHERE completed_at_epoch IS NOT NULL
ORDER BY completed_at_epoch DESC
LIMIT 20
`).all();
console.log(`Most recent completed pending messages:\n`);
for (const msg of recentCompleted) {
const createdAt = formatTimestamp(msg.created_at_epoch);
const completedAt = formatTimestamp(msg.completed_at_epoch);
const lag = Math.round((msg.completed_at_epoch - msg.created_at_epoch) / 1000);
console.log(` #${msg.id}: ${msg.tool_name} (${msg.status})`);
console.log(` Created: ${createdAt}`);
console.log(` Completed: ${completedAt} (${lag}s later)\n`);
}
} catch (error) {
console.error('❌ Error:', error);
process.exit(1);
} finally {
db.close();
}
}
main();
+150
@@ -0,0 +1,150 @@
#!/usr/bin/env bun
/**
* Validate Timestamp Logic
*
* This script validates that the backlog timestamp logic would work correctly
* by checking pending messages and simulating what timestamps they would get.
*/
import Database from 'bun:sqlite';
import { resolve } from 'path';
const DB_PATH = resolve(process.env.HOME!, '.claude-mem/claude-mem.db');
function formatTimestamp(epoch: number): string {
return new Date(epoch).toLocaleString('en-US', {
timeZone: 'America/Los_Angeles',
year: 'numeric',
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
}
function main() {
console.log('🔍 Validating timestamp logic for backlog processing...\n');
const db = new Database(DB_PATH);
try {
// Check for pending messages
const pendingStats = db.query(`
SELECT
status,
COUNT(*) as count,
MIN(created_at_epoch) as earliest,
MAX(created_at_epoch) as latest
FROM pending_messages
GROUP BY status
ORDER BY status
`).all();
console.log('Pending Messages Status:\n');
for (const stat of pendingStats) {
console.log(`${stat.status}: ${stat.count} messages`);
if (stat.earliest && stat.latest) {
console.log(` Created: ${formatTimestamp(stat.earliest)} to ${formatTimestamp(stat.latest)}`);
}
}
console.log();
// Get sample pending messages with their session info
const pendingWithSessions = db.query(`
SELECT
pm.id,
pm.session_db_id,
pm.tool_name,
pm.created_at_epoch as msg_created,
pm.status,
s.sdk_session_id,
s.started_at_epoch as session_started,
s.project
FROM pending_messages pm
LEFT JOIN sdk_sessions s ON pm.session_db_id = s.id
WHERE pm.status IN ('pending', 'processing')
ORDER BY pm.created_at_epoch
LIMIT 10
`).all();
if (pendingWithSessions.length === 0) {
console.log('✅ No pending messages - all caught up!\n');
db.close();
return;
}
console.log(`Sample of ${pendingWithSessions.length} pending messages:\n`);
console.log('═══════════════════════════════════════════════════════════════════════');
for (const msg of pendingWithSessions) {
console.log(`\nPending Message #${msg.id}: ${msg.tool_name} (${msg.status})`);
console.log(` Created: ${formatTimestamp(msg.msg_created)}`);
if (msg.session_started) {
console.log(` Session started: ${formatTimestamp(msg.session_started)}`);
console.log(` Project: ${msg.project}`);
// Validate logic
const ageDays = Math.round((Date.now() - msg.msg_created) / (1000 * 60 * 60 * 24));
if (msg.msg_created < msg.session_started) {
console.log(` ⚠️ WARNING: Message created BEFORE session! This is impossible.`);
} else if (ageDays > 0) {
console.log(` 📅 Message is ${ageDays} days old`);
console.log(` ✅ Would use original timestamp: ${formatTimestamp(msg.msg_created)}`);
} else {
console.log(` ✅ Recent message, would use original timestamp: ${formatTimestamp(msg.msg_created)}`);
}
} else {
console.log(` ⚠️ No session found for session_db_id ${msg.session_db_id}`);
}
}
console.log('\n═══════════════════════════════════════════════════════════════════════');
console.log('\nTimestamp Logic Validation:\n');
console.log('✅ Code Flow:');
console.log(' 1. SessionManager.yieldNextMessage() tracks earliestPendingTimestamp');
console.log(' 2. SDKAgent captures originalTimestamp before processing');
console.log(' 3. processSDKResponse passes originalTimestamp to storeObservation/storeSummary');
console.log(' 4. SessionStore uses overrideTimestampEpoch ?? Date.now()');
console.log(' 5. earliestPendingTimestamp reset after batch completes\n');
console.log('✅ Expected Behavior:');
console.log(' - New messages: get current timestamp');
console.log(' - Backlog messages: get original created_at_epoch');
console.log(' - Observations match their source message timestamps\n');
// Check for any sessions with stuck processing messages
const stuckMessages = db.query(`
SELECT
session_db_id,
COUNT(*) as count,
MIN(created_at_epoch) as earliest,
MAX(created_at_epoch) as latest
FROM pending_messages
WHERE status = 'processing'
GROUP BY session_db_id
ORDER BY count DESC
`).all();
if (stuckMessages.length > 0) {
console.log('⚠️ Stuck Messages (status=processing):\n');
for (const stuck of stuckMessages) {
const ageDays = Math.round((Date.now() - stuck.earliest) / (1000 * 60 * 60 * 24));
console.log(` Session ${stuck.session_db_id}: ${stuck.count} messages`);
console.log(` Stuck for ${ageDays} days (${formatTimestamp(stuck.earliest)})`);
}
console.log('\n 💡 These will be processed with original timestamps when orphan processing is enabled\n');
}
} catch (error) {
console.error('❌ Error:', error);
process.exit(1);
} finally {
db.close();
}
}
main();
+144
@@ -0,0 +1,144 @@
#!/usr/bin/env bun
/**
* Verify Timestamp Fix
*
* This script verifies that the timestamp corruption has been properly fixed.
* It checks for any remaining observations in the bad window that shouldn't be there.
*/
import Database from 'bun:sqlite';
import { resolve } from 'path';
const DB_PATH = resolve(process.env.HOME!, '.claude-mem/claude-mem.db');
// Bad window: Dec 24 19:45-20:31 (using actual epoch format from database)
const BAD_WINDOW_START = 1766623500000; // Dec 24 19:45 PST
const BAD_WINDOW_END = 1766626260000; // Dec 24 20:31 PST
// Original corruption window: Dec 16-22 (when sessions actually started)
const ORIGINAL_WINDOW_START = 1765914000000; // Dec 16 00:00 PST
const ORIGINAL_WINDOW_END = 1766613600000; // Dec 23 23:59 PST
interface Observation {
id: number;
sdk_session_id: string;
created_at_epoch: number;
created_at: string;
title: string;
}
function formatTimestamp(epoch: number): string {
return new Date(epoch).toLocaleString('en-US', {
timeZone: 'America/Los_Angeles',
year: 'numeric',
month: 'short',
day: 'numeric',
hour: '2-digit',
minute: '2-digit',
second: '2-digit'
});
}
function main() {
console.log('🔍 Verifying timestamp fix...\n');
const db = new Database(DB_PATH);
try {
// Check 1: Observations still in bad window
console.log('Check 1: Looking for observations still in bad window (Dec 24 19:45-20:31)...');
const badWindowObs = db.query<Observation, []>(`
SELECT id, sdk_session_id, created_at_epoch, created_at, title
FROM observations
WHERE created_at_epoch >= ${BAD_WINDOW_START}
AND created_at_epoch <= ${BAD_WINDOW_END}
ORDER BY id
`).all();
if (badWindowObs.length === 0) {
console.log('✅ No observations found in bad window - GOOD!\n');
} else {
console.log(`⚠️ Found ${badWindowObs.length} observations still in bad window:\n`);
for (const obs of badWindowObs) {
console.log(` Observation #${obs.id}: ${obs.title || '(no title)'}`);
console.log(` Timestamp: ${formatTimestamp(obs.created_at_epoch)}`);
console.log(` Session: ${obs.sdk_session_id}\n`);
}
}
// Check 2: Observations now in original window
console.log('Check 2: Counting observations in original window (Dec 17-20)...');
const originalWindowObs = db.query<{ count: number }, []>(`
SELECT COUNT(*) as count
FROM observations
WHERE created_at_epoch >= ${ORIGINAL_WINDOW_START}
AND created_at_epoch <= ${ORIGINAL_WINDOW_END}
`).get();
console.log(`Found ${originalWindowObs?.count || 0} observations in Dec 17-20 window`);
console.log('(These should be the corrected observations)\n');
// Check 3: Session distribution
console.log('Check 3: Session distribution of corrected observations...');
const sessionDist = db.query<{ sdk_session_id: string; count: number }, []>(`
SELECT sdk_session_id, COUNT(*) as count
FROM observations
WHERE created_at_epoch >= ${ORIGINAL_WINDOW_START}
AND created_at_epoch <= ${ORIGINAL_WINDOW_END}
GROUP BY sdk_session_id
ORDER BY count DESC
`).all();
if (sessionDist.length > 0) {
console.log(`Observations distributed across ${sessionDist.length} sessions:\n`);
for (const dist of sessionDist.slice(0, 10)) {
console.log(` ${dist.sdk_session_id}: ${dist.count} observations`);
}
if (sessionDist.length > 10) {
console.log(` ... and ${sessionDist.length - 10} more sessions`);
}
console.log();
}
// Check 4: Pending messages processed count
console.log('Check 4: Verifying processed pending_messages...');
const processedCount = db.query<{ count: number }, []>(`
SELECT COUNT(*) as count
FROM pending_messages
WHERE status = 'processed'
AND completed_at_epoch >= ${BAD_WINDOW_START}
AND completed_at_epoch <= ${BAD_WINDOW_END}
`).get();
console.log(`${processedCount?.count || 0} pending messages were processed during bad window\n`);
// Summary
console.log('═══════════════════════════════════════════════════════════════════════');
console.log('VERIFICATION SUMMARY:');
console.log('═══════════════════════════════════════════════════════════════════════\n');
if (badWindowObs.length === 0 && (originalWindowObs?.count || 0) > 0) {
console.log('✅ SUCCESS: Timestamp fix appears to be working correctly!');
console.log(` - No observations remain in bad window (Dec 24 19:45-20:31)`);
console.log(` - ${originalWindowObs?.count} observations restored to Dec 17-20`);
console.log(` - Processed ${processedCount?.count} pending messages`);
console.log('\n💡 Safe to re-enable orphan processing in worker-service.ts\n');
} else if (badWindowObs.length > 0) {
console.log('⚠️ WARNING: Some observations still have incorrect timestamps!');
console.log(` - ${badWindowObs.length} observations still in bad window`);
console.log(' - Run fix-corrupted-timestamps.ts again or investigate manually\n');
} else {
console.log('ℹ️ No corrupted observations detected');
console.log(' - Either already fixed or corruption never occurred\n');
}
} catch (error) {
console.error('❌ Error:', error);
process.exit(1);
} finally {
db.close();
}
}
main();
+5 -4
@@ -227,19 +227,20 @@ function main() {
claudeSessionToSdkSession.set(sessionMeta.sessionId, existing.sdk_session_id);
} else if (existing && !existing.sdk_session_id) {
// Session exists but sdk_session_id is NULL, update it
const dbId = (db['db'].prepare('SELECT id FROM sdk_sessions WHERE claude_session_id = ?').get(sessionMeta.sessionId) as { id: number }).id;
db.updateSDKSessionId(dbId, syntheticSdkSessionId);
db['db'].prepare('UPDATE sdk_sessions SET sdk_session_id = ? WHERE claude_session_id = ?')
.run(syntheticSdkSessionId, sessionMeta.sessionId);
claudeSessionToSdkSession.set(sessionMeta.sessionId, syntheticSdkSessionId);
} else {
// Create new SDK session
const dbId = db.createSDKSession(
db.createSDKSession(
sessionMeta.sessionId,
sessionMeta.project,
'Imported from transcript XML'
);
// Update with synthetic SDK session ID
db.updateSDKSessionId(dbId, syntheticSdkSessionId);
db['db'].prepare('UPDATE sdk_sessions SET sdk_session_id = ? WHERE claude_session_id = ?')
.run(syntheticSdkSessionId, sessionMeta.sessionId);
claudeSessionToSdkSession.set(sessionMeta.sessionId, syntheticSdkSessionId);
}
+31 -13
@@ -1,52 +1,70 @@
import { ProcessManager } from '../services/process/ProcessManager.js';
import { getWorkerPort } from '../shared/worker-utils.js';
import { stdin } from 'process';
const command = process.argv[2];
const port = getWorkerPort();
const HOOK_STANDARD_RESPONSE = '{"continue": true, "suppressOutput": true}';
const isManualRun = stdin.isTTY;
async function main() {
switch (command) {
case 'start': {
const result = await ProcessManager.start(port);
if (result.success) {
console.log(`Worker started (PID: ${result.pid})`);
const date = new Date().toISOString().slice(0, 10);
console.log(`Logs: ~/.claude-mem/logs/worker-${date}.log`);
if (isManualRun) {
console.log(`Worker started (PID: ${result.pid})`);
const date = new Date().toISOString().slice(0, 10);
console.log(`Logs: ~/.claude-mem/logs/worker-${date}.log`);
} else {
console.log(HOOK_STANDARD_RESPONSE);
}
process.exit(0);
} else {
console.error(`Failed to start: ${result.error}`);
process.exit(1);
}
break;
}
case 'stop': {
await ProcessManager.stop();
console.log('Worker stopped');
if (isManualRun) {
console.log('Worker stopped');
} else {
console.log(HOOK_STANDARD_RESPONSE);
}
process.exit(0);
}
case 'restart': {
const result = await ProcessManager.restart(port);
if (result.success) {
console.log(`Worker restarted (PID: ${result.pid})`);
if (isManualRun) {
console.log(`Worker restarted (PID: ${result.pid})`);
} else {
console.log(HOOK_STANDARD_RESPONSE);
}
process.exit(0);
} else {
console.error(`Failed to restart: ${result.error}`);
process.exit(1);
}
break;
}
case 'status': {
const status = await ProcessManager.status();
if (status.running) {
console.log('Worker is running');
console.log(` PID: ${status.pid}`);
console.log(` Port: ${status.port}`);
console.log(` Uptime: ${status.uptime}`);
if (isManualRun) {
if (status.running) {
console.log('Worker is running');
console.log(` PID: ${status.pid}`);
console.log(` Port: ${status.port}`);
console.log(` Uptime: ${status.uptime}`);
} else {
console.log('Worker is not running');
}
} else {
console.log('Worker is not running');
console.log(HOOK_STANDARD_RESPONSE);
}
process.exit(0);
}
-68
@@ -1,68 +0,0 @@
/**
* Cleanup Hook - SessionEnd
*
* Pure HTTP client - sends data to worker, worker handles all database operations.
* This allows the hook to run under any runtime (Node.js or Bun) since it has no
* native module dependencies.
*/
import { stdin } from 'process';
import { ensureWorkerRunning, getWorkerPort } from '../shared/worker-utils.js';
import { HOOK_TIMEOUTS } from '../shared/hook-constants.js';
export interface SessionEndInput {
session_id: string;
reason: 'exit' | 'clear' | 'logout' | 'prompt_input_exit' | 'other';
}
/**
* Cleanup Hook Main Logic - Fire-and-forget HTTP client
*/
async function cleanupHook(input?: SessionEndInput): Promise<void> {
// Ensure worker is running before any other logic
await ensureWorkerRunning();
if (!input) {
throw new Error('cleanup-hook requires input from Claude Code');
}
const { session_id, reason } = input;
const port = getWorkerPort();
// Send to worker - worker handles finding session, marking complete, and stopping spinner
const response = await fetch(`http://127.0.0.1:${port}/api/sessions/complete`, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
claudeSessionId: session_id,
reason
}),
signal: AbortSignal.timeout(HOOK_TIMEOUTS.DEFAULT)
});
if (!response.ok) {
throw new Error(`Session cleanup failed: ${response.status}`);
}
console.log('{"continue": true, "suppressOutput": true}');
process.exit(0);
}
// Entry Point
if (stdin.isTTY) {
// Running manually
cleanupHook(undefined);
} else {
let input = '';
stdin.on('data', (chunk) => input += chunk);
stdin.on('end', async () => {
let parsed: SessionEndInput | undefined;
try {
parsed = input ? JSON.parse(input) : undefined;
} catch (error) {
throw new Error(`Failed to parse hook input: ${error instanceof Error ? error.message : String(error)}`);
}
await cleanupHook(parsed);
});
}
+1
@@ -65,6 +65,7 @@ async function summaryHook(input?: StopInput): Promise<void> {
});
if (!response.ok) {
console.log(STANDARD_HOOK_RESPONSE);
throw new Error(`Summary generation failed: ${response.status}`);
}
+42 -243
@@ -20,9 +20,11 @@ import {
export class SessionStore {
public db: Database;
constructor() {
ensureDir(DATA_DIR);
this.db = new Database(DB_PATH);
constructor(dbPath: string = DB_PATH) {
if (dbPath !== ':memory:') {
ensureDir(DATA_DIR);
}
this.db = new Database(dbPath);
// Ensure optimized settings
this.db.run('PRAGMA journal_mode = WAL');
@@ -928,11 +930,13 @@ export class SessionStore {
notes: string | null;
prompt_number: number | null;
created_at: string;
created_at_epoch: number;
} | null {
const stmt = this.db.prepare(`
SELECT
request, investigated, learned, completed, next_steps,
files_read, files_edited, notes, prompt_number, created_at
files_read, files_edited, notes, prompt_number, created_at,
created_at_epoch
FROM session_summaries
WHERE sdk_session_id = ?
ORDER BY created_at_epoch DESC
@@ -1037,80 +1041,20 @@ export class SessionStore {
return stmt.all(...sdkSessionIds) as any[];
}
/**
* Find active SDK session for a Claude session
*/
findActiveSDKSession(claudeSessionId: string): {
id: number;
sdk_session_id: string | null;
project: string;
worker_port: number | null;
} | null {
const stmt = this.db.prepare(`
SELECT id, sdk_session_id, project, worker_port
FROM sdk_sessions
WHERE claude_session_id = ? AND status = 'active'
LIMIT 1
`);
return stmt.get(claudeSessionId) || null;
}
/**
* Find any SDK session for a Claude session (active, failed, or completed)
* Get current prompt number by counting user_prompts for this session
* Replaces the prompt_counter column which is no longer maintained
*/
findAnySDKSession(claudeSessionId: string): { id: number } | null {
const stmt = this.db.prepare(`
SELECT id
FROM sdk_sessions
WHERE claude_session_id = ?
LIMIT 1
`);
return stmt.get(claudeSessionId) || null;
}
/**
* Reactivate an existing session
*/
reactivateSession(id: number, userPrompt: string): void {
const stmt = this.db.prepare(`
UPDATE sdk_sessions
SET status = 'active', user_prompt = ?, worker_port = NULL
WHERE id = ?
`);
stmt.run(userPrompt, id);
}
/**
* Increment prompt counter and return new value
*/
incrementPromptCounter(id: number): number {
const stmt = this.db.prepare(`
UPDATE sdk_sessions
SET prompt_counter = COALESCE(prompt_counter, 0) + 1
WHERE id = ?
`);
stmt.run(id);
getPromptNumberFromUserPrompts(claudeSessionId: string): number {
const result = this.db.prepare(`
SELECT prompt_counter FROM sdk_sessions WHERE id = ?
`).get(id) as { prompt_counter: number } | undefined;
return result?.prompt_counter || 1;
}
/**
* Get current prompt counter for a session
*/
getPromptCounter(id: number): number {
const result = this.db.prepare(`
SELECT prompt_counter FROM sdk_sessions WHERE id = ?
`).get(id) as { prompt_counter: number | null } | undefined;
return result?.prompt_counter || 0;
SELECT COUNT(*) as count FROM user_prompts WHERE claude_session_id = ?
`).get(claudeSessionId) as { count: number };
return result.count;
}
/**
@@ -1143,94 +1087,21 @@ export class SessionStore {
const now = new Date();
const nowEpoch = now.getTime();
// CRITICAL: INSERT OR IGNORE makes this idempotent
// First call (prompt #1): Creates new row
// Subsequent calls (prompt #2+): Ignored, returns existing ID
const stmt = this.db.prepare(`
// Pure INSERT OR IGNORE - no updates, no complexity
this.db.prepare(`
INSERT OR IGNORE INTO sdk_sessions
(claude_session_id, sdk_session_id, project, user_prompt, started_at, started_at_epoch, status)
VALUES (?, ?, ?, ?, ?, ?, 'active')
`);
`).run(claudeSessionId, claudeSessionId, project, userPrompt, now.toISOString(), nowEpoch);
const result = stmt.run(claudeSessionId, claudeSessionId, project, userPrompt, now.toISOString(), nowEpoch);
// If lastInsertRowid is 0, insert was ignored (session exists), so fetch existing ID
if (result.lastInsertRowid === 0 || result.changes === 0) {
// Session exists - UPDATE project and user_prompt if we have non-empty values
// This fixes the bug where SAVE hook creates session with empty project,
// then NEW hook can't update it because INSERT OR IGNORE skips the insert
if (project && project.trim() !== '') {
this.db.prepare(`
UPDATE sdk_sessions
SET project = ?, user_prompt = ?
WHERE claude_session_id = ?
`).run(project, userPrompt, claudeSessionId);
}
const selectStmt = this.db.prepare(`
SELECT id FROM sdk_sessions WHERE claude_session_id = ? LIMIT 1
`);
const existing = selectStmt.get(claudeSessionId) as { id: number } | undefined;
return existing!.id;
}
return result.lastInsertRowid as number;
// Return existing or new ID
const row = this.db.prepare('SELECT id FROM sdk_sessions WHERE claude_session_id = ?')
.get(claudeSessionId) as { id: number };
return row.id;
}
/**
* Update SDK session ID (captured from init message)
* Only updates if current sdk_session_id is NULL to avoid breaking foreign keys
* Returns true if update succeeded, false if skipped
*/
updateSDKSessionId(id: number, sdkSessionId: string): boolean {
const stmt = this.db.prepare(`
UPDATE sdk_sessions
SET sdk_session_id = ?
WHERE id = ? AND sdk_session_id IS NULL
`);
const result = stmt.run(sdkSessionId, id);
if (result.changes === 0) {
// This is expected behavior - sdk_session_id is already set
// Only log at debug level to avoid noise
logger.debug('DB', 'sdk_session_id already set, skipping update', {
sessionId: id,
sdkSessionId
});
return false;
}
return true;
}
/**
* Set worker port for a session
*/
setWorkerPort(id: number, port: number): void {
const stmt = this.db.prepare(`
UPDATE sdk_sessions
SET worker_port = ?
WHERE id = ?
`);
stmt.run(port, id);
}
/**
* Get worker port for a session
*/
getWorkerPort(id: number): number | null {
const stmt = this.db.prepare(`
SELECT worker_port
FROM sdk_sessions
WHERE id = ?
LIMIT 1
`);
const result = stmt.get(id) as { worker_port: number | null } | undefined;
return result?.worker_port || null;
}
/**
* Save a user prompt
@@ -1267,7 +1138,7 @@ export class SessionStore {
/**
* Store an observation (from SDK parsing)
* Auto-creates session record if it doesn't exist in the index
* Assumes session already exists (created by hook)
*/
storeObservation(
sdkSessionId: string,
@@ -1283,33 +1154,12 @@ export class SessionStore {
files_modified: string[];
},
promptNumber?: number,
discoveryTokens: number = 0
discoveryTokens: number = 0,
overrideTimestampEpoch?: number
): { id: number; createdAtEpoch: number } {
const now = new Date();
const nowEpoch = now.getTime();
// Ensure session record exists in the index (auto-create if missing)
const checkStmt = this.db.prepare(`
SELECT id FROM sdk_sessions WHERE sdk_session_id = ?
`);
const existingSession = checkStmt.get(sdkSessionId) as { id: number } | undefined;
if (!existingSession) {
// Auto-create session record if it doesn't exist
const insertSession = this.db.prepare(`
INSERT INTO sdk_sessions
(claude_session_id, sdk_session_id, project, started_at, started_at_epoch, status)
VALUES (?, ?, ?, ?, ?, 'active')
`);
insertSession.run(
sdkSessionId, // claude_session_id and sdk_session_id are the same
sdkSessionId,
project,
now.toISOString(),
nowEpoch
);
console.log(`[SessionStore] Auto-created session record for session_id: ${sdkSessionId}`);
}
// Use override timestamp if provided (for processing backlog messages with original timestamps)
const timestampEpoch = overrideTimestampEpoch ?? Date.now();
const timestampIso = new Date(timestampEpoch).toISOString();
const stmt = this.db.prepare(`
INSERT INTO observations
@@ -1331,19 +1181,19 @@ export class SessionStore {
JSON.stringify(observation.files_modified),
promptNumber || null,
discoveryTokens,
now.toISOString(),
nowEpoch
timestampIso,
timestampEpoch
);
return {
id: Number(result.lastInsertRowid),
createdAtEpoch: nowEpoch
createdAtEpoch: timestampEpoch
};
}
/**
* Store a session summary (from SDK parsing)
* Auto-creates session record if it doesn't exist in the index
* Assumes session already exists - will fail with FK error if not
*/
storeSummary(
sdkSessionId: string,
@@ -1357,33 +1207,12 @@ export class SessionStore {
notes: string | null;
},
promptNumber?: number,
discoveryTokens: number = 0
discoveryTokens: number = 0,
overrideTimestampEpoch?: number
): { id: number; createdAtEpoch: number } {
const now = new Date();
const nowEpoch = now.getTime();
// Ensure session record exists in the index (auto-create if missing)
const checkStmt = this.db.prepare(`
SELECT id FROM sdk_sessions WHERE sdk_session_id = ?
`);
const existingSession = checkStmt.get(sdkSessionId) as { id: number } | undefined;
if (!existingSession) {
// Auto-create session record if it doesn't exist
const insertSession = this.db.prepare(`
INSERT INTO sdk_sessions
(claude_session_id, sdk_session_id, project, started_at, started_at_epoch, status)
VALUES (?, ?, ?, ?, ?, 'active')
`);
insertSession.run(
sdkSessionId, // claude_session_id and sdk_session_id are the same
sdkSessionId,
project,
now.toISOString(),
nowEpoch
);
console.log(`[SessionStore] Auto-created session record for session_id: ${sdkSessionId}`);
}
// Use override timestamp if provided (for processing backlog messages with original timestamps)
const timestampEpoch = overrideTimestampEpoch ?? Date.now();
const timestampIso = new Date(timestampEpoch).toISOString();
const stmt = this.db.prepare(`
INSERT INTO session_summaries
@@ -1403,47 +1232,17 @@ export class SessionStore {
summary.notes,
promptNumber || null,
discoveryTokens,
now.toISOString(),
nowEpoch
timestampIso,
timestampEpoch
);
return {
id: Number(result.lastInsertRowid),
createdAtEpoch: nowEpoch
createdAtEpoch: timestampEpoch
};
}
/**
* Mark SDK session as completed
*/
markSessionCompleted(id: number): void {
const now = new Date();
const nowEpoch = now.getTime();
const stmt = this.db.prepare(`
UPDATE sdk_sessions
SET status = 'completed', completed_at = ?, completed_at_epoch = ?
WHERE id = ?
`);
stmt.run(now.toISOString(), nowEpoch, id);
}
/**
* Mark SDK session as failed
*/
markSessionFailed(id: number): void {
const now = new Date();
const nowEpoch = now.getTime();
const stmt = this.db.prepare(`
UPDATE sdk_sessions
SET status = 'failed', completed_at = ?, completed_at_epoch = ?
WHERE id = ?
`);
stmt.run(now.toISOString(), nowEpoch, id);
}
// REMOVED: cleanupOrphanedSessions - violates "EVERYTHING SHOULD SAVE ALWAYS"
// There's no such thing as an "orphaned" session. Sessions are created by hooks
+84
@@ -431,6 +431,16 @@ export class WorkerService {
// Initialize database (once, stays open)
await this.dbManager.initialize();
// Recover stuck messages from previous crashes
// Messages stuck in 'processing' state are reset to 'pending' for reprocessing
const { PendingMessageStore } = await import('./sqlite/PendingMessageStore.js');
const pendingStore = new PendingMessageStore(this.dbManager.getSessionStore().db, 3);
const STUCK_THRESHOLD_MS = 5 * 60 * 1000; // 5 minutes
const resetCount = pendingStore.resetStuckMessages(STUCK_THRESHOLD_MS);
if (resetCount > 0) {
logger.info('SYSTEM', `Recovered ${resetCount} stuck messages from previous session`, { thresholdMinutes: 5 });
}
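The reset performed here can be pictured as a pure function over message rows. A hedged sketch, assuming an epoch column records when work on a row last advanced (the actual `PendingMessageStore.resetStuckMessages` does this in SQL, and its column names may differ):

```typescript
interface PendingRow { id: number; status: string; updatedAtEpoch: number }

// Rows stuck in 'processing' longer than thresholdMs are returned to
// 'pending' so the next drain picks them up. Column names illustrative.
function resetStuck(rows: PendingRow[], thresholdMs: number, now = Date.now()): number {
  let count = 0;
  for (const row of rows) {
    if (row.status === 'processing' && now - row.updatedAtEpoch > thresholdMs) {
      row.status = 'pending';
      count++;
    }
  }
  return count;
}
```

With the 5-minute threshold used above, only rows that were mid-flight when a previous worker crashed get recycled; rows that are actively progressing stay untouched.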
// Initialize search services (requires initialized database)
const formattingService = new FormattingService();
const timelineService = new TimelineService();
@@ -468,6 +478,8 @@ export class WorkerService {
this.initializationCompleteFlag = true;
this.resolveInitialization();
logger.info('SYSTEM', 'Background initialization complete');
// Note: Auto-recovery of orphaned queues disabled - use /api/pending-queue/process endpoint instead
} catch (error) {
logger.error('SYSTEM', 'Background initialization failed', {}, error as Error);
// Don't resolve - let the promise remain pending so readiness check continues to fail
@@ -475,6 +487,78 @@ export class WorkerService {
}
}
/**
* Process pending session queues
* Starts SDK agents for sessions that have pending messages but no active processor
* @param sessionLimit Maximum number of sessions to start processing (default: 10)
* @returns Info about what was started
*/
async processPendingQueues(sessionLimit: number = 10): Promise<{
totalPendingSessions: number;
sessionsStarted: number;
sessionsSkipped: number;
startedSessionIds: number[];
}> {
const { PendingMessageStore } = await import('./sqlite/PendingMessageStore.js');
const pendingStore = new PendingMessageStore(this.dbManager.getSessionStore().db, 3);
const orphanedSessionIds = pendingStore.getSessionsWithPendingMessages();
const result = {
totalPendingSessions: orphanedSessionIds.length,
sessionsStarted: 0,
sessionsSkipped: 0,
startedSessionIds: [] as number[]
};
if (orphanedSessionIds.length === 0) {
return result;
}
logger.info('SYSTEM', `Processing up to ${sessionLimit} of ${orphanedSessionIds.length} pending session queues`);
// Process each session sequentially up to the limit
for (const sessionDbId of orphanedSessionIds) {
if (result.sessionsStarted >= sessionLimit) {
break;
}
try {
// Skip if session already has an active generator
const existingSession = this.sessionManager.getSession(sessionDbId);
if (existingSession?.generatorPromise) {
result.sessionsSkipped++;
continue;
}
// Initialize session and start SDK agent
const session = this.sessionManager.initializeSession(sessionDbId);
logger.info('SYSTEM', `Starting processor for session ${sessionDbId}`, {
project: session.project,
pendingCount: pendingStore.getPendingCount(sessionDbId)
});
// Start SDK agent (non-blocking)
session.generatorPromise = this.sdkAgent.startSession(session, this)
.finally(() => {
session.generatorPromise = null;
this.broadcastProcessingStatus();
});
result.sessionsStarted++;
result.startedSessionIds.push(sessionDbId);
// Small delay between sessions to avoid rate limiting
await new Promise(resolve => setTimeout(resolve, 100));
} catch (error) {
logger.warn('SYSTEM', `Failed to process session ${sessionDbId}`, {}, error as Error);
result.sessionsSkipped++;
}
}
return result;
}
/**
* Extract a specific section from instruction content
* Used by /api/instructions endpoint for progressive instruction loading
+1
@@ -31,6 +31,7 @@ export interface ActiveSession {
cumulativeInputTokens: number; // Track input tokens for discovery cost
cumulativeOutputTokens: number; // Track output tokens for discovery cost
pendingProcessingIds: Set<number>; // Track ALL message IDs yielded but not yet processed
earliestPendingTimestamp: number | null; // Original timestamp of earliest pending message (for accurate observation timestamps)
conversationHistory: ConversationMessage[]; // Shared conversation history for provider switching
currentProvider: 'claude' | 'gemini' | null; // Track which provider is currently running
}
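The new `earliestPendingTimestamp` field has a three-step lifecycle: set to the oldest queued timestamp as messages are yielded, captured before the SDK response is processed, and cleared once the batch completes. A minimal sketch of that lifecycle under illustrative names (`PendingBatch` and `enqueue` are not real types in the codebase):

```typescript
interface QueuedMessage { id: number; createdAtEpoch: number }

// Mirrors the earliestPendingTimestamp lifecycle on ActiveSession:
// track the oldest pending timestamp, hand it out once, then reset.
class PendingBatch {
  earliestPendingTimestamp: number | null = null;

  enqueue(msg: QueuedMessage): void {
    if (this.earliestPendingTimestamp === null || msg.createdAtEpoch < this.earliestPendingTimestamp) {
      this.earliestPendingTimestamp = msg.createdAtEpoch;
    }
  }

  // Capture before processing; clear so the next batch starts fresh.
  complete(): number | null {
    const original = this.earliestPendingTimestamp;
    this.earliestPendingTimestamp = null;
    return original;
  }
}
```

The captured value is what `SDKAgent` forwards to `storeObservation`/`storeSummary` as the timestamp override.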
-6
@@ -110,10 +110,4 @@ export class DatabaseManager {
return session;
}
/**
* Mark session as completed
*/
markSessionComplete(sessionDbId: number): void {
this.getSessionStore().markSessionCompleted(sessionDbId);
}
}
+16 -9
@@ -115,6 +115,9 @@ export class SDKAgent {
const discoveryTokens = (session.cumulativeInputTokens + session.cumulativeOutputTokens) - tokensBeforeResponse;
// Process response (empty or not) and mark messages as processed
// Capture earliest timestamp BEFORE processing (will be cleared after)
const originalTimestamp = session.earliestPendingTimestamp;
if (responseSize > 0) {
const truncatedResponse = responseSize > 100
? textContent.substring(0, 100) + '...'
@@ -124,8 +127,8 @@ export class SDKAgent {
promptNumber: session.lastPromptNumber
}, truncatedResponse);
// Parse and process response with discovery token delta
await this.processSDKResponse(session, textContent, worker, discoveryTokens);
// Parse and process response with discovery token delta and original timestamp
await this.processSDKResponse(session, textContent, worker, discoveryTokens, originalTimestamp);
} else {
// Empty response - still need to mark pending messages as processed
await this.markMessagesProcessed(session, worker);
@@ -145,8 +148,6 @@ export class SDKAgent {
duration: `${(sessionDuration / 1000).toFixed(1)}s`
});
this.dbManager.getSessionStore().markSessionCompleted(session.sessionDbId);
} catch (error: any) {
if (error.name === 'AbortError') {
logger.warn('SDK', 'Agent aborted', { sessionId: session.sessionDbId });
@@ -275,11 +276,12 @@ export class SDKAgent {
/**
* Process SDK response text (parse XML, save to database, sync to Chroma)
* @param discoveryTokens - Token cost for discovering this response (delta, not cumulative)
* @param originalTimestamp - Original epoch when message was queued (for backlog processing accuracy)
*
* Also captures assistant responses to shared conversation history for provider interop.
* This allows Gemini to see full context if provider is switched mid-session.
*/
private async processSDKResponse(session: ActiveSession, text: string, worker: any | undefined, discoveryTokens: number): Promise<void> {
private async processSDKResponse(session: ActiveSession, text: string, worker: any | undefined, discoveryTokens: number, originalTimestamp: number | null): Promise<void> {
// Add assistant response to shared conversation history for provider interop
if (text) {
session.conversationHistory.push({ role: 'assistant', content: text });
@@ -288,14 +290,15 @@ export class SDKAgent {
// Parse observations
const observations = parseObservations(text, session.claudeSessionId);
// Store observations
// Store observations with original timestamp (if processing backlog) or current time
for (const obs of observations) {
const { id: obsId, createdAtEpoch } = this.dbManager.getSessionStore().storeObservation(
session.claudeSessionId,
session.project,
obs,
session.lastPromptNumber,
discoveryTokens
discoveryTokens,
originalTimestamp ?? undefined
);
// Log observation details
@@ -365,14 +368,15 @@ export class SDKAgent {
// Parse summary
const summary = parseSummary(text, session.sessionDbId);
// Store summary
// Store summary with original timestamp (if processing backlog) or current time
if (summary) {
const { id: summaryId, createdAtEpoch } = this.dbManager.getSessionStore().storeSummary(
session.claudeSessionId,
session.project,
summary,
session.lastPromptNumber,
discoveryTokens
discoveryTokens,
originalTimestamp ?? undefined
);
// Log summary details
@@ -451,6 +455,9 @@ export class SDKAgent {
});
session.pendingProcessingIds.clear();
// Clear timestamp for next batch (will be set fresh from next message)
session.earliestPendingTimestamp = null;
// Clean up old processed messages (keep last 100 for UI display)
const deletedCount = pendingMessageStore.cleanupProcessed(100);
if (deletedCount > 0) {
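The timestamp plumbing in the SDKAgent diff above boils down to two small rules: an observation's `createdAtEpoch` prefers the original queue time when one was captured, and the session tracks the minimum `created_at_epoch` across its pending batch. A minimal sketch of both rules, using hypothetical helper names (the real logic lives inline in `SessionStore.storeObservation` and `SessionManager`):

```typescript
// Backlog messages keep their original queue time; live messages get "now".
// Mirrors the `originalTimestamp ?? undefined` passthrough in processSDKResponse.
function resolveCreatedAtEpoch(originalTimestamp: number | null | undefined): number {
  return originalTimestamp ?? Date.now();
}

// Mirrors the earliestPendingTimestamp tracking in SessionManager: the first
// pending message seeds the value, later messages can only move it earlier,
// and the session clears it to null after each processed batch.
function trackEarliest(current: number | null, incomingEpoch: number): number {
  return current === null ? incomingEpoch : Math.min(current, incomingEpoch);
}
```

Together these are what make backlog recovery produce observations dated to when the work actually happened rather than when the worker got around to processing it.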
+10 -1
View File
@@ -113,11 +113,12 @@ export class SessionManager {
pendingMessages: [],
abortController: new AbortController(),
generatorPromise: null,
lastPromptNumber: promptNumber || this.dbManager.getSessionStore().getPromptCounter(sessionDbId),
lastPromptNumber: promptNumber || this.dbManager.getSessionStore().getPromptNumberFromUserPrompts(dbSession.claude_session_id),
startTime: Date.now(),
cumulativeInputTokens: 0,
cumulativeOutputTokens: 0,
pendingProcessingIds: new Set(),
earliestPendingTimestamp: null,
conversationHistory: [], // Initialize empty - will be populated by agents
currentProvider: null // Will be set when generator starts
};
@@ -447,6 +448,14 @@ export class SessionManager {
// Track this message ID for completion marking
session.pendingProcessingIds.add(persistentMessage.id);
// Track earliest timestamp for accurate observation timestamps
// This ensures backlog messages get their original timestamps, not current time
if (session.earliestPendingTimestamp === null) {
session.earliestPendingTimestamp = persistentMessage.created_at_epoch;
} else {
session.earliestPendingTimestamp = Math.min(session.earliestPendingTimestamp, persistentMessage.created_at_epoch);
}
// Convert to PendingMessageWithId and yield
// Include original timestamp for accurate observation timestamps (survives stuck processing)
const message: PendingMessageWithId = {
@@ -51,6 +51,10 @@ export class DataRoutes extends BaseRouteHandler {
app.get('/api/processing-status', this.handleGetProcessingStatus.bind(this));
app.post('/api/processing', this.handleSetProcessing.bind(this));
// Pending queue management endpoints
app.get('/api/pending-queue', this.handleGetPendingQueue.bind(this));
app.post('/api/pending-queue/process', this.handleProcessPendingQueue.bind(this));
// Import endpoint
app.post('/api/import', this.handleImport.bind(this));
}
@@ -364,4 +368,58 @@ export class DataRoutes extends BaseRouteHandler {
stats
});
});
/**
* Get pending queue contents
* GET /api/pending-queue
* Returns all pending, processing, and failed messages with optional recently processed
*/
private handleGetPendingQueue = this.wrapHandler((req: Request, res: Response): void => {
const { PendingMessageStore } = require('../../../sqlite/PendingMessageStore.js');
const pendingStore = new PendingMessageStore(this.dbManager.getSessionStore().db, 3);
// Get queue contents (pending, processing, failed)
const queueMessages = pendingStore.getQueueMessages();
// Get recently processed (last 30 min, up to 20)
const recentlyProcessed = pendingStore.getRecentlyProcessed(20, 30);
// Get stuck message count (processing > 5 min)
const stuckCount = pendingStore.getStuckCount(5 * 60 * 1000);
// Get sessions with pending work
const sessionsWithPending = pendingStore.getSessionsWithPendingMessages();
res.json({
queue: {
messages: queueMessages,
totalPending: queueMessages.filter((m: { status: string }) => m.status === 'pending').length,
totalProcessing: queueMessages.filter((m: { status: string }) => m.status === 'processing').length,
totalFailed: queueMessages.filter((m: { status: string }) => m.status === 'failed').length,
stuckCount
},
recentlyProcessed,
sessionsWithPendingWork: sessionsWithPending
});
});
/**
* Process pending queue
* POST /api/pending-queue/process
* Body: { sessionLimit?: number } - defaults to 10
* Starts SDK agents for sessions with pending messages
*/
private handleProcessPendingQueue = this.wrapHandler(async (req: Request, res: Response): Promise<void> => {
const sessionLimit = Math.min(
Math.max(parseInt(req.body.sessionLimit, 10) || 10, 1),
100 // Max 100 sessions at once
);
const result = await this.workerService.processPendingQueues(sessionLimit);
res.json({
success: true,
...result
});
});
}
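The two new DataRoutes endpoints are the manual-recovery surface that replaces automatic startup recovery. A sketch of a client for them, assuming a worker base URL of `http://localhost:37777` (the actual port depends on your worker configuration); `clampSessionLimit` reproduces the server-side bounds check from `handleProcessPendingQueue`:

```typescript
const BASE = "http://localhost:37777"; // assumed worker port

// Same clamping the route applies: parse, default to 10, bound to 1..100.
// Note parseInt(...) || 10 means 0 and non-numeric input both fall back to 10.
function clampSessionLimit(raw: unknown): number {
  const n = parseInt(String(raw), 10) || 10;
  return Math.min(Math.max(n, 1), 100);
}

// Inspect the queue: pending/processing/failed counts plus stuck messages.
async function inspectQueue() {
  const res = await fetch(`${BASE}/api/pending-queue`);
  const { queue, sessionsWithPendingWork } = await res.json();
  console.log(`pending=${queue.totalPending} stuck=${queue.stuckCount}`);
  return sessionsWithPendingWork;
}

// Explicitly trigger recovery for up to `sessionLimit` sessions.
async function triggerRecovery(sessionLimit = 10) {
  const res = await fetch(`${BASE}/api/pending-queue/process`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ sessionLimit }),
  });
  return res.json();
}
```

Because recovery is no longer automatic, a typical flow is `inspectQueue()` to confirm there is stuck work, then `triggerRecovery()` to reprocess it deliberately.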
@@ -35,7 +35,6 @@ export class SessionRoutes extends BaseRouteHandler {
super();
this.completionHandler = new SessionCompletionHandler(
sessionManager,
dbManager,
eventBroadcaster
);
}
@@ -144,7 +143,6 @@ export class SessionRoutes extends BaseRouteHandler {
app.post('/api/sessions/init', this.handleSessionInitByClaudeId.bind(this));
app.post('/api/sessions/observations', this.handleObservationsByClaudeId.bind(this));
app.post('/api/sessions/summarize', this.handleSummarizeByClaudeId.bind(this));
app.post('/api/sessions/complete', this.handleSessionCompleteByClaudeId.bind(this));
}
/**
@@ -345,7 +343,7 @@ export class SessionRoutes extends BaseRouteHandler {
// Get or create session
const sessionDbId = store.createSDKSession(claudeSessionId, '', '');
const promptNumber = store.getPromptCounter(sessionDbId);
const promptNumber = store.getPromptNumberFromUserPrompts(claudeSessionId);
// Privacy check: skip if user prompt was entirely private
const userPrompt = PrivacyCheckValidator.checkUserPromptPrivacy(
@@ -412,7 +410,7 @@ export class SessionRoutes extends BaseRouteHandler {
// Get or create session
const sessionDbId = store.createSDKSession(claudeSessionId, '', '');
const promptNumber = store.getPromptCounter(sessionDbId);
const promptNumber = store.getPromptNumberFromUserPrompts(claudeSessionId);
// Privacy check: skip if user prompt was entirely private
const userPrompt = PrivacyCheckValidator.checkUserPromptPrivacy(
@@ -449,31 +447,6 @@ export class SessionRoutes extends BaseRouteHandler {
res.json({ status: 'queued' });
});
/**
* Complete session by claudeSessionId (cleanup-hook uses this)
* POST /api/sessions/complete
* Body: { claudeSessionId }
*
* Marks session complete, stops SDK agent, broadcasts status
*/
private handleSessionCompleteByClaudeId = this.wrapHandler(async (req: Request, res: Response): Promise<void> => {
const { claudeSessionId } = req.body;
if (!claudeSessionId) {
return this.badRequest(res, 'Missing claudeSessionId');
}
const found = await this.completionHandler.completeByClaudeId(claudeSessionId);
if (!found) {
// No active session - nothing to clean up (may have already been completed)
res.json({ success: true, message: 'No active session found' });
return;
}
res.json({ success: true });
});
/**
* Initialize session by claudeSessionId (new-hook uses this)
* POST /api/sessions/init
@@ -499,8 +472,9 @@ export class SessionRoutes extends BaseRouteHandler {
// Step 1: Create/get SDK session (idempotent INSERT OR IGNORE)
const sessionDbId = store.createSDKSession(claudeSessionId, project, prompt);
// Step 2: Increment prompt counter
const promptNumber = store.incrementPromptCounter(sessionDbId);
// Step 2: Get next prompt number from user_prompts count
const currentCount = store.getPromptNumberFromUserPrompts(claudeSessionId);
const promptNumber = currentCount + 1;
// Step 3: Strip privacy tags from prompt
const cleanedPrompt = stripMemoryTagsFromPrompt(prompt);
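The `getPromptCounter` → `getPromptNumberFromUserPrompts` change above derives the prompt number from the count of saved `user_prompts` rows instead of a separately incremented counter column, so init computes `count + 1` for the incoming prompt. A sketch of the scheme with an in-memory stand-in for the table (names here are illustrative, not the real store API):

```typescript
// In-memory stand-in for the user_prompts table, keyed by claude_session_id.
const promptsBySession = new Map<string, string[]>();

function savePrompt(claudeSessionId: string, text: string): number {
  const prompts = promptsBySession.get(claudeSessionId) ?? [];
  prompts.push(text);
  promptsBySession.set(claudeSessionId, prompts);
  return prompts.length; // this prompt's number
}

// Mirrors getPromptNumberFromUserPrompts: the count of prompts saved so far.
// Observation/summary routes use this directly; init uses this value + 1.
function getPromptNumber(claudeSessionId: string): number {
  return promptsBySession.get(claudeSessionId)?.length ?? 0;
}
```

Deriving the number from the rows themselves removes a second source of truth, which is why the counter can no longer drift from the saved prompts.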
@@ -1,23 +1,20 @@
/**
* Session Completion Handler
*
* Consolidates session completion logic to eliminate duplication across
* three different completion endpoints (DELETE, POST by DB ID, POST by Claude ID).
* Consolidates session completion logic for manual session deletion/completion.
* Used by DELETE /api/sessions/:id and POST /api/sessions/:id/complete endpoints.
*
* All completion flows follow the same pattern:
* 1. Delete session from SessionManager (aborts SDK agent)
* 2. Mark session complete in database
* 3. Broadcast session completed event
* Completion flow:
* 1. Delete session from SessionManager (aborts SDK agent, cleans up in-memory state)
* 2. Broadcast session completed event (updates UI spinner)
*/
import { SessionManager } from '../SessionManager.js';
import { DatabaseManager } from '../DatabaseManager.js';
import { SessionEventBroadcaster } from '../events/SessionEventBroadcaster.js';
export class SessionCompletionHandler {
constructor(
private sessionManager: SessionManager,
private dbManager: DatabaseManager,
private eventBroadcaster: SessionEventBroadcaster
) {}
@@ -29,34 +26,7 @@ export class SessionCompletionHandler {
// Delete from session manager (aborts SDK agent)
await this.sessionManager.deleteSession(sessionDbId);
// Mark session complete in database
this.dbManager.markSessionComplete(sessionDbId);
// Broadcast session completed event
this.eventBroadcaster.broadcastSessionCompleted(sessionDbId);
}
/**
* Complete session by Claude session ID
* Used by POST /api/sessions/complete (cleanup-hook endpoint)
*
* @returns true if session was found and completed, false if no active session found
*/
async completeByClaudeId(claudeSessionId: string): Promise<boolean> {
const store = this.dbManager.getSessionStore();
// Find session by claudeSessionId
const session = store.findActiveSDKSession(claudeSessionId);
if (!session) {
// No active session - nothing to clean up (may have already been completed)
return false;
}
const sessionDbId = session.id;
// Complete using standard flow
await this.completeByDbId(sessionDbId);
return true;
}
}
+1 -1
View File
@@ -1,5 +1,5 @@
export const HOOK_TIMEOUTS = {
DEFAULT: 5000, // Standard HTTP timeout (up from 2000ms)
DEFAULT: 120000, // Standard HTTP timeout (up from 5000ms; accommodates long-running startup operations)
HEALTH_CHECK: 1000, // Worker health check (up from 500ms)
WORKER_STARTUP_WAIT: 1000,
WORKER_STARTUP_RETRIES: 15,
+18 -99
View File
@@ -1,10 +1,8 @@
import path from "path";
import { homedir } from "os";
import { spawnSync } from "child_process";
import { existsSync, writeFileSync, readFileSync, mkdirSync } from "fs";
import { readFileSync } from "fs";
import { logger } from "../utils/logger.js";
import { HOOK_TIMEOUTS, getTimeout } from "./hook-constants.js";
import { ProcessManager } from "../services/process/ProcessManager.js";
import { SettingsDefaultsManager } from "./SettingsDefaultsManager.js";
import { getWorkerRestartInstructions } from "../utils/error-messages.js";
@@ -96,123 +94,44 @@ async function getWorkerVersion(): Promise<string> {
/**
* Check if worker version matches plugin version
* If mismatch detected, restart the worker automatically
* Logs a warning if mismatch is detected
*/
async function ensureWorkerVersionMatches(): Promise<void> {
async function checkWorkerVersion(): Promise<void> {
const pluginVersion = getPluginVersion();
const workerVersion = await getWorkerVersion();
if (pluginVersion !== workerVersion) {
logger.info('SYSTEM', 'Worker version mismatch detected - restarting worker', {
logger.warn('SYSTEM', 'Worker version mismatch', {
pluginVersion,
workerVersion
workerVersion,
hint: 'Restart worker with: claude-mem worker restart'
});
// Give files time to sync before restart
await new Promise(resolve => setTimeout(resolve, getTimeout(HOOK_TIMEOUTS.PRE_RESTART_SETTLE_DELAY)));
// Restart the worker
await ProcessManager.restart(getWorkerPort());
// Give it a moment to start
await new Promise(resolve => setTimeout(resolve, 1000));
// Verify it's healthy
if (!await isWorkerHealthy()) {
throw new Error(`Worker failed to restart after version mismatch. Expected ${pluginVersion}, was running ${workerVersion}`);
}
}
}
/**
* Start the worker service using ProcessManager
* Handles both Unix (Bun) and Windows (compiled exe) platforms
*/
async function startWorker(): Promise<boolean> {
// Clean up legacy PM2 (one-time migration)
const dataDir = SettingsDefaultsManager.get('CLAUDE_MEM_DATA_DIR');
const pm2MigratedMarker = path.join(dataDir, '.pm2-migrated');
// Ensure data directory exists (may not exist on fresh install)
mkdirSync(dataDir, { recursive: true });
if (!existsSync(pm2MigratedMarker)) {
spawnSync('pm2', ['delete', 'claude-mem-worker'], { stdio: 'ignore' });
// Mark migration as complete
writeFileSync(pm2MigratedMarker, new Date().toISOString(), 'utf-8');
logger.debug('SYSTEM', 'PM2 cleanup completed and marked');
}
const port = getWorkerPort();
const result = await ProcessManager.start(port);
if (!result.success) {
logger.error('SYSTEM', 'Failed to start worker', {
platform: process.platform,
port,
error: result.error,
marketplaceRoot: MARKETPLACE_ROOT
});
}
return result.success;
}
/**
* Ensure worker service is running
* Checks health and auto-starts if not running
* Also ensures worker version matches plugin version
* Polls until worker is ready (assumes worker-cli.js start was called by hooks.json)
*/
export async function ensureWorkerRunning(): Promise<void> {
// Check if already healthy (will throw on fetch errors)
let healthy = false;
try {
healthy = await isWorkerHealthy();
} catch (error) {
// Worker not running or unreachable - continue to start it
healthy = false;
}
const maxRetries = 25; // 5 seconds total
const pollInterval = 200;
if (healthy) {
// Worker is healthy, but check if version matches
await ensureWorkerVersionMatches();
return;
}
// Try to start the worker
const started = await startWorker();
if (!started) {
const port = getWorkerPort();
throw new Error(
getWorkerRestartInstructions({
port,
customPrefix: `Worker service failed to start on port ${port}.`
})
);
}
// Wait for worker to become responsive after starting
// Try up to 5 times with 500ms delays (2.5 seconds total)
for (let i = 0; i < 5; i++) {
await new Promise(resolve => setTimeout(resolve, 500));
for (let i = 0; i < maxRetries; i++) {
try {
if (await isWorkerHealthy()) {
await ensureWorkerVersionMatches();
await checkWorkerVersion(); // logs warning on mismatch, doesn't restart
return;
}
} catch (error) {
// Continue trying
} catch {
// Continue polling
}
await new Promise(r => setTimeout(r, pollInterval));
}
// Worker started but isn't responding
const port = getWorkerPort();
logger.error('SYSTEM', 'Worker started but not responding to health checks');
throw new Error(
getWorkerRestartInstructions({
port,
customPrefix: `Worker service started but is not responding on port ${port}.`
})
);
throw new Error(getWorkerRestartInstructions({
port: getWorkerPort(),
customPrefix: 'Worker did not become ready within 5 seconds.'
}));
}
+106
View File
@@ -0,0 +1,106 @@
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { SessionStore } from '../src/services/sqlite/SessionStore.js';
describe('SessionStore', () => {
let store: SessionStore;
beforeEach(() => {
store = new SessionStore(':memory:');
});
afterEach(() => {
store.close();
});
it('should correctly count user prompts', () => {
const claudeId = 'claude-session-1';
store.createSDKSession(claudeId, 'test-project', 'initial prompt');
// Should be 0 initially
expect(store.getPromptNumberFromUserPrompts(claudeId)).toBe(0);
// Save prompt 1
store.saveUserPrompt(claudeId, 1, 'First prompt');
expect(store.getPromptNumberFromUserPrompts(claudeId)).toBe(1);
// Save prompt 2
store.saveUserPrompt(claudeId, 2, 'Second prompt');
expect(store.getPromptNumberFromUserPrompts(claudeId)).toBe(2);
// Save prompt for another session
store.createSDKSession('claude-session-2', 'test-project', 'initial prompt');
store.saveUserPrompt('claude-session-2', 1, 'Other prompt');
expect(store.getPromptNumberFromUserPrompts(claudeId)).toBe(2);
});
it('should store observation with timestamp override', () => {
const claudeId = 'claude-sess-obs';
store.createSDKSession(claudeId, 'test-project', 'initial prompt');
// createSDKSession seeds sdk_session_id with the claude_session_id
// ("VALUES (?, ?, ...)" passes claudeSessionId for both columns),
// so claudeId doubles as the sdkSessionId foreign key below.
const obs = {
type: 'discovery',
title: 'Test Obs',
subtitle: null,
facts: [],
narrative: 'Testing',
concepts: [],
files_read: [],
files_modified: []
};
const pastTimestamp = 1600000000000; // Some time in the past
const result = store.storeObservation(
claudeId, // sdkSessionId is same as claudeSessionId in createSDKSession
'test-project',
obs,
1,
0,
pastTimestamp
);
expect(result.createdAtEpoch).toBe(pastTimestamp);
const stored = store.getObservationById(result.id);
expect(stored).not.toBeNull();
expect(stored?.created_at_epoch).toBe(pastTimestamp);
// Verify ISO string matches
expect(new Date(stored!.created_at).getTime()).toBe(pastTimestamp);
});
it('should store summary with timestamp override', () => {
const claudeId = 'claude-sess-sum';
store.createSDKSession(claudeId, 'test-project', 'initial prompt');
const summary = {
request: 'Do something',
investigated: 'Stuff',
learned: 'Things',
completed: 'Done',
next_steps: 'More',
notes: null
};
const pastTimestamp = 1650000000000;
const result = store.storeSummary(
claudeId,
'test-project',
summary,
1,
0,
pastTimestamp
);
expect(result.createdAtEpoch).toBe(pastTimestamp);
const stored = store.getSummaryForSession(claudeId);
expect(stored).not.toBeNull();
expect(stored?.created_at_epoch).toBe(pastTimestamp);
});
});
+54
View File
@@ -0,0 +1,54 @@
import { Database } from 'bun:sqlite';
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
describe('Refactor Validation: SQL Updates', () => {
let db: Database;
beforeEach(() => {
db = new Database(':memory:');
// Minimal schema for sdk_sessions based on SessionStore.ts migration004
db.run(`
CREATE TABLE sdk_sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
claude_session_id TEXT UNIQUE NOT NULL,
sdk_session_id TEXT UNIQUE,
project TEXT NOT NULL,
user_prompt TEXT,
started_at TEXT,
started_at_epoch INTEGER,
completed_at TEXT,
completed_at_epoch INTEGER,
status TEXT DEFAULT 'active'
);
`);
});
afterEach(() => {
db.close();
});
it('should update sdk_session_id using direct SQL (replacing updateSDKSessionId)', () => {
// Setup initial state: A session without an sdk_session_id
const claudeId = 'claude-session-123';
const syntheticId = 'sdk-session-456';
db.prepare(`
INSERT INTO sdk_sessions (claude_session_id, project, started_at, started_at_epoch)
VALUES (?, ?, ?, ?)
`).run(claudeId, 'test-project', '2025-01-01T00:00:00Z', 1735689600000);
// Verify initial state
const before = db.prepare('SELECT sdk_session_id FROM sdk_sessions WHERE claude_session_id = ?').get(claudeId) as any;
expect(before.sdk_session_id).toBeNull();
// EXECUTE: The exact SQL statement from the refactor in import-xml-observations.ts
// Original code: db['db'].prepare('UPDATE sdk_sessions SET sdk_session_id = ? WHERE claude_session_id = ?').run(syntheticSdkSessionId, sessionMeta.sessionId);
const stmt = db.prepare('UPDATE sdk_sessions SET sdk_session_id = ? WHERE claude_session_id = ?');
stmt.run(syntheticId, claudeId);
// VERIFY: The update happened
const after = db.prepare('SELECT sdk_session_id FROM sdk_sessions WHERE claude_session_id = ?').get(claudeId) as any;
expect(after.sdk_session_id).toBe(syntheticId);
});
});