417acb0f81
* Add enforceable anti-pattern detection for try-catch abuse

PROBLEM:
- Overly broad try-catch blocks waste 10+ hours of debugging time
- Empty catch blocks silently swallow errors
- AI assistants use try-catch to paper over uncertainty instead of doing research

SOLUTION:
1. Created detect-error-handling-antipatterns.ts test
   - Detects empty catch blocks (45 CRITICAL found)
   - Detects catch without logging (45 CRITICAL total)
   - Detects large try blocks (>10 lines)
   - Detects generic catch without type checking
   - Detects catch-and-continue on critical paths
   - Exit code 1 if critical issues found

2. Updated CLAUDE.md with MANDATORY ERROR HANDLING RULES
   - 5-question pre-flight checklist before any try-catch
   - FORBIDDEN patterns with examples
   - ALLOWED patterns with examples
   - Meta-rule: UNCERTAINTY TRIGGERS RESEARCH, NOT TRY-CATCH
   - Critical path protection list

3. Created comprehensive try-catch audit report
   - Documents all 96 try-catch blocks in worker service
   - Identifies critical issue at worker-service.ts:748-750
   - Categorizes patterns and provides recommendations

This is enforceable via test, not just instructions that can be ignored.

Current state: 163 anti-patterns detected (45 critical, 47 high, 71 medium)
Next: Fix critical issues identified by test

🤖 Generated with Claude Code

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: add logging to 5 critical empty catch blocks (Wave 1)

Wave 1 of error handling cleanup - fixing empty catch blocks that silently swallow errors without any trace.

Fixed files:
- src/bin/import-xml-observations.ts:80 - Log skipped invalid JSON
- src/utils/bun-path.ts:33 - Log when bun not in PATH
- src/utils/cursor-utils.ts:44 - Log failed registry reads
- src/utils/cursor-utils.ts:149 - Log corrupt MCP config
- src/shared/worker-utils.ts:128 - Log failed health checks

All catch blocks now have proper logging with context and error details.
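The detector's core empty-catch check can be sketched roughly as follows. This is a minimal illustration, not the actual detect-error-handling-antipatterns.ts test; the function name and regex are assumptions, and a production detector would walk the AST rather than pattern-match source text:

```typescript
// Minimal sketch: flag catch blocks whose body is only whitespace.
// Illustrative only -- the real test also checks logging, try-block size, etc.
function findEmptyCatches(source: string): number {
  // Matches `catch {}` and `catch (e) {}` with any whitespace inside the braces.
  const emptyCatch = /catch\s*(?:\([^)]*\)\s*)?\{\s*\}/g;
  return (source.match(emptyCatch) ?? []).length;
}
```

A regex pass like this misses comment-only bodies and unusual formatting, which is why an AST-based check is the sturdier choice for an enforceable test.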
Progress: 41 → 39 CRITICAL issues remaining

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: add logging to promise catches on critical paths (Wave 2)

Wave 2 of error handling cleanup - fixing empty promise catch handlers that silently swallow errors on critical code paths. These are the patterns that caused the 10-hour debugging session.

Fixed empty promise catches:
- worker-service.ts:642 - Background initialization failures
- SDKAgent.ts:372,446 - Session processor errors
- GeminiAgent.ts:408,475 - Finalization failures
- OpenRouterAgent.ts:451,518 - Finalization failures
- SessionManager.ts:289 - Generator promise failures

Added justification comments to catch-and-continue blocks:
- worker-service.ts:68 - PID file removal (cleanup, non-critical)
- worker-service.ts:130 - Cursor context update (non-critical)

All promise rejection handlers now log errors with context, preventing silent failures that were nearly impossible to debug.

Note: The anti-pattern detector only tracks try-catch blocks, not standalone promise chains. These fixes address the root cause of the original 10-hour debugging session even though the detector count remains unchanged.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: add logging and documentation to error handling patterns (Wave 3)

Wave 3 of error handling cleanup - comprehensive review and fixes for remaining critical issues identified by the anti-pattern detector.
Changes organized by severity:

**Wave 3.1: Fixed 2 EMPTY_CATCH blocks**
- worker-service.ts:162 - Health check polling now logs failures
- worker-service.ts:610 - Process cleanup logs failures

**Wave 3.2: Reviewed 12 CATCH_AND_CONTINUE patterns**
- Verified all are correct (log errors AND exit/return HTTP errors)
- Added justification comment to session recovery (line 829)
- All patterns properly notify callers of failures

**Wave 3.3: Fixed 29 NO_LOGGING_IN_CATCH issues**

Added logging to 16 catch blocks:
- UI layer: useSettings.ts, useContextPreview.ts (console logging)
- Servers: mcp-server.ts health checks and tool execution
- Worker: version fetch, cleanup, config corruption
- Routes: error handler, session recovery, settings validation
- Services: branch checkout, timeline queries

Documented 13 intentional exceptions with comments explaining why:
- Hot paths (port checks, process checks in tight loops)
- Error accumulation (transcript parser collects for batch retrieval)
- Special cases (logger can't log its own failures)
- Fallback parsing (JSON parse in optional data structures)

All changes follow error handling guidelines from CLAUDE.md:
- Appropriate log levels (error/warn/debug)
- Context objects with relevant details
- Descriptive messages explaining failures
- Error extraction pattern for Error instances

Progress: 41 → 29 detector warnings

Remaining warnings are conservative flags on verified-correct patterns (catch-and-continue blocks that properly log + notify callers).

Build verified successful. All error handling now provides visibility for debugging while avoiding excessive logging on hot paths.
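The "error extraction pattern" referenced above is the standard narrowing guard for `unknown` catch values (and the one this repo's import script uses in its own catch blocks); a minimal sketch, with an illustrative helper name:

```typescript
// Catch variables are `unknown` in modern TypeScript, so narrow before logging.
function errorMessage(e: unknown): string {
  return e instanceof Error ? e.message : String(e);
}

// Typical use inside a catch block (logger call is illustrative):
// logger.warn('CONTEXT', 'Operation failed', { error: errorMessage(e) });
```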
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* feat: add queue:clear command to remove failed messages

Added functionality to clear failed messages from the observation queue:

**Changes:**
- PendingMessageStore: Added clearFailed() method to delete failed messages
- DataRoutes: Added DELETE /api/pending-queue/failed endpoint
- CLI: Created scripts/clear-failed-queue.ts for interactive queue clearing
- package.json: Added npm run queue:clear script

**Usage:**
npm run queue:clear            # Interactive - prompts for confirmation
npm run queue:clear -- --force # Non-interactive - clears without prompt

Failed messages are observations that exceeded max retry count. They remain in the queue for debugging but won't be processed. This command removes them to clean up the queue.

Works alongside existing queue:check and queue:process commands to provide complete queue management capabilities.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* feat: add --all flag to queue:clear for complete queue reset

Extended queue clearing functionality to support clearing all messages, not just failed ones.

**Changes:**
- PendingMessageStore: Added clearAll() method to clear pending, processing, and failed
- DataRoutes: Added DELETE /api/pending-queue/all endpoint
- clear-failed-queue.ts: Added --all flag to clear everything
- Updated help text and UI to distinguish between failed-only and all-clear modes

**Usage:**
npm run queue:clear                  # Clear failed only (interactive)
npm run queue:clear -- --all         # Clear ALL messages (interactive)
npm run queue:clear -- --all --force # Clear all without confirmation

The --all flag provides a complete queue reset, removing pending, processing, and failed messages. Useful when you want a fresh start or need to cancel stuck sessions.
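The two clearing modes can be modeled in memory like this. A simplified sketch only: the real clearFailed()/clearAll() operate on the SQLite-backed PendingMessageStore, and these types and the function name are illustrative:

```typescript
type QueueStatus = 'pending' | 'processing' | 'failed';
interface QueueMessage { id: number; status: QueueStatus; }

// Failed-only clear keeps in-flight work; the --all mode wipes everything.
function clearQueue(messages: QueueMessage[], all: boolean): QueueMessage[] {
  return all ? [] : messages.filter(m => m.status !== 'failed');
}
```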
🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* feat: add comprehensive documentation for session ID architecture and validation tests

* feat: add logs viewer with clear functionality to UI

- Add LogsRoutes API endpoint for fetching and clearing worker logs
- Create LogsModal component with auto-refresh and clear button
- Integrate logs viewer button into Header component
- Add comprehensive CSS styling for logs modal
- Logs accessible via new document icon button in header

Logs viewer features:
- Display last 1000 lines of current day's log file
- Auto-refresh toggle (2s interval)
- Clear logs button with confirmation
- Monospace font for readable log output
- Responsive modal design matching existing UI

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* refactor: redesign logs as Chrome DevTools-style console drawer

Major UX improvements to match Chrome DevTools console:
- Convert from modal to bottom drawer that slides up
- Move toggle button to bottom-left corner (floating button)
- Add draggable resize handle for height adjustment
- Use plain monospace font (SF Mono/Monaco/Consolas) instead of Monaspace
- Simplify controls with icon-only buttons
- Add Console tab UI matching DevTools aesthetic

Changes:
- Renamed LogsModal to LogsDrawer with drawer implementation
- Added resize functionality with mouse drag
- Removed logs button from header
- Added floating console toggle button in bottom-left
- Updated all CSS to match Chrome console styling
- Minimum height: 150px, maximum: window height - 100px

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: suppress /api/logs endpoint logging to reduce noise

Skip logging GET /api/logs requests in HTTP middleware to prevent log spam from auto-refresh polling (every 2s).
Keeps the auto-refresh feature functional while eliminating the repetitive log entries.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* refactor: enhance error handling guidelines with approved overrides for justified exceptions

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
390 lines
11 KiB
TypeScript
#!/usr/bin/env node
/**
 * Import XML observations back into the database
 * Parses actual_xml_only_with_timestamps.xml and inserts observations via SessionStore
 */

import { readFileSync, readdirSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';
import { SessionStore } from '../services/sqlite/SessionStore.js';
import { logger } from '../utils/logger.js';

interface ObservationData {
  type: string;
  title: string;
  subtitle: string;
  facts: string[];
  narrative: string;
  concepts: string[];
  files_read: string[];
  files_modified: string[];
}

interface SummaryData {
  request: string;
  investigated: string;
  learned: string;
  completed: string;
  next_steps: string;
  notes: string | null;
}

interface SessionMetadata {
  sessionId: string;
  project: string;
}

interface TimestampMapping {
  [timestamp: string]: SessionMetadata;
}

/**
 * Build a map of timestamp (rounded to second) -> session metadata by reading all transcript files
 * Since XML timestamps are rounded to seconds, we map by second
 */
function buildTimestampMap(): TimestampMapping {
  const transcriptDir = join(homedir(), '.claude', 'projects', '-Users-alexnewman-Scripts-claude-mem');
  const map: TimestampMapping = {};

  console.log(`Reading transcript files from ${transcriptDir}...`);

  const files = readdirSync(transcriptDir).filter(f => f.endsWith('.jsonl'));
  console.log(`Found ${files.length} transcript files`);

  for (const filename of files) {
    const filepath = join(transcriptDir, filename);
    const content = readFileSync(filepath, 'utf-8');
    const lines = content.split('\n').filter(l => l.trim());

    for (let index = 0; index < lines.length; index++) {
      const line = lines[index];
      try {
        const data = JSON.parse(line);
        const timestamp = data.timestamp;
        const sessionId = data.sessionId;
        const project = data.cwd;

        if (timestamp && sessionId) {
          // Round timestamp to second for matching with XML timestamps
          const roundedTimestamp = new Date(timestamp);
          roundedTimestamp.setMilliseconds(0);
          const key = roundedTimestamp.toISOString();

          // Only store first occurrence for each second (they're all the same session anyway)
          if (!map[key]) {
            map[key] = { sessionId, project };
          }
        }
      } catch (e) {
        logger.debug('IMPORT', 'Skipping invalid JSON line', {
          lineNumber: index + 1,
          filename,
          error: e instanceof Error ? e.message : String(e)
        });
      }
    }
  }

  console.log(`Built timestamp map with ${Object.keys(map).length} unique seconds`);
  return map;
}

/**
 * Parse XML text content and extract tag value
 */
function extractTag(xml: string, tagName: string): string {
  const regex = new RegExp(`<${tagName}>([\\s\\S]*?)</${tagName}>`, 'i');
  const match = xml.match(regex);
  return match ? match[1].trim() : '';
}

/**
 * Parse XML array tags (facts, concepts, files, etc.)
 */
function extractArrayTags(xml: string, containerTag: string, itemTag: string): string[] {
  const containerRegex = new RegExp(`<${containerTag}>([\\s\\S]*?)</${containerTag}>`, 'i');
  const containerMatch = xml.match(containerRegex);

  if (!containerMatch) {
    return [];
  }

  const containerContent = containerMatch[1];
  const itemRegex = new RegExp(`<${itemTag}>([\\s\\S]*?)</${itemTag}>`, 'gi');
  const items: string[] = [];
  let match;

  while ((match = itemRegex.exec(containerContent)) !== null) {
    items.push(match[1].trim());
  }

  return items;
}

/**
 * Parse an observation block from XML
 */
function parseObservation(xml: string): ObservationData | null {
  // Must be a complete observation block
  if (!xml.includes('<observation>') || !xml.includes('</observation>')) {
    return null;
  }

  try {
    const observation: ObservationData = {
      type: extractTag(xml, 'type'),
      title: extractTag(xml, 'title'),
      subtitle: extractTag(xml, 'subtitle'),
      facts: extractArrayTags(xml, 'facts', 'fact'),
      narrative: extractTag(xml, 'narrative'),
      concepts: extractArrayTags(xml, 'concepts', 'concept'),
      files_read: extractArrayTags(xml, 'files_read', 'file'),
      files_modified: extractArrayTags(xml, 'files_modified', 'file'),
    };

    // Validate required fields
    if (!observation.type || !observation.title) {
      return null;
    }

    return observation;
  } catch (e) {
    console.error('Error parsing observation:', e);
    return null;
  }
}

/**
 * Parse a summary block from XML
 */
function parseSummary(xml: string): SummaryData | null {
  // Must be a complete summary block
  if (!xml.includes('<summary>') || !xml.includes('</summary>')) {
    return null;
  }

  try {
    const summary: SummaryData = {
      request: extractTag(xml, 'request'),
      investigated: extractTag(xml, 'investigated'),
      learned: extractTag(xml, 'learned'),
      completed: extractTag(xml, 'completed'),
      next_steps: extractTag(xml, 'next_steps'),
      notes: extractTag(xml, 'notes') || null,
    };

    // Validate required fields
    if (!summary.request) {
      return null;
    }

    return summary;
  } catch (e) {
    console.error('Error parsing summary:', e);
    return null;
  }
}

/**
 * Extract timestamp from XML comment
 * Format: <!-- Block N | 2025-10-19 03:03:23 UTC -->
 */
function extractTimestamp(commentLine: string): string | null {
  const match = commentLine.match(/<!-- Block \d+ \| (.+?) -->/);
  if (match) {
    // Convert "2025-10-19 03:03:23 UTC" to ISO format
    const dateStr = match[1].replace(' UTC', '').replace(' ', 'T') + 'Z';
    return new Date(dateStr).toISOString();
  }
  return null;
}

/**
 * Main import function
 */
function main() {
  console.log('Starting XML observation import...\n');

  // Build timestamp map
  const timestampMap = buildTimestampMap();

  // Open database connection
  const db = new SessionStore();

  // Create SDK sessions for all unique Claude Code sessions
  console.log('\nCreating SDK sessions for imported data...');
  const claudeSessionToSdkSession = new Map<string, string>();

  for (const sessionMeta of Object.values(timestampMap)) {
    if (!claudeSessionToSdkSession.has(sessionMeta.sessionId)) {
      const syntheticSdkSessionId = `imported-${sessionMeta.sessionId}`;

      // Try to find existing session first
      const existingQuery = db['db'].prepare(`
        SELECT memory_session_id
        FROM sdk_sessions
        WHERE content_session_id = ?
      `);
      const existing = existingQuery.get(sessionMeta.sessionId) as { memory_session_id: string | null } | undefined;

      if (existing && existing.memory_session_id) {
        // Use existing SDK session ID
        claudeSessionToSdkSession.set(sessionMeta.sessionId, existing.memory_session_id);
      } else if (existing && !existing.memory_session_id) {
        // Session exists but memory_session_id is NULL, update it
        db['db'].prepare('UPDATE sdk_sessions SET memory_session_id = ? WHERE content_session_id = ?')
          .run(syntheticSdkSessionId, sessionMeta.sessionId);
        claudeSessionToSdkSession.set(sessionMeta.sessionId, syntheticSdkSessionId);
      } else {
        // Create new SDK session
        db.createSDKSession(
          sessionMeta.sessionId,
          sessionMeta.project,
          'Imported from transcript XML'
        );

        // Update with synthetic SDK session ID
        db['db'].prepare('UPDATE sdk_sessions SET memory_session_id = ? WHERE content_session_id = ?')
          .run(syntheticSdkSessionId, sessionMeta.sessionId);

        claudeSessionToSdkSession.set(sessionMeta.sessionId, syntheticSdkSessionId);
      }
    }
  }

  console.log(`Prepared ${claudeSessionToSdkSession.size} SDK sessions\n`);

  // Read XML file
  const xmlPath = join(process.cwd(), 'actual_xml_only_with_timestamps.xml');
  console.log(`Reading XML file: ${xmlPath}`);
  const xmlContent = readFileSync(xmlPath, 'utf-8');

  // Split into blocks by comment markers
  const blocks = xmlContent.split(/(?=<!-- Block \d+)/);
  console.log(`Found ${blocks.length} blocks in XML file\n`);

  let importedObs = 0;
  let importedSum = 0;
  let skipped = 0;
  let duplicateObs = 0;
  let duplicateSum = 0;
  let noSession = 0;

  for (const block of blocks) {
    if (!block.trim() || block.startsWith('<?xml') || block.startsWith('<transcript_extracts')) {
      continue;
    }

    // Extract timestamp from comment
    const timestampIso = extractTimestamp(block);
    if (!timestampIso) {
      skipped++;
      continue;
    }

    // Look up session metadata
    const sessionMeta = timestampMap[timestampIso];
    if (!sessionMeta) {
      noSession++;
      if (noSession <= 5) {
        console.log(`⚠️ No session found for timestamp: ${timestampIso}`);
      }
      skipped++;
      continue;
    }

    // Get SDK session ID
    const memorySessionId = claudeSessionToSdkSession.get(sessionMeta.sessionId);
    if (!memorySessionId) {
      skipped++;
      continue;
    }

    // Try parsing as observation first
    const observation = parseObservation(block);
    if (observation) {
      // Check for duplicate
      const existingObs = db['db'].prepare(`
        SELECT id FROM observations
        WHERE memory_session_id = ? AND title = ? AND subtitle = ? AND type = ?
      `).get(memorySessionId, observation.title, observation.subtitle, observation.type);

      if (existingObs) {
        duplicateObs++;
        continue;
      }

      try {
        db.storeObservation(
          memorySessionId,
          sessionMeta.project,
          observation
        );
        importedObs++;

        if (importedObs % 50 === 0) {
          console.log(`Imported ${importedObs} observations...`);
        }
      } catch (e) {
        console.error(`Error storing observation:`, e);
        skipped++;
      }
      continue;
    }

    // Try parsing as summary
    const summary = parseSummary(block);
    if (summary) {
      // Check for duplicate
      const existingSum = db['db'].prepare(`
        SELECT id FROM session_summaries
        WHERE memory_session_id = ? AND request = ? AND completed = ? AND learned = ?
      `).get(memorySessionId, summary.request, summary.completed, summary.learned);

      if (existingSum) {
        duplicateSum++;
        continue;
      }

      try {
        db.storeSummary(
          memorySessionId,
          sessionMeta.project,
          summary
        );
        importedSum++;

        if (importedSum % 10 === 0) {
          console.log(`Imported ${importedSum} summaries...`);
        }
      } catch (e) {
        console.error(`Error storing summary:`, e);
        skipped++;
      }
      continue;
    }

    // Neither observation nor summary - skip
    skipped++;
  }

  db.close();

  console.log('\n' + '='.repeat(60));
  console.log('Import Complete!');
  console.log('='.repeat(60));
  console.log(`✓ Imported: ${importedObs} observations`);
  console.log(`✓ Imported: ${importedSum} summaries`);
  console.log(`✓ Total: ${importedObs + importedSum} items`);
  console.log(`⊘ Skipped: ${skipped} blocks (not full observations or summaries)`);
  console.log(`⊘ Duplicates skipped: ${duplicateObs} observations, ${duplicateSum} summaries`);
  console.log(`⚠️ No session: ${noSession} blocks (timestamp not in transcripts)`);
  console.log('='.repeat(60));
}

// Run if executed directly
if (import.meta.url === `file://${process.argv[1]}`) {
  main();
}