Code quality: comprehensive nonsense audit cleanup (20 issues) (#400)
* fix: prevent initialization promise from resolving on failure

  Background initialization was resolving the promise even when it failed, causing the readiness check to incorrectly indicate the worker was ready. Now the promise stays pending on failure, ensuring /api/readiness continues returning 503 until initialization succeeds. Fixes critical issue #1 from nonsense audit.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: improve error handling in context inject and settings update routes

* Enhance error handling for ChromaDB failures in SearchManager

  - Introduced a flag to track ChromaDB failure states.
  - Updated logging messages to provide clearer feedback on ChromaDB initialization and failure.
  - Modified the response structure to inform users when semantic search is unavailable due to ChromaDB issues, including installation instructions for UVX/Python.

* refactor: remove deprecated silent-debug utility functions

* Enhance error handling and validation in hooks

  - Added validation for required fields in `summary-hook.ts` and `save-hook.ts` to ensure necessary inputs are provided before processing.
  - Improved error messages for missing `cwd` in `save-hook.ts` and `transcript_path` in `summary-hook.ts`.
  - Cleaned up code by removing unnecessary error handling logic and directly throwing errors when required fields are missing.
  - Updated binary file `mem-search.zip` to reflect changes in the plugin.

* fix: improve error handling in summary hook to ensure errors are not masked

* fix: add error handling for unknown message content format in transcript parser

* fix: log error when failing to notify worker of session end

* Refactor date formatting functions: move to shared module

  - Removed redundant date formatting functions from SearchManager.ts.
  - Consolidated date formatting logic into shared timeline-formatting.ts.
  - Updated functions to accept both ISO date strings and epoch milliseconds.

* Refactor tag stripping functions to extract shared logic

  - Introduced a new internal function `stripTagsInternal` to handle the common logic for stripping memory tags from both JSON and prompt content.
  - Updated `stripMemoryTagsFromJson` to utilize the new internal function, simplifying its implementation.
  - Modified `stripMemoryTagsFromPrompt` to also call `stripTagsInternal`, reducing code duplication and improving maintainability.
  - Removed redundant type checks and logging from both functions, as they now rely on the internal function for processing.

* Refactor settings validation in SettingsRoutes

  - Consolidated multiple individual setting validations into a single validateSettings method.
  - Updated handleUpdateSettings to use the new validation method for improved maintainability.
  - Each setting now has its validation logic encapsulated within validateSettings, ensuring a single source of truth for validation rules.

* fix: add error logging to ProcessManager.getPidInfo()

  Previously getPidInfo() returned null silently for three cases:

  1. File not found (expected - no action needed)
  2. JSON parse error (corrupted file - now logs warning)
  3. Type validation failure (malformed data - now logs warning)

  This fix adds warning logs for cases 2 and 3 to provide visibility into PID file corruption issues. Logs include context like parsed data structure or error message with file path.

* fix: remove overly defensive try-catch in SessionRoutes

  Remove unnecessary try-catch block that was masking potential errors when checking file paths for session-memory meta-observations. Property access on parsed JSON objects never throws - existing truthiness checks already safely handle undefined/null values.
  Issue #12 from nonsense audit: SessionRoutes catch-all exception masking

* fix: remove redundant try-catch from getWorkerPort()

  Simplified getWorkerPort() by removing the unnecessary try-catch wrapper. SettingsDefaultsManager.loadFromFile() already handles missing files by returning defaults, and .get() never throws - making the catch block completely redundant.

* refactor: eliminate ceremonial wrapper in hook-response.ts

  Replace buildHookResponse() function with direct constant export. Most hook responses were calling a function just to return the same constant object. Only SessionStart with context needs special handling.

  Changes:
  - Export STANDARD_HOOK_RESPONSE constant directly
  - Simplify createHookResponse() to only handle SessionStart special case
  - Update all hooks to use STANDARD_HOOK_RESPONSE instead of function call
  - Eliminate buildHookResponse() function with redundant branching

  Files modified:
  - src/hooks/hook-response.ts: Export constant, simplify function
  - src/hooks/new-hook.ts: Use STANDARD_HOOK_RESPONSE
  - src/hooks/save-hook.ts: Use STANDARD_HOOK_RESPONSE
  - src/hooks/summary-hook.ts: Use STANDARD_HOOK_RESPONSE

* fix: make getWorkerHost() consistent with getWorkerPort()

  - Use SettingsDefaultsManager.get('CLAUDE_MEM_DATA_DIR') for path resolution instead of hardcoded ~/.claude-mem (supports custom data directories)
  - Add caching to getWorkerHost() (same pattern as getWorkerPort())
  - Update clearPortCache() to also clear host cache
  - Both functions now have identical patterns: caching, consistent path resolution, and same error handling via SettingsDefaultsManager

* refactor: inline single-use timeout constants in ProcessManager

  Remove 6 timeout constants used only once each, inlining their values directly at the point of use. Following the YAGNI principle - constants should only exist when used multiple times.

  Removed constants:
  - PROCESS_STOP_TIMEOUT_MS (5000ms)
  - HEALTH_CHECK_TIMEOUT_MS (10000ms)
  - HEALTH_CHECK_INTERVAL_MS (200ms)
  - HEALTH_CHECK_FETCH_TIMEOUT_MS (1000ms)
  - PROCESS_EXIT_CHECK_INTERVAL_MS (100ms)
  - HTTP_SHUTDOWN_TIMEOUT_MS (2000ms)

* fix: replace overly broad path filter in HTTP logging middleware

  Replace `req.path.includes('.')` with explicit static file extension checking to prevent incorrectly skipping API endpoint logging.

  - Add `staticExtensions` array with legitimate asset types
  - Use `.endsWith()` matching instead of `.includes()`
  - API endpoints containing periods (if any) now logged correctly
  - Static assets (.js, .css, .svg, etc.) still skip logging as intended

* refactor: expand logger.formatTool() to handle all tool types

  Replace hard-coded tool formatting for 4 tools with comprehensive coverage:

  File operations (Read, Edit, Write, NotebookEdit):
  - Consolidated file_path handling for all file operations
  - Added notebook_path support for NotebookEdit
  - Shows filename only (not full path)

  Search tools (Glob, Grep):
  - Glob: shows pattern
  - Grep: shows pattern (truncated if > 30 chars)

  Network tools (WebFetch, WebSearch):
  - Shows URL or query (truncated if > 40 chars)

  Meta tools (Task, Skill, LSP):
  - Task: shows subagent_type or description
  - Skill: shows skill name
  - LSP: shows operation type

  This eliminates the "hard-coded 4 tools" limitation and provides meaningful log output for all tool types.

* fix: remove all truncation from logger.formatTool()

  Truncation hides critical debugging information. Show everything:

  - Bash: full command (was truncated at 50 chars)
  - File operations: full path (was showing filename only)
  - Grep: full pattern (was truncated at 30 chars)
  - WebFetch/WebSearch: full URL/query (was truncated at 40 chars)
  - Task: full description (was truncated at 30 chars)

  Logs exist to provide complete information. Never hide details.

* refactor: replace array indexing with regex capture for drive letter

  Use an explicit regex capture group to extract the Windows drive letter instead of assuming cwd[0] is always the drive letter. Safer and more explicit.
  - Changed cwd.match(/^[A-Z]:\\/i) to cwd.match(/^([A-Z]):\\/i)
  - Extract drive letter from driveMatch[1] instead of cwd[0]
  - Restructured control flow to avoid nested conditionals

* fix: return computed values from DataRoutes processing endpoint

  The handleSetProcessing endpoint was computing queueDepth and activeSessions but not including them in the response. This commit includes all computed values in the API response.

  - Return queueDepth and activeSessions in /api/processing response
  - Eliminates dead code pattern where values are computed but unused
  - API callers can now access these metrics

* fix: move error handling into SettingsDefaultsManager.loadFromFile()

  Wrap the entire loadFromFile() method in try-catch so it handles ALL error cases (missing file, corrupted JSON, permission errors, I/O failures) instead of forcing every caller to add redundant try-catch blocks.

  This follows the DRY principle: one function owns error handling, all callers stay simple and clean.

* Refactor hook response handling and optimize token estimation

  - Removed the HookType and HookResponse types and the createHookResponse function from hook-response.ts to simplify the response handling for hooks.
  - Introduced a standardized hook response for all hooks in hook-response.ts.
  - Moved the estimateTokens function from SearchManager.ts to timeline-formatting.ts for better reusability and clarity.
  - Cleaned up redundant estimateTokens function definitions in SearchManager.ts.

---------

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
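The initialization-promise fix described in the first commit above can be sketched roughly as follows. This is a minimal illustration of the pattern, not the actual WorkerService code; the class and method names here are invented for the example, and unlike the real fix (which re-throws after logging) this sketch swallows the error so the failure path is easy to exercise in isolation:

```typescript
// Sketch: resolve the initialization promise only on success. On failure the
// promise stays pending, so a readiness probe keeps reporting "not ready"
// (e.g. HTTP 503) instead of falsely going green.
class WorkerReadiness {
  private ready = false;
  private resolveInit!: () => void;

  // Awaiters of this promise are released only after a successful init.
  readonly initialized = new Promise<void>((resolve) => {
    this.resolveInit = resolve;
  });

  async runBackgroundInit(init: () => Promise<void>): Promise<void> {
    try {
      await init();
      this.ready = true;
      this.resolveInit(); // resolve ONLY on success
    } catch (error) {
      // Don't resolve here: the promise remains pending, and the readiness
      // endpoint can keep returning 503 until initialization succeeds.
      console.error('Background initialization failed:', error);
    }
  }

  isReady(): boolean {
    return this.ready;
  }
}
```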
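The tag-stripping refactor above might look something like this sketch; the `<claude-mem>` tag syntax and the function signatures are assumptions for illustration, not the real claude-mem implementation:

```typescript
// Sketch: one internal helper owns the stripping logic; both public entry
// points delegate to it instead of duplicating the regex.
// The <claude-mem>…</claude-mem> tag name is a placeholder assumption.
function stripTagsInternal(text: string): string {
  return text.replace(/<claude-mem>[\s\S]*?<\/claude-mem>/g, '').trim();
}

function stripMemoryTagsFromPrompt(prompt: string): string {
  return stripTagsInternal(prompt);
}

function stripMemoryTagsFromJson(jsonContent: string): string {
  return stripTagsInternal(jsonContent);
}
```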
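The logging-middleware filter change above can be sketched like this; the extension list and function name are illustrative, since the commit does not show the full array:

```typescript
// Sketch: skip request logging only for known static-asset extensions,
// matched with endsWith(), instead of skipping any path containing '.'.
const staticExtensions = ['.js', '.css', '.svg', '.png', '.ico', '.woff2', '.map'];

function shouldSkipLogging(path: string): boolean {
  return staticExtensions.some((ext) => path.endsWith(ext));
}
```

With the old `path.includes('.')` check, an endpoint such as `/api/v1.2/status` would have been silently excluded from the logs; with `endsWith()` it is logged, while `/assets/app.js` is still skipped.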
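The drive-letter bullets above amount to the following pattern (a simplified sketch; the surrounding control flow in the real code differs):

```typescript
// Sketch: extract the Windows drive letter through an explicit capture
// group rather than indexing cwd[0] after a successful match.
function getDriveLetter(cwd: string): string | null {
  const driveMatch = cwd.match(/^([A-Z]):\\/i);
  return driveMatch ? driveMatch[1] : null;
}
```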
File diff suppressed because one or more lines are too long
Binary file not shown.
@@ -1,229 +0,0 @@
#!/usr/bin/env node

/**
 * One-time script to extract tool handlers from mcp-server.ts into SearchManager.ts
 */

import { readFileSync, writeFileSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';

const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
const projectRoot = join(__dirname, '..');

const mcpServerPath = join(projectRoot, 'src/servers/mcp-server.ts');
const outputPath = join(projectRoot, 'src/services/worker/SearchManager.ts');

console.log('Reading mcp-server.ts...');
const content = readFileSync(mcpServerPath, 'utf-8');

// Extract just the sections we need by finding line numbers
// This is more reliable than parsing

// Extract tool handler bodies by finding each "handler: async (args: any) => {"
// and extracting until the matching closing brace

const extractHandlerBody = (content, startPattern) => {
  const lines = content.split('\n');
  const startIdx = lines.findIndex(line => line.includes(startPattern));

  if (startIdx === -1) return null;

  // Find the "handler: async (args: any) => {" line
  let handlerIdx = -1;
  for (let i = startIdx; i < Math.min(startIdx + 30, lines.length); i++) {
    if (lines[i].includes('handler: async (args: any) => {')) {
      handlerIdx = i;
      break;
    }
  }

  if (handlerIdx === -1) return null;

  // Extract the body by counting braces
  let braceCount = 0;
  let bodyLines = [];
  let started = false;

  for (let i = handlerIdx; i < lines.length; i++) {
    const line = lines[i];

    for (const char of line) {
      if (char === '{') {
        braceCount++;
        started = true;
      } else if (char === '}') {
        braceCount--;
      }
    }

    if (started) {
      bodyLines.push(line);
    }

    if (started && braceCount === 0) {
      break;
    }
  }

  // Remove the first line (handler wrapper) and last line (closing brace)
  if (bodyLines.length > 2) {
    bodyLines = bodyLines.slice(1, -1);
  }

  return bodyLines.join('\n');
};

// Tool name to search pattern mapping
const tools = {
  'search': "name: 'search'",
  'timeline': "name: 'timeline'",
  'decisions': "name: 'decisions'",
  'changes': "name: 'changes'",
  'how_it_works': "name: 'how_it_works'",
  'search_observations': "name: 'search_observations'",
  'search_sessions': "name: 'search_sessions'",
  'search_user_prompts': "name: 'search_user_prompts'",
  'find_by_concept': "name: 'find_by_concept'",
  'find_by_file': "name: 'find_by_file'",
  'find_by_type': "name: 'find_by_type'",
  'get_recent_context': "name: 'get_recent_context'",
  'get_context_timeline': "name: 'get_context_timeline'",
  'get_timeline_by_query': "name: 'get_timeline_by_query'"
};

console.log('Extracting tool handlers...');
const handlers = {};

for (const [toolName, pattern] of Object.entries(tools)) {
  console.log(`  Extracting ${toolName}...`);
  const body = extractHandlerBody(content, pattern);
  if (body) {
    handlers[toolName] = body;
    console.log(`    ✓ ${body.split('\n').length} lines`);
  } else {
    console.log(`    ✗ Not found`);
  }
}

console.log(`\nExtracted ${Object.keys(handlers).length}/${Object.keys(tools).length} handlers`);

// Now generate SearchManager.ts
console.log('\nGenerating SearchManager.ts...');

const methodBodies = Object.entries(handlers).map(([toolName, body]) => {
  // Convert tool name to camelCase method name
  const methodName = toolName.replace(/_([a-z])/g, (_, letter) => letter.toUpperCase());

  // Replace standalone function calls with class methods
  let processedBody = body
    .replace(/formatSearchTips\(\)/g, 'this.formatter.formatSearchTips()')
    .replace(/formatObservationIndex\(/g, 'this.formatter.formatObservationIndex(')
    .replace(/formatSessionIndex\(/g, 'this.formatter.formatSessionIndex(')
    .replace(/formatUserPromptIndex\(/g, 'this.formatter.formatUserPromptIndex(')
    .replace(/formatObservationResult\(/g, 'this.formatter.formatObservationResult(')
    .replace(/formatSessionResult\(/g, 'this.formatter.formatSessionResult(')
    .replace(/formatUserPromptResult\(/g, 'this.formatter.formatUserPromptResult(')
    .replace(/filterTimelineByDepth\(/g, 'this.timeline.filterByDepth(')
    .replace(/\bsearch\./g, 'this.sessionSearch.')
    .replace(/\bstore\./g, 'this.sessionStore.')
    .replace(/queryChroma\(/g, 'this.queryChroma(')
    .replace(/normalizeParams\(/g, 'this.normalizeParams(')
    .replace(/chromaClient/g, 'this.chromaSync');

  return `  /**
   * Tool handler: ${toolName}
   */
  async ${methodName}(args: any): Promise<any> {
${processedBody}
  }`;
}).join('\n\n');

const searchManagerContent = `/**
 * SearchManager - Core search orchestration for claude-mem
 * Extracted from mcp-server.ts to centralize business logic in Worker services
 *
 * This class contains all tool handler logic that was previously in the MCP server.
 * The MCP server now acts as a thin HTTP wrapper that calls these methods via HTTP.
 */

import { SessionSearch } from '../sqlite/SessionSearch.js';
import { SessionStore } from '../sqlite/SessionStore.js';
import { ChromaSync } from '../sync/ChromaSync.js';
import { FormattingService } from './FormattingService.js';
import { TimelineService, TimelineItem } from './TimelineService.js';
import { ObservationSearchResult, SessionSummarySearchResult, UserPromptSearchResult } from '../sqlite/types.js';
import { silentDebug } from '../../utils/silent-debug.js';

const COLLECTION_NAME = 'cm__claude-mem';

export class SearchManager {
  constructor(
    private sessionSearch: SessionSearch,
    private sessionStore: SessionStore,
    private chromaSync: ChromaSync,
    private formatter: FormattingService,
    private timeline: TimelineService
  ) {}

  /**
   * Query Chroma vector database via ChromaSync
   */
  private async queryChroma(
    query: string,
    limit: number,
    whereFilter?: Record<string, any>
  ): Promise<{ ids: number[]; distances: number[]; metadatas: any[] }> {
    return await this.chromaSync.queryChroma(query, limit, whereFilter);
  }

  /**
   * Helper to normalize query parameters from URL-friendly format
   * Converts comma-separated strings to arrays and flattens date params
   */
  private normalizeParams(args: any): any {
    const normalized: any = { ...args };

    // Parse comma-separated concepts into array
    if (normalized.concepts && typeof normalized.concepts === 'string') {
      normalized.concepts = normalized.concepts.split(',').map((s: string) => s.trim()).filter(Boolean);
    }

    // Parse comma-separated files into array
    if (normalized.files && typeof normalized.files === 'string') {
      normalized.files = normalized.files.split(',').map((s: string) => s.trim()).filter(Boolean);
    }

    // Parse comma-separated obs_type into array
    if (normalized.obs_type && typeof normalized.obs_type === 'string') {
      normalized.obs_type = normalized.obs_type.split(',').map((s: string) => s.trim()).filter(Boolean);
    }

    // Parse comma-separated type (for filterSchema) into array
    if (normalized.type && typeof normalized.type === 'string' && normalized.type.includes(',')) {
      normalized.type = normalized.type.split(',').map((s: string) => s.trim()).filter(Boolean);
    }

    // Flatten dateStart/dateEnd into dateRange object
    if (normalized.dateStart || normalized.dateEnd) {
      normalized.dateRange = {
        start: normalized.dateStart,
        end: normalized.dateEnd
      };
      delete normalized.dateStart;
      delete normalized.dateEnd;
    }

    return normalized;
  }

${methodBodies}
}
`;

writeFileSync(outputPath, searchManagerContent, 'utf-8');

console.log(`\n✅ SearchManager.ts generated at ${outputPath}`);
console.log(`   Total methods: ${Object.keys(handlers).length + 2} (${Object.keys(handlers).length} tools + queryChroma + normalizeParams)`);
console.log(`   File size: ${(searchManagerContent.length / 1024).toFixed(1)} KB`);
@@ -48,6 +48,8 @@ async function cleanupHook(input?: SessionEndInput): Promise<void> {
    }
  } catch (error: any) {
    // Worker might not be running - that's okay (non-critical)
    // But we should still log it for visibility
    console.error('[cleanup-hook] Failed to notify worker of session end:', error.message);
  }

  console.log('{"continue": true, "suppressOutput": true}');
@@ -1,72 +1,11 @@
export type HookType = 'SessionStart' | 'UserPromptSubmit' | 'PostToolUse' | 'Stop';

export interface HookResponseOptions {
  reason?: string;
  context?: string;
}

export interface HookResponse {
  continue?: boolean;
  suppressOutput?: boolean;
  stopReason?: string;
  hookSpecificOutput?: {
    hookEventName: 'SessionStart';
    additionalContext: string;
  };
}

function buildHookResponse(
  hookType: HookType,
  success: boolean,
  options: HookResponseOptions
): HookResponse {
  if (hookType === 'SessionStart') {
    if (success && options.context) {
      return {
        continue: true,
        suppressOutput: true,
        hookSpecificOutput: {
          hookEventName: 'SessionStart',
          additionalContext: options.context
        }
      };
    }

    return {
      continue: true,
      suppressOutput: true
    };
  }

  if (hookType === 'UserPromptSubmit' || hookType === 'PostToolUse') {
    return {
      continue: true,
      suppressOutput: true
    };
  }

  if (hookType === 'Stop') {
    return {
      continue: true,
      suppressOutput: true
    };
  }

  return {
    continue: success,
    suppressOutput: true,
    ...(options.reason && !success ? { stopReason: options.reason } : {})
  };
}

/**
 * Creates a standardized hook response using the HookTemplates system.
 * Standard hook response for all hooks.
 * Tells Claude Code to continue processing and suppress the hook's output.
 *
 * Note: SessionStart uses context-hook.ts which constructs its own response
 * with hookSpecificOutput for context injection.
 */
export function createHookResponse(
  hookType: HookType,
  success: boolean,
  options: HookResponseOptions = {}
): string {
  const response = buildHookResponse(hookType, success, options);
  return JSON.stringify(response);
}
export const STANDARD_HOOK_RESPONSE = JSON.stringify({
  continue: true,
  suppressOutput: true
});
@@ -1,5 +1,5 @@
import { stdin } from 'process';
import { createHookResponse } from './hook-response.js';
import { STANDARD_HOOK_RESPONSE } from './hook-response.js';
import { ensureWorkerRunning, getWorkerPort } from '../shared/worker-utils.js';
import { handleWorkerError } from '../shared/hook-error-handler.js';
import { handleFetchError } from './shared/error-handler.js';
@@ -61,7 +61,7 @@ async function newHook(input?: UserPromptSubmitInput): Promise<void> {
    // Check if prompt was entirely private (worker performs privacy check)
    if (initResult.skipped && initResult.reason === 'private') {
      console.error(`[new-hook] Session ${sessionDbId}, prompt #${promptNumber} (fully private - skipped)`);
      console.log(createHookResponse('UserPromptSubmit', true));
      console.log(STANDARD_HOOK_RESPONSE);
      return;
    }

@@ -97,7 +97,7 @@ async function newHook(input?: UserPromptSubmitInput): Promise<void> {
    handleWorkerError(error);
  }

  console.log(createHookResponse('UserPromptSubmit', true));
  console.log(STANDARD_HOOK_RESPONSE);
}

// Entry Point

@@ -7,7 +7,7 @@
 */

import { stdin } from 'process';
import { createHookResponse } from './hook-response.js';
import { STANDARD_HOOK_RESPONSE } from './hook-response.js';
import { logger } from '../utils/logger.js';
import { ensureWorkerRunning, getWorkerPort } from '../shared/worker-utils.js';
import { HOOK_TIMEOUTS } from '../shared/hook-constants.js';
@@ -43,6 +43,11 @@ async function saveHook(input?: PostToolUseInput): Promise<void> {
    workerPort: port
  });

  // Validate required fields before sending to worker
  if (!cwd) {
    throw new Error(`Missing cwd in PostToolUse hook input for session ${session_id}, tool ${tool_name}`);
  }

  try {
    // Send to worker - worker handles privacy check and database operations
    const response = await fetch(`http://127.0.0.1:${port}/api/sessions/observations`, {
@@ -53,13 +58,7 @@ async function saveHook(input?: PostToolUseInput): Promise<void> {
        tool_name,
        tool_input,
        tool_response,
        cwd: cwd || logger.happyPathError(
          'HOOK',
          'Missing cwd in PostToolUse hook input',
          undefined,
          { session_id, tool_name },
          ''
        )
        cwd
      }),
      signal: AbortSignal.timeout(HOOK_TIMEOUTS.DEFAULT)
    });
@@ -80,7 +79,7 @@ async function saveHook(input?: PostToolUseInput): Promise<void> {
    handleWorkerError(error);
  }

  console.log(createHookResponse('PostToolUse', true));
  console.log(STANDARD_HOOK_RESPONSE);
}

// Entry Point

@@ -10,7 +10,7 @@
 */

import { stdin } from 'process';
import { createHookResponse } from './hook-response.js';
import { STANDARD_HOOK_RESPONSE } from './hook-response.js';
import { logger } from '../utils/logger.js';
import { ensureWorkerRunning, getWorkerPort } from '../shared/worker-utils.js';
import { HOOK_TIMEOUTS } from '../shared/hook-constants.js';
@@ -39,16 +39,14 @@ async function summaryHook(input?: StopInput): Promise<void> {

  const port = getWorkerPort();

  // Validate required fields before processing
  if (!input.transcript_path) {
    throw new Error(`Missing transcript_path in Stop hook input for session ${session_id}`);
  }

  // Extract last user AND assistant messages from transcript
  const transcriptPath = input.transcript_path || logger.happyPathError(
    'HOOK',
    'Missing transcript_path in Stop hook input',
    undefined,
    { session_id },
    ''
  );
  const lastUserMessage = extractLastMessage(transcriptPath, 'user');
  const lastAssistantMessage = extractLastMessage(transcriptPath, 'assistant', true);
  const lastUserMessage = extractLastMessage(input.transcript_path, 'user');
  const lastAssistantMessage = extractLastMessage(input.transcript_path, 'assistant', true);

  logger.dataIn('HOOK', 'Stop: Requesting summary', {
    workerPort: port,
@@ -56,6 +54,8 @@ async function summaryHook(input?: StopInput): Promise<void> {
    hasLastAssistantMessage: !!lastAssistantMessage
  });

  let summaryError: Error | null = null;

  try {
    // Send to worker - worker handles privacy check and database operations
    const response = await fetch(`http://127.0.0.1:${port}/api/sessions/summarize`, {
@@ -81,9 +81,10 @@ async function summaryHook(input?: StopInput): Promise<void> {

    logger.debug('HOOK', 'Summary request sent successfully');
  } catch (error: any) {
    summaryError = error;
    handleWorkerError(error);
  } finally {
    // Stop processing spinner
    // Stop processing spinner (non-critical operation, errors are logged but don't block)
    try {
      const spinnerResponse = await fetch(`http://127.0.0.1:${port}/api/processing`, {
        method: 'POST',
@@ -99,7 +100,12 @@ async function summaryHook(input?: StopInput): Promise<void> {
    }
  }

  console.log(createHookResponse('Stop', true));
  // Re-throw summary error after cleanup to ensure it's not masked by finally block
  if (summaryError) {
    throw summaryError;
  }

  console.log(STANDARD_HOOK_RESPONSE);
}

// Entry Point

@@ -11,14 +11,6 @@ const PID_FILE = join(DATA_DIR, 'worker.pid');
const LOG_DIR = join(DATA_DIR, 'logs');
const MARKETPLACE_ROOT = join(homedir(), '.claude', 'plugins', 'marketplaces', 'thedotmack');

// Timeout constants
const PROCESS_STOP_TIMEOUT_MS = 5000;
const HEALTH_CHECK_TIMEOUT_MS = 10000;
const HEALTH_CHECK_INTERVAL_MS = 200;
const HEALTH_CHECK_FETCH_TIMEOUT_MS = 1000;
const PROCESS_EXIT_CHECK_INTERVAL_MS = 100;
const HTTP_SHUTDOWN_TIMEOUT_MS = 2000;

interface PidInfo {
  pid: number;
  port: number;
@@ -172,7 +164,7 @@ export class ProcessManager {
    }
  }

  static async stop(timeout: number = PROCESS_STOP_TIMEOUT_MS): Promise<boolean> {
  static async stop(timeout: number = 5000): Promise<boolean> {
    const info = this.getPidInfo();

    if (process.platform === 'win32') {
@@ -285,7 +277,7 @@ export class ProcessManager {
      // Send shutdown request
      const response = await fetch(`http://127.0.0.1:${port}/api/admin/shutdown`, {
        method: 'POST',
        signal: AbortSignal.timeout(HTTP_SHUTDOWN_TIMEOUT_MS)
        signal: AbortSignal.timeout(2000)
      });

      if (!response.ok) {
@@ -293,7 +285,7 @@ export class ProcessManager {
      }

      // Wait for worker to actually stop responding
      return await this.waitForWorkerDown(port, PROCESS_STOP_TIMEOUT_MS);
      return await this.waitForWorkerDown(port, 5000);
    } catch {
      // Worker not responding to HTTP - it may be dead or hung
      return false;
@@ -312,7 +304,7 @@ export class ProcessManager {
        signal: AbortSignal.timeout(500)
      });
      // Still responding, wait and retry
      await new Promise(resolve => setTimeout(resolve, PROCESS_EXIT_CHECK_INTERVAL_MS));
      await new Promise(resolve => setTimeout(resolve, 100));
    } catch {
      // Worker stopped responding - success
      return true;
@@ -331,10 +323,15 @@ export class ProcessManager {
      const parsed = JSON.parse(content);
      // Validate required fields have correct types
      if (typeof parsed.pid !== 'number' || typeof parsed.port !== 'number') {
        logger.warn('PROCESS', 'Malformed PID file: missing or invalid pid/port fields', {}, { parsed });
        return null;
      }
      return parsed as PidInfo;
    } catch {
    } catch (error) {
      logger.warn('PROCESS', 'Failed to read PID file', {}, {
        error: error instanceof Error ? error.message : String(error),
        path: PID_FILE
      });
      return null;
    }
  }
@@ -363,7 +360,7 @@ export class ProcessManager {
    }
  }

  private static async waitForHealth(pid: number, port: number, timeoutMs: number = HEALTH_CHECK_TIMEOUT_MS): Promise<{ success: boolean; pid?: number; error?: string }> {
  private static async waitForHealth(pid: number, port: number, timeoutMs: number = 10000): Promise<{ success: boolean; pid?: number; error?: string }> {
    const startTime = Date.now();
    const isWindows = process.platform === 'win32';
    // Increase timeout on Windows to account for slower process startup
@@ -381,7 +378,7 @@ export class ProcessManager {
      // Try readiness check (changed from /health to /api/readiness)
      try {
        const response = await fetch(`http://127.0.0.1:${port}/api/readiness`, {
          signal: AbortSignal.timeout(HEALTH_CHECK_FETCH_TIMEOUT_MS)
          signal: AbortSignal.timeout(1000)
        });
        if (response.ok) {
          return { success: true, pid };
@@ -390,7 +387,7 @@ export class ProcessManager {
        // Not ready yet, continue polling
      }

      await new Promise(resolve => setTimeout(resolve, HEALTH_CHECK_INTERVAL_MS));
      await new Promise(resolve => setTimeout(resolve, 200));
    }

    const timeoutMsg = isWindows
@@ -407,7 +404,7 @@ export class ProcessManager {
      if (!this.isProcessAlive(pid)) {
        return;
      }
      await new Promise(resolve => setTimeout(resolve, PROCESS_EXIT_CHECK_INTERVAL_MS));
      await new Promise(resolve => setTimeout(resolve, 100));
    }

    throw new Error('Process did not exit within timeout');
@@ -475,8 +475,7 @@ export class WorkerService {
|
||||
logger.info('SYSTEM', 'Background initialization complete');
|
||||
} catch (error) {
|
||||
logger.error('SYSTEM', 'Background initialization failed', {}, error as Error);
|
||||
// Still resolve to prevent hanging requests, but they'll see searchRoutes is null
|
||||
this.resolveInitialization();
|
||||
// Don't resolve - let the promise remain pending so readiness check continues to fail
|
||||
throw error;
|
||||
}
|
||||
}
|
||||
|
||||
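The `getPidInfo` fix above distinguishes three failure modes (missing file, corrupt JSON, malformed fields) and logs the two unexpected ones instead of returning `null` silently. A self-contained sketch of that validate-and-log pattern - the `readPidInfo` helper and its `warn` callback are illustrative stand-ins, not the project's actual API:

```typescript
// Sketch of the validate-and-log pattern from getPidInfo.
// `readPidInfo` and the injected `warn` logger are hypothetical names.
interface PidInfo {
  pid: number;
  port: number;
}

function readPidInfo(
  content: string | null,               // null simulates "file not found"
  warn: (msg: string) => void
): PidInfo | null {
  if (content === null) {
    // Case 1: file not found - expected, no log needed
    return null;
  }
  try {
    const parsed = JSON.parse(content);
    // Case 3: parsed but fields have wrong types - log and reject
    if (typeof parsed.pid !== 'number' || typeof parsed.port !== 'number') {
      warn('Malformed PID file: missing or invalid pid/port fields');
      return null;
    }
    return parsed as PidInfo;
  } catch (error) {
    // Case 2: corrupted JSON - log the parse error
    warn(`Failed to read PID file: ${error instanceof Error ? error.message : String(error)}`);
    return null;
  }
}
```

The caller still only sees `PidInfo | null`, but corruption now leaves a trace in the logs.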
@@ -14,7 +14,7 @@ import { FormattingService } from './FormattingService.js';
 import { TimelineService, TimelineItem } from './TimelineService.js';
 import { ObservationSearchResult, SessionSummarySearchResult, UserPromptSearchResult } from '../sqlite/types.js';
 import { logger } from '../../utils/logger.js';
-import { formatDate, formatTime, extractFirstFile, groupByDate } from '../../shared/timeline-formatting.js';
+import { formatDate, formatTime, extractFirstFile, groupByDate, estimateTokens } from '../../shared/timeline-formatting.js';

 const COLLECTION_NAME = 'cm__claude-mem';
 const RECENCY_WINDOW_DAYS = 90;
@@ -91,6 +91,7 @@ export class SearchManager {
     let observations: ObservationSearchResult[] = [];
     let sessions: SessionSummarySearchResult[] = [];
     let prompts: UserPromptSearchResult[] = [];
+    let chromaFailed = false;

     // Determine which types to query based on type filter
     const searchObservations = !type || type === 'observations';
@@ -181,17 +182,19 @@ export class SearchManager {
           logger.debug('SEARCH', 'ChromaDB found no matches (final result, no FTS5 fallback)', {});
         }
       } catch (chromaError: any) {
-        logger.debug('SEARCH', 'ChromaDB failed - returning empty results (FTS5 fallback removed)', { error: chromaError.message });
-        // Return empty results - no fallback
+        chromaFailed = true;
+        logger.debug('SEARCH', 'ChromaDB failed - semantic search unavailable', { error: chromaError.message });
+        logger.debug('SEARCH', 'Install UVX/Python to enable vector search', { url: 'https://docs.astral.sh/uv/getting-started/installation/' });
+        // Set empty results - will show error message to user
         observations = [];
         sessions = [];
         prompts = [];
       }
     }
-    // ChromaDB not initialized - return empty results (no fallback)
-    else {
-      logger.debug('SEARCH', 'ChromaDB not initialized - returning empty results (FTS5 fallback removed)', {});
+    // ChromaDB not initialized - mark as failed to show proper error message
+    else if (query) {
+      chromaFailed = true;
+      logger.debug('SEARCH', 'ChromaDB not initialized - semantic search unavailable', {});
+      logger.debug('SEARCH', 'Install UVX/Python to enable vector search', { url: 'https://docs.astral.sh/uv/getting-started/installation/' });
       observations = [];
       sessions = [];
@@ -212,6 +215,14 @@ export class SearchManager {
     }

     if (totalResults === 0) {
+      if (chromaFailed) {
+        return {
+          content: [{
+            type: 'text' as const,
+            text: `⚠️ Vector search failed - semantic search unavailable.\n\nTo enable semantic search:\n1. Install uv: https://docs.astral.sh/uv/getting-started/installation/\n2. Restart the worker: npm run worker:restart\n\nNote: You can still use filter-only searches (date ranges, types, files) without a query term.`
+          }]
+        };
+      }
       return {
         content: [{
           type: 'text' as const,
@@ -484,41 +495,6 @@ export class SearchManager {
       };
     }

-    // Format timeline (helper functions)
-    const formatDate = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        month: 'short',
-        day: 'numeric',
-        year: 'numeric'
-      });
-    };
-
-    const formatTime = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        hour: 'numeric',
-        minute: '2-digit',
-        hour12: true
-      });
-    };
-
-    const formatDateTime = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        month: 'short',
-        day: 'numeric',
-        hour: 'numeric',
-        minute: '2-digit',
-        hour12: true
-      });
-    };
-
-    const estimateTokens = (text: string | null): number => {
-      if (!text) return 0;
-      return Math.ceil(text.length / 4);
-    };
-
     // Format results
     const lines: string[] = [];
@@ -1603,41 +1579,6 @@ export class SearchManager {
       };
     }

-    // Helper functions matching context-hook.ts
-    const formatDate = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        month: 'short',
-        day: 'numeric',
-        year: 'numeric'
-      });
-    };
-
-    const formatTime = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        hour: 'numeric',
-        minute: '2-digit',
-        hour12: true
-      });
-    };
-
-    const formatDateTime = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        month: 'short',
-        day: 'numeric',
-        hour: 'numeric',
-        minute: '2-digit',
-        hour12: true
-      });
-    };
-
-    const estimateTokens = (text: string | null): number => {
-      if (!text) return 0;
-      return Math.ceil(text.length / 4);
-    };
-
     // Format results matching context-hook.ts exactly
     const lines: string[] = [];
@@ -1893,41 +1834,6 @@ export class SearchManager {
       };
     }

-    // Helper functions (reused from get_context_timeline)
-    const formatDate = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        month: 'short',
-        day: 'numeric',
-        year: 'numeric'
-      });
-    };
-
-    const formatTime = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        hour: 'numeric',
-        minute: '2-digit',
-        hour12: true
-      });
-    };
-
-    const formatDateTime = (epochMs: number): string => {
-      const date = new Date(epochMs);
-      return date.toLocaleString('en-US', {
-        month: 'short',
-        day: 'numeric',
-        hour: 'numeric',
-        minute: '2-digit',
-        hour12: true
-      });
-    };
-
-    const estimateTokens = (text: string | null): number => {
-      if (!text) return 0;
-      return Math.ceil(text.length / 4);
-    };
-
     // Format timeline (reused from get_context_timeline)
     const lines: string[] = [];

@@ -30,7 +30,9 @@ export function createMiddleware(
   // HTTP request/response logging
   middlewares.push((req: Request, res: Response, next: NextFunction) => {
     // Skip logging for static assets and health checks
-    if (req.path.startsWith('/health') || req.path === '/' || req.path.includes('.')) {
+    const staticExtensions = ['.html', '.js', '.css', '.svg', '.png', '.jpg', '.jpeg', '.webp', '.woff', '.woff2', '.ttf', '.eot'];
+    const isStaticAsset = staticExtensions.some(ext => req.path.endsWith(ext));
+    if (req.path.startsWith('/health') || req.path === '/' || isStaticAsset) {
       return next();
     }
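The middleware change above replaces `req.path.includes('.')` with an explicit extension allowlist. The difference matters because any dotted path contains a `.`, including versioned API routes, so the old check would have skipped logging them. A minimal standalone sketch of the new predicate (outside Express, plain strings in place of `req.path`):

```typescript
// Extension allowlist from the diff above: only paths ending in a known
// static-asset extension are treated as static.
const staticExtensions = ['.html', '.js', '.css', '.svg', '.png', '.jpg', '.jpeg', '.webp', '.woff', '.woff2', '.ttf', '.eot'];

function isStaticAsset(path: string): boolean {
  return staticExtensions.some(ext => path.endsWith(ext));
}
```

With the old check, `'/api/v1.2/stats'.includes('.')` is `true` and the request would have been silently skipped; the allowlist keeps it in the logs while still skipping `/assets/app.js`.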
@@ -276,7 +276,7 @@ export class DataRoutes extends BaseRouteHandler {
   const queueDepth = this.sessionManager.getTotalQueueDepth();
   const activeSessions = this.sessionManager.getActiveSessionCount();

-  res.json({ status: 'ok', isProcessing });
+  res.json({ status: 'ok', isProcessing, queueDepth, activeSessions });
 });

 /**
@@ -277,19 +277,14 @@ export class SessionRoutes extends BaseRouteHandler {
   // Skip meta-observations: file operations on session-memory files
   const fileOperationTools = new Set(['Edit', 'Write', 'Read', 'NotebookEdit']);
   if (fileOperationTools.has(tool_name) && tool_input) {
-    try {
-      const filePath = tool_input.file_path || tool_input.notebook_path;
-      if (filePath && filePath.includes('session-memory')) {
-        logger.debug('SESSION', 'Skipping meta-observation for session-memory file', {
-          tool_name,
-          file_path: filePath
-        });
-        res.json({ status: 'skipped', reason: 'session_memory_meta' });
-        return;
-      }
-    } catch (error) {
-      // If we can't parse tool_input, continue normally
-      logger.debug('SESSION', 'Could not check file_path for session-memory filter', { tool_name }, error);
+    const filePath = tool_input.file_path || tool_input.notebook_path;
+    if (filePath && filePath.includes('session-memory')) {
+      logger.debug('SESSION', 'Skipping meta-observation for session-memory file', {
+        tool_name,
+        file_path: filePath
+      });
+      res.json({ status: 'skipped', reason: 'session_memory_meta' });
+      return;
     }
   }
@@ -59,70 +59,8 @@ export class SettingsRoutes extends BaseRouteHandler {
  * Update environment settings (in ~/.claude-mem/settings.json) with validation
  */
 private handleUpdateSettings = this.wrapHandler((req: Request, res: Response): void => {
-  // Validate CLAUDE_MEM_CONTEXT_OBSERVATIONS
-  if (req.body.CLAUDE_MEM_CONTEXT_OBSERVATIONS) {
-    const obsCount = parseInt(req.body.CLAUDE_MEM_CONTEXT_OBSERVATIONS, 10);
-    if (isNaN(obsCount) || obsCount < 1 || obsCount > 200) {
-      res.status(400).json({
-        success: false,
-        error: 'CLAUDE_MEM_CONTEXT_OBSERVATIONS must be between 1 and 200'
-      });
-      return;
-    }
-  }
-
-  // Validate CLAUDE_MEM_WORKER_PORT
-  if (req.body.CLAUDE_MEM_WORKER_PORT) {
-    const port = parseInt(req.body.CLAUDE_MEM_WORKER_PORT, 10);
-    if (isNaN(port) || port < 1024 || port > 65535) {
-      res.status(400).json({
-        success: false,
-        error: 'CLAUDE_MEM_WORKER_PORT must be between 1024 and 65535'
-      });
-      return;
-    }
-  }
-
-  // Validate CLAUDE_MEM_WORKER_HOST (IP address or 0.0.0.0)
-  if (req.body.CLAUDE_MEM_WORKER_HOST) {
-    const host = req.body.CLAUDE_MEM_WORKER_HOST;
-    // Allow localhost variants and valid IP patterns
-    const validHostPattern = /^(127\.0\.0\.1|0\.0\.0\.0|localhost|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$/;
-    if (!validHostPattern.test(host)) {
-      res.status(400).json({
-        success: false,
-        error: 'CLAUDE_MEM_WORKER_HOST must be a valid IP address (e.g., 127.0.0.1, 0.0.0.0)'
-      });
-      return;
-    }
-  }
-
-  // Validate CLAUDE_MEM_LOG_LEVEL
-  if (req.body.CLAUDE_MEM_LOG_LEVEL) {
-    const validLevels = ['DEBUG', 'INFO', 'WARN', 'ERROR', 'SILENT'];
-    if (!validLevels.includes(req.body.CLAUDE_MEM_LOG_LEVEL.toUpperCase())) {
-      res.status(400).json({
-        success: false,
-        error: 'CLAUDE_MEM_LOG_LEVEL must be one of: DEBUG, INFO, WARN, ERROR, SILENT'
-      });
-      return;
-    }
-  }
-
-  // Validate CLAUDE_MEM_PYTHON_VERSION (must be valid Python version format)
-  if (req.body.CLAUDE_MEM_PYTHON_VERSION) {
-    const pythonVersionRegex = /^3\.\d{1,2}$/;
-    if (!pythonVersionRegex.test(req.body.CLAUDE_MEM_PYTHON_VERSION)) {
-      res.status(400).json({
-        success: false,
-        error: 'CLAUDE_MEM_PYTHON_VERSION must be in format "3.X" or "3.XX" (e.g., "3.13")'
-      });
-      return;
-    }
-  }
-
-  // Validate context settings
-  const validation = this.validateContextSettings(req.body);
+  // Validate all settings
+  const validation = this.validateSettings(req.body);
   if (!validation.valid) {
     res.status(400).json({
       success: false,
@@ -274,9 +212,51 @@ export class SettingsRoutes extends BaseRouteHandler {
 });

 /**
- * Validate context settings from request body
+ * Validate all settings from request body (single source of truth)
  */
-private validateContextSettings(settings: any): { valid: boolean; error?: string } {
+private validateSettings(settings: any): { valid: boolean; error?: string } {
+  // Validate CLAUDE_MEM_CONTEXT_OBSERVATIONS
+  if (settings.CLAUDE_MEM_CONTEXT_OBSERVATIONS) {
+    const obsCount = parseInt(settings.CLAUDE_MEM_CONTEXT_OBSERVATIONS, 10);
+    if (isNaN(obsCount) || obsCount < 1 || obsCount > 200) {
+      return { valid: false, error: 'CLAUDE_MEM_CONTEXT_OBSERVATIONS must be between 1 and 200' };
+    }
+  }
+
+  // Validate CLAUDE_MEM_WORKER_PORT
+  if (settings.CLAUDE_MEM_WORKER_PORT) {
+    const port = parseInt(settings.CLAUDE_MEM_WORKER_PORT, 10);
+    if (isNaN(port) || port < 1024 || port > 65535) {
+      return { valid: false, error: 'CLAUDE_MEM_WORKER_PORT must be between 1024 and 65535' };
+    }
+  }
+
+  // Validate CLAUDE_MEM_WORKER_HOST (IP address or 0.0.0.0)
+  if (settings.CLAUDE_MEM_WORKER_HOST) {
+    const host = settings.CLAUDE_MEM_WORKER_HOST;
+    // Allow localhost variants and valid IP patterns
+    const validHostPattern = /^(127\.0\.0\.1|0\.0\.0\.0|localhost|\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})$/;
+    if (!validHostPattern.test(host)) {
+      return { valid: false, error: 'CLAUDE_MEM_WORKER_HOST must be a valid IP address (e.g., 127.0.0.1, 0.0.0.0)' };
+    }
+  }
+
+  // Validate CLAUDE_MEM_LOG_LEVEL
+  if (settings.CLAUDE_MEM_LOG_LEVEL) {
+    const validLevels = ['DEBUG', 'INFO', 'WARN', 'ERROR', 'SILENT'];
+    if (!validLevels.includes(settings.CLAUDE_MEM_LOG_LEVEL.toUpperCase())) {
+      return { valid: false, error: 'CLAUDE_MEM_LOG_LEVEL must be one of: DEBUG, INFO, WARN, ERROR, SILENT' };
+    }
+  }
+
+  // Validate CLAUDE_MEM_PYTHON_VERSION (must be valid Python version format)
+  if (settings.CLAUDE_MEM_PYTHON_VERSION) {
+    const pythonVersionRegex = /^3\.\d{1,2}$/;
+    if (!pythonVersionRegex.test(settings.CLAUDE_MEM_PYTHON_VERSION)) {
+      return { valid: false, error: 'CLAUDE_MEM_PYTHON_VERSION must be in format "3.X" or "3.XX" (e.g., "3.13")' };
+    }
+  }
+
   // Validate boolean string values
   const booleanSettings = [
     'CLAUDE_MEM_CONTEXT_SHOW_READ_TOKENS',
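The consolidated validator above returns a `{ valid, error }` result where the first failing rule wins, instead of five copies of `res.status(400).json(...)`. A reduced standalone sketch with two of the rules (the real method covers more fields; this shape is illustrative, not the project's exported API):

```typescript
// First-failure-wins validator, mirroring two rules from validateSettings above.
type Validation = { valid: boolean; error?: string };

function validateSettings(settings: Record<string, string | undefined>): Validation {
  if (settings.CLAUDE_MEM_CONTEXT_OBSERVATIONS) {
    const obsCount = parseInt(settings.CLAUDE_MEM_CONTEXT_OBSERVATIONS, 10);
    if (isNaN(obsCount) || obsCount < 1 || obsCount > 200) {
      return { valid: false, error: 'CLAUDE_MEM_CONTEXT_OBSERVATIONS must be between 1 and 200' };
    }
  }
  if (settings.CLAUDE_MEM_WORKER_PORT) {
    const port = parseInt(settings.CLAUDE_MEM_WORKER_PORT, 10);
    if (isNaN(port) || port < 1024 || port > 65535) {
      return { valid: false, error: 'CLAUDE_MEM_WORKER_PORT must be between 1024 and 65535' };
    }
  }
  return { valid: true };
}
```

The route handler then needs exactly one `if (!validation.valid)` branch, which is what the first hunk above reduces to.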
@@ -104,39 +104,45 @@ export class SettingsDefaultsManager {
 /**
  * Load settings from file with fallback to defaults
- * Returns merged settings with defaults as fallback
+ * Handles all errors (missing file, corrupted JSON, permissions) by returning defaults
  */
 static loadFromFile(settingsPath: string): SettingsDefaults {
-  if (!existsSync(settingsPath)) {
-    return this.getAllDefaults();
-  }
-
-  const settingsData = readFileSync(settingsPath, 'utf-8');
-  const settings = JSON.parse(settingsData);
-
-  // MIGRATION: Handle old nested schema { env: {...} }
-  let flatSettings = settings;
-  if (settings.env && typeof settings.env === 'object') {
-    // Migrate from nested to flat schema
-    flatSettings = settings.env;
-
-    // Auto-migrate the file to flat schema
-    try {
-      writeFileSync(settingsPath, JSON.stringify(flatSettings, null, 2), 'utf-8');
-      logger.info('SETTINGS', 'Migrated settings file from nested to flat schema', { settingsPath });
-    } catch (error) {
-      logger.warn('SETTINGS', 'Failed to auto-migrate settings file', { settingsPath }, error);
-      // Continue with in-memory migration even if write fails
-    }
-  }
-
-  // Merge file settings with defaults (flat schema)
-  const result: SettingsDefaults = { ...this.DEFAULTS };
-  for (const key of Object.keys(this.DEFAULTS) as Array<keyof SettingsDefaults>) {
-    if (flatSettings[key] !== undefined) {
-      result[key] = flatSettings[key];
-    }
-  }
-
-  return result;
+  try {
+    if (!existsSync(settingsPath)) {
+      return this.getAllDefaults();
+    }
+
+    const settingsData = readFileSync(settingsPath, 'utf-8');
+    const settings = JSON.parse(settingsData);
+
+    // MIGRATION: Handle old nested schema { env: {...} }
+    let flatSettings = settings;
+    if (settings.env && typeof settings.env === 'object') {
+      // Migrate from nested to flat schema
+      flatSettings = settings.env;
+
+      // Auto-migrate the file to flat schema
+      try {
+        writeFileSync(settingsPath, JSON.stringify(flatSettings, null, 2), 'utf-8');
+        logger.info('SETTINGS', 'Migrated settings file from nested to flat schema', { settingsPath });
+      } catch (error) {
+        logger.warn('SETTINGS', 'Failed to auto-migrate settings file', { settingsPath }, error);
+        // Continue with in-memory migration even if write fails
+      }
+    }
+
+    // Merge file settings with defaults (flat schema)
+    const result: SettingsDefaults = { ...this.DEFAULTS };
+    for (const key of Object.keys(this.DEFAULTS) as Array<keyof SettingsDefaults>) {
+      if (flatSettings[key] !== undefined) {
+        result[key] = flatSettings[key];
+      }
+    }
+
+    return result;
+  } catch (error) {
+    logger.warn('SETTINGS', 'Failed to load settings, using defaults', { settingsPath }, error);
+    return this.getAllDefaults();
+  }
 }
@@ -22,9 +22,10 @@ export function parseJsonArray(json: string | null): string[] {

 /**
  * Format date with time (e.g., "Dec 14, 7:30 PM")
+ * Accepts either ISO date string or epoch milliseconds
  */
-export function formatDateTime(dateStr: string): string {
-  const date = new Date(dateStr);
+export function formatDateTime(dateInput: string | number): string {
+  const date = new Date(dateInput);
   return date.toLocaleString('en-US', {
     month: 'short',
     day: 'numeric',
@@ -36,9 +37,10 @@ export function formatDateTime(dateStr: string): string {

 /**
  * Format just time, no date (e.g., "7:30 PM")
+ * Accepts either ISO date string or epoch milliseconds
  */
-export function formatTime(dateStr: string): string {
-  const date = new Date(dateStr);
+export function formatTime(dateInput: string | number): string {
+  const date = new Date(dateInput);
   return date.toLocaleString('en-US', {
     hour: 'numeric',
     minute: '2-digit',
@@ -48,9 +50,10 @@ export function formatTime(dateStr: string): string {

 /**
  * Format just date (e.g., "Dec 14, 2025")
+ * Accepts either ISO date string or epoch milliseconds
  */
-export function formatDate(dateStr: string): string {
-  const date = new Date(dateStr);
+export function formatDate(dateInput: string | number): string {
+  const date = new Date(dateInput);
   return date.toLocaleString('en-US', {
     month: 'short',
     day: 'numeric',
@@ -76,6 +79,14 @@ export function extractFirstFile(filesModified: string | null, cwd: string): str
   return files.length > 0 ? toRelativePath(files[0], cwd) : 'General';
 }

+/**
+ * Estimate token count for text (rough approximation: ~4 chars per token)
+ */
+export function estimateTokens(text: string | null): number {
+  if (!text) return 0;
+  return Math.ceil(text.length / 4);
+}
+
 /**
  * Group items by date
  *
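The widened signatures above work because the `Date` constructor accepts either an ISO date string or epoch milliseconds, so callers that store timestamps as numbers and callers that store ISO strings can now share one formatter. A standalone sketch of `formatDate` and the `estimateTokens` heuristic (roughly 4 characters per token), matching the shared-module versions in the diff:

```typescript
// Both input forms produce the same Date, hence the same formatted output.
function formatDate(dateInput: string | number): string {
  return new Date(dateInput).toLocaleString('en-US', {
    month: 'short',
    day: 'numeric',
    year: 'numeric'
  });
}

// Rough token estimate: ~4 characters per token, null treated as empty.
function estimateTokens(text: string | null): number {
  if (!text) return 0;
  return Math.ceil(text.length / 4);
}
```

So `formatDate('2025-12-14T12:00:00Z')` and `formatDate(Date.parse('2025-12-14T12:00:00Z'))` render identically, which is what lets the duplicated epoch-only helpers in SearchManager be deleted.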
@@ -56,6 +56,20 @@ export function extractLastMessage(
       .filter((c: any) => c.type === 'text')
       .map((c: any) => c.text)
       .join('\n');
-    }
+    } else {
+      // Unknown content format - log error and skip this message
+      logger.error(
+        'PARSER',
+        'Unknown message content format',
+        {
+          role,
+          transcriptPath,
+          contentType: typeof msgContent,
+          content: msgContent
+        },
+        new Error('Message content is neither string nor array')
+      );
+      continue;
+    }

     if (stripSystemReminders) {
+24 -23
@@ -13,8 +13,9 @@ const MARKETPLACE_ROOT = path.join(homedir(), '.claude', 'plugins', 'marketplace
 // Named constants for health checks
 const HEALTH_CHECK_TIMEOUT_MS = getTimeout(HOOK_TIMEOUTS.HEALTH_CHECK);

-// Port cache to avoid repeated settings file reads
+// Cache to avoid repeated settings file reads
 let cachedPort: number | null = null;
+let cachedHost: string | null = null;

 /**
  * Get the worker port number from settings
@@ -26,35 +27,35 @@ export function getWorkerPort(): number {
     return cachedPort;
   }

-  try {
-    const settingsPath = path.join(SettingsDefaultsManager.get('CLAUDE_MEM_DATA_DIR'), 'settings.json');
-    const settings = SettingsDefaultsManager.loadFromFile(settingsPath);
-    cachedPort = parseInt(settings.CLAUDE_MEM_WORKER_PORT, 10);
-    return cachedPort;
-  } catch (error) {
-    // Fallback to default if settings load fails
-    logger.debug('SYSTEM', 'Failed to load port from settings, using default', { error });
-    cachedPort = parseInt(SettingsDefaultsManager.get('CLAUDE_MEM_WORKER_PORT'), 10);
-    return cachedPort;
-  }
-}
-
-/**
- * Clear the cached port value
- * Call this when settings are updated to force re-reading from file
- */
-export function clearPortCache(): void {
-  cachedPort = null;
+  const settingsPath = path.join(SettingsDefaultsManager.get('CLAUDE_MEM_DATA_DIR'), 'settings.json');
+  const settings = SettingsDefaultsManager.loadFromFile(settingsPath);
+  cachedPort = parseInt(settings.CLAUDE_MEM_WORKER_PORT, 10);
+  return cachedPort;
 }

 /**
  * Get the worker host address
- * Priority: ~/.claude-mem/settings.json > env var > default (127.0.0.1)
+ * Uses CLAUDE_MEM_WORKER_HOST from settings file or default (127.0.0.1)
+ * Caches the host value to avoid repeated file reads
  */
 export function getWorkerHost(): string {
-  const settingsPath = path.join(homedir(), '.claude-mem', 'settings.json');
+  if (cachedHost !== null) {
+    return cachedHost;
+  }
+
+  const settingsPath = path.join(SettingsDefaultsManager.get('CLAUDE_MEM_DATA_DIR'), 'settings.json');
   const settings = SettingsDefaultsManager.loadFromFile(settingsPath);
-  return settings.CLAUDE_MEM_WORKER_HOST;
+  cachedHost = settings.CLAUDE_MEM_WORKER_HOST;
+  return cachedHost;
 }

+/**
+ * Clear the cached port and host values
+ * Call this when settings are updated to force re-reading from file
+ */
+export function clearPortCache(): void {
+  cachedPort = null;
+  cachedHost = null;
+}
+
+/**
+45 -14
@@ -101,27 +101,58 @@ class Logger {
 try {
   const input = typeof toolInput === 'string' ? JSON.parse(toolInput) : toolInput;

-  // Special formatting for common tools
+  // Bash: show full command
   if (toolName === 'Bash' && input.command) {
-    const cmd = input.command.length > 50
-      ? input.command.substring(0, 50) + '...'
-      : input.command;
-    return `${toolName}(${cmd})`;
+    return `${toolName}(${input.command})`;
   }

-  if (toolName === 'Read' && input.file_path) {
-    const path = input.file_path.split('/').pop() || input.file_path;
-    return `${toolName}(${path})`;
+  // File operations: show full path
+  if (input.file_path) {
+    return `${toolName}(${input.file_path})`;
   }

-  if (toolName === 'Edit' && input.file_path) {
-    const path = input.file_path.split('/').pop() || input.file_path;
-    return `${toolName}(${path})`;
+  // NotebookEdit: show full notebook path
+  if (input.notebook_path) {
+    return `${toolName}(${input.notebook_path})`;
   }

-  if (toolName === 'Write' && input.file_path) {
-    const path = input.file_path.split('/').pop() || input.file_path;
-    return `${toolName}(${path})`;
+  // Glob: show full pattern
+  if (toolName === 'Glob' && input.pattern) {
+    return `${toolName}(${input.pattern})`;
   }

+  // Grep: show full pattern
+  if (toolName === 'Grep' && input.pattern) {
+    return `${toolName}(${input.pattern})`;
+  }
+
+  // WebFetch/WebSearch: show full URL or query
+  if (input.url) {
+    return `${toolName}(${input.url})`;
+  }
+
+  if (input.query) {
+    return `${toolName}(${input.query})`;
+  }
+
+  // Task: show subagent_type or full description
+  if (toolName === 'Task') {
+    if (input.subagent_type) {
+      return `${toolName}(${input.subagent_type})`;
+    }
+    if (input.description) {
+      return `${toolName}(${input.description})`;
+    }
+  }
+
+  // Skill: show skill name
+  if (toolName === 'Skill' && input.skill) {
+    return `${toolName}(${input.skill})`;
+  }
+
+  // LSP: show operation type
+  if (toolName === 'LSP' && input.operation) {
+    return `${toolName}(${input.operation})`;
+  }
+
   // Default: just show tool name
@@ -22,15 +22,17 @@ export function getProjectName(cwd: string | null | undefined): string {
   if (basename === '') {
     // Extract drive letter on Windows, or use 'root' on Unix
     const isWindows = process.platform === 'win32';
-    if (isWindows && cwd.match(/^[A-Z]:\\/i)) {
-      const driveLetter = cwd[0].toUpperCase();
-      const projectName = `drive-${driveLetter}`;
-      logger.info('PROJECT_NAME', 'Drive root detected', { cwd, projectName });
-      return projectName;
-    } else {
-      logger.warn('PROJECT_NAME', 'Root directory detected, using fallback', { cwd });
-      return 'unknown-project';
+    if (isWindows) {
+      const driveMatch = cwd.match(/^([A-Z]):\\/i);
+      if (driveMatch) {
+        const driveLetter = driveMatch[1].toUpperCase();
+        const projectName = `drive-${driveLetter}`;
+        logger.info('PROJECT_NAME', 'Drive root detected', { cwd, projectName });
+        return projectName;
+      }
+    }
+    logger.warn('PROJECT_NAME', 'Root directory detected, using fallback', { cwd });
+    return 'unknown-project';
   }

   return basename;
@@ -1,77 +0,0 @@
-/**
- * Happy Path Error With Fallback
- *
- * @deprecated This function is deprecated. Use logger.happyPathError() instead.
- * All usages have been migrated to the new logger system which consolidates logs
- * into the regular worker logs instead of separate silent.log files.
- *
- * Migration example:
- * OLD: happy_path_error__with_fallback('Missing value', { data }, 'default')
- * NEW: logger.happyPathError('COMPONENT', 'Missing value', undefined, { data }, 'default')
- *
- * See: src/utils/logger.ts for the new happyPathError method
- * Issue: #312 - Consolidate silent logs into regular worker logs
- */
-
-import { appendFileSync } from 'fs';
-import { homedir } from 'os';
-import { join } from 'path';
-
-const LOG_FILE = join(homedir(), '.claude-mem', 'silent.log');
-
-/**
- * Write an error message to silent.log and return fallback value
- * @param message - Error message describing what went wrong
- * @param data - Optional data to include (will be JSON stringified)
- * @param fallback - Value to return (defaults to empty string)
- * @returns The fallback value (for use in || fallbacks)
- */
-export function happy_path_error__with_fallback(message: string, data?: any, fallback: string = ''): string {
-  const timestamp = new Date().toISOString();
-
-  // Capture stack trace to get caller location
-  const stack = new Error().stack || '';
-  const stackLines = stack.split('\n');
-  // Line 0: "Error"
-  // Line 1: "at silentDebug ..."
-  // Line 2: "at <CALLER> ..." <- We want this one
-  const callerLine = stackLines[2] || '';
-  const callerMatch = callerLine.match(/at\s+(?:.*\s+)?\(?([^:]+):(\d+):(\d+)\)?/);
-  const location = callerMatch
-    ? `${callerMatch[1].split('/').pop()}:${callerMatch[2]}`
-    : 'unknown';
-
-  let logLine = `[${timestamp}] [HAPPY-PATH-ERROR] [${location}] ${message}`;
-
-  if (data !== undefined) {
-    try {
-      logLine += ` ${JSON.stringify(data)}`;
-    } catch (error) {
-      logLine += ` [stringify error: ${error}]`;
-    }
-  }
-
-  logLine += '\n';
-
-  try {
-    appendFileSync(LOG_FILE, logLine);
-  } catch (error) {
-    // If we can't write to the log file, fail silently (it's a debug utility after all)
-    // Only write to stderr as a last resort
-    console.error('[silent-debug] Failed to write to log:', error);
-  }
-
-  return fallback;
-}
-
-/**
- * Clear the silent log file
- */
-export function clearSilentLog(): void {
-  try {
-    appendFileSync(LOG_FILE, `\n${'='.repeat(80)}\n[${new Date().toISOString()}] Log cleared\n${'='.repeat(80)}\n\n`);
-  } catch (error) {
-    // Expected: Log file may not be writable
-  }
-}
+15
-37
@@ -31,20 +31,10 @@ function countTags(content: string): number {
 }
 
 /**
- * Strip memory tags from JSON-serialized content (tool inputs/responses)
- *
- * @param content - Stringified JSON content from tool_input or tool_response
- * @returns Cleaned content with tags removed, or '{}' if non-string/invalid
- *
- * Note: Returns '{}' for non-strings because this is used in JSON context
- * where we need a valid JSON object if the input is invalid.
+ * Internal function to strip memory tags from content
+ * Shared logic extracted from both JSON and prompt stripping functions
  */
-export function stripMemoryTagsFromJson(content: string): string {
-  if (typeof content !== 'string') {
-    logger.happyPathError('SYSTEM', 'received non-string for JSON context', undefined, { type: typeof content }, '{}');
-    return '{}'; // Safe default for JSON context
-  }
-
+function stripTagsInternal(content: string): string {
   // ReDoS protection: limit tag count before regex processing
   const tagCount = countTags(content);
   if (tagCount > MAX_TAG_COUNT) {
@@ -62,34 +52,22 @@ export function stripMemoryTagsFromJson(content: string): string {
     .trim();
 }
 
 /**
+ * Strip memory tags from JSON-serialized content (tool inputs/responses)
+ *
+ * @param content - Stringified JSON content from tool_input or tool_response
+ * @returns Cleaned content with tags removed, or '{}' if invalid
+ */
+export function stripMemoryTagsFromJson(content: string): string {
+  return stripTagsInternal(content);
+}
+
+/**
  * Strip memory tags from user prompt content
  *
  * @param content - Raw user prompt text
- * @returns Cleaned content with tags removed, or '' if non-string/invalid
- *
- * Note: Returns '' (empty string) for non-strings because this is used in prompt context
- * where an empty prompt indicates the user didn't provide any content.
+ * @returns Cleaned content with tags removed
  */
 export function stripMemoryTagsFromPrompt(content: string): string {
-  if (typeof content !== 'string') {
-    logger.happyPathError('SYSTEM', 'received non-string for prompt context', undefined, { type: typeof content }, '');
-    return ''; // Safe default for prompt content
-  }
-
-  // ReDoS protection: limit tag count before regex processing
-  const tagCount = countTags(content);
-  if (tagCount > MAX_TAG_COUNT) {
-    logger.warn('SYSTEM', 'tag count exceeds limit', undefined, {
-      tagCount,
-      maxAllowed: MAX_TAG_COUNT,
-      contentLength: content.length
-    });
-    // Still process but log the anomaly
-  }
-
-  return content
-    .replace(/<claude-mem-context>[\s\S]*?<\/claude-mem-context>/g, '')
-    .replace(/<private>[\s\S]*?<\/private>/g, '')
-    .trim();
+  return stripTagsInternal(content);
 }
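The refactor collapses both exported functions into one shared helper, removing the duplicated regex logic. A minimal standalone sketch of the resulting behavior (the `MAX_TAG_COUNT` check and logging are omitted; only the regex replacement shown in the diff is reproduced):

```typescript
// Strip <claude-mem-context> and <private> tag blocks from content,
// as both stripMemoryTagsFromJson and stripMemoryTagsFromPrompt now do.
function stripTagsInternal(content: string): string {
  return content
    .replace(/<claude-mem-context>[\s\S]*?<\/claude-mem-context>/g, '')
    .replace(/<private>[\s\S]*?<\/private>/g, '')
    .trim();
}

const cleaned = stripTagsInternal(
  '<private>secret</private>hello <claude-mem-context>ctx</claude-mem-context>world'
);
```

Note that the type checks and context-specific defaults (`'{}'` vs `''`) from the old implementations are gone; per the updated signatures, callers are now expected to pass strings.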