refactor: decompose monolithic services into modular architecture (#534)
* docs: add monolith refactor report with system breakdown

  Comprehensive analysis of the codebase identifying:
  - 14 files over 500 lines requiring refactoring
  - 3 critical monoliths (SessionStore, SearchManager, worker-service)
  - 80% code duplication across agent files
  - 5-phase refactoring roadmap with domain-based architecture

* fix: prevent memory_session_id from equaling content_session_id

  The bug: memory_session_id was initialized to contentSessionId as a
  "placeholder for FK purposes". This caused the SDK resume logic to inject
  memory agent messages into the USER's Claude Code transcript, corrupting
  their conversation history.

  Root cause:
  - SessionStore.createSDKSession initialized memory_session_id = contentSessionId
  - SDKAgent checked memorySessionId !== contentSessionId, but this check only
    worked if the session was fetched fresh from the DB

  The fix:
  - SessionStore: initialize memory_session_id as NULL, not contentSessionId
  - SDKAgent: simple truthy check !!session.memorySessionId (NULL = fresh start)
  - Database migration: ran UPDATE to set memory_session_id = NULL for 1807
    existing sessions that had the bug

  Also adds [ALIGNMENT] logging across the session lifecycle to help debug
  session continuity issues:
  - Hook entry: contentSessionId + promptNumber
  - DB lookup: contentSessionId → memorySessionId mapping proof
  - Resume decision: shows which memorySessionId will be used for resume
  - Capture: logs when memorySessionId is captured from first SDK response

  UI: added an "Alignment" quick filter button in LogsModal to show only
  alignment logs for debugging session continuity.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: improve error handling in worker-service.ts

  - Fix GENERIC_CATCH anti-patterns by logging full error objects instead of just messages
  - Add [ANTI-PATTERN IGNORED] markers for legitimate cases (cleanup, hot paths)
  - Simplify error handling comments to be more concise
  - Improve httpShutdown() error discrimination for ECONNREFUSED
  - Reduce LARGE_TRY_BLOCK issues in initialization code

  Part of anti-pattern cleanup plan (132 total issues)

* refactor: improve error logging in SearchManager.ts

  - Pass full error objects to logger instead of just error.message
  - Fixes PARTIAL_ERROR_LOGGING anti-patterns (10 instances)
  - Better debugging visibility when Chroma queries fail

  Part of anti-pattern cleanup (133 remaining)

* refactor: improve error logging across SessionStore and mcp-server

  - SessionStore.ts: fix error logging in column rename utility
  - mcp-server.ts: log full error objects instead of just error.message
  - Improve error handling in Worker API calls and tool execution

  Part of anti-pattern cleanup (133 remaining)

* Refactor hooks to streamline error handling and loading states

  - Simplified error handling in useContextPreview by removing try-catch and directly checking response status.
  - Refactored usePagination to eliminate try-catch, improving readability and maintaining error handling through response checks.
  - Cleaned up useSSE by removing an unnecessary try-catch around JSON parsing, ensuring clarity in message handling.
  - Enhanced useSettings by streamlining the saving process, removing try-catch, and directly checking the result for success.
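The memory_session_id fix described above can be sketched as follows. This is a simplified illustration, not the actual SessionStore/SDKAgent code: the `SessionRow` shape and `shouldResume` helper are hypothetical, but they model the stated change from an equality comparison to a truthy check over a NULL-initialized column.

```typescript
// Hypothetical, simplified shapes illustrating the fix. Before the fix,
// memory_session_id was seeded with contentSessionId, so the only guard was
// a fragile "are they different?" comparison; after the fix, NULL means
// "no memory session captured yet", so a plain truthy check suffices.
interface SessionRow {
  contentSessionId: string;
  memorySessionId: string | null; // NULL until captured from first SDK response
}

function shouldResume(session: SessionRow): boolean {
  // Simple truthy check: NULL means fresh start, never resume
  return !!session.memorySessionId;
}

const fresh: SessionRow = { contentSessionId: 'c-1', memorySessionId: null };
const resumed: SessionRow = { contentSessionId: 'c-1', memorySessionId: 'm-9' };

console.log(shouldResume(fresh));   // false — start a new memory session
console.log(shouldResume(resumed)); // true — resume the captured session
```

The buggy version would have seen `memorySessionId === 'c-1'` for a fresh session and could resume against the user's own transcript; NULL initialization makes that state unrepresentable.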
* refactor: add error handling back to SearchManager Chroma calls

  - Wrap queryChroma calls in try-catch to prevent generator crashes
  - Log Chroma errors as warnings and fall back gracefully
  - Fixes generator failures when Chroma has issues
  - Part of anti-pattern cleanup recovery

* feat: Add generator failure investigation report and observation duplication regression report

  - Created a comprehensive investigation report detailing the root cause of generator failures during anti-pattern cleanup, including the impact, investigation process, and implemented fixes.
  - Documented the critical regression causing observation duplication due to race conditions in the SDK agent, outlining symptoms, root cause analysis, and proposed fixes.

* fix: address PR #528 review comments - atomic cleanup and detector improvements

  This commit addresses critical review feedback from PR #528:

  ## 1. Atomic Message Cleanup (Fix Race Condition)

  **Problem**: The SessionRoutes.ts generator error handler had a race condition
  - Queried messages, then marked them failed in a loop
  - If a crash occurred during the loop → partial marking → inconsistent state

  **Solution**:
  - Added `markSessionMessagesFailed()` to PendingMessageStore.ts
  - A single atomic UPDATE statement replaces the loop
  - Follows the existing pattern from `resetProcessingToPending()`

  **Files**:
  - src/services/sqlite/PendingMessageStore.ts (new method)
  - src/services/worker/http/routes/SessionRoutes.ts (use new method)

  ## 2. Anti-Pattern Detector Improvements

  **Problem**: The detector didn't recognize the logger.failure() method
  - Lines 212 & 335 already included "failure"
  - Lines 112-113 (PARTIAL_ERROR_LOGGING detection) did not

  **Solution**: Updated regex patterns to include "failure" for consistency

  **Files**:
  - scripts/anti-pattern-test/detect-error-handling-antipatterns.ts

  ## 3. Documentation

  **PR Comment**: Added clarification on the memory_session_id fix location
  - Points to SessionStore.ts:1155
  - Explains why NULL initialization prevents the message injection bug

  ## Review Response

  Addresses "Must Address Before Merge" items from review:
  ✅ Clarified memory_session_id bug fix location (via PR comment)
  ✅ Made generator error handler message cleanup atomic
  ❌ Deferred comprehensive test suite to follow-up PR (keeps PR focused)

  ## Testing

  - Build passes with no errors
  - Anti-pattern detector runs successfully
  - Atomic cleanup follows proven pattern from existing methods

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix: FOREIGN KEY constraint and missing failed_at_epoch column

  Two critical bugs fixed:

  1. Missing failed_at_epoch column in pending_messages table
     - Added migration 20 to create the column
     - Fixes error when trying to mark messages as failed

  2. FOREIGN KEY constraint failed when storing observations
     - All three agents (SDK, Gemini, OpenRouter) were passing session.contentSessionId instead of session.memorySessionId
     - storeObservationsAndMarkComplete expects memorySessionId
     - Added null check and clear error message

  However, observations still not saving - see investigation report.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Refactor hook input parsing to improve error handling

  - Added a nested try-catch block in new-hook.ts, save-hook.ts, and summary-hook.ts to handle JSON parsing errors more gracefully.
  - Replaced direct error throwing with logging of the error details using logger.error.
  - Ensured that the process exits cleanly after handling input in all three hooks.
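The atomic cleanup described in section 1 above can be sketched as a single UPDATE. This is a hedged illustration: `markSessionMessagesFailed()` is the real method name from the commit, but its signature, the column names, and the `DbLike` handle below are assumptions modeled on a better-sqlite3-style API, with a tiny in-memory stand-in so the sketch runs without a real database.

```typescript
// Hypothetical sketch: one UPDATE replaces the old query-then-mark loop,
// so a crash can no longer leave some messages marked and others not.
interface DbLike {
  prepare(sql: string): { run(...params: unknown[]): { changes: number } };
}

function markSessionMessagesFailed(db: DbLike, sessionDbId: number, epoch: number): number {
  const result = db.prepare(
    `UPDATE pending_messages
        SET status = 'failed', failed_at_epoch = ?
      WHERE session_db_id = ? AND status IN ('pending', 'processing')`
  ).run(epoch, sessionDbId);
  return result.changes; // number of rows atomically marked failed
}

// In-memory stand-in for the database (ignores the SQL text; it just
// reproduces the semantics so the sketch is self-contained and runnable).
const rows = [
  { session_db_id: 1, status: 'pending', failed_at_epoch: null as number | null },
  { session_db_id: 1, status: 'processing', failed_at_epoch: null as number | null },
  { session_db_id: 2, status: 'pending', failed_at_epoch: null as number | null },
];
const fakeDb: DbLike = {
  prepare: () => ({
    run: (epoch: unknown, sessionDbId: unknown) => {
      let changes = 0;
      for (const r of rows) {
        if (r.session_db_id === sessionDbId && (r.status === 'pending' || r.status === 'processing')) {
          r.status = 'failed';
          r.failed_at_epoch = epoch as number;
          changes++;
        }
      }
      return { changes };
    },
  }),
};

console.log(markSessionMessagesFailed(fakeDb, 1, Date.now())); // 2
```

The design point is that SQLite executes a single UPDATE statement atomically, so either every qualifying row is marked failed or none is; the old loop could be interrupted between iterations.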
* docs: update monolith report post session-logging merge

  - SessionStore grew to 2,011 lines (49 methods) - highest priority
  - SearchManager reduced to 1,778 lines (improved)
  - Agent files reduced by ~45 lines combined
  - Added trend indicators and post-merge observations
  - Core refactoring proposal remains valid

* refactor(sqlite): decompose SessionStore into modular architecture

  Extract the 2,011-line SessionStore.ts monolith into focused, single-responsibility modules following a grep-optimized progressive disclosure pattern.

  New module structure:
  - sessions/ - Session creation and retrieval (create.ts, get.ts, types.ts)
  - observations/ - Observation storage and queries (store.ts, get.ts, recent.ts, files.ts, types.ts)
  - summaries/ - Summary storage and queries (store.ts, get.ts, recent.ts, types.ts)
  - prompts/ - User prompt management (store.ts, get.ts, types.ts)
  - timeline/ - Cross-entity timeline queries (queries.ts)
  - import/ - Bulk import operations (bulk.ts)
  - migrations/ - Database migrations (runner.ts)

  New coordinator files:
  - Database.ts - ClaudeMemDatabase class with re-exports
  - transactions.ts - Atomic cross-entity transactions
  - Named re-export facades (Sessions.ts, Observations.ts, etc.)

  Key design decisions:
  - All functions take `db: Database` as first parameter (functional style)
  - Named re-exports instead of index.ts for grep-friendliness
  - SessionStore retained as backward-compatible wrapper
  - Target file size: 50-150 lines (60% compliance)

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(agents): extract shared logic into modular architecture

  Consolidate duplicate code across SDKAgent, GeminiAgent, and OpenRouterAgent into focused utility modules. Total reduction: 500 lines (29%).

  New modules in src/services/worker/agents/:
  - ResponseProcessor.ts: atomic DB transactions, Chroma sync, SSE broadcast
  - ObservationBroadcaster.ts: SSE event formatting and dispatch
  - SessionCleanupHelper.ts: session state cleanup and stuck message reset
  - FallbackErrorHandler.ts: provider error detection for fallback logic
  - types.ts: shared interfaces (WorkerRef, SSE payloads, StorageResult)

  Bug fix: SDKAgent was incorrectly using obs.files instead of obs.files_read and hardcoding files_modified to an empty array.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(search): extract search strategies into modular architecture

  Decompose SearchManager into a focused strategy pattern with:
  - SearchOrchestrator: coordinates strategy selection and fallback
  - ChromaSearchStrategy: vector semantic search via ChromaDB
  - SQLiteSearchStrategy: filter-only queries for date/project/type
  - HybridSearchStrategy: metadata filtering + semantic ranking
  - ResultFormatter: Markdown table formatting for results
  - TimelineBuilder: chronological timeline construction
  - Filter modules: DateFilter, ProjectFilter, TypeFilter

  SearchManager now delegates to the new infrastructure while maintaining full backward compatibility with the existing public API.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(context): decompose context-generator into modular architecture

  Extract the 660-line monolith into focused components:
  - ContextBuilder: main orchestrator (~160 lines)
  - ContextConfigLoader: configuration loading
  - TokenCalculator: token budget calculations
  - ObservationCompiler: data retrieval and query building
  - MarkdownFormatter/ColorFormatter: output formatting
  - Section renderers: Header, Timeline, Summary, Footer

  Maintains full backward compatibility - context-generator.ts now delegates to the new ContextBuilder while preserving the public API.
  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(worker): decompose worker-service into modular infrastructure

  Split the 2,000+ line monolith into focused modules.

  Infrastructure:
  - ProcessManager: PID files, signal handlers, child process cleanup
  - HealthMonitor: port checks, health polling, version matching
  - GracefulShutdown: coordinated cleanup on exit

  Server:
  - Server: Express app setup, core routes, route registration
  - Middleware: re-exports from existing middleware
  - ErrorHandler: centralized error handling with AppError class

  Integrations:
  - CursorHooksInstaller: full Cursor IDE integration (registry, hooks, MCP)

  WorkerService now acts as a thin coordinator wiring all components together. Maintains full backward compatibility with the existing public API.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* Refactor session queue processing and database interactions

  - Implement claim-and-delete pattern in SessionQueueProcessor to simplify message handling and eliminate duplicate processing.
  - Update PendingMessageStore to support atomic claim-and-delete operations, removing the need for intermediate processing states.
  - Introduce storeObservations method in SessionStore for simplified observation and summary storage without message tracking.
  - Remove deprecated methods and clean up session state management in worker agents.
  - Adjust response processing to accommodate new storage patterns, ensuring atomic transactions for observations and summaries.
  - Remove unnecessary reset logic for stuck messages due to the new queue handling approach.

* Add duplicate observation cleanup script

  Script to clean up duplicate observations created by the batching bug where observations were stored once per message ID instead of once per observation. Includes safety checks to always keep at least one copy.

  Usage:
    bun scripts/cleanup-duplicates.ts              # Dry run
    bun scripts/cleanup-duplicates.ts --execute    # Delete duplicates
    bun scripts/cleanup-duplicates.ts --aggressive # Ignore time window

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude <noreply@anthropic.com>
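The claim-and-delete queue pattern referenced in the session queue refactor can be sketched in memory. This is an illustration of the semantics, not the PendingMessageStore code: the `ClaimAndDeleteQueue` class and its method names are hypothetical.

```typescript
// Hypothetical in-memory sketch of claim-and-delete. Instead of tracking
// pending -> processing -> done states, a consumer atomically removes the
// messages it claims, so a retry or second consumer can never see (and
// double-process) the same batch.
interface QueuedMessage { id: number; sessionId: string; payload: string; }

class ClaimAndDeleteQueue {
  private messages: QueuedMessage[] = [];

  enqueue(msg: QueuedMessage): void {
    this.messages.push(msg);
  }

  /** Atomically remove and return all messages for a session. */
  claim(sessionId: string): QueuedMessage[] {
    const claimed = this.messages.filter(m => m.sessionId === sessionId);
    this.messages = this.messages.filter(m => m.sessionId !== sessionId);
    return claimed;
  }

  size(): number {
    return this.messages.length;
  }
}

const q = new ClaimAndDeleteQueue();
q.enqueue({ id: 1, sessionId: 's1', payload: 'a' });
q.enqueue({ id: 2, sessionId: 's1', payload: 'b' });
q.enqueue({ id: 3, sessionId: 's2', payload: 'c' });

const batch = q.claim('s1');
console.log(batch.length, q.size()); // 2 1
console.log(q.claim('s1').length);   // 0 — a second claim sees nothing
```

In SQLite the same semantics can be had from a `DELETE ... RETURNING` statement inside a transaction; the sketch above only models the behavior, not the storage layer.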
@@ -0,0 +1,73 @@
/**
 * FallbackErrorHandler: Error detection for provider fallback
 *
 * Responsibility:
 * - Determine if an error should trigger fallback to Claude SDK
 * - Provide consistent error classification across Gemini and OpenRouter
 */

import { FALLBACK_ERROR_PATTERNS } from './types.js';

/**
 * Check if an error should trigger fallback to Claude SDK
 *
 * Errors that trigger fallback:
 * - 429: Rate limit exceeded
 * - 500/502/503: Server errors
 * - ECONNREFUSED: Connection refused (server down)
 * - ETIMEDOUT: Request timeout
 * - fetch failed: Network failure
 *
 * @param error - Error object to check
 * @returns true if the error should trigger fallback to Claude
 */
export function shouldFallbackToClaude(error: unknown): boolean {
  const message = getErrorMessage(error);

  return FALLBACK_ERROR_PATTERNS.some(pattern => message.includes(pattern));
}

/**
 * Extract error message from various error types
 */
function getErrorMessage(error: unknown): string {
  if (error === null || error === undefined) {
    return '';
  }

  if (typeof error === 'string') {
    return error;
  }

  if (error instanceof Error) {
    return error.message;
  }

  if (typeof error === 'object' && 'message' in error) {
    return String((error as { message: unknown }).message);
  }

  return String(error);
}

/**
 * Check if error is an AbortError (user cancelled)
 *
 * @param error - Error object to check
 * @returns true if this is an abort/cancellation error
 */
export function isAbortError(error: unknown): boolean {
  if (error === null || error === undefined) {
    return false;
  }

  if (error instanceof Error && error.name === 'AbortError') {
    return true;
  }

  if (typeof error === 'object' && 'name' in error) {
    return (error as { name: unknown }).name === 'AbortError';
  }

  return false;
}
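A minimal usage sketch of the classification above. To keep it self-contained, the `FALLBACK_ERROR_PATTERNS` constant from types.ts is inlined here and the helper is simplified to the Error/string cases; the real module handles more error shapes.

```typescript
// Self-contained sketch of the fallback classification (patterns inlined
// from types.ts; simplified getErrorMessage for the common cases).
const PATTERNS = ['429', '500', '502', '503', 'ECONNREFUSED', 'ETIMEDOUT', 'fetch failed'];

function shouldFallback(error: unknown): boolean {
  const message = error instanceof Error ? error.message : String(error ?? '');
  return PATTERNS.some(p => message.includes(p));
}

console.log(shouldFallback(new Error('429 rate limit exceeded'))); // true
console.log(shouldFallback(new Error('ECONNREFUSED 127.0.0.1')));  // true
console.log(shouldFallback(new Error('invalid XML in response'))); // false
```

The substring match is deliberately coarse: a transient provider failure should route work to the Claude SDK, while a malformed-response error should surface instead of being retried elsewhere.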
@@ -0,0 +1,54 @@
/**
 * ObservationBroadcaster: SSE broadcasting for observations and summaries
 *
 * Responsibility:
 * - Broadcast new observations to SSE clients
 * - Broadcast new summaries to SSE clients
 * - Handle worker reference safely (null checks)
 *
 * BUGFIX: This module fixes the incorrect field names in SDKAgent:
 * - SDKAgent used `obs.files` which doesn't exist - should be `obs.files_read`
 * - SDKAgent used hardcoded `files_modified: JSON.stringify([])` - should use `obs.files_modified`
 */

import type { WorkerRef, ObservationSSEPayload, SummarySSEPayload } from './types.js';

/**
 * Broadcast a new observation to SSE clients
 *
 * @param worker - Worker reference with SSE broadcaster (can be undefined)
 * @param payload - Observation data to broadcast
 */
export function broadcastObservation(
  worker: WorkerRef | undefined,
  payload: ObservationSSEPayload
): void {
  if (!worker?.sseBroadcaster) {
    return;
  }

  worker.sseBroadcaster.broadcast({
    type: 'new_observation',
    observation: payload
  });
}

/**
 * Broadcast a new summary to SSE clients
 *
 * @param worker - Worker reference with SSE broadcaster (can be undefined)
 * @param payload - Summary data to broadcast
 */
export function broadcastSummary(
  worker: WorkerRef | undefined,
  payload: SummarySSEPayload
): void {
  if (!worker?.sseBroadcaster) {
    return;
  }

  worker.sseBroadcaster.broadcast({
    type: 'new_summary',
    summary: payload
  });
}
@@ -0,0 +1,270 @@
/**
 * ResponseProcessor: Shared response processing for all agent implementations
 *
 * Responsibility:
 * - Parse observations and summaries from agent responses
 * - Execute atomic database transactions
 * - Orchestrate Chroma sync (fire-and-forget)
 * - Broadcast to SSE clients
 * - Clean up processed messages
 *
 * This module extracts 150+ lines of duplicate code from SDKAgent, GeminiAgent, and OpenRouterAgent.
 */

import { logger } from '../../../utils/logger.js';
import { parseObservations, parseSummary, type ParsedObservation, type ParsedSummary } from '../../../sdk/parser.js';
import { updateCursorContextForProject } from '../../worker-service.js';
import { getWorkerPort } from '../../../shared/worker-utils.js';
import type { ActiveSession } from '../../worker-types.js';
import type { DatabaseManager } from '../DatabaseManager.js';
import type { SessionManager } from '../SessionManager.js';
import type { WorkerRef, StorageResult } from './types.js';
import { broadcastObservation, broadcastSummary } from './ObservationBroadcaster.js';
import { cleanupProcessedMessages } from './SessionCleanupHelper.js';

/**
 * Process agent response text (parse XML, save to database, sync to Chroma, broadcast SSE)
 *
 * This is the unified response processor that handles:
 * 1. Adding response to conversation history (for provider interop)
 * 2. Parsing observations and summaries from XML
 * 3. Atomic database transaction to store observations + summary
 * 4. Async Chroma sync (fire-and-forget, failures are non-critical)
 * 5. SSE broadcast to web UI clients
 * 6. Session cleanup
 *
 * @param text - Response text from the agent
 * @param session - Active session being processed
 * @param dbManager - Database manager for storage operations
 * @param sessionManager - Session manager for message tracking
 * @param worker - Worker reference for SSE broadcasting (optional)
 * @param discoveryTokens - Token cost delta for this response
 * @param originalTimestamp - Original epoch when message was queued (for accurate timestamps)
 * @param agentName - Name of the agent for logging (e.g., 'SDK', 'Gemini', 'OpenRouter')
 */
export async function processAgentResponse(
  text: string,
  session: ActiveSession,
  dbManager: DatabaseManager,
  sessionManager: SessionManager,
  worker: WorkerRef | undefined,
  discoveryTokens: number,
  originalTimestamp: number | null,
  agentName: string
): Promise<void> {
  // Add assistant response to shared conversation history for provider interop
  if (text) {
    session.conversationHistory.push({ role: 'assistant', content: text });
  }

  // Parse observations and summary
  const observations = parseObservations(text, session.contentSessionId);
  const summary = parseSummary(text, session.sessionDbId);

  // Convert nullable fields to empty strings for storeSummary (if summary exists)
  const summaryForStore = normalizeSummaryForStorage(summary);

  // Get session store for atomic transaction
  const sessionStore = dbManager.getSessionStore();

  // CRITICAL: Must use memorySessionId (not contentSessionId) for FK constraint
  if (!session.memorySessionId) {
    throw new Error('Cannot store observations: memorySessionId not yet captured');
  }

  // ATOMIC TRANSACTION: Store observations + summary ONCE
  // Messages are already deleted from queue on claim, so no completion tracking needed
  const result = sessionStore.storeObservations(
    session.memorySessionId,
    session.project,
    observations,
    summaryForStore,
    session.lastPromptNumber,
    discoveryTokens,
    originalTimestamp ?? undefined
  );

  // Log what was saved
  logger.info('SDK', `${agentName} observations and summary saved atomically`, {
    sessionId: session.sessionDbId,
    observationCount: result.observationIds.length,
    hasSummary: !!result.summaryId,
    atomicTransaction: true
  });

  // AFTER transaction commits - async operations (can fail safely without data loss)
  await syncAndBroadcastObservations(
    observations,
    result,
    session,
    dbManager,
    worker,
    discoveryTokens,
    agentName
  );

  // Sync and broadcast summary if present
  await syncAndBroadcastSummary(
    summary,
    summaryForStore,
    result,
    session,
    dbManager,
    worker,
    discoveryTokens,
    agentName
  );

  // Clean up session state
  cleanupProcessedMessages(session, worker);
}

/**
 * Normalize summary for storage (convert null fields to empty strings)
 */
function normalizeSummaryForStorage(summary: ParsedSummary | null): {
  request: string;
  investigated: string;
  learned: string;
  completed: string;
  next_steps: string;
  notes: string | null;
} | null {
  if (!summary) return null;

  return {
    request: summary.request || '',
    investigated: summary.investigated || '',
    learned: summary.learned || '',
    completed: summary.completed || '',
    next_steps: summary.next_steps || '',
    notes: summary.notes
  };
}

/**
 * Sync observations to Chroma and broadcast to SSE clients
 */
async function syncAndBroadcastObservations(
  observations: ParsedObservation[],
  result: StorageResult,
  session: ActiveSession,
  dbManager: DatabaseManager,
  worker: WorkerRef | undefined,
  discoveryTokens: number,
  agentName: string
): Promise<void> {
  for (let i = 0; i < observations.length; i++) {
    const obsId = result.observationIds[i];
    const obs = observations[i];
    const chromaStart = Date.now();

    // Sync to Chroma (fire-and-forget)
    dbManager.getChromaSync().syncObservation(
      obsId,
      session.contentSessionId,
      session.project,
      obs,
      session.lastPromptNumber,
      result.createdAtEpoch,
      discoveryTokens
    ).then(() => {
      const chromaDuration = Date.now() - chromaStart;
      logger.debug('CHROMA', 'Observation synced', {
        obsId,
        duration: `${chromaDuration}ms`,
        type: obs.type,
        title: obs.title || '(untitled)'
      });
    }).catch((error) => {
      logger.warn('CHROMA', `${agentName} chroma sync failed, continuing without vector search`, {
        obsId,
        type: obs.type,
        title: obs.title || '(untitled)'
      }, error);
    });

    // Broadcast to SSE clients (for web UI)
    // BUGFIX: Use obs.files_read and obs.files_modified (not obs.files)
    broadcastObservation(worker, {
      id: obsId,
      memory_session_id: session.memorySessionId,
      session_id: session.contentSessionId,
      type: obs.type,
      title: obs.title,
      subtitle: obs.subtitle,
      text: null, // text field is not in ParsedObservation
      narrative: obs.narrative || null,
      facts: JSON.stringify(obs.facts || []),
      concepts: JSON.stringify(obs.concepts || []),
      files_read: JSON.stringify(obs.files_read || []),
      files_modified: JSON.stringify(obs.files_modified || []),
      project: session.project,
      prompt_number: session.lastPromptNumber,
      created_at_epoch: result.createdAtEpoch
    });
  }
}

/**
 * Sync summary to Chroma and broadcast to SSE clients
 */
async function syncAndBroadcastSummary(
  summary: ParsedSummary | null,
  summaryForStore: { request: string; investigated: string; learned: string; completed: string; next_steps: string; notes: string | null } | null,
  result: StorageResult,
  session: ActiveSession,
  dbManager: DatabaseManager,
  worker: WorkerRef | undefined,
  discoveryTokens: number,
  agentName: string
): Promise<void> {
  if (!summaryForStore || !result.summaryId) {
    return;
  }

  const chromaStart = Date.now();

  // Sync to Chroma (fire-and-forget)
  dbManager.getChromaSync().syncSummary(
    result.summaryId,
    session.contentSessionId,
    session.project,
    summaryForStore,
    session.lastPromptNumber,
    result.createdAtEpoch,
    discoveryTokens
  ).then(() => {
    const chromaDuration = Date.now() - chromaStart;
    logger.debug('CHROMA', 'Summary synced', {
      summaryId: result.summaryId,
      duration: `${chromaDuration}ms`,
      request: summaryForStore.request || '(no request)'
    });
  }).catch((error) => {
    logger.warn('CHROMA', `${agentName} chroma sync failed, continuing without vector search`, {
      summaryId: result.summaryId,
      request: summaryForStore.request || '(no request)'
    }, error);
  });

  // Broadcast to SSE clients (for web UI)
  broadcastSummary(worker, {
    id: result.summaryId,
    session_id: session.contentSessionId,
    request: summary!.request,
    investigated: summary!.investigated,
    learned: summary!.learned,
    completed: summary!.completed,
    next_steps: summary!.next_steps,
    notes: summary!.notes,
    project: session.project,
    prompt_number: session.lastPromptNumber,
    created_at_epoch: result.createdAtEpoch
  });

  // Update Cursor context file for registered projects (fire-and-forget)
  updateCursorContextForProject(session.project, getWorkerPort()).catch(error => {
    logger.warn('CURSOR', 'Context update failed (non-critical)', { project: session.project }, error as Error);
  });
}
@@ -0,0 +1,36 @@
/**
 * SessionCleanupHelper: Session state cleanup after response processing
 *
 * Responsibility:
 * - Reset earliest pending timestamp
 * - Broadcast processing status updates
 *
 * NOTE: With claim-and-delete queue pattern, messages are deleted on claim,
 * so there's no pendingProcessingIds tracking or processed message cleanup.
 */

import type { ActiveSession } from '../../worker-types.js';
import type { WorkerRef } from './types.js';

/**
 * Clean up session state after response processing
 *
 * With claim-and-delete queue pattern, this function simply:
 * 1. Resets the earliest pending timestamp
 * 2. Broadcasts updated processing status to SSE clients
 *
 * @param session - Active session to clean up
 * @param worker - Worker reference for status broadcasting (optional)
 */
export function cleanupProcessedMessages(
  session: ActiveSession,
  worker: WorkerRef | undefined
): void {
  // Reset earliest pending timestamp for next batch
  session.earliestPendingTimestamp = null;

  // Broadcast activity status after processing (queue may have changed)
  if (worker && typeof worker.broadcastProcessingStatus === 'function') {
    worker.broadcastProcessingStatus();
  }
}
@@ -0,0 +1,38 @@
/**
 * Agent Consolidation Module
 *
 * This module provides shared utilities for SDK, Gemini, and OpenRouter agents.
 * It extracts common patterns to reduce code duplication and ensure consistent behavior.
 *
 * Usage:
 * ```typescript
 * import { processAgentResponse, shouldFallbackToClaude } from './agents/index.js';
 * ```
 */

// Types
export type {
  WorkerRef,
  ObservationSSEPayload,
  SummarySSEPayload,
  SSEEventPayload,
  StorageResult,
  ResponseProcessingContext,
  ParsedResponse,
  FallbackAgent,
  BaseAgentConfig,
} from './types.js';

export { FALLBACK_ERROR_PATTERNS } from './types.js';

// Response Processing
export { processAgentResponse } from './ResponseProcessor.js';

// SSE Broadcasting
export { broadcastObservation, broadcastSummary } from './ObservationBroadcaster.js';

// Session Cleanup
export { cleanupProcessedMessages } from './SessionCleanupHelper.js';

// Error Handling
export { shouldFallbackToClaude, isAbortError } from './FallbackErrorHandler.js';
@@ -0,0 +1,133 @@
/**
 * Shared agent types for SDK, Gemini, and OpenRouter agents
 *
 * Responsibility:
 * - Define common interfaces used across all agent implementations
 * - Provide type safety for response processing and broadcasting
 */

import type { ActiveSession } from '../../worker-types.js';
import type { ParsedObservation, ParsedSummary } from '../../../sdk/parser.js';

// ============================================================================
// Worker Reference Type
// ============================================================================

/**
 * Worker reference for SSE broadcasting and status updates
 * Both sseBroadcaster and broadcastProcessingStatus are optional
 * to allow agents to run without a full worker context (e.g., testing)
 */
export interface WorkerRef {
  sseBroadcaster?: {
    broadcast(event: SSEEventPayload): void;
  };
  broadcastProcessingStatus?: () => void;
}

// ============================================================================
// SSE Event Payloads
// ============================================================================

export interface ObservationSSEPayload {
  id: number;
  memory_session_id: string | null;
  session_id: string;
  type: string;
  title: string | null;
  subtitle: string | null;
  text: string | null;
  narrative: string | null;
  facts: string; // JSON stringified
  concepts: string; // JSON stringified
  files_read: string; // JSON stringified
  files_modified: string; // JSON stringified
  project: string;
  prompt_number: number;
  created_at_epoch: number;
}

export interface SummarySSEPayload {
  id: number;
  session_id: string;
  request: string | null;
  investigated: string | null;
  learned: string | null;
  completed: string | null;
  next_steps: string | null;
  notes: string | null;
  project: string;
  prompt_number: number;
  created_at_epoch: number;
}

export type SSEEventPayload =
  | { type: 'new_observation'; observation: ObservationSSEPayload }
  | { type: 'new_summary'; summary: SummarySSEPayload };

// ============================================================================
// Response Processing Types
// ============================================================================

/**
 * Result from atomic database transaction for observations/summary storage
 */
export interface StorageResult {
  observationIds: number[];
  summaryId: number | null;
  createdAtEpoch: number;
}

/**
 * Context needed for response processing
 */
export interface ResponseProcessingContext {
  session: ActiveSession;
  worker: WorkerRef | undefined;
  discoveryTokens: number;
  originalTimestamp: number | null;
}

/**
 * Parsed response data ready for storage
 */
export interface ParsedResponse {
  observations: ParsedObservation[];
  summary: ParsedSummary | null;
}

// ============================================================================
// Fallback Agent Interface
// ============================================================================

/**
 * Interface for fallback agent (used by Gemini/OpenRouter to fall back to Claude)
 */
export interface FallbackAgent {
  startSession(session: ActiveSession, worker?: WorkerRef): Promise<void>;
}

// ============================================================================
// Agent Configuration Types
// ============================================================================

/**
 * Base configuration shared across all agents
 */
export interface BaseAgentConfig {
  dbManager: import('../DatabaseManager.js').DatabaseManager;
  sessionManager: import('../SessionManager.js').SessionManager;
}

/**
 * Error codes that should trigger fallback to Claude
 */
export const FALLBACK_ERROR_PATTERNS = [
  '429',           // Rate limit
  '500',           // Internal server error
  '502',           // Bad gateway
  '503',           // Service unavailable
  'ECONNREFUSED',  // Connection refused
  'ETIMEDOUT',     // Timeout
  'fetch failed',  // Network failure
] as const;