94d592f212
* docs: pathfinder refactor corpus + Node 20 preflight
Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.
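A minimal preflight sketch of the engines check — `meetsNodeEngine` is a hypothetical helper, not the project's actual code; recursive fs.watch requires Node >= 20 on Linux, which is what motivates the bump:

```typescript
// Hypothetical preflight helper: recursive fs.watch needs Node >= 20 on
// Linux, so the engines.node bump can be sanity-checked at startup.
function meetsNodeEngine(version: string, minMajor = 20): boolean {
  const major = Number(version.split(".")[0]);
  return Number.isInteger(major) && major >= minMajor;
}
```

At runtime this would be called with `process.versions.node`.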
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 01 — data integrity
Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.
- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
UNIQUE(memory_session_id, content_hash) on observations; dedup
duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
and the 60-second stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from the worker-service
2-minute interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
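A sketch of the Phase 3 self-healing claim shape — table and column names here are assumptions for illustration, not the project's actual schema; the idea is that rows claimed by a PID no longer in the live-worker set become claimable again without a timer:

```typescript
// Hypothetical: build a claim query where stale 'processing' rows
// (worker_pid not among live workers) are reclaimable alongside
// 'pending' rows. livePids would come from the supervisor registry.
function buildClaimQuery(livePids: number[]): string {
  const placeholders = livePids.map(() => "?").join(", ");
  return `
    UPDATE pending_messages
    SET status = 'processing', worker_pid = ?
    WHERE id = (
      SELECT id FROM pending_messages
      WHERE status = 'pending'
         OR (status = 'processing' AND worker_pid NOT IN (${placeholders}))
      ORDER BY id
      LIMIT 1
    )
    RETURNING id`;
}
```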
Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/01-data-integrity.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 02 — process lifecycle
OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).
- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
canonical registry at src/supervisor/process-registry.ts is the
sole survivor; SDK spawn site consolidated into it via new
createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
process.kill(-pgid, signal) on Unix when pgid is recorded;
Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
staleSessionReaperInterval setInterval (including the co-located
WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
WAL growth without an app-level timer), killIdleDaemonChildren,
killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
via generatorPromise.finally() already lives in worker-service
startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
lazy-spawn — consults isWorkerPortAlive (which gates
captureProcessStartToken for PID-reuse safety via commit
99060bac), then spawns detached with unref(), then
waitForWorkerPort({ attempts: 3, backoffMs: 250 }), a hand-rolled
exponential backoff (250 → 500 → 1000 ms). No respawn npm dependency.
- Phase 9: idle self-shutdown — zero matches for
idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
only on external SIGTERM via supervisor signal handlers.
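The detached-spawn plus process-group teardown pattern from Phases 2-3 can be sketched as follows (a Unix-only illustration, not the project's spawn site):

```typescript
import { spawn } from "node:child_process";

// detached: true makes the child the leader of a new process group, so
// signalling the negative PID reaches the child and any grandchildren.
const child = spawn("sleep", ["30"], {
  detached: true,
  stdio: ["ignore", "pipe", "pipe"],
});

// Attach the error listener immediately: spawn failures (ENOENT, EACCES)
// are emitted asynchronously and would otherwise become uncaughtException.
child.on("error", (err) => console.error("spawn failed:", err));

if (child.pid !== undefined && process.platform !== "win32") {
  process.kill(-child.pid, "SIGTERM"); // negative PID = whole process group
}
```

Because the child leads its own group, its pgid equals its pid, which is what `kill(-pgid)` relies on.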
Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.
All 10 verification greps return 0. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast
Phases 3, 5, and 6 only. Phases 1/2/4/7/8/9 deferred for plan
reconciliation due to plan-doc inaccuracies:
- Phase 1/2: ObservationRow type doesn't exist; the four
"formatters" operate on three incompatible types.
- Phase 4: RECENCY_WINDOW_MS already imported from
SEARCH_CONSTANTS at every call site.
- Phase 7: getExistingChromaIds is NOT @deprecated and has an
active caller in ChromaSync.backfillMissingSyncs.
- Phase 8: estimateTokens already consolidated.
- Phase 9: knowledge-corpus rewrite blocked on PG-3
prompt-caching cost smoke test.
Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.
Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync null at construction — an explicitly
uninitialized config) preserved as a legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
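A sketch of the Phase 5 error class — the real AppError signature is assumed from the `AppError(503, 'CHROMA_UNAVAILABLE')` description above:

```typescript
// Hypothetical AppError shape carrying an HTTP status and error code.
class AppError extends Error {
  status: number;
  code: string;
  constructor(status: number, code: string, message?: string) {
    super(message ?? code);
    this.status = status;
    this.code = code;
    this.name = new.target.name;
  }
}

// Fail-fast: runtime Chroma errors surface as HTTP 503 instead of a
// silent SQLite fallback.
class ChromaUnavailableError extends AppError {
  constructor(message = "Chroma vector store is unreachable") {
    super(503, "CHROMA_UNAVAILABLE", message);
  }
}
```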
Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).
Tests updated (Principle 7 — delete in same PR):
- search-orchestrator.test.ts: "fall back to SQLite" rewritten
as "throw ChromaUnavailableError (HTTP 503)".
- chroma/hybrid/sqlite-search-strategy tests: rewritten to
rejects.toThrow; removed fellBack assertions.
Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 03 — ingestion path
Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.
- Phase 0: Created src/services/worker/http/shared.ts exporting
ingestObservation/ingestPrompt/ingestSummary as direct
in-process functions plus ingestEventBus (Node EventEmitter,
reusing existing pattern — no third event bus introduced).
setIngestContext wires the SessionManager dependency from
worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
returning { valid:true; kind: 'observation'|'summary'; data }
| { valid:false; reason: string }. Inspects root element;
<skip_summary reason="…"/> is a first-class summary case
with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
branches on the discriminated union. On invalid → markFailed
+ logger.warn(reason). On observation → ingestObservation.
On summary → ingestSummary then emit summaryStoredEvent
{ sessionId, messageId } (consumed by Plan 05's blocking
/api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
(ResponseProcessor + SessionManager + worker-types) and
MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
replaced with fs.watch(transcriptsRoot, { recursive: true,
persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
Map deleted. tool_use rows insert with INSERT OR IGNORE on
UNIQUE(session_id, tool_use_id) (added by Plan 01). New
pairToolUsesByJoin query in PendingMessageStore for read-time
pairing (UNIQUE INDEX provides idempotency; explicit consumer
not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
direct ingestObservation call. maybeParseJson silent-passthrough
rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
class) deleted. The active extractLastMessage at
src/shared/transcript-parser.ts:41-144 is the sole survivor.
Tests updated (Principle 7 — same-PR delete):
- tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
to assert discriminated-union shape; coercion-specific
scenarios collapse into { valid:false } assertions.
- tests/worker/agents/response-processor.test.ts: circuit-breaker
describe block skipped; non-XML/empty-response tests assert
fail-fast markFailed behavior.
Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.
Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.
Plan: PATHFINDER-2026-04-22/03-ingestion-path.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 05 — hook surface
Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.
- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
1..20; do curl -sf .../health && break; sleep 0.1; done` shell
retry wrappers deleted. Hook commands invoke their bun entry
point directly.
- Phase 2: src/shared/worker-utils.ts — added
executeWithWorkerFallback<T>(url, method, body) returning
T | { continue: true; reason?: string }. All 8 hook handlers
(observation, session-init, context, file-context, file-edit,
summarize, session-complete, user-message) rewritten to use
it instead of duplicating the ensureWorkerRunning →
workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
using validateBody + sessionEndSchema (z.object({sessionId})).
One-shot ingestEventBus.on('summaryStoredEvent') listener,
30 s timer, req.aborted handler — all share one cleanup so
the listener cannot leak. summarize.ts polling loop, plus
MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
memoizes SettingsDefaultsManager.loadFromFile per process.
Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
check entry; isProjectExcluded no longer referenced from
src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
across the adapters (claude-code, cursor, raw, gemini-cli,
windsurf). New AdapterRejectedInput error in
src/cli/adapters/errors.ts. Handler-level isValidCwd checks
deleted from file-edit.ts and observation.ts. hook-command.ts
catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
initAgent is idempotent. tests/hooks/context-reinjection-guard
test (validated the deleted conditional) deleted in same PR
per Principle 7.
- Phase 8: fail-loud counter at
~/.claude-mem/state/hook-failures.json. Atomic write via .tmp +
rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD setting (default 3).
On consecutive worker-unreachable count ≥ N: process.exit(2). On
success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
wrapping ensureWorkerRunning. executeWithWorkerFallback calls
the memoized version.
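The Phase 3 shared-cleanup pattern (one-shot listener, timer, and abort path all funnel through a single cleanup so the bus listener cannot leak) can be sketched as — names beyond `summaryStoredEvent` are assumptions:

```typescript
import { EventEmitter } from "node:events";

// One cleanup() shared by the success listener and the timeout path;
// an abort handler would call the same cleanup in the real route.
function waitForSummary(
  bus: EventEmitter,
  sessionId: string,
  timeoutMs: number,
): Promise<boolean> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => {
      cleanup();
      resolve(false); // caller maps this to a 504
    }, timeoutMs);
    const onStored = (event: { sessionId: string }) => {
      if (event.sessionId !== sessionId) return;
      cleanup();
      resolve(true);
    };
    function cleanup() {
      clearTimeout(timer);
      bus.off("summaryStoredEvent", onStored);
    }
    bus.on("summaryStoredEvent", onStored);
  });
}
```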
Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.
Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.
Plan: PATHFINDER-2026-04-22/05-hook-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 06 — API surface
One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted.
Failure-marking consolidated to one helper.
- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
in src/services/worker/http/middleware/validateBody.ts —
safeParse → 400 { error: 'ValidationError', issues: [...] }
on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
route file. 24 POST endpoints across SessionRoutes,
CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
LogsRoutes, SettingsRoutes now wrap with validateBody().
/api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
along with every call site. Inline coercion helpers
(coerceStringArray, coercePositiveInteger) and inline
if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
from src/services/worker/http/middleware.ts. Worker binds
127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
via fs.readFileSync; served as Buffer with text/html content
type. SKILL.md + per-operation .md files cached in
Server.ts as Map<string, string>; loadInstructionContent
helper deleted. NO fs.watch, NO TTL — process restart is the
cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
— /api/pending-queue (GET), /api/pending-queue/process (POST),
/api/pending-queue/failed (DELETE), /api/pending-queue/all
(DELETE). Helper methods that ONLY served them
(getQueueMessages, getStuckCount, getRecentlyProcessed,
clearFailed, clearAll) deleted from PendingMessageStore.
KEPT: /api/processing-status (observability), /health
(used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
GracefulShutdown now calls getSupervisor().stop() directly.
Two functions retained with clear roles:
- performGracefulShutdown — worker-side 6-step shutdown
- runShutdownCascade — supervisor-side child teardown
(process.kill(-pgid), Windows tree-kill, PID-file cleanup)
Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
failure-marking path on PendingMessageStore. Old methods
markSessionMessagesFailed and markAllSessionMessagesAbandoned
deleted along with all callers (worker-service,
SessionCompletionHandler, tests/zombie-prevention).
Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.
Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.
Plan: PATHFINDER-2026-04-22/06-api-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 07 — dead code sweep
ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.
Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments
Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
builders, ParsedObservation, ParsedSummary, ParseResult,
SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
via dynamic await import('../../../context-generator.js') in
worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
— used via dynamic await import in npx-cli/install.ts +
uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
orphan-recovery caller in worker-service.ts plus
zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
in same file.
- All Database.ts barrel re-exports — used downstream.
Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
the methods are not thin wrappers but ~900 LoC of bodies, and
two methods are documented as intentional mirrors so the
context-generator.cjs bundle stays schema-consistent without
pulling MigrationRunner. Deserves its own plan, not a sweep.
Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.
Plan: PATHFINDER-2026-04-22/07-dead-code.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove residual ProcessRegistry comment reference
Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review (P1 + 2× P2)
P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
- Added optional timeoutMs to executeWithWorkerFallback,
forwarded to workerHttpRequest.
- summarize.ts call site now passes 35_000 (5 s above server
hold window).
P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
- ResponseProcessor now calls ingestSummary({ kind: 'parsed',
sessionDbId, messageId, contentSessionId, parsed }) so the
event-emission path is single-sourced.
- ingestSummary's requireContext() resolution moved inside the
'queue' branch (the only branch that needs sessionManager /
dbManager). 'parsed' is a pure event-bus emission and
doesn't need worker-internal context — fixes mocked
ResponseProcessor unit tests that don't call
setIngestContext.
P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
- Added a Symbol.for('claude-mem/worker-fallback') brand to
WorkerFallback. isWorkerFallback now checks the brand, not
a duck-typed property name.
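The brand check can be sketched as follows — the `Symbol.for` key is quoted from above; the helper names are assumptions:

```typescript
// A registered symbol brands the sentinel. JSON payloads cannot carry
// symbol keys, so a legitimate API response shaped { continue: true }
// can never satisfy the check.
const FALLBACK_BRAND = Symbol.for("claude-mem/worker-fallback");

interface WorkerFallback {
  continue: true;
  reason?: string;
}

function makeWorkerFallback(reason?: string): WorkerFallback {
  const fallback: WorkerFallback = { continue: true, reason };
  return Object.assign(fallback, { [FALLBACK_BRAND]: true });
}

function isWorkerFallback(value: unknown): value is WorkerFallback {
  return (
    typeof value === "object" &&
    value !== null &&
    (value as Record<symbol, unknown>)[FALLBACK_BRAND] === true
  );
}
```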
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 2 (P1 + P2)
P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.
- Gate ingestSummary call on (parsed.data.skipped ||
session.lastSummaryStored). Skipped summaries are an explicit
no-op bypass and still confirm; real summaries only confirm
when storage actually wrote a row.
- Non-skipped + summaryId === null path logs a warn and lets
the server-side timeout (504) surface to the hook instead of
a false ok:true.
P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 1). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.
- Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
log instead of the misleading ENQUEUED line. No behavior
change — the duplicate is still correctly suppressed by the
DB (Principle 3); only the log surface is corrected.
- confirmProcessed is never called with the enqueue() return
value (it operates on session.processingMessageIds[] from
claimNextMessage), so no caller is broken; the visibility
fix prevents future misuse.
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 3 (P1 + 2× P2)
- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
context after SessionRoutes is constructed. setIngestContext runs
before routes exist, so transcript-watcher observations queued via
ingestObservation() had no way to auto-start the SDK generator.
Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
/api/session/end calls register one listener each and clean up on
completion, so the default-10 warning fires spuriously under normal
load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
ingestObservation() instead of duplicating skip-tool / meta /
privacy / queue logic. Single helper, matching the Plan 03 goal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)
- processor.handleToolResult: restore in-memory tool-use→tool-result
pairing via session.pendingTools for schemas (e.g. Codex) whose
tool_result events carry only tool_use_id + output. Without this,
neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
of throwing. Previously a single malformed JSON-shaped field caused
handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
for purely-glob inputs so the caller skips the watch instead of
anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
log on the returned id; the SessionManager branches on id === 0.
* fix: forward tool_use_id through ingestObservation (Greptile iter 5)
P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.
- shared.ingestObservation: forward payload.toolUseId to
queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
tool_use_id (HTTP convention) and toolUseId (JS convention) from
req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
validator doesn't rely on .passthrough() alone.
* fix: drop dead pairToolUsesByJoin, close session-end listener race
- PendingMessageStore: delete pairToolUsesByJoin. The method was never
called and its self-join semantics are structurally incompatible
with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
collapses any second row with the same pair, so a self-join can
only ever match a row to itself. In-memory pendingTools in
processor.ts remains the pairing path for split-event schemas.
- IngestEventBus: retain a short-lived (60s) recentStored map keyed
by sessionId. Populated on summaryStoredEvent emit, evicted on
consume or TTL.
- handleSessionEnd: drain the recent-events buffer before attaching
the listener. Closes the register-after-emit race where the summary
can persist between the hook's summarize POST and its session/end
POST — previously that window returned 504 after the 30s timeout.
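The drain-before-listen buffer can be sketched as below — the 60s TTL follows the message; the class and method names are assumptions (the later idempotency fix, keeping the entry until TTL eviction, is folded in):

```typescript
// Short-lived buffer of already-emitted summaryStoredEvents. The
// session-end handler consults it BEFORE attaching a listener, closing
// the register-after-emit race.
class RecentSummaryBuffer {
  private recent = new Map<string, number>();
  private ttlMs: number;

  constructor(ttlMs = 60_000) {
    this.ttlMs = ttlMs;
  }

  // Called on every summaryStoredEvent emit.
  record(sessionId: string, now = Date.now()): void {
    this.recent.set(sessionId, now);
  }

  // Idempotent: the entry survives until TTL eviction so a retried
  // Stop hook's second session/end call also resolves immediately.
  has(sessionId: string, now = Date.now()): boolean {
    const at = this.recent.get(sessionId);
    if (at === undefined) return false;
    if (now - at > this.ttlMs) {
      this.recent.delete(sessionId);
      return false;
    }
    return true;
  }
}
```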
* chore: merge origin/main into vivacious-teeth
Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).
Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
summaryStoredEvent supersedes main's SessionCompletionHandler DI
refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
reason; generator .finally() Stop-hook self-clean is a guard for a
path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
#2084) while preserving our Zod validateBody schema.
Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings
1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
in wrapHandler — synchronous exceptions would hang the client rather
than surfacing as 500s. Wrap it like every other handler.
2) processor.handleToolResult only consumed the session.pendingTools
entry when the tool_result arrived without a toolName. In the
split-schema path where tool_result carries both toolName and toolId,
the entry was never deleted and the map grew for the life of the
session. Consume the entry whenever toolId is present.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: typing cleanup and viewer tsconfig split for PR feedback
- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings (iter 2)
- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
the unscoped-drain branch that would nuke every pending/processing
row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
cached event until TTL eviction so a retried Stop hook's second
/api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
already tailed (JSONL appends fire on every line; only unknown
paths warrant a rescan).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: call finalizeSession in terminal session paths (Greptile iter 3)
terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.
Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: GC failed pending_messages rows at startup (Greptile iter 4)
Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.
Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.
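A sketch of the one-shot GC, with hypothetical table/column names and SQL shape:

```typescript
// Not a reaper: a single bounded DELETE run once at worker startup.
const FAILED_RETENTION_MS = 7 * 24 * 60 * 60 * 1000; // 7-day retention

function clearFailedOlderThan(nowEpochMs: number): { sql: string; cutoff: number } {
  return {
    sql: "DELETE FROM pending_messages WHERE status = 'failed' AND created_at_epoch < ?",
    cutoff: nowEpochMs - FAILED_RETENTION_MS,
  };
}
```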
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)
1. startSessionProcessor success branch now calls completionHandler.
finalizeSession before removeSessionImmediate. Hooks-disabled installs
(and any Stop hook that fails before POST /api/sessions/complete) no
longer leave sdk_sessions rows as status='active' forever. Idempotent
— a subsequent /api/sessions/complete is a no-op.
2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
closures that reference it (TDZ safety; safe at runtime today but
fragile if timeout ever shrinks).
3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
instead of constructing its own — prevents silent divergence if the
handler ever becomes stateful.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: stop runaway crash-recovery loop on dead sessions
Two distinct bugs were combining to keep a dead session restarting forever:
Bug 1 (uncaught "The operation was aborted."):
child_process.spawn emits 'error' asynchronously for ENOENT, EACCES,
and AbortSignal-triggered aborts. spawnSdkProcess() never attached an
'error' listener, so
any async spawn failure became uncaughtException and escaped to the
daemon-level handler. Attach an 'error' listener immediately after spawn,
before the !child.pid early-return, so async spawn errors are logged
(with errno code) and swallowed locally.
Bug 2 (sliding-window limiter never trips on slow restart cadence):
RestartGuard tripped only when restartTimestamps.length exceeded
MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
session cycling fail → restart → fail every 8s would loop forever
(consecutiveRestarts climbed past 30 in observed logs). Add a
consecutiveFailures counter that increments on every restart and resets
only on recordSuccess(). Trip when consecutive failures exceed
MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
processing in between proves the session is dead. Both guards now run in
parallel: tight loops still trip the windowed cap; slow loops trip the
consecutive-failure cap.
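The two parallel guards can be sketched as below; the constants and method names mirror the commit text, but the class body is illustrative rather than the actual RestartGuard:

```typescript
const RESTART_WINDOW_MS = 60_000;
const MAX_WINDOWED_RESTARTS = 10;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuardSketch {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  recordRestart(now = Date.now()): void {
    this.restartTimestamps.push(now);
    // Sliding window: drop timestamps older than the window.
    this.restartTimestamps = this.restartTimestamps.filter(
      (t) => now - t <= RESTART_WINDOW_MS,
    );
    this.consecutiveFailures++; // only recordSuccess() resets this
  }

  recordSuccess(): void {
    this.consecutiveFailures = 0;
  }

  isTripped(): boolean {
    // Tight loops trip the windowed cap; slow 8s-cadence loops trip the
    // consecutive-failure cap even though they never fill the window.
    return (
      this.restartTimestamps.length > MAX_WINDOWED_RESTARTS ||
      this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES
    );
  }
}
```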
Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* perf: streamline worker startup and consolidate database connections
1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)
* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations
Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.
- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
when shouldTrackProject(cwd) is false, so the observer's own hooks
cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
on observations) inline so bundled artifacts (worker-service.cjs,
context-generator.cjs) stay schema-consistent — without it, the
ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
supervisor can actually feed the observer's stdin.
Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.
* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)
Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
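The walk-back can be sketched as a small helper; the name is illustrative, only the technique matches the commit:

```typescript
// Truncate to at most maxBytes of UTF-8 without splitting a codepoint.
function truncateUtf8(input: string, maxBytes: number): string {
  const buf = Buffer.from(input, 'utf8');
  if (buf.length <= maxBytes) return input;
  let end = maxBytes;
  // Walk back over UTF-8 continuation bytes (0b10xxxxxx) so the cut lands
  // on a sequence boundary instead of producing U+FFFD on decode.
  while (end > 0 && (buf[end] & 0b1100_0000) === 0b1000_0000) end--;
  return buf.subarray(0, end).toString('utf8');
}
```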
* fix: cross-platform observer-dir containment; clarify SDK stdin pipe
claude-review feedback on PR #2124.
- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
hard-coded a POSIX separator and missed Windows backslash paths plus any
trailing-slash variance. Switched to a path.relative-based isWithin()
helper so Windows hook input under observer-sessions\\... is also excluded.
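A path.relative-based containment check along these lines (the exact helper body is a sketch, not the shipped one) avoids the hard-coded separator:

```typescript
import path from 'node:path';

// True when child is the parent directory itself or inside its subtree.
// path.relative normalizes separators and trailing slashes, so this also
// works for Windows backslash paths that startsWith() misses.
function isWithin(parent: string, child: string): boolean {
  const rel = path.relative(parent, child);
  if (rel === '') return true; // the directory itself
  return (
    rel !== '..' &&
    !rel.startsWith('..' + path.sep) && // climbs out of parent
    !path.isAbsolute(rel)               // different root/drive
  );
}
```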
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
consumes that pipe; 'ignore' would null it and the null-check below
would tear the child down on every spawn.
* fix: make Stop hook fire-and-forget; remove dead /api/session/end
The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed). Followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.
The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.
- summarize.ts: drop the /api/session/end long-poll and the trailing
/api/sessions/complete await; ~40 lines removed; unused
SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
route registration. Drop the now-unused ingestEventBus and
SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
comments that referenced the dead endpoint. The IngestEventBus is
left in place dormant (no listeners) for follow-up cleanup so this
PR stays focused on the blocker.
Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.
Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* deps: bump all dependencies to latest including majors
Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.
Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
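The handler-before-listen ordering can be sketched as below; the real code passes the Express app as the request handler, and the function name here is illustrative:

```typescript
import http from 'node:http';

// Both handlers are attached BEFORE listen() is invoked, so an 'error'
// emitted during bind (e.g. EADDRINUSE port conflict) rejects the promise
// instead of escaping as an unhandled 'error' event.
function listenSafely(
  handler: http.RequestListener,
  port: number,
  host: string,
): Promise<http.Server> {
  return new Promise((resolve, reject) => {
    const server = http.createServer(handler);
    server.on('error', reject);
    server.on('listening', () => resolve(server));
    server.listen(port, host);
  });
}
```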
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: surface real chroma errors and add deep status probe
Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.
Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.
Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.
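The shape of the deep probe can be sketched against a minimal tool-caller interface; the chroma tool names come from the commit text, while the interface, collection name, and return shape are illustrative assumptions:

```typescript
interface ToolCaller {
  callTool(req: { name: string; arguments: Record<string, unknown> }): Promise<unknown>;
}

// Round-trip a real query instead of only checking the stdio handshake,
// and surface the actual exception text rather than a canned message.
async function probeSemanticSearch(client: ToolCaller): Promise<{ ok: boolean; error?: string }> {
  try {
    await client.callTool({ name: 'chroma_list_collections', arguments: {} });
    await client.callTool({
      name: 'chroma_query_documents',
      arguments: { collection_name: 'claude-mem', query_texts: ['probe'], n_results: 1 },
    });
    return { ok: true };
  } catch (err) {
    return { ok: false, error: err instanceof Error ? err.message : String(err) };
  }
}
```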
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: rebuild worker-service bundle to match merged src
Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: address coderabbit feedback on PLAN-fix-mcp-search.md
- replace machine-specific /Users/alexnewman absolute paths with portable
<repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
1377 lines
56 KiB
TypeScript
/**
 * Worker Service - Slim Orchestrator
 *
 * Refactored from 2000-line monolith to ~300-line orchestrator.
 * Delegates to specialized modules:
 * - src/services/server/ - HTTP server, middleware, error handling
 * - src/services/infrastructure/ - Process management, health monitoring, shutdown
 * - src/services/integrations/ - IDE integrations (Cursor)
 * - src/services/worker/ - Business logic, routes, agents
 */

import path from 'path';
import { existsSync } from 'fs';
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';
import { getWorkerPort, getWorkerHost } from '../shared/worker-utils.js';
import { HOOK_TIMEOUTS } from '../shared/hook-constants.js';
import { SettingsDefaultsManager } from '../shared/SettingsDefaultsManager.js';
import { getAuthMethodDescription } from '../shared/EnvManager.js';
import { logger } from '../utils/logger.js';
import { ChromaMcpManager } from './sync/ChromaMcpManager.js';
import { ChromaSync } from './sync/ChromaSync.js';
import { configureSupervisorSignalHandlers, getSupervisor, startSupervisor } from '../supervisor/index.js';
import { sanitizeEnv } from '../supervisor/env-sanitizer.js';

// Worker spawn / Windows-cooldown helpers are defined in ./worker-spawner.ts
// so that lightweight consumers (e.g. the MCP server running under Node) can
// ensure the worker daemon is up without importing this entire module — which
// transitively pulls in the SQLite database layer via ChromaSync/DatabaseManager.
import { ensureWorkerStarted as ensureWorkerStartedShared } from './worker-spawner.js';
import { RestartGuard } from './worker/RestartGuard.js';

// Re-export for backward compatibility — canonical implementation in shared/plugin-state.ts
export { isPluginDisabledInClaudeSettings } from '../shared/plugin-state.js';
import { isPluginDisabledInClaudeSettings } from '../shared/plugin-state.js';

// Version injected at build time by esbuild define
declare const __DEFAULT_PACKAGE_VERSION__: string;
const packageVersion = typeof __DEFAULT_PACKAGE_VERSION__ !== 'undefined' ? __DEFAULT_PACKAGE_VERSION__ : '0.0.0-dev';

// Infrastructure imports
import {
  writePidFile,
  readPidFile,
  removePidFile,
  getPlatformTimeout,
  aggressiveStartupCleanup,
  runOneTimeChromaMigration,
  runOneTimeCwdRemap,
  cleanStalePidFile,
  verifyPidFileOwnership,
  spawnDaemon,
  touchPidFile
} from './infrastructure/ProcessManager.js';
import {
  isPortInUse,
  waitForHealth,
  waitForReadiness,
  waitForPortFree,
  httpShutdown
} from './infrastructure/HealthMonitor.js';
import { performGracefulShutdown } from './infrastructure/GracefulShutdown.js';
import { adoptMergedWorktrees, adoptMergedWorktreesForAllKnownRepos } from './infrastructure/WorktreeAdoption.js';

// Server imports
import { Server } from './server/Server.js';

// Integration imports
import {
  updateCursorContextForProject,
  handleCursorCommand
} from './integrations/CursorHooksInstaller.js';
import {
  handleGeminiCliCommand
} from './integrations/GeminiCliHooksInstaller.js';

// Service layer imports
import { DatabaseManager } from './worker/DatabaseManager.js';
import { SessionManager } from './worker/SessionManager.js';
import { SSEBroadcaster } from './worker/SSEBroadcaster.js';
import { SDKAgent } from './worker/SDKAgent.js';
import type { WorkerRef } from './worker/agents/types.js';
import { GeminiAgent, isGeminiSelected, isGeminiAvailable } from './worker/GeminiAgent.js';
import { OpenRouterAgent, isOpenRouterSelected, isOpenRouterAvailable } from './worker/OpenRouterAgent.js';
import { PaginationHelper } from './worker/PaginationHelper.js';
import { SettingsManager } from './worker/SettingsManager.js';
import { SearchManager } from './worker/SearchManager.js';
import { FormattingService } from './worker/FormattingService.js';
import { TimelineService } from './worker/TimelineService.js';
import { SessionEventBroadcaster } from './worker/events/SessionEventBroadcaster.js';
import { SessionCompletionHandler } from './worker/session/SessionCompletionHandler.js';
import { setIngestContext, attachIngestGeneratorStarter } from './worker/http/shared.js';
import { DEFAULT_CONFIG_PATH, DEFAULT_STATE_PATH, expandHomePath, loadTranscriptWatchConfig, writeSampleConfig } from './transcripts/config.js';
import { TranscriptWatcher } from './transcripts/watcher.js';

// HTTP route handlers
import { ViewerRoutes } from './worker/http/routes/ViewerRoutes.js';
import { SessionRoutes } from './worker/http/routes/SessionRoutes.js';
import { DataRoutes } from './worker/http/routes/DataRoutes.js';
import { SearchRoutes } from './worker/http/routes/SearchRoutes.js';
import { SettingsRoutes } from './worker/http/routes/SettingsRoutes.js';
import { LogsRoutes } from './worker/http/routes/LogsRoutes.js';
import { MemoryRoutes } from './worker/http/routes/MemoryRoutes.js';
import { CorpusRoutes } from './worker/http/routes/CorpusRoutes.js';
import { ChromaRoutes } from './worker/http/routes/ChromaRoutes.js';

// Knowledge agent services
import { CorpusStore } from './worker/knowledge/CorpusStore.js';
import { CorpusBuilder } from './worker/knowledge/CorpusBuilder.js';
import { KnowledgeAgent } from './worker/knowledge/KnowledgeAgent.js';

// Primary-path session lifecycle helpers — no reapers, no orphan sweeps.
// The SDK subprocess is spawned in its own POSIX process group via
// createSdkSpawnFactory; teardown via ensureSdkProcessExit kills the whole
// group so no descendants leak (Principle 5).
import { getSdkProcessForSession, ensureSdkProcessExit } from '../supervisor/process-registry.js';

/**
 * Build JSON status output for hook framework communication.
 * This is a pure function extracted for testability.
 *
 * @param status - 'ready' for successful startup, 'error' for failures
 * @param message - Optional error message (only included when provided)
 * @returns JSON object with continue, suppressOutput, status, and optionally message
 */
export interface StatusOutput {
  continue: true;
  suppressOutput: true;
  status: 'ready' | 'error';
  message?: string;
}

export function buildStatusOutput(status: 'ready' | 'error', message?: string): StatusOutput {
  return {
    continue: true,
    suppressOutput: true,
    status,
    ...(message && { message })
  };
}

export class WorkerService implements WorkerRef {
  private server: Server;
  private startTime: number = Date.now();
  private mcpClient: Client;

  // Initialization flags
  private mcpReady: boolean = false;
  private initializationCompleteFlag: boolean = false;
  private isShuttingDown: boolean = false;

  // Service layer
  private dbManager: DatabaseManager;
  private sessionManager: SessionManager;
  public sseBroadcaster: SSEBroadcaster;
  private sdkAgent: SDKAgent;
  private geminiAgent: GeminiAgent;
  private openRouterAgent: OpenRouterAgent;
  private paginationHelper: PaginationHelper;
  private settingsManager: SettingsManager;
  private sessionEventBroadcaster: SessionEventBroadcaster;
  private completionHandler: SessionCompletionHandler;
  private corpusStore: CorpusStore;

  // Route handlers
  private searchRoutes: SearchRoutes | null = null;

  // Chroma MCP manager (lazy - connects on first use)
  private chromaMcpManager: ChromaMcpManager | null = null;

  // Transcript watcher for Codex and other transcript-based clients
  private transcriptWatcher: TranscriptWatcher | null = null;

  // Initialization tracking
  private initializationComplete: Promise<void>;
  private resolveInitialization!: () => void;

  // AI interaction tracking for health endpoint
  private lastAiInteraction: {
    timestamp: number;
    success: boolean;
    provider: string;
    error?: string;
  } | null = null;

  constructor() {
    // Initialize the promise that will resolve when background initialization completes
    this.initializationComplete = new Promise((resolve) => {
      this.resolveInitialization = resolve;
    });

    // Initialize service layer
    this.dbManager = new DatabaseManager();
    this.sessionManager = new SessionManager(this.dbManager);
    this.sseBroadcaster = new SSEBroadcaster();
    this.sdkAgent = new SDKAgent(this.dbManager, this.sessionManager);
    this.geminiAgent = new GeminiAgent(this.dbManager, this.sessionManager);
    this.openRouterAgent = new OpenRouterAgent(this.dbManager, this.sessionManager);

    this.paginationHelper = new PaginationHelper(this.dbManager);
    this.settingsManager = new SettingsManager(this.dbManager);
    this.sessionEventBroadcaster = new SessionEventBroadcaster(this.sseBroadcaster, this);
    this.completionHandler = new SessionCompletionHandler(
      this.sessionManager,
      this.sessionEventBroadcaster,
      this.dbManager,
    );
    this.corpusStore = new CorpusStore();

    // Wire ingest helpers (plan 03 phase 0). Worker-internal callers use these
    // directly instead of HTTP-loopback into our own routes.
    setIngestContext({
      sessionManager: this.sessionManager,
      dbManager: this.dbManager,
      eventBroadcaster: this.sessionEventBroadcaster,
    });

    // Set callback for when sessions are deleted
    this.sessionManager.setOnSessionDeleted(() => {
      this.broadcastProcessingStatus();
    });

    // Initialize MCP client
    // Empty capabilities object: this client only calls tools, doesn't expose any
    this.mcpClient = new Client({
      name: 'worker-search-proxy',
      version: packageVersion
    }, { capabilities: {} });

    // Initialize HTTP server with core routes
    this.server = new Server({
      getInitializationComplete: () => this.initializationCompleteFlag,
      getMcpReady: () => this.mcpReady,
      onShutdown: () => this.shutdown(),
      onRestart: () => this.shutdown(),
      workerPath: __filename,
      getAiStatus: () => {
        let provider = 'claude';
        if (isOpenRouterSelected() && isOpenRouterAvailable()) provider = 'openrouter';
        else if (isGeminiSelected() && isGeminiAvailable()) provider = 'gemini';
        return {
          provider,
          authMethod: getAuthMethodDescription(),
          lastInteraction: this.lastAiInteraction
            ? {
                timestamp: this.lastAiInteraction.timestamp,
                success: this.lastAiInteraction.success,
                ...(this.lastAiInteraction.error && { error: this.lastAiInteraction.error }),
              }
            : null,
        };
      },
    });

    // Register route handlers
    this.registerRoutes();

    // Register signal handlers early to ensure cleanup even if start() hasn't completed
    this.registerSignalHandlers();
  }

  /**
   * Register signal handlers for graceful shutdown
   */
  private registerSignalHandlers(): void {
    configureSupervisorSignalHandlers(async () => {
      this.isShuttingDown = true;
      await this.shutdown();
    });
  }

  /**
   * Register all route handlers with the server
   */
  private registerRoutes(): void {
    // IMPORTANT: Middleware must be registered BEFORE routes (Express processes in order)

    // Register Chroma routes immediately so they bypass the initialization guard
    this.server.registerRoutes(new ChromaRoutes());

    // Early handler for /api/context/inject — fail open if not yet initialized
    this.server.app.get('/api/context/inject', async (req, res, next) => {
      if (!this.initializationCompleteFlag || !this.searchRoutes) {
        logger.warn('SYSTEM', 'Context requested before initialization complete, returning empty');
        res.status(200).json({ content: [{ type: 'text', text: '' }] });
        return;
      }

      next(); // Delegate to SearchRoutes handler
    });

    // Guard ALL /api/* routes during initialization — wait for DB with timeout
    // Exceptions: /api/health, /api/readiness, /api/version (handled by Server.ts core routes)
    // and /api/chroma/status (diagnostic endpoint)
    this.server.app.use('/api', async (req, res, next) => {
      // Bypass guard for diagnostic endpoints
      if (req.path === '/chroma/status' || req.path === '/health' || req.path === '/readiness' || req.path === '/version') {
        next();
        return;
      }

      if (this.initializationCompleteFlag) {
        next();
        return;
      }

      const timeoutMs = 120000; // 2 minutes
      const timeoutPromise = new Promise<void>((_, reject) =>
        setTimeout(() => reject(new Error('Database initialization timeout')), timeoutMs)
      );

      try {
        await Promise.race([this.initializationComplete, timeoutPromise]);
        next();
      } catch (error) {
        if (error instanceof Error) {
          logger.error('WORKER', `Request to ${req.method} ${req.path} rejected — DB not initialized`, {}, error);
        } else {
          logger.error('WORKER', `Request to ${req.method} ${req.path} rejected — DB not initialized with non-Error`, {}, new Error(String(error)));
        }
        res.status(503).json({
          error: 'Service initializing',
          message: 'Database is still initializing, please retry'
        });
        return;
      }
    });

    // Standard routes (registered AFTER guard middleware)
    this.server.registerRoutes(new ViewerRoutes(this.sseBroadcaster, this.dbManager, this.sessionManager));
    const sessionRoutes = new SessionRoutes(this.sessionManager, this.dbManager, this.sdkAgent, this.geminiAgent, this.openRouterAgent, this.sessionEventBroadcaster, this, this.completionHandler);
    this.server.registerRoutes(sessionRoutes);
    // Wire the generator-starter callback now that SessionRoutes exists.
    // `setIngestContext` ran in the constructor before routes were
    // constructed; transcript-watcher observations depend on this side-effect
    // to auto-start the SDK generator after enqueue.
    attachIngestGeneratorStarter((sessionDbId, source) =>
      sessionRoutes.ensureGeneratorRunning(sessionDbId, source),
    );
    this.server.registerRoutes(new DataRoutes(this.paginationHelper, this.dbManager, this.sessionManager, this.sseBroadcaster, this, this.startTime));
    this.server.registerRoutes(new SettingsRoutes(this.settingsManager));
    this.server.registerRoutes(new LogsRoutes());
    this.server.registerRoutes(new MemoryRoutes(this.dbManager, 'claude-mem'));
  }

  /**
   * Start the worker service
   */
  async start(): Promise<void> {
    const port = getWorkerPort();
    const host = getWorkerHost();

    await startSupervisor();

    // Start HTTP server FIRST - make it available immediately
    await this.server.listen(port, host);

    // Worker writes its own PID - reliable on all platforms
    // This happens after listen() succeeds, ensuring the worker is actually ready
    // On Windows, the spawner's PID is cmd.exe (useless), so worker must write its own
    writePidFile({
      pid: process.pid,
      port,
      startedAt: new Date().toISOString()
    });

    getSupervisor().registerProcess('worker', {
      pid: process.pid,
      type: 'worker',
      startedAt: new Date().toISOString()
    });

    logger.info('SYSTEM', 'Worker started', { host, port, pid: process.pid });

    // Do slow initialization in background (non-blocking)
    this.initializeBackground().catch((error) => {
      logger.error('SYSTEM', 'Background initialization failed', {}, error as Error);
    });
  }

  /**
   * Background initialization - runs after HTTP server is listening
   */
  private async initializeBackground(): Promise<void> {
    try {
      logger.info('WORKER', 'Background initialization starting...');
      await aggressiveStartupCleanup();

      // Load mode configuration
      const { ModeManager } = await import('./domain/ModeManager.js');
      const { SettingsDefaultsManager } = await import('../shared/SettingsDefaultsManager.js');
      const { USER_SETTINGS_PATH } = await import('../shared/paths.js');

      const settings = SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH);

      const modeId = settings.CLAUDE_MEM_MODE;
      ModeManager.getInstance().loadMode(modeId);
      logger.info('SYSTEM', `Mode loaded: ${modeId}`);

      // One-time chroma wipe for users upgrading from versions with duplicate worker bugs.
      if (settings.CLAUDE_MEM_MODE === 'local' || !settings.CLAUDE_MEM_MODE) {
        logger.info('WORKER', 'Checking for one-time Chroma migration...');
        runOneTimeChromaMigration();
      }

      // One-time remap of pre-worktree project names using pending_messages.cwd.
      logger.info('WORKER', 'Checking for one-time CWD remap...');
      runOneTimeCwdRemap();

      // Stamp merged worktrees (non-blocking, fire-and-forget)
      logger.info('WORKER', 'Adopting merged worktrees (background)...');
      adoptMergedWorktreesForAllKnownRepos({}).then(adoptions => {
        if (adoptions) {
          for (const adoption of adoptions) {
            if (adoption.adoptedObservations > 0 || adoption.adoptedSummaries > 0 || adoption.chromaUpdates > 0) {
              logger.info('SYSTEM', 'Merged worktrees adopted in background', adoption);
            }
            if (adoption.errors.length > 0) {
              logger.warn('SYSTEM', 'Worktree adoption had per-branch errors', {
                repoPath: adoption.repoPath,
                errors: adoption.errors
              });
            }
          }
        }
      }).catch(err => {
        logger.error('WORKER', 'Worktree adoption failed (background)', {}, err instanceof Error ? err : new Error(String(err)));
      });

      // Initialize ChromaMcpManager only if Chroma is enabled
      const chromaEnabled = settings.CLAUDE_MEM_CHROMA_ENABLED !== 'false';
      if (chromaEnabled) {
        this.chromaMcpManager = ChromaMcpManager.getInstance();
        logger.info('SYSTEM', 'ChromaMcpManager initialized (lazy - connects on first use)');
      } else {
        logger.info('SYSTEM', 'Chroma disabled via CLAUDE_MEM_CHROMA_ENABLED=false, skipping ChromaMcpManager');
      }

      logger.info('WORKER', 'Initializing database manager...');
      await this.dbManager.initialize();

      // One-shot GC for terminally-failed rows
      try {
        logger.info('WORKER', 'Running startup GC for pending messages...');
        const { PendingMessageStore } = await import('./sqlite/PendingMessageStore.js');
        const pendingStore = new PendingMessageStore(this.dbManager.getSessionStore().db, 3);
        const cleared = pendingStore.clearFailedOlderThan(7 * 24 * 60 * 60 * 1000);
        if (cleared > 0) {
          logger.info('QUEUE', 'Startup GC cleared old failed pending_messages rows', { cleared });
        }
      } catch (err) {
        logger.warn('QUEUE', 'Startup GC for failed pending_messages rows failed', {}, err instanceof Error ? err : undefined);
      }

      // Initialize search services
      logger.info('WORKER', 'Initializing search services...');
      const formattingService = new FormattingService();
      const timelineService = new TimelineService();
      const searchManager = new SearchManager(
        this.dbManager.getSessionSearch(),
        this.dbManager.getSessionStore(),
        this.dbManager.getChromaSync(),
        formattingService,
        timelineService
      );
      this.searchRoutes = new SearchRoutes(searchManager);
      this.server.registerRoutes(this.searchRoutes);
      logger.info('WORKER', 'SearchManager initialized and search routes registered');

      // Register corpus routes (knowledge agents) — needs SearchOrchestrator from search module
      const { SearchOrchestrator } = await import('./worker/search/SearchOrchestrator.js');
      const corpusSearchOrchestrator = new SearchOrchestrator(
        this.dbManager.getSessionSearch(),
        this.dbManager.getSessionStore(),
        this.dbManager.getChromaSync()
      );
      const corpusBuilder = new CorpusBuilder(
        this.dbManager.getSessionStore(),
        corpusSearchOrchestrator,
        this.corpusStore
      );
      const knowledgeAgent = new KnowledgeAgent(this.corpusStore);
      this.server.registerRoutes(new CorpusRoutes(this.corpusStore, corpusBuilder, knowledgeAgent));
      logger.info('WORKER', 'CorpusRoutes registered');

      // DB and search are ready — mark initialization complete so hooks can proceed.
      this.initializationCompleteFlag = true;
      this.resolveInitialization();
      logger.info('SYSTEM', 'Core initialization complete (DB + search ready)');

      await this.startTranscriptWatcher(settings);

      // Auto-backfill Chroma for all projects if out of sync with SQLite (fire-and-forget)
      if (this.chromaMcpManager) {
        ChromaSync.backfillAllProjects(this.dbManager.getSessionStore()).then(() => {
          logger.info('CHROMA_SYNC', 'Backfill check complete for all projects');
        }).catch(error => {
          logger.error('CHROMA_SYNC', 'Backfill failed (non-blocking)', {}, error as Error);
        });
      }

      // Mark MCP as externally ready once the bundled stdio server binary exists.
      const mcpServerPath = path.join(__dirname, 'mcp-server.cjs');
      this.mcpReady = existsSync(mcpServerPath);

      // Best-effort loopback MCP self-check (non-blocking, fire-and-forget)
      this.runMcpSelfCheck(mcpServerPath).catch(err => {
        logger.debug('WORKER', 'MCP self-check failed (non-fatal)', { error: err.message });
      });

      return;
    } catch (error) {
      // Background initialization failed - log and let worker fail health checks
      logger.error('SYSTEM', 'Background initialization failed', {}, error instanceof Error ? error : undefined);
    }
  }

  /**
   * Run a best-effort loopback MCP self-check to verify the bundled server can start.
   * This is entirely diagnostic and does not block worker availability.
   */
  private async runMcpSelfCheck(mcpServerPath: string): Promise<void> {
    try {
      getSupervisor().assertCanSpawn('mcp server');
      const transport = new StdioClientTransport({
        command: process.execPath,
        args: [mcpServerPath],
        env: Object.fromEntries(
          Object.entries(sanitizeEnv(process.env)).filter(([, value]) => value !== undefined)
        ) as Record<string, string>
      });

      const MCP_INIT_TIMEOUT_MS = 60000; // 1 minute is plenty for local check
      const mcpConnectionPromise = this.mcpClient.connect(transport);

      const timeoutPromise = new Promise<never>((_, reject) => {
        setTimeout(
          () => reject(new Error('MCP connection timeout')),
          MCP_INIT_TIMEOUT_MS
        );
      });

      await Promise.race([mcpConnectionPromise, timeoutPromise]);
      logger.info('WORKER', 'MCP loopback self-check connected successfully');

      // Cleanup
      await transport.close();
    } catch (error) {
      logger.warn('WORKER', 'MCP loopback self-check failed', {
        error: error instanceof Error ? error.message : String(error)
      });
    }
  }

/**
|
|
* Start transcript watcher for Codex and other transcript-based clients.
|
|
* This is intentionally non-fatal so Claude hooks remain usable even if
|
|
* transcript ingestion is misconfigured.
|
|
*/
|
|
private async startTranscriptWatcher(settings: ReturnType<typeof SettingsDefaultsManager.loadFromFile>): Promise<void> {
|
|
const transcriptsEnabled = settings.CLAUDE_MEM_TRANSCRIPTS_ENABLED !== 'false';
|
|
if (!transcriptsEnabled) {
|
|
logger.info('TRANSCRIPT', 'Transcript watcher disabled via CLAUDE_MEM_TRANSCRIPTS_ENABLED=false');
|
|
return;
|
|
}
|
|
|
|
const configPath = settings.CLAUDE_MEM_TRANSCRIPTS_CONFIG_PATH || DEFAULT_CONFIG_PATH;
|
|
const resolvedConfigPath = expandHomePath(configPath);
|
|
|
|
// Ensure sample config exists (setup, outside try)
|
|
if (!existsSync(resolvedConfigPath)) {
|
|
writeSampleConfig(configPath);
|
|
logger.info('TRANSCRIPT', 'Created default transcript watch config', {
|
|
configPath: resolvedConfigPath
|
|
});
|
|
}
|
|
|
|
const transcriptConfig = loadTranscriptWatchConfig(configPath);
|
|
const statePath = expandHomePath(transcriptConfig.stateFile ?? DEFAULT_STATE_PATH);
|
|
|
|
try {
|
|
this.transcriptWatcher = new TranscriptWatcher(transcriptConfig, statePath);
|
|
await this.transcriptWatcher.start();
|
|
} catch (error) {
|
|
this.transcriptWatcher?.stop();
|
|
this.transcriptWatcher = null;
|
|
if (error instanceof Error) {
|
|
logger.error('WORKER', 'Failed to start transcript watcher (continuing without Codex ingestion)', {
|
|
configPath: resolvedConfigPath
|
|
}, error);
|
|
} else {
|
|
logger.error('WORKER', 'Failed to start transcript watcher with non-Error (continuing without Codex ingestion)', {
|
|
configPath: resolvedConfigPath
|
|
}, new Error(String(error)));
|
|
}
|
|
// [ANTI-PATTERN IGNORED]: Transcript watcher is intentionally non-fatal so Claude hooks remain usable even if transcript ingestion is misconfigured
|
|
return;
|
|
}
|
|
logger.info('TRANSCRIPT', 'Transcript watcher started', {
|
|
configPath: resolvedConfigPath,
|
|
statePath,
|
|
watches: transcriptConfig.watches.length
|
|
});
|
|
}
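The method above resolves `~`-prefixed paths through `expandHomePath` before checking existence. A sketch of what such a tilde-expanding helper plausibly does; the real implementation in this codebase may differ, and the name here is `expandHome` to avoid implying it is the actual function:

```typescript
import * as os from 'node:os';
import * as path from 'node:path';

// Hypothetical sketch of tilde expansion: only a leading "~" refers to the
// home directory; absolute and relative paths pass through unchanged.
function expandHome(p: string): string {
  if (p === '~') return os.homedir();
  if (p.startsWith('~/')) return path.join(os.homedir(), p.slice(2));
  return p;
}
```
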

  /**
   * Get the appropriate agent based on provider settings.
   * Same logic as SessionRoutes.getActiveAgent() for consistency.
   */
  private getActiveAgent(): SDKAgent | GeminiAgent | OpenRouterAgent {
    if (isOpenRouterSelected() && isOpenRouterAvailable()) {
      return this.openRouterAgent;
    }
    if (isGeminiSelected() && isGeminiAvailable()) {
      return this.geminiAgent;
    }
    return this.sdkAgent;
  }
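The selection above is a "first selected-and-available provider wins" chain with the SDK agent as the unconditional fallback. That shape can be sketched generically; the names here are illustrative only, not from this codebase:

```typescript
// Hypothetical generic provider chain mirroring getActiveAgent()'s shape.
interface ProviderChoice<A> {
  selected: () => boolean;
  available: () => boolean;
  agent: A;
}

function pickAgent<A>(choices: ProviderChoice<A>[], fallback: A): A {
  for (const c of choices) {
    // Both conditions must hold: a selected-but-unavailable provider is
    // skipped so work still drains via the fallback.
    if (c.selected() && c.available()) return c.agent;
  }
  return fallback;
}
```
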

  /**
   * Start a session processor
   * On SDK resume failure (terminated session), falls back to Gemini/OpenRouter if available,
   * otherwise marks messages abandoned and removes session so queue does not grow unbounded.
   */
  private startSessionProcessor(
    session: ReturnType<typeof this.sessionManager.getSession>,
    source: string
  ): void {
    if (!session) return;

    const sid = session.sessionDbId;
    const agent = this.getActiveAgent();
    const providerName = agent.constructor.name;

    // Before starting generator, check if AbortController is already aborted
    // This can happen after a previous generator was aborted but the session still has pending work
    if (session.abortController.signal.aborted) {
      logger.debug('SYSTEM', 'Replacing aborted AbortController before starting generator', {
        sessionId: session.sessionDbId
      });
      session.abortController = new AbortController();
    }

    // Track whether generator failed with an unrecoverable error to prevent infinite restart loops
    let hadUnrecoverableError = false;
    let sessionFailed = false;

    logger.info('SYSTEM', `Starting generator (${source}) using ${providerName}`, { sessionId: sid });

    // Track generator activity for stale detection (Issue #1099)
    session.lastGeneratorActivity = Date.now();

    session.generatorPromise = agent.startSession(session, this)
      .catch(async (error: unknown) => {
        const errorMessage = (error as Error)?.message || '';

        // Detect unrecoverable errors that should NOT trigger restart
        // These errors will fail immediately on retry, causing infinite loops
        const unrecoverablePatterns = [
          'Claude executable not found',
          'CLAUDE_CODE_PATH',
          'ENOENT',
          'spawn',
          'Invalid API key',
          'API_KEY_INVALID',
          'API key expired',
          'API key not valid',
          'PERMISSION_DENIED',
          'Gemini API error: 400',
          'Gemini API error: 401',
          'Gemini API error: 403',
          'FOREIGN KEY constraint failed',
        ];
        if (unrecoverablePatterns.some(pattern => errorMessage.includes(pattern))) {
          hadUnrecoverableError = true;
          this.lastAiInteraction = {
            timestamp: Date.now(),
            success: false,
            provider: providerName,
            error: errorMessage,
          };
          logger.error('SDK', 'Unrecoverable generator error - will NOT restart', {
            sessionId: session.sessionDbId,
            project: session.project,
            errorMessage
          });
          return;
        }

        // Fallback for terminated SDK sessions (provider abstraction)
        if (this.isSessionTerminatedError(error)) {
          logger.warn('SDK', 'SDK resume failed, falling back to standalone processing', {
            sessionId: session.sessionDbId,
            project: session.project,
            reason: error instanceof Error ? error.message : String(error)
          });
          return this.runFallbackForTerminatedSession(session, error);
        }

        // Detect stale resume failures - SDK session context was lost
        const staleResumePatterns = ['aborted by user', 'No conversation found'];
        if (staleResumePatterns.some(p => errorMessage.includes(p))
          && session.memorySessionId) {
          logger.warn('SDK', 'Detected stale resume failure, clearing memorySessionId for fresh start', {
            sessionId: session.sessionDbId,
            memorySessionId: session.memorySessionId,
            errorMessage
          });
          // Clear stale memorySessionId and force fresh init on next attempt
          this.dbManager.getSessionStore().updateMemorySessionId(session.sessionDbId, null);
          session.memorySessionId = null;
          session.forceInit = true;
        }
        logger.error('SDK', 'Session generator failed', {
          sessionId: session.sessionDbId,
          project: session.project,
          provider: providerName
        }, error as Error);
        sessionFailed = true;
        this.lastAiInteraction = {
          timestamp: Date.now(),
          success: false,
          provider: providerName,
          error: errorMessage,
        };
        throw error;
      })
      .finally(async () => {
        // Primary-path subprocess teardown — process-group kill ensures any
        // SDK descendants are reaped too (Principle 5).
        const trackedProcess = getSdkProcessForSession(session.sessionDbId);
        if (trackedProcess && trackedProcess.process.exitCode === null) {
          await ensureSdkProcessExit(trackedProcess, 5000);
        }

        session.generatorPromise = null;

        // Record successful AI interaction if no error occurred
        if (!sessionFailed && !hadUnrecoverableError) {
          this.lastAiInteraction = {
            timestamp: Date.now(),
            success: true,
            provider: providerName,
          };
        }

        // Do NOT restart after unrecoverable errors - prevents infinite loops
        if (hadUnrecoverableError) {
          this.terminateSession(session.sessionDbId, 'unrecoverable_error');
          return;
        }

        const pendingStore = this.sessionManager.getPendingMessageStore();

        // Check if there's pending work that needs processing with a fresh AbortController
        const pendingCount = pendingStore.getPendingCount(session.sessionDbId);

        // Idle timeout means no new work arrived for 3 minutes - don't restart
        // But check pendingCount first: a message may have arrived between idle
        // abort and .finally(), and we must not abandon it
        if (session.idleTimedOut) {
          session.idleTimedOut = false; // Reset flag
          if (pendingCount === 0) {
            this.terminateSession(session.sessionDbId, 'idle_timeout');
            return;
          }
          // Fall through to pending-work restart below
        }
        if (pendingCount > 0) {
          // Windowed restart guard: only blocks tight-loop restarts, not spread-out ones (#2053)
          if (!session.restartGuard) session.restartGuard = new RestartGuard();
          const restartAllowed = session.restartGuard.recordRestart();
          session.consecutiveRestarts = (session.consecutiveRestarts || 0) + 1; // Keep for logging

          if (!restartAllowed) {
            logger.error('SYSTEM', 'Restart guard tripped: session is dead, terminating', {
              sessionId: session.sessionDbId,
              pendingCount,
              restartsInWindow: session.restartGuard.restartsInWindow,
              windowMs: session.restartGuard.windowMs,
              maxRestarts: session.restartGuard.maxRestarts,
              consecutiveFailures: session.restartGuard.consecutiveFailuresSinceSuccess,
              maxConsecutiveFailures: session.restartGuard.maxConsecutiveFailures
            });
            session.consecutiveRestarts = 0;
            this.terminateSession(session.sessionDbId, 'max_restarts_exceeded');
            return;
          }

          logger.info('SYSTEM', 'Pending work remains after generator exit, restarting with fresh AbortController', {
            sessionId: session.sessionDbId,
            pendingCount,
            attempt: session.consecutiveRestarts
          });
          // Reset AbortController for restart
          session.abortController = new AbortController();
          // Restart processor
          this.startSessionProcessor(session, 'pending-work-restart');
          this.broadcastProcessingStatus();
        } else {
          // Successful completion with no pending work — finalize then drop
          // in-memory state. finalizeSession flips sdk_sessions.status to
          // 'completed', drains orphaned pendings, broadcasts; idempotent so
          // the later POST /api/sessions/complete from the Stop hook is a
          // no-op. Without this, hooks-disabled installs (and any session
          // whose Stop hook fails before /api/sessions/complete) leave the
          // DB row permanently 'active'.
          session.restartGuard?.recordSuccess();
          session.consecutiveRestarts = 0;
          this.completionHandler.finalizeSession(session.sessionDbId);
          this.sessionManager.removeSessionImmediate(session.sessionDbId);
        }
      });
  }
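The restart path above relies on a `RestartGuard` exposing `recordRestart()`, `recordSuccess()`, and the fields logged when the guard trips. A sketch of a windowed guard matching that interface; the constants and the sliding-window implementation are assumptions, and the real `RestartGuard` may differ:

```typescript
// Hypothetical windowed restart guard: trips on tight restart loops while
// allowing restarts that are spread out over time.
class RestartGuardSketch {
  readonly windowMs = 60_000;           // assumed sliding-window size
  readonly maxRestarts = 5;             // assumed cap inside one window
  readonly maxConsecutiveFailures = 20; // assumed absolute cap regardless of spacing
  consecutiveFailuresSinceSuccess = 0;
  private restartTimestamps: number[] = [];

  get restartsInWindow(): number {
    const cutoff = Date.now() - this.windowMs;
    return this.restartTimestamps.filter(t => t >= cutoff).length;
  }

  /** Record a restart attempt; returns false when the guard should trip. */
  recordRestart(): boolean {
    const now = Date.now();
    this.restartTimestamps = this.restartTimestamps.filter(t => t >= now - this.windowMs);
    this.restartTimestamps.push(now);
    this.consecutiveFailuresSinceSuccess++;
    return this.restartTimestamps.length <= this.maxRestarts
      && this.consecutiveFailuresSinceSuccess <= this.maxConsecutiveFailures;
  }

  /** A clean generator exit resets the consecutive-failure counter. */
  recordSuccess(): void {
    this.consecutiveFailuresSinceSuccess = 0;
  }
}
```
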

  /**
   * Match errors that indicate the Claude Code process/session is gone (resume impossible).
   * Used to trigger graceful fallback instead of leaving pending messages stuck forever.
   *
   * These patterns come from the Claude SDK's ProcessTransport and related internals.
   * The SDK does not export typed error classes, so string matching on normalized
   * messages is the only reliable detection method. Each pattern corresponds to a
   * specific SDK failure mode:
   * - 'process aborted by user': user cancelled the Claude Code session
   * - 'processtransport': transport layer disconnected
   * - 'not ready for writing': stdio pipe to Claude process is closed
   * - 'session generator failed': wrapper error from our own agent layer
   * - 'claude code process': process exited or was killed
   */
  private static readonly SESSION_TERMINATED_PATTERNS = [
    'process aborted by user',
    'processtransport',
    'not ready for writing',
    'session generator failed',
    'claude code process',
  ] as const;

  private isSessionTerminatedError(error: unknown): boolean {
    const msg = error instanceof Error ? error.message : String(error);
    const normalized = msg.toLowerCase();
    return WorkerService.SESSION_TERMINATED_PATTERNS.some(
      pattern => normalized.includes(pattern)
    );
  }
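The predicate above is the whole detection mechanism: lowercase the message once, then substring-match against lowercase patterns, so detection works regardless of the SDK's casing and for non-Error throwables. The same logic, factored out as a standalone sketch with an abbreviated pattern list for illustration:

```typescript
// Standalone sketch of the normalized substring matching used above.
// The pattern list here is abbreviated, not the full production set.
const TERMINATED_PATTERNS = ['process aborted by user', 'processtransport', 'claude code process'];

function isTerminated(error: unknown): boolean {
  const msg = error instanceof Error ? error.message : String(error);
  // Normalize case once so the patterns can stay lowercase.
  const normalized = msg.toLowerCase();
  return TERMINATED_PATTERNS.some(p => normalized.includes(p));
}
```
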

  /**
   * When SDK resume fails due to terminated session: try Gemini then OpenRouter to drain
   * pending messages; if no fallback available, mark messages abandoned and remove session.
   */
  private async runFallbackForTerminatedSession(
    session: ReturnType<typeof this.sessionManager.getSession>,
    _originalError: unknown
  ): Promise<void> {
    if (!session) return;

    const sessionDbId = session.sessionDbId;

    // Fallback agents need memorySessionId for storeObservations
    if (!session.memorySessionId) {
      const syntheticId = `fallback-${sessionDbId}-${Date.now()}`;
      session.memorySessionId = syntheticId;
      this.dbManager.getSessionStore().updateMemorySessionId(sessionDbId, syntheticId);
    }

    if (isGeminiAvailable()) {
      try {
        await this.geminiAgent.startSession(session, this);
        return;
      } catch (e) {
        // [ANTI-PATTERN IGNORED]: Fallback chain by design — Gemini failure falls through to OpenRouter attempt
        if (e instanceof Error) {
          logger.warn('WORKER', 'Fallback Gemini failed, trying OpenRouter', {
            sessionId: sessionDbId,
          });
          logger.error('WORKER', 'Gemini fallback error detail', { sessionId: sessionDbId }, e);
        } else {
          logger.error('WORKER', 'Gemini fallback failed with non-Error', { sessionId: sessionDbId }, new Error(String(e)));
        }
      }
    }

    if (isOpenRouterAvailable()) {
      try {
        await this.openRouterAgent.startSession(session, this);
        return;
      } catch (e) {
        // [ANTI-PATTERN IGNORED]: Last fallback in chain — failure falls through to message abandonment, which is the designed terminal behavior
        if (e instanceof Error) {
          logger.error('WORKER', 'Fallback OpenRouter failed, will abandon messages', { sessionId: sessionDbId }, e);
        } else {
          logger.error('WORKER', 'Fallback OpenRouter failed with non-Error, will abandon messages', { sessionId: sessionDbId }, new Error(String(e)));
        }
      }
    }

    // No fallback or both failed: mark session completed in DB (drain pending
    // + broadcast via finalizeSession, idempotent) then drop in-memory state.
    // Without this, sdk_sessions.status stays 'active' forever — the deleted
    // reapStaleSessions interval was the only prior backstop.
    this.completionHandler.finalizeSession(sessionDbId);
    this.sessionManager.removeSessionImmediate(sessionDbId);
  }
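The method above is a fixed fallback chain: try each provider in order, return on the first success, and run a terminal action when every attempt fails. That control flow can be sketched generically; the names here are illustrative, not from this codebase:

```typescript
// Hypothetical generalization of the Gemini -> OpenRouter -> abandon chain:
// try async steps in order, return the first success, else run the terminal action.
async function tryInOrder<T>(
  steps: Array<() => Promise<T>>,
  onAllFailed: () => T
): Promise<T> {
  for (const step of steps) {
    try {
      return await step();
    } catch {
      // Deliberate fall-through to the next step, mirroring the chain above.
    }
  }
  return onAllFailed();
}
```
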

  /**
   * Terminate a session that will not restart.
   * Enforces the restart-or-terminate invariant: every generator exit
   * must either call startSessionProcessor() or terminateSession().
   * No zombie sessions allowed.
   *
   * GENERATOR EXIT INVARIANT:
   * .finally() → restart? → startSessionProcessor()
   * no? → terminateSession()
   */
  private terminateSession(sessionDbId: number, reason: string): void {
    logger.info('SYSTEM', 'Session terminated', { sessionId: sessionDbId, reason });

    // finalizeSession marks sdk_sessions.status='completed', drains pending
    // messages, and broadcasts. Idempotent. Without this, wall-clock-limited
    // and unrecoverable-error paths leave DB rows as 'active' forever.
    this.completionHandler.finalizeSession(sessionDbId);

    // removeSessionImmediate fires onSessionDeletedCallback → broadcastProcessingStatus()
    this.sessionManager.removeSessionImmediate(sessionDbId);
  }

  /**
   * Process pending session queues
   */
  async processPendingQueues(sessionLimit: number = 10): Promise<{
    totalPendingSessions: number;
    sessionsStarted: number;
    sessionsSkipped: number;
    startedSessionIds: number[];
  }> {
    const { PendingMessageStore } = await import('./sqlite/PendingMessageStore.js');
    const pendingStore = new PendingMessageStore(this.dbManager.getSessionStore().db, 3);
    const sessionStore = this.dbManager.getSessionStore();

    // Clean up stale 'active' sessions before processing
    // Sessions older than 6 hours without activity are likely orphaned
    const STALE_SESSION_THRESHOLD_MS = 6 * 60 * 60 * 1000;
    const staleThreshold = Date.now() - STALE_SESSION_THRESHOLD_MS;

    const staleSessionIds = sessionStore.db.prepare(`
      SELECT id FROM sdk_sessions
      WHERE status = 'active' AND started_at_epoch < ?
    `).all(staleThreshold) as { id: number }[];

    if (staleSessionIds.length > 0) {
      const ids = staleSessionIds.map(r => r.id);
      const placeholders = ids.map(() => '?').join(',');
      const now = Date.now();

      try {
        sessionStore.db.prepare(`
          UPDATE sdk_sessions
          SET status = 'failed', completed_at_epoch = ?
          WHERE id IN (${placeholders})
        `).run(now, ...ids);
        logger.info('SYSTEM', `Marked ${ids.length} stale sessions as failed`);
      } catch (error) {
        // [ANTI-PATTERN IGNORED]: Stale session cleanup is best-effort; pending queue processing below must still proceed
        if (error instanceof Error) {
          logger.error('WORKER', 'Failed to mark stale sessions as failed', { staleCount: ids.length }, error);
        } else {
          logger.error('WORKER', 'Failed to mark stale sessions as failed with non-Error', { staleCount: ids.length }, new Error(String(error)));
        }
      }

      try {
        const msgResult = sessionStore.db.prepare(`
          UPDATE pending_messages
          SET status = 'failed', failed_at_epoch = ?
          WHERE status = 'pending'
          AND session_db_id IN (${placeholders})
        `).run(now, ...ids);
        if (msgResult.changes > 0) {
          logger.info('SYSTEM', `Marked ${msgResult.changes} pending messages from stale sessions as failed`);
        }
      } catch (error) {
        // [ANTI-PATTERN IGNORED]: Pending message cleanup is best-effort; queue processing below must still proceed
        if (error instanceof Error) {
          logger.error('WORKER', 'Failed to clean up stale pending messages', { staleCount: ids.length }, error);
        } else {
          logger.error('WORKER', 'Failed to clean up stale pending messages with non-Error', { staleCount: ids.length }, new Error(String(error)));
        }
      }
    }

    const orphanedSessionIds = pendingStore.getSessionsWithPendingMessages();

    const result = {
      totalPendingSessions: orphanedSessionIds.length,
      sessionsStarted: 0,
      sessionsSkipped: 0,
      startedSessionIds: [] as number[]
    };

    if (orphanedSessionIds.length === 0) return result;

    logger.info('SYSTEM', `Processing up to ${sessionLimit} of ${orphanedSessionIds.length} pending session queues`);

    for (const sessionDbId of orphanedSessionIds) {
      if (result.sessionsStarted >= sessionLimit) break;

      const existingSession = this.sessionManager.getSession(sessionDbId);
      if (existingSession?.generatorPromise) {
        result.sessionsSkipped++;
        continue;
      }

      try {
        const session = this.sessionManager.initializeSession(sessionDbId);
        this.startSessionProcessor(session, 'startup-recovery');
        result.sessionsStarted++;
        result.startedSessionIds.push(sessionDbId);
      } catch (error) {
        if (error instanceof Error) {
          logger.error('WORKER', `Failed to initialize/start session ${sessionDbId}`, { sessionDbId }, error);
        } else {
          logger.error('WORKER', `Failed to initialize/start session ${sessionDbId} with non-Error`, { sessionDbId }, new Error(String(error)));
        }
        result.sessionsSkipped++;
        // [ANTI-PATTERN IGNORED]: Per-session failure must not abort the loop; other sessions may still be recoverable
        continue;
      }

      logger.info('SYSTEM', `Starting processor for session ${sessionDbId}`, {
        project: this.sessionManager.getSession(sessionDbId)?.project,
        pendingCount: pendingStore.getPendingCount(sessionDbId)
      });

      await new Promise(resolve => setTimeout(resolve, 100));
    }

    return result;
  }
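The stale-session cleanup above builds a parameterized `IN (...)` clause by generating one `?` placeholder per id, keeping the ids as bound parameters rather than interpolating values into SQL. The construction as a standalone helper sketch; note that SQLite enforces a bound-parameter limit (999 in older builds), so very large id sets would need chunking, which is not shown here:

```typescript
// Sketch of the placeholder construction used above: one "?" per value,
// so the values themselves travel as bound parameters, never as SQL text.
function buildInClause(column: string, count: number): string {
  if (count <= 0) throw new Error('IN clause needs at least one value');
  const placeholders = Array.from({ length: count }, () => '?').join(',');
  return `${column} IN (${placeholders})`;
}
```
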

  /**
   * Shutdown the worker service
   */
  async shutdown(): Promise<void> {
    if (this.transcriptWatcher) {
      this.transcriptWatcher.stop();
      this.transcriptWatcher = null;
      logger.info('TRANSCRIPT', 'Transcript watcher stopped');
    }

    await performGracefulShutdown({
      server: this.server.getHttpServer(),
      sessionManager: this.sessionManager,
      mcpClient: this.mcpClient,
      dbManager: this.dbManager,
      chromaMcpManager: this.chromaMcpManager || undefined
    });
  }

  /**
   * Broadcast processing status change to SSE clients
   */
  broadcastProcessingStatus(): void {
    const queueDepth = this.sessionManager.getTotalActiveWork();
    const isProcessing = queueDepth > 0;
    const activeSessions = this.sessionManager.getActiveSessionCount();

    logger.info('WORKER', 'Broadcasting processing status', {
      isProcessing,
      queueDepth,
      activeSessions
    });

    this.sseBroadcaster.broadcast({
      type: 'processing_status',
      isProcessing,
      queueDepth
    });
  }
}

// ============================================================================
// Reusable Worker Startup Logic
// ============================================================================

/**
 * Ensures the worker is started and healthy.
 *
 * Thin wrapper around the canonical implementation in ./worker-spawner.ts.
 *
 * `__filename` is forwarded as the worker script path because, in the CJS
 * bundle that ships to users, `__filename` always resolves to the compiled
 * `worker-service.cjs` itself — which is exactly the script the spawner
 * needs to relaunch as a detached daemon. The MCP server (a separate Node
 * bundle) cannot rely on its own `__filename` because that would point at
 * `mcp-server.cjs`, so it computes the worker path explicitly via
 * `dirname(__filename) + 'worker-service.cjs'` instead.
 *
 * @param port - The TCP port (used for port-in-use checks and daemon spawn)
 * @returns true if worker is healthy (existing or newly started), false on failure
 */
export async function ensureWorkerStarted(port: number): Promise<boolean> {
  return ensureWorkerStartedShared(port, __filename);
}
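The spawner that this wrapper delegates to relaunches the script as a detached daemon: `detached: true` puts the child in its own process group (so it survives the parent and can later be reaped with a group kill), `stdio: 'ignore'` avoids inherited pipes, and `unref()` lets the parent exit. A sketch of that pattern; the real logic lives in ./worker-spawner.ts and may differ, and the `CLAUDE_MEM_PORT` env var name here is an assumption:

```typescript
import { spawn } from 'node:child_process';

// Hypothetical sketch of the detached-daemon spawn described above.
function spawnDetachedDaemon(scriptPath: string, port: number): number | undefined {
  const child = spawn(process.execPath, [scriptPath, '--daemon'], {
    detached: true,  // new process group: survives parent exit, enables kill(-pgid)
    stdio: 'ignore', // no inherited stdio pipes tying the child to the parent
    env: { ...process.env, CLAUDE_MEM_PORT: String(port) }, // assumed variable name
  });
  child.unref(); // let the parent exit without waiting on the child
  return child.pid;
}
```
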

// ============================================================================
// CLI Entry Point
// ============================================================================

async function main() {
  const command = process.argv[2];

  // Early exit if plugin is disabled in Claude Code settings (#781).
  // Only gate hook-initiated commands; CLI management (stop/status) still works.
  const hookInitiatedCommands = ['start', 'hook', 'restart', '--daemon'];
  if ((hookInitiatedCommands.includes(command) || command === undefined) && isPluginDisabledInClaudeSettings()) {
    process.exit(0);
  }

  const port = getWorkerPort();

  // Helper for JSON status output in 'start' command
  // Exit code 0 ensures Windows Terminal doesn't keep tabs open
  function exitWithStatus(status: 'ready' | 'error', message?: string): never {
    const output = buildStatusOutput(status, message);
    console.log(JSON.stringify(output));
    process.exit(0);
  }

  switch (command) {
    case 'start': {
      const success = await ensureWorkerStarted(port);
      if (success) {
        exitWithStatus('ready');
      } else {
        exitWithStatus('error', 'Failed to start worker');
      }
      break;
    }

    case 'stop': {
      await httpShutdown(port);
      const freed = await waitForPortFree(port, getPlatformTimeout(15000));
      if (!freed) {
        logger.warn('SYSTEM', 'Port did not free up after shutdown', { port });
      }
      removePidFile();
      logger.info('SYSTEM', 'Worker stopped successfully');
      process.exit(0);
      break;
    }

    case 'restart': {
      logger.info('SYSTEM', 'Restarting worker');
      await httpShutdown(port);
      const restartFreed = await waitForPortFree(port, getPlatformTimeout(15000));
      if (!restartFreed) {
        logger.error('SYSTEM', 'Port did not free up after shutdown, aborting restart', { port });
        process.exit(0);
      }
      removePidFile();

      const pid = spawnDaemon(__filename, port);
      if (pid === undefined) {
        logger.error('SYSTEM', 'Failed to spawn worker daemon during restart');
        // Exit gracefully: Windows Terminal won't keep tab open on exit 0
        // The wrapper/plugin will handle restart logic if needed
        process.exit(0);
      }

      // PID file is written by the worker itself after listen() succeeds
      // This is race-free and works correctly on Windows where cmd.exe PID is useless

      const healthy = await waitForHealth(port, getPlatformTimeout(HOOK_TIMEOUTS.POST_SPAWN_WAIT));
      if (!healthy) {
        removePidFile();
        logger.error('SYSTEM', 'Worker failed to restart');
        // Exit gracefully: Windows Terminal won't keep tab open on exit 0
        // The wrapper/plugin will handle restart logic if needed
        process.exit(0);
      }

      logger.info('SYSTEM', 'Worker restarted successfully');
      process.exit(0);
      break;
    }

    case 'status': {
      const portInUse = await isPortInUse(port);
      const pidInfo = readPidFile();
      if (portInUse && pidInfo) {
        console.log('Worker is running');
        console.log(` PID: ${pidInfo.pid}`);
        console.log(` Port: ${pidInfo.port}`);
        console.log(` Started: ${pidInfo.startedAt}`);
      } else {
        console.log('Worker is not running');
      }
      process.exit(0);
      break;
    }

    case 'cursor': {
      const subcommand = process.argv[3];
      const cursorResult = await handleCursorCommand(subcommand, process.argv.slice(4));
      process.exit(cursorResult);
      break;
    }

    case 'gemini-cli': {
      const geminiSubcommand = process.argv[3];
      const geminiResult = await handleGeminiCliCommand(geminiSubcommand, process.argv.slice(4));
      process.exit(geminiResult);
      break;
    }

    case 'hook': {
      // Validate CLI args first (before any I/O)
      const platform = process.argv[3];
      const event = process.argv[4];
      if (!platform || !event) {
        console.error('Usage: claude-mem hook <platform> <event>');
        console.error('Platforms: claude-code, cursor, gemini-cli, raw');
        console.error('Events: context, session-init, observation, summarize, session-complete, user-message');
        process.exit(1);
      }

      // Ensure worker is running as a detached daemon (#1249).
      //
      // IMPORTANT: The hook process MUST NOT become the worker. Starting the
      // worker in-process makes it a grandchild of Claude Code, which the
      // sandbox kills. Instead, ensureWorkerStarted() spawns a fully detached
      // daemon (detached: true, stdio: 'ignore', child.unref()) that survives
      // the hook process's exit and is invisible to Claude Code's sandbox.
      const workerReady = await ensureWorkerStarted(port);
      if (!workerReady) {
        logger.warn('SYSTEM', 'Worker failed to start before hook, handler will proceed gracefully');
      }

      const { hookCommand } = await import('../cli/hook-command.js');
      await hookCommand(platform, event);
      break;
    }

    case 'generate': {
      const dryRun = process.argv.includes('--dry-run');
      const { generateClaudeMd } = await import('../cli/claude-md-commands.js');
      const result = await generateClaudeMd(dryRun);
      process.exit(result);
      break;
    }

    case 'clean': {
      const dryRun = process.argv.includes('--dry-run');
      const { cleanClaudeMd } = await import('../cli/claude-md-commands.js');
      const result = await cleanClaudeMd(dryRun);
      process.exit(result);
      break;
    }

    case 'adopt': {
      const dryRun = process.argv.includes('--dry-run');
      const branchIndex = process.argv.indexOf('--branch');
      const branchValue = branchIndex !== -1 ? process.argv[branchIndex + 1] : undefined;
      if (branchIndex !== -1 && (!branchValue || branchValue.startsWith('--'))) {
        console.error('Usage: adopt [--dry-run] [--branch <branch>] [--cwd <path>]');
        process.exit(1);
      }
      const onlyBranch = branchValue;
      // Honor an explicit --cwd override so the NPX CLI can pass through the
      // user's working directory (the spawn sets cwd to the marketplace dir).
      const cwdIndex = process.argv.indexOf('--cwd');
      const cwdValue = cwdIndex !== -1 ? process.argv[cwdIndex + 1] : undefined;
      if (cwdIndex !== -1 && (!cwdValue || cwdValue.startsWith('--'))) {
        console.error('Usage: adopt [--dry-run] [--branch <branch>] [--cwd <path>]');
        process.exit(1);
      }
      const repoPath = cwdValue ?? process.cwd();

      const result = await adoptMergedWorktrees({ repoPath, dryRun, onlyBranch });

      const tag = result.dryRun ? '(dry-run)' : '(applied)';
      console.log(`\nWorktree adoption ${tag}`);
      console.log(` Parent project: ${result.parentProject || '(unknown)'}`);
      console.log(` Repo: ${result.repoPath}`);
      console.log(` Worktrees scanned: ${result.scannedWorktrees}`);
      console.log(` Merged branches: ${result.mergedBranches.join(', ') || '(none)'}`);
      console.log(` Observations adopted: ${result.adoptedObservations}`);
      console.log(` Summaries adopted: ${result.adoptedSummaries}`);
      console.log(` Chroma docs updated: ${result.chromaUpdates}`);
      if (result.chromaFailed > 0) {
        console.log(` Chroma sync failures: ${result.chromaFailed} (will retry on next run)`);
      }
      for (const err of result.errors) {
        console.log(` ! ${err.worktree}: ${err.error}`);
      }
      process.exit(0);
    }

    case '--daemon':
    default: {
      // GUARD 1: Refuse to start if another worker is already alive.
      // Verifies PID *identity* (via start-time token) not just liveness, so a
      // stale PID file pointing at a PID that's since been reused by an
      // unrelated process (e.g. container restart reusing low PIDs) doesn't
      // false-positive.
      const existingPidInfo = readPidFile();
      if (verifyPidFileOwnership(existingPidInfo)) {
        logger.info('SYSTEM', 'Worker already running (PID alive), refusing to start duplicate', {
          existingPid: existingPidInfo.pid,
          existingPort: existingPidInfo.port,
          startedAt: existingPidInfo.startedAt
        });
        process.exit(0);
      }

      // GUARD 2: Refuse to start if the port is already bound.
      // Catches the race where two daemons start simultaneously before
      // either writes a PID file. Must run BEFORE constructing WorkerService
      // because the constructor registers signal handlers and timers that
      // prevent the process from exiting even if listen() fails later.
      if (await isPortInUse(port)) {
        logger.info('SYSTEM', 'Port already in use, refusing to start duplicate', { port });
        process.exit(0);
      }

      // Prevent daemon from dying silently on unhandled errors.
      // The HTTP server can continue serving even if a background task throws.
      process.on('unhandledRejection', (reason) => {
        logger.error('SYSTEM', 'Unhandled rejection in daemon', {
          reason: reason instanceof Error ? reason.message : String(reason)
        });
      });
      process.on('uncaughtException', (error) => {
        logger.error('SYSTEM', 'Uncaught exception in daemon', {}, error as Error);
        // Don't exit — keep the HTTP server running
      });

      const worker = new WorkerService();
      worker.start().catch(async (error) => {
        // Port race: when the MCP server and SessionStart hook both spawn a daemon
        // concurrently, one will lose the bind race with EADDRINUSE or Bun's equivalent
        // "port in use" error. If the winner is already healthy, exit cleanly (#1447).
        const isPortConflict = error instanceof Error && (
          (error as NodeJS.ErrnoException).code === 'EADDRINUSE' ||
          /port.*in use|address.*in use/i.test(error.message)
        );
        if (isPortConflict && await waitForHealth(port, 3000)) {
          logger.info('SYSTEM', 'Duplicate daemon exiting — another worker already claimed port', { port });
          process.exit(0);
        }
        logger.failure('SYSTEM', 'Worker failed to start', {}, error as Error);
        removePidFile();
        // Exit gracefully: Windows Terminal won't keep tab open on exit 0
        // The wrapper/plugin will handle restart logic if needed
        process.exit(0);
      });
    }
  }
}
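GUARD 1 above verifies PID identity, not just liveness: the PID file carries a start-time token, and a live process only counts as "our worker" if its current start time matches. A sketch of that check as a pure function; the `PidInfoSketch` shape and the token source are assumptions, and the real `verifyPidFileOwnership` may differ:

```typescript
// Hypothetical sketch of PID-identity verification via a start-time token.
interface PidInfoSketch { pid: number; startToken: string; }

function verifyOwnership(
  info: PidInfoSketch | null,
  liveStartTokenForPid: (pid: number) => string | null // null when the PID is dead
): boolean {
  if (!info) return false;
  const liveToken = liveStartTokenForPid(info.pid);
  // Same PID but a different start time means the PID was recycled by an
  // unrelated process, so the PID file must not be trusted.
  return liveToken !== null && liveToken === info.startToken;
}
```
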

// Check if running as main module in both ESM and CommonJS
// The CLAUDE_MEM_MANAGED check handles Bun on Windows where require.main !== module
// in CJS mode despite being the entry point (see #1450)
const isMainModule = typeof require !== 'undefined' && typeof module !== 'undefined'
  ? require.main === module || !module.parent || process.env.CLAUDE_MEM_MANAGED === 'true'
  : import.meta.url === `file://${process.argv[1]}`
    || process.argv[1]?.endsWith('worker-service')
    || process.argv[1]?.endsWith('worker-service.cjs')
    || process.argv[1]?.replaceAll('\\', '/') === __filename?.replaceAll('\\', '/');

if (isMainModule) {
  main().catch((error) => {
    logger.error('SYSTEM', 'Fatal error in main', {}, error instanceof Error ? error : undefined);
    process.exit(0); // Exit 0: don't block Claude Code, don't leave Windows Terminal tabs open
  });
}