perf: streamline worker startup and consolidate database connections (#2122)

* docs: pathfinder refactor corpus + Node 20 preflight

Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 01 — data integrity

Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.

- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
  started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
  UNIQUE(memory_session_id, content_hash) on observations; dedup
  duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to a self-healing query using
  worker_pid NOT IN live_worker_pids (see the sketch after this
  list); STALE_PROCESSING_THRESHOLD_MS and the 60 s stale-reset
  block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
  observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
  2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
  path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
  CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
  Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
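
A minimal sketch of the Phase 3 claim shape (assumes bun:sqlite;
everything beyond pending_messages/status/worker_pid is illustrative):

    import { Database } from 'bun:sqlite';

    // Hypothetical sketch: a row is claimable if it is still pending OR its
    // claiming worker is no longer alive — no wall-clock staleness window.
    function claimNextMessage(db: Database, pid: number, livePids: number[]) {
      return db.query(`
        UPDATE pending_messages
           SET status = 'processing', worker_pid = ?1
         WHERE id = (SELECT id FROM pending_messages
                      WHERE status = 'pending'
                         OR (status = 'processing' AND worker_pid NOT IN
                             (SELECT value FROM json_each(?2)))
                      ORDER BY id LIMIT 1)
        RETURNING *
      `).get(pid, JSON.stringify(livePids));
    }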

Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/01-data-integrity.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 02 — process lifecycle

OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).

- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
  canonical registry at src/supervisor/process-registry.ts is the
  sole survivor; SDK spawn site consolidated into it via new
  createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
  ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
  ['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
  process.kill(-pgid, signal) on Unix when pgid is recorded (see
  the sketch after this list); Windows path unchanged
  (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
  staleSessionReaperInterval setInterval (including the co-located
  WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
  WAL growth without an app-level timer), killIdleDaemonChildren,
  killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
  detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
  constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
  via generatorPromise.finally() already lives in worker-service
  startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
  SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
  for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
  via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
  lazy-spawn — consults isWorkerPortAlive (which gates
  captureProcessStartToken for PID-reuse safety via commit
  99060bac), then spawns detached with unref(), then
  waitForWorkerPort({ attempts: 3, backoffMs: 250 }) with
  hand-rolled exponential backoff (250→500→1000 ms). No respawn
  npm dependency.
- Phase 9: idle self-shutdown — zero matches for
  idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
  only on external SIGTERM via supervisor signal handlers.
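
The core mechanism, sketched (Unix path; the command name and wiring
are illustrative):

    import { spawn } from 'node:child_process';

    // detached: true makes the child the leader of a new process group, so
    // its pid doubles as the pgid; kill(-pgid) signals the whole group —
    // the SDK child plus anything it spawned — with no reaper loop.
    const child = spawn('claude-sdk', [], {
      detached: true,
      stdio: ['ignore', 'pipe', 'pipe'],
    });
    const pgid = child.pid; // recorded on ManagedProcessInfo

    function signalGroup(signal: NodeJS.Signals = 'SIGTERM'): void {
      if (process.platform !== 'win32' && pgid !== undefined) {
        process.kill(-pgid, signal); // negative pid ⇒ whole process group
      }
    }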

Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.

All 10 verification greps return 0. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast

Phases 3, 5, 6 only. Phases 1/2/4/7/8/9 deferred pending
reconciliation of these plan-doc inaccuracies:
  - Phase 1/2: ObservationRow type doesn't exist; the four
    "formatters" operate on three incompatible types.
  - Phase 4: RECENCY_WINDOW_MS already imported from
    SEARCH_CONSTANTS at every call site.
  - Phase 7: getExistingChromaIds is NOT @deprecated and has an
    active caller in ChromaSync.backfillMissingSyncs.
  - Phase 8: estimateTokens already consolidated.
  - Phase 9: knowledge-corpus rewrite blocked on PG-3
    prompt-caching cost smoke test.

Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.

Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicit-
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
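
Sketched, assuming AppError's constructor takes (statusCode, code) as
the shorthand above implies:

    // Hypothetical import path. The rewritten tests assert on name,
    // statusCode and code.
    import { AppError } from '../errors.js';

    export class ChromaUnavailableError extends AppError {
      constructor() {
        super(503, 'CHROMA_UNAVAILABLE');
        this.name = 'ChromaUnavailableError';
      }
    }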

Phase 6 — Delete HybridSearchStrategy's three silent try/catch
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).

Tests updated (Principle 7 — delete in same PR):
  - search-orchestrator.test.ts: "fall back to SQLite" rewritten
    as "throw ChromaUnavailableError (HTTP 503)".
  - chroma/hybrid/sqlite-search-strategy tests: rewritten to
    rejects.toThrow; removed fellBack assertions.

Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 03 — ingestion path

Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.

- Phase 0: Created src/services/worker/http/shared.ts exporting
  ingestObservation/ingestPrompt/ingestSummary as direct
  in-process functions plus ingestEventBus (Node EventEmitter,
  reusing existing pattern — no third event bus introduced).
  setIngestContext wires the SessionManager dependency from
  worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
  returning { valid:true; kind: 'observation'|'summary'; data }
  | { valid:false; reason: string } — see the union sketch after
  this list. Inspects root element; <skip_summary reason="…"/> is
  a first-class summary case with skipped:true. NEVER returns
  undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
  branches on the discriminated union. On invalid → markFailed
  + logger.warn(reason). On observation → ingestObservation.
  On summary → ingestSummary then emit summaryStoredEvent
  { sessionId, messageId } (consumed by Plan 05's blocking
  /api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
  (ResponseProcessor + SessionManager + worker-types) and
  MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
  guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
  replaced with fs.watch(transcriptsRoot, { recursive: true,
  persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
  Map deleted. tool_use rows insert with INSERT OR IGNORE on
  UNIQUE(session_id, tool_use_id) (added by Plan 01). New
  pairToolUsesByJoin query in PendingMessageStore for read-time
  pairing (UNIQUE INDEX provides idempotency; explicit consumer
  not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
  direct ingestObservation call. maybeParseJson silent-passthrough
  rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
  collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
  class) deleted. The active extractLastMessage at
  src/shared/transcript-parser.ts:41-144 is the sole survivor.
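
The union, sketched (the stand-in types approximate the sdk-exported
ParsedObservation/ParsedSummary; field details are illustrative):

    type ParsedObservation = Record<string, unknown>;                     // stand-in
    type ParsedSummary = { skipped?: boolean } & Record<string, unknown>; // stand-in

    // Exactly one kind per response; never undefined, never coerced.
    type ParseResult =
      | { valid: true; kind: 'observation'; data: ParsedObservation }
      | { valid: true; kind: 'summary'; data: ParsedSummary } // skipped: true for <skip_summary/>
      | { valid: false; reason: string };

    declare function parseAgentXml(xml: string): ParseResult;
    declare function markFailed(reason: string): void;
    declare function ingestObservation(data: ParsedObservation): void;
    declare function ingestSummary(data: ParsedSummary): void;

    // The single Phase 2 branch point, shape only:
    function handle(xml: string): void {
      const result = parseAgentXml(xml);
      if (!result.valid) markFailed(result.reason);
      else if (result.kind === 'observation') ingestObservation(result.data);
      else ingestSummary(result.data);
    }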

Tests updated (Principle 7 — same-PR delete):
  - tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
    to assert discriminated-union shape; coercion-specific
    scenarios collapse into { valid:false } assertions.
  - tests/worker/agents/response-processor.test.ts: circuit-breaker
    describe block skipped; non-XML/empty-response tests assert
    fail-fast markFailed behavior.

Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.

Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.

Plan: PATHFINDER-2026-04-22/03-ingestion-path.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 05 — hook surface

Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.

- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
  1..20; do curl -sf .../health && break; sleep 0.1; done` shell
  retry wrappers deleted. Hook commands invoke their bun entry
  point directly.
- Phase 2: src/shared/worker-utils.ts — added
  executeWithWorkerFallback<T>(url, method, body) returning
  T | { continue: true; reason?: string }. All 8 hook handlers
  (observation, session-init, context, file-context, file-edit,
  summarize, session-complete, user-message) rewritten to use
  it instead of duplicating the ensureWorkerRunning →
  workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
  using validateBody + sessionEndSchema (z.object({sessionId})).
  One-shot ingestEventBus.on('summaryStoredEvent') listener,
  30 s timer, req.aborted handler — all share one cleanup so
  the listener cannot leak (sketched after this list).
  summarize.ts polling loop, plus MAX_WAIT_FOR_SUMMARY_MS /
  POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
  memoizes SettingsDefaultsManager.loadFromFile per process.
  Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
  check entry; isProjectExcluded no longer referenced from
  src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
  (all 6 adapters: claude-code, cursor, raw, gemini-cli,
  windsurf). New AdapterRejectedInput error in
  src/cli/adapters/errors.ts. Handler-level isValidCwd checks
  deleted from file-edit.ts and observation.ts. hook-command.ts
  catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
  initAgent is idempotent. tests/hooks/context-reinjection-guard
  test (validated the deleted conditional) deleted in same PR
  per Principle 7.
- Phase 8: fail-loud counter at
  ~/.claude-mem/state/hook-failures.json. Atomic write via .tmp +
  rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD setting (default 3).
  On consecutive worker-unreachable ≥ N: process.exit(2). On
  success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
  wrapping ensureWorkerRunning. executeWithWorkerFallback calls
  the memoized version.
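
The Phase 3 shared-cleanup shape, sketched (Express-style handler;
event payload and bus type simplified):

    import type { EventEmitter } from 'node:events';
    import type { Request, Response } from 'express';

    declare const ingestEventBus: EventEmitter;

    function handleSessionEnd(req: Request, res: Response): void {
      const { sessionId } = req.body as { sessionId: string };

      // One cleanup shared by all three exits, so the listener cannot leak.
      function cleanup(): void {
        clearTimeout(timer);
        ingestEventBus.off('summaryStoredEvent', onStored);
      }

      const onStored = (evt: { sessionId: string }): void => {
        if (evt.sessionId !== sessionId) return;
        cleanup();
        res.json({ ok: true });
      };
      const timer = setTimeout(() => {
        cleanup();
        res.status(504).json({ ok: false, reason: 'summary timeout' });
      }, 30_000);

      req.on('close', cleanup); // aborted client also releases the listener
      ingestEventBus.on('summaryStoredEvent', onStored);
    }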

Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.

Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.

Plan: PATHFINDER-2026-04-22/05-hook-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 06 — API surface

One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted.
Failure-marking consolidated into one helper.

- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
  in src/services/worker/http/middleware/validateBody.ts —
  safeParse → 400 { error: 'ValidationError', issues: [...] }
  on failure, replaces req.body with parsed value on success.
  Sketched after this list.
- Phase 3: Per-route Zod schemas declared at the top of each
  route file. 24 POST endpoints across SessionRoutes,
  CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
  LogsRoutes, SettingsRoutes now wrap with validateBody().
  /api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
  along with every call site. Inline coercion helpers
  (coerceStringArray, coercePositiveInteger) and inline
  if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
  from src/services/worker/http/middleware.ts. Worker binds
  127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
  via fs.readFileSync; served as Buffer with text/html content
  type. SKILL.md + per-operation .md files cached in
  Server.ts as Map<string, string>; loadInstructionContent
  helper deleted. NO fs.watch, NO TTL — process restart is the
  cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
  — /api/pending-queue (GET), /api/pending-queue/process (POST),
  /api/pending-queue/failed (DELETE), /api/pending-queue/all
  (DELETE). Helper methods that ONLY served them
  (getQueueMessages, getStuckCount, getRecentlyProcessed,
  clearFailed, clearAll) deleted from PendingMessageStore.
  KEPT: /api/processing-status (observability), /health
  (used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
  GracefulShutdown now calls getSupervisor().stop() directly.
  Two functions retained with clear roles:
    - performGracefulShutdown — worker-side 6-step shutdown
    - runShutdownCascade — supervisor-side child teardown
      (process.kill(-pgid), Windows tree-kill, PID-file cleanup)
  Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
  failure-marking path on PendingMessageStore. Old methods
  markSessionMessagesFailed and markAllSessionMessagesAbandoned
  deleted along with all callers (worker-service,
  SessionCompletionHandler, tests/zombie-prevention).
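
The Phase 2 middleware shape, sketched (zod v3 + Express types):

    import { z } from 'zod';
    import type { NextFunction, Request, Response } from 'express';

    export function validateBody<S extends z.ZodTypeAny>(schema: S) {
      return (req: Request, res: Response, next: NextFunction): void => {
        const parsed = schema.safeParse(req.body);
        if (!parsed.success) {
          res.status(400).json({ error: 'ValidationError', issues: parsed.error.issues });
          return;
        }
        req.body = parsed.data; // handlers downstream see the parsed value
        next();
      };
    }

    // e.g. app.post('/api/session/end',
    //   validateBody(z.object({ sessionId: z.string() })), handler);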

Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.

Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.

Plan: PATHFINDER-2026-04-22/06-api-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 07 — dead code sweep

ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.

Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
  isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
  abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
  zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
  command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments

Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
  builders, ParsedObservation, ParsedSummary, ParseResult,
  SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
  via dynamic await import('../../../context-generator.js') in
  worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
  — used via dynamic await import in npx-cli/install.ts +
  uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
  ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
  orphan-recovery caller in worker-service.ts plus
  zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
  in same file.
- All Database.ts barrel re-exports — used downstream.

Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
  is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
  the methods are not thin wrappers but ~900 LoC of bodies, and
  two methods are documented as intentional mirrors so the
  context-generator.cjs bundle stays schema-consistent without
  pulling MigrationRunner. Deserves its own plan, not a sweep.

Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.

Plan: PATHFINDER-2026-04-22/07-dead-code.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: remove residual ProcessRegistry comment reference

Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile review (P1 + 2× P2)

P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
  - Added optional timeoutMs to executeWithWorkerFallback,
    forwarded to workerHttpRequest.
  - summarize.ts call site now passes 35_000 (5 s above server
    hold window).

P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
  - ResponseProcessor now calls ingestSummary({ kind: 'parsed',
    sessionDbId, messageId, contentSessionId, parsed }) so the
    event-emission path is single-sourced.
  - ingestSummary's requireContext() resolution moved inside the
    'queue' branch (the only branch that needs sessionManager /
    dbManager). 'parsed' is a pure event-bus emission and
    doesn't need worker-internal context — fixes mocked
    ResponseProcessor unit tests that don't call
    setIngestContext.

P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
  - Added a Symbol.for('claude-mem/worker-fallback') brand to
    WorkerFallback. isWorkerFallback now checks the brand, not
    a duck-typed property name.
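
The brand, sketched (the Symbol.for key is from the commit text;
surrounding types are illustrative):

    const WORKER_FALLBACK = Symbol.for('claude-mem/worker-fallback');

    type WorkerFallback = { continue: true; reason?: string };

    function makeWorkerFallback(reason?: string): WorkerFallback {
      return Object.assign(
        { continue: true as const, reason },
        { [WORKER_FALLBACK]: true as const }
      );
    }

    // Checks the symbol brand, not a duck-typed { continue: true } shape,
    // so API responses that happen to include `continue` cannot match.
    function isWorkerFallback(value: unknown): value is WorkerFallback {
      return typeof value === 'object' && value !== null &&
        (value as Record<symbol, unknown>)[WORKER_FALLBACK] === true;
    }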

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 2 (P1 + P2)

P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.

  - Gate ingestSummary call on (parsed.data.skipped ||
    session.lastSummaryStored). Skipped summaries are an explicit
    no-op bypass and still confirm; real summaries only confirm
    when storage actually wrote a row.
  - Non-skipped + summaryId === null path logs a warn and lets
    the server-side timeout (504) surface to the hook instead of
    a false ok:true.

P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 2). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.

  - Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
    log instead of the misleading ENQUEUED line. No behavior
    change — the duplicate is still correctly suppressed by the
    DB (Principle 3); only the log surface is corrected.
  - confirmProcessed is never called with the enqueue() return
    value (it operates on session.processingMessageIds[] from
    claimNextMessage), so no caller is broken; the visibility
    fix prevents future misuse.

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 3 (P1 + 2× P2)

- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
  context after SessionRoutes is constructed. setIngestContext runs
  before routes exist, so transcript-watcher observations queued via
  ingestObservation() had no way to auto-start the SDK generator.
  Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
  /api/session/end calls register one listener each and clean up on
  completion, but the default-10 warning was firing spuriously under
  normal load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
  ingestObservation() instead of duplicating skip-tool / meta /
  privacy / queue logic. Single helper, matching the Plan 03 goal.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)

- processor.handleToolResult: restore in-memory tool-use→tool-result
  pairing via session.pendingTools for schemas (e.g. Codex) whose
  tool_result events carry only tool_use_id + output. Without this,
  neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
  of throwing. Previously a single malformed JSON-shaped field caused
  handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
  for purely-glob inputs so the caller skips the watch instead of
  anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
  log on the returned id; the SessionManager branches on id === 0.
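
The lenient-parse shape, sketched:

    // Return the raw string when the field is not valid JSON, instead of
    // throwing and losing the entire transcript line in the outer catch.
    function maybeParseJson(raw: string): unknown {
      try {
        return JSON.parse(raw);
      } catch {
        return raw;
      }
    }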

* fix: forward tool_use_id through ingestObservation (Greptile iter 5)

P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.

- shared.ingestObservation: forward payload.toolUseId to
  queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
  tool_use_id (HTTP convention) and toolUseId (JS convention) from
  req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
  validator doesn't rely on .passthrough() alone.

* fix: drop dead pairToolUsesByJoin, close session-end listener race

- PendingMessageStore: delete pairToolUsesByJoin. The method was never
  called and its self-join semantics are structurally incompatible
  with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
  collapses any second row with the same pair, so a self-join can
  only ever match a row to itself. In-memory pendingTools in
  processor.ts remains the pairing path for split-event schemas.

- IngestEventBus: retain a short-lived (60s) recentStored map keyed
  by sessionId. Populated on summaryStoredEvent emit, evicted on
  consume or TTL.

- handleSessionEnd: drain the recent-events buffer before attaching
  the listener. Closes the register-after-emit race where the summary
  can persist between the hook's summarize POST and its session/end
  POST — previously that window returned 504 after the 30s timeout.
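
The drain-then-listen order, sketched (bus surface simplified to the
two calls involved):

    declare const ingestEventBus: {
      takeRecentSummaryStored(sessionId: string): boolean;
      on(event: 'summaryStoredEvent', fn: (evt: { sessionId: string }) => void): void;
    };

    // Consult the 60 s buffer BEFORE attaching the listener, so a summary
    // persisted between the summarize POST and the session/end POST still
    // resolves immediately instead of riding out the 30 s timeout.
    function waitForSummary(sessionId: string, done: () => void): void {
      if (ingestEventBus.takeRecentSummaryStored(sessionId)) {
        done();
        return;
      }
      ingestEventBus.on('summaryStoredEvent', (evt) => {
        if (evt.sessionId === sessionId) done();
      });
    }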

* chore: merge origin/main into vivacious-teeth

Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).

Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
  kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
  loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
  POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
  summaryStoredEvent supersedes main's SessionCompletionHandler DI
  refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
  reason; generator .finally() Stop-hook self-clean is a guard for a
  path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
  security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
  #2084) while preserving our Zod validateBody schema.

Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings

1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
   in wrapHandler — synchronous exceptions would hang the client rather
   than surfacing as 500s. Wrap it like every other handler.

2) processor.handleToolResult only consumed the session.pendingTools
   entry when the tool_result arrived without a toolName. In the
   split-schema path where tool_result carries both toolName and toolId,
   the entry was never deleted and the map grew for the life of the
   session. Consume the entry whenever toolId is present.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: typing cleanup and viewer tsconfig split for PR feedback

- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings (iter 2)

- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
  the unscoped-drain branch that would nuke every pending/processing
  row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
  cached event until TTL eviction so a retried Stop hook's second
  /api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
  already tailed (JSONL appends fire on every line; only unknown
  paths warrant a rescan).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: call finalizeSession in terminal session paths (Greptile iter 3)

terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.

Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: GC failed pending_messages rows at startup (Greptile iter 4)

Plans 01 and 06 deleted clearFailedOlderThan and clearFailed as
"dead code", but with the periodic sweep also removed, nothing
reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.

Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)

1. startSessionProcessor success branch now calls
   completionHandler.finalizeSession before removeSessionImmediate.
   Hooks-disabled installs
   (and any Stop hook that fails before POST /api/sessions/complete) no
   longer leave sdk_sessions rows as status='active' forever. Idempotent
   — a subsequent /api/sessions/complete is a no-op.

2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
   closures that reference it (TDZ safety; safe at runtime today but
   fragile if timeout ever shrinks).

3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
   instead of constructing its own — prevents silent divergence if the
   handler ever becomes stateful.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: stop runaway crash-recovery loop on dead sessions

Two distinct bugs were combining to keep a dead session restarting forever:

Bug 1 (uncaught "The operation was aborted."):
child_process.spawn emits 'error' asynchronously for ENOENT, EACCES,
and AbortSignal aborts. spawnSdkProcess() never attached an 'error'
listener, so
  any async spawn failure became uncaughtException and escaped to the
  daemon-level handler. Attach an 'error' listener immediately after spawn,
  before the !child.pid early-return, so async spawn errors are logged
  (with errno code) and swallowed locally.
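
The fix's shape, sketched (the logging call is illustrative):

    import { spawn, type ChildProcess } from 'node:child_process';

    function spawnSdk(cmd: string, args: string[]): ChildProcess | null {
      const child = spawn(cmd, args, { detached: true });
      // Attach BEFORE the !child.pid early return: 'error' arrives
      // asynchronously, and an unhandled 'error' on a ChildProcess
      // becomes an uncaughtException.
      child.on('error', (err: NodeJS.ErrnoException) => {
        console.warn(`sdk spawn error (${err.code ?? err.name}): ${err.message}`);
      });
      if (!child.pid) return null;
      return child;
    }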

Bug 2 (sliding-window limiter never trips on slow restart cadence):
  RestartGuard tripped only when restartTimestamps.length exceeded
  MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
  exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
  session that fail-restart-fail-restart on 8s cycles would loop forever
  (consecutiveRestarts climbing past 30+ in observed logs). Add a
  consecutiveFailures counter that increments on every restart and resets
  only on recordSuccess(). Trip when consecutive failures exceed
  MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
  processing in between proves the session is dead. Both guards now run in
  parallel: tight loops still trip the windowed cap; slow loops trip the
  consecutive-failure cap.
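
Both guards together, sketched (constants from the text above; method
names approximate RestartGuard's real surface):

    const RESTART_WINDOW_MS = 60_000;
    const MAX_WINDOWED_RESTARTS = 10;
    const MAX_CONSECUTIVE_FAILURES = 5;

    class RestartGuard {
      private restartTimestamps: number[] = [];
      private consecutiveFailures = 0;

      // Returns true when the guard trips: tight loops exceed the windowed
      // cap; slow 8 s-backoff loops exceed the consecutive cap.
      recordRestart(now = Date.now()): boolean {
        this.restartTimestamps = this.restartTimestamps.filter(
          (t) => now - t < RESTART_WINDOW_MS
        );
        this.restartTimestamps.push(now);
        this.consecutiveFailures++;
        return this.restartTimestamps.length > MAX_WINDOWED_RESTARTS
            || this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES;
      }

      recordSuccess(): void {
        this.consecutiveFailures = 0; // only real processing resets the count
      }
    }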

Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* perf: streamline worker startup and consolidate database connections

1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
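
The pooling idea in one sketch (bun:sqlite; the real DatabaseManager
wiring may differ):

    import { Database } from 'bun:sqlite';

    let shared: Database | undefined;

    // Open one handle lazily and hand the same instance to every store
    // (SessionStore, SessionSearch, ...) instead of each consumer opening
    // its own file descriptor on the same file.
    export function getSharedDb(path: string): Database {
      shared ??= new Database(path, { create: true });
      return shared;
    }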

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)

* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations

Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.

- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
  before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
  when shouldTrackProject(cwd) is false, so the observer's own hooks
  cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
  boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
  on observations) inline so bundled artifacts (worker-service.cjs,
  context-generator.cjs) stay schema-consistent — without it, the
  ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
  supervisor can actually feed the observer's stdin.

Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.

* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)

Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
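
The walk-back, sketched:

    // Truncate to a byte budget without splitting a UTF-8 sequence:
    // back up over continuation bytes (0b10xxxxxx) so the partial
    // codepoint is dropped rather than decoded to U+FFFD.
    function truncateUtf8(buf: Buffer, maxBytes: number): string {
      if (buf.length <= maxBytes) return buf.toString('utf8');
      let cut = maxBytes;
      while (cut > 0 && (buf[cut] & 0b1100_0000) === 0b1000_0000) cut--;
      return buf.subarray(0, cut).toString('utf8');
    }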

* fix: cross-platform observer-dir containment; clarify SDK stdin pipe

claude-review feedback on PR #2124.

- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
  hard-coded a POSIX separator and missed Windows backslash paths plus any
  trailing-slash variance. Switched to a path.relative-based isWithin()
  helper (sketched below) so Windows hook input under observer-sessions\\...
  is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
  SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
  consumes that pipe; 'ignore' would null it and the null-check below
  would tear the child down on every spawn.
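
The containment check, sketched (covers the directory itself and its
subtree; separator-agnostic via path.relative):

    import path from 'node:path';

    function isWithin(parent: string, child: string): boolean {
      const rel = path.relative(parent, child);
      return rel === ''                        // the directory itself
        || (rel !== '..'
            && !rel.startsWith(`..${path.sep}`)
            && !path.isAbsolute(rel));         // different root/drive
    }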

* fix: make Stop hook fire-and-forget; remove dead /api/session/end

The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or
its 30 s server-side timeout elapsed), followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a
30 s hold — not fire-and-forget despite repeated requests.

The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.

- summarize.ts: drop the /api/session/end long-poll and the trailing
  /api/sessions/complete await; ~40 lines removed; unused
  SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
  SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
  route registration. Drop the now-unused ingestEventBus and
  SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
  comments that referenced the dead endpoint. The IngestEventBus is
  left in place dormant (no listeners) for follow-up cleanup so this
  PR stays focused on the blocker.

Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.

Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* deps: bump all dependencies to latest including majors

Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.

Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
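
The attach-before-listen shape, sketched (the real Server.listen()
wiring may differ):

    import http from 'node:http';
    import type { Express } from 'express';

    function listenAsync(app: Express, port: number, host = '127.0.0.1'): Promise<http.Server> {
      return new Promise((resolve, reject) => {
        const server = http.createServer(app);
        server.once('error', reject);                    // e.g. EADDRINUSE
        server.once('listening', () => resolve(server));
        server.listen(port, host); // handlers already attached — no race
      });
    }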

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: surface real chroma errors and add deep status probe

Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.

Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.

Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: rebuild worker-service bundle to match merged src

Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: address coderabbit feedback on PLAN-fix-mcp-search.md

- replace machine-specific /Users/alexnewman absolute paths with portable
  <repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

Commit 94d592f212 (parent 8ace1d9c84)
Author: Alex Newman
Date: 2026-04-25 13:37:40 -07:00
Committed by: GitHub
159 changed files with 18091 additions and 5843 deletions
tests/worker/agents/response-processor.test.ts (+72 -83):
@@ -131,7 +131,6 @@ describe('ResponseProcessor', () => {
conversationHistory: [],
currentProvider: 'claude',
processingMessageIds: [], // CLAIM-CONFIRM pattern: track message IDs being processed
consecutiveSummaryFailures: 0,
...overrides,
} as ActiveSession;
}
@@ -214,9 +213,15 @@ describe('ResponseProcessor', () => {
});
});
- describe('non-XML observer responses', () => {
- it('warns when the observer returns prose that will be discarded', async () => {
- const session = createMockSession();
+ describe('non-XML observer responses (fail-fast — plan 03 phase 2)', () => {
+ it('warns and marks messages failed when the observer returns non-XML prose', async () => {
+ const markFailed = mock(() => {});
+ mockSessionManager = {
+ getMessageIterator: async function* () { yield* []; },
+ getPendingMessageStore: () => ({ markFailed, confirmProcessed: mock(() => {}) }),
+ } as unknown as SessionManager;
+ const session = createMockSession({ processingMessageIds: [7] });
const responseText = 'Skipping — repeated log scan with no new findings.';
await processAgentResponse(
@@ -232,30 +237,20 @@ describe('ResponseProcessor', () => {
expect(logger.warn).toHaveBeenCalledWith(
'PARSER',
- 'TestAgent returned non-XML response; observation content was discarded',
- expect.objectContaining({
- sessionId: 1,
- preview: responseText
- })
+ expect.stringMatching(/^TestAgent returned unparseable response:/),
+ expect.objectContaining({ sessionId: 1 })
);
- const [, , observations, summary] = mockStoreObservations.mock.calls[0];
- expect(observations).toHaveLength(0);
- expect(summary).toBeNull();
+ expect(markFailed).toHaveBeenCalledWith(7);
+ expect(mockStoreObservations).not.toHaveBeenCalled();
});
});
describe('parsing summary from XML response', () => {
it('should parse summary from response', async () => {
const session = createMockSession();
// PATHFINDER plan 03 phase 1: parseAgentXml returns one kind per call.
// Summary-only response exercises the summary path.
const responseText = `
<observation>
<type>discovery</type>
<title>Test</title>
<facts></facts>
<concepts></concepts>
<files_read></files_read>
<files_modified></files_modified>
</observation>
<summary>
<request>Build login form</request>
<investigated>Reviewed existing forms</investigated>
@@ -374,7 +369,11 @@ describe('ResponseProcessor', () => {
expect(memorySessionId).toBe('memory-session-456');
expect(project).toBe('test-project');
expect(observations).toHaveLength(1);
expect(summary).not.toBeNull();
// PATHFINDER plan 03 phase 1: parseAgentXml returns ONE kind per call.
// The first recognised root wins (here: <observation>), so the summary
// in the same response is NOT extracted — the caller is expected to
// issue observation turns and summary turns separately.
expect(summary).toBeNull();
expect(promptNumber).toBe(5);
expect(tokens).toBe(100);
expect(timestamp).toBe(1700000000000);
@@ -434,16 +433,21 @@ describe('ResponseProcessor', () => {
});
it('should broadcast summary via SSE', async () => {
// PATHFINDER plan 03 phase 1: parseAgentXml returns one kind per call,
// so summary broadcasts require a summary-only response.
mockStoreObservations = mock(() => ({
observationIds: [],
summaryId: 99,
createdAtEpoch: 1700000000000,
} as StorageResult));
(mockDbManager.getSessionStore as any) = () => ({
storeObservations: mockStoreObservations,
ensureMemorySessionIdRegistered: mock(() => {}),
getSessionById: mock(() => ({ memory_session_id: 'memory-session-456' })),
});
const session = createMockSession();
const responseText = `
<observation>
<type>discovery</type>
<title>Test</title>
<facts></facts>
<concepts></concepts>
<files_read></files_read>
<files_modified></files_modified>
</observation>
<summary>
<request>Build feature</request>
<investigated>Reviewed code</investigated>
@@ -473,70 +477,44 @@ describe('ResponseProcessor', () => {
});
});
describe('handling empty response', () => {
it('should handle empty response gracefully', async () => {
const session = createMockSession();
describe('handling empty / non-XML response (fail-fast — plan 03 phase 2)', () => {
it('marks in-flight messages failed and does NOT call storeObservations on empty response', async () => {
const markFailed = mock(() => {});
mockSessionManager = {
getMessageIterator: async function* () { yield* []; },
getPendingMessageStore: () => ({ markFailed, confirmProcessed: mock(() => {}) }),
} as unknown as SessionManager;
const session = createMockSession({ processingMessageIds: [1, 2, 3] });
const responseText = '';
// Mock to handle empty observations
mockStoreObservations = mock(() => ({
observationIds: [],
summaryId: null,
createdAtEpoch: 1700000000000,
}));
(mockDbManager.getSessionStore as any) = () => ({
storeObservations: mockStoreObservations,
ensureMemorySessionIdRegistered: mock(() => {}),
getSessionById: mock(() => ({ memory_session_id: 'memory-session-456' })),
});
await processAgentResponse(
responseText,
session,
mockDbManager,
mockSessionManager,
mockWorker,
100,
null,
'TestAgent'
responseText, session, mockDbManager, mockSessionManager, mockWorker,
100, null, 'TestAgent'
);
// Should still call storeObservations with empty arrays
expect(mockStoreObservations).toHaveBeenCalledTimes(1);
const [, , observations, summary] = mockStoreObservations.mock.calls[0];
expect(observations).toHaveLength(0);
expect(summary).toBeNull();
expect(mockStoreObservations).not.toHaveBeenCalled();
expect(markFailed).toHaveBeenCalledTimes(3);
expect(session.processingMessageIds).toEqual([]);
});
it('should handle response with only text (no XML)', async () => {
const session = createMockSession();
it('marks in-flight messages failed and does NOT call storeObservations on plain-text response', async () => {
const markFailed = mock(() => {});
mockSessionManager = {
getMessageIterator: async function* () { yield* []; },
getPendingMessageStore: () => ({ markFailed, confirmProcessed: mock(() => {}) }),
} as unknown as SessionManager;
const session = createMockSession({ processingMessageIds: [42] });
const responseText = 'This is just plain text without any XML tags.';
mockStoreObservations = mock(() => ({
observationIds: [],
summaryId: null,
createdAtEpoch: 1700000000000,
}));
(mockDbManager.getSessionStore as any) = () => ({
storeObservations: mockStoreObservations,
ensureMemorySessionIdRegistered: mock(() => {}),
getSessionById: mock(() => ({ memory_session_id: 'memory-session-456' })),
});
await processAgentResponse(
responseText,
session,
mockDbManager,
mockSessionManager,
mockWorker,
100,
null,
'TestAgent'
responseText, session, mockDbManager, mockSessionManager, mockWorker,
100, null, 'TestAgent'
);
expect(mockStoreObservations).toHaveBeenCalledTimes(1);
const [, , observations] = mockStoreObservations.mock.calls[0];
expect(observations).toHaveLength(0);
expect(mockStoreObservations).not.toHaveBeenCalled();
expect(markFailed).toHaveBeenCalledTimes(1);
});
});
@@ -669,7 +647,11 @@ describe('ResponseProcessor', () => {
const session = createMockSession({
memorySessionId: null, // Missing memory session ID
});
const responseText = '<observation><type>discovery</type></observation>';
const responseText = `<observation>
<type>discovery</type>
<title>some title</title>
<narrative>some narrative</narrative>
</observation>`;
await expect(
processAgentResponse(
@@ -729,7 +711,14 @@ describe('ResponseProcessor', () => {
});
});
- describe('circuit breaker: consecutiveSummaryFailures counter (#1633)', () => {
+ // PATHFINDER plan 03 phase 3: circuit breaker (consecutiveSummaryFailures) deleted.
+ // Former tests covered: counter stability on observation turns, increment on
+ // missing summary, neutrality on <skip_summary/>, reset on successful summary.
+ // Replacement coverage: `tests/sdk/parse-summary.test.ts` asserts that the
+ // parser returns `{ valid: false, reason }` for malformed summaries; the
+ // failure path goes through PendingMessageStore.markFailed's retry ladder,
+ // which is unit-tested separately in tests/services/sqlite/.
+ describe.skip('circuit breaker: consecutiveSummaryFailures counter (#1633 — deleted)', () => {
const SUMMARY_PROMPT = `--- ${SUMMARY_MODE_MARKER} ---\nDo the summary now.`;
it('does NOT increment the counter on normal observation responses (P1 regression guard)', async () => {
@@ -50,6 +50,37 @@ async function flushPromises(): Promise<void> {
await Promise.resolve();
}
/**
* Plan 06 Phase 3 — body validation lives in `validateBody` middleware now.
* Build a single chain function that runs the validateBody middleware
* followed by the handler, mirroring how Express dispatches them in
* production.
*/
function captureChain(mockApp: any, targetPath: string): (req: Request, res: Response) => void {
let middleware: ((req: Request, res: Response, next: () => void) => void) | undefined;
let handler: (req: Request, res: Response) => void;
mockApp.post = mock((path: string, ...rest: any[]) => {
if (path !== targetPath) return;
if (rest.length === 1) {
handler = rest[0];
} else {
middleware = rest[0];
handler = rest[1];
}
});
return (req: Request, res: Response): void => {
if (!middleware) {
handler(req, res);
return;
}
let nextCalled = false;
middleware(req, res, () => {
nextCalled = true;
});
if (nextCalled) handler(req, res);
};
}
describe('CorpusRoutes Type Coercion', () => {
let handler: (req: Request, res: Response) => void;
let mockBuild: ReturnType<typeof mock>;
@@ -63,14 +94,11 @@ describe('CorpusRoutes Type Coercion', () => {
{} as any
);
const mockApp = {
post: mock((path: string, fn: any) => {
if (path === '/api/corpus') handler = fn;
}),
const mockApp: any = {
get: mock(() => {}),
delete: mock(() => {}),
};
handler = captureChain(mockApp, '/api/corpus');
routes.setupRoutes(mockApp as any);
});
@@ -38,6 +38,37 @@ function createMockReqRes(body: any): { req: Partial<Request>; res: Partial<Resp
};
}
/**
* Plan 06 Phase 3 — body validation lives in `validateBody` middleware now.
* Build a single chain function that runs the validateBody middleware
* followed by the handler, mirroring how Express dispatches them in
* production.
*/
function captureChain(mockApp: any, targetPath: string): (req: Request, res: Response) => void {
let middleware: (req: Request, res: Response, next: () => void) => void;
let handler: (req: Request, res: Response) => void;
mockApp.post = mock((path: string, ...rest: any[]) => {
if (path !== targetPath) return;
if (rest.length === 1) {
handler = rest[0];
} else {
middleware = rest[0];
handler = rest[1];
}
});
return (req: Request, res: Response): void => {
if (!middleware) {
handler(req, res);
return;
}
let nextCalled = false;
middleware(req, res, () => {
nextCalled = true;
});
if (nextCalled) handler(req, res);
};
}
describe('DataRoutes Type Coercion', () => {
let routes: DataRoutes;
let mockGetObservationsByIds: ReturnType<typeof mock>;
@@ -82,13 +113,12 @@ describe('DataRoutes Type Coercion', () => {
let handler: (req: Request, res: Response) => void;
beforeEach(() => {
const mockApp = {
const mockApp: any = {
get: mock(() => {}),
post: mock((path: string, fn: any) => {
if (path === '/api/observations/batch') handler = fn;
}),
delete: mock(() => {}),
use: mock(() => {}),
};
handler = captureChain(mockApp, '/api/observations/batch');
routes.setupRoutes(mockApp as any);
});
@@ -143,13 +173,12 @@ describe('DataRoutes Type Coercion', () => {
let handler: (req: Request, res: Response) => void;
beforeEach(() => {
const mockApp = {
const mockApp: any = {
get: mock(() => {}),
post: mock((path: string, fn: any) => {
if (path === '/api/sdk-sessions/batch') handler = fn;
}),
delete: mock(() => {}),
use: mock(() => {}),
};
handler = captureChain(mockApp, '/api/sdk-sessions/batch');
routes.setupRoutes(mockApp as any);
});
tests/worker/process-registry.test.ts (deleted, -204 lines):
@@ -1,204 +0,0 @@
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { EventEmitter } from 'events';
import {
registerProcess,
unregisterProcess,
getProcessBySession,
getActiveCount,
getActiveProcesses,
waitForSlot,
ensureProcessExit,
} from '../../src/services/worker/ProcessRegistry.js';
/**
* Create a mock ChildProcess that behaves like a real one for testing.
* Supports exitCode, killed, kill(), and event emission.
*/
function createMockProcess(overrides: { exitCode?: number | null; killed?: boolean } = {}) {
const emitter = new EventEmitter();
const mock = Object.assign(emitter, {
pid: Math.floor(Math.random() * 100000) + 1000,
exitCode: overrides.exitCode ?? null,
killed: overrides.killed ?? false,
kill(signal?: string) {
mock.killed = true;
// Simulate async exit after kill
setTimeout(() => {
mock.exitCode = signal === 'SIGKILL' ? null : 0;
mock.emit('exit', mock.exitCode, signal || 'SIGTERM');
}, 10);
return true;
},
stdin: null,
stdout: null,
stderr: null,
});
return mock;
}
// Helper to clear registry between tests by unregistering all
function clearRegistry() {
for (const p of getActiveProcesses()) {
unregisterProcess(p.pid);
}
}
describe('ProcessRegistry', () => {
beforeEach(() => {
clearRegistry();
});
afterEach(() => {
clearRegistry();
});
describe('registerProcess / unregisterProcess', () => {
it('should register and track a process', () => {
const proc = createMockProcess();
registerProcess(proc.pid, 1, proc as any);
expect(getActiveCount()).toBe(1);
expect(getProcessBySession(1)).toBeDefined();
});
it('should unregister a process and free the slot', () => {
const proc = createMockProcess();
registerProcess(proc.pid, 1, proc as any);
unregisterProcess(proc.pid);
expect(getActiveCount()).toBe(0);
expect(getProcessBySession(1)).toBeUndefined();
});
});
describe('getProcessBySession', () => {
it('should return undefined for unknown session', () => {
expect(getProcessBySession(999)).toBeUndefined();
});
it('should find process by session ID', () => {
const proc = createMockProcess();
registerProcess(proc.pid, 42, proc as any);
const found = getProcessBySession(42);
expect(found).toBeDefined();
expect(found!.pid).toBe(proc.pid);
});
});
describe('waitForSlot', () => {
it('should resolve immediately when under limit', async () => {
await waitForSlot(2); // 0 processes, limit 2
});
it('should wait until a slot opens', async () => {
const proc1 = createMockProcess();
const proc2 = createMockProcess();
registerProcess(proc1.pid, 1, proc1 as any);
registerProcess(proc2.pid, 2, proc2 as any);
// Start waiting for slot (limit=2, both slots full)
const waitPromise = waitForSlot(2, 5000);
// Free a slot after 50ms
setTimeout(() => unregisterProcess(proc1.pid), 50);
await waitPromise; // Should resolve once slot freed
expect(getActiveCount()).toBe(1);
});
it('should throw on timeout when no slot opens', async () => {
const proc1 = createMockProcess();
const proc2 = createMockProcess();
registerProcess(proc1.pid, 1, proc1 as any);
registerProcess(proc2.pid, 2, proc2 as any);
await expect(waitForSlot(2, 100)).rejects.toThrow('Timed out waiting for agent pool slot');
});
it('should throw when hard cap (10) is exceeded', async () => {
// Register 10 processes to hit the hard cap
const procs = [];
for (let i = 0; i < 10; i++) {
const proc = createMockProcess();
registerProcess(proc.pid, i + 100, proc as any);
procs.push(proc);
}
await expect(waitForSlot(20)).rejects.toThrow('Hard cap exceeded');
});
});
describe('ensureProcessExit', () => {
it('should unregister immediately if exitCode is set', async () => {
const proc = createMockProcess({ exitCode: 0 });
registerProcess(proc.pid, 1, proc as any);
await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any });
expect(getActiveCount()).toBe(0);
});
it('should NOT treat proc.killed as exited — must wait for actual exit', async () => {
// This is the core bug fix: proc.killed=true but exitCode=null means NOT dead
const proc = createMockProcess({ killed: true, exitCode: null });
registerProcess(proc.pid, 1, proc as any);
// Override kill to simulate SIGKILL + delayed exit
proc.kill = (signal?: string) => {
proc.killed = true;
setTimeout(() => {
proc.exitCode = 0;
proc.emit('exit', 0, signal);
}, 20);
return true;
};
// ensureProcessExit should NOT short-circuit on proc.killed
// It should wait for exit event or timeout, then escalate to SIGKILL
const start = Date.now();
await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any }, 100);
expect(getActiveCount()).toBe(0);
});
it('should escalate to SIGKILL after timeout', async () => {
const proc = createMockProcess();
registerProcess(proc.pid, 1, proc as any);
// Override kill: only respond to SIGKILL
let sigkillSent = false;
proc.kill = (signal?: string) => {
proc.killed = true;
if (signal === 'SIGKILL') {
sigkillSent = true;
setTimeout(() => {
proc.exitCode = -1;
proc.emit('exit', -1, 'SIGKILL');
}, 10);
}
// Don't emit exit for non-SIGKILL signals (simulates stuck process)
return true;
};
await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any }, 100);
expect(sigkillSent).toBe(true);
expect(getActiveCount()).toBe(0);
});
it('should unregister even if process ignores SIGKILL (after 1s timeout)', async () => {
const proc = createMockProcess();
registerProcess(proc.pid, 1, proc as any);
// Override kill to never emit exit (completely stuck process)
proc.kill = () => {
proc.killed = true;
return true;
};
const start = Date.now();
await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any }, 100);
const elapsed = Date.now() - start;
// Should have waited at least the ~100ms graceful window (the post-SIGKILL safety wait adds up to ~1s more)
expect(elapsed).toBeGreaterThan(90);
// Process is unregistered regardless (safety net)
expect(getActiveCount()).toBe(0);
});
});
});
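For reviewers skimming the suite above: a minimal sketch of the two registry helpers these tests pin down, assuming a pid-keyed Map as the backing store. `HARD_CAP`, `POLL_INTERVAL_MS`, and `SIGKILL_WAIT_MS` are illustrative names; only the observable contract — immediate admission under the limit, the two rejection messages, the SIGTERM → SIGKILL escalation, and the unconditional unregister — comes from the assertions.

```ts
import type { ChildProcess } from 'node:child_process';

interface ManagedProcessInfo {
  pid: number;
  sessionDbId: number;
  spawnedAt: number;
  process: ChildProcess;
}

// Assumed backing store: a pid-keyed Map (the real registry may differ).
const registry = new Map<number, ManagedProcessInfo>();
const HARD_CAP = 10;          // absolute ceiling, regardless of requested limit
const POLL_INTERVAL_MS = 25;  // illustrative polling cadence
const SIGKILL_WAIT_MS = 1000; // cap on the post-SIGKILL wait (last test above)

// Pool admission: resolve when a slot frees up, reject on timeout or hard cap.
export async function waitForSlot(limit: number, timeoutMs = 30_000): Promise<void> {
  if (registry.size >= HARD_CAP) throw new Error('Hard cap exceeded');
  const deadline = Date.now() + timeoutMs;
  while (registry.size >= Math.min(limit, HARD_CAP)) {
    if (Date.now() >= deadline) throw new Error('Timed out waiting for agent pool slot');
    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
  }
}

// Exit escalation: trust only exitCode (never proc.killed), wait for a real
// 'exit' event, escalate SIGTERM -> SIGKILL, and always free the slot.
export async function ensureProcessExit(info: ManagedProcessInfo, gracefulMs = 5000): Promise<void> {
  const proc = info.process;
  try {
    if (proc.exitCode !== null) return; // already exited for real
    const exited = new Promise<void>((resolve) => proc.once('exit', () => resolve()));
    proc.kill('SIGTERM');
    const graceful = await Promise.race([
      exited.then(() => true),
      new Promise<boolean>((resolve) => setTimeout(() => resolve(false), gracefulMs)),
    ]);
    if (!graceful) {
      proc.kill('SIGKILL');
      // Even SIGKILL can go unanswered (stuck mock, zombie); cap the wait.
      await Promise.race([
        exited,
        new Promise<void>((resolve) => setTimeout(resolve, SIGKILL_WAIT_MS)),
      ]);
    }
  } finally {
    registry.delete(info.pid); // safety net: the slot is freed no matter what
  }
}
```

The `finally` is the safety net the last test exercises: even a process that ignores SIGKILL cannot leak its pool slot.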
@@ -207,8 +207,35 @@ describe('ResultFormatter', () => {
 const formatted = formatter.formatSearchResults(results, 'test', true);
 expect(formatted).toContain('Vector search failed');
-expect(formatted).toContain('semantic search unavailable');
+expect(formatted).toContain('Semantic search failed');
+expect(formatted).toContain('Falling back to keyword search');
+expect(formatted).not.toContain('Install uv');
 });
 });
+describe('formatChromaFailureMessage', () => {
+it('formats connection-error reasons with offline messaging', () => {
+const message = ResultFormatter.formatChromaFailureMessage({
+message: 'subprocess closed',
+isConnectionError: true,
+});
+expect(message).toContain('Semantic search is offline');
+expect(message).toContain('Chroma MCP unreachable: subprocess closed');
+expect(message).toContain('Falling back to keyword search');
+expect(message).not.toContain('Install uv');
+});
+it('formats generic failure reasons with diagnostic pointer', () => {
+const message = ResultFormatter.formatChromaFailureMessage({
+message: 'something went wrong',
+isConnectionError: false,
+});
+expect(message).toContain('Semantic search failed: something went wrong');
+expect(message).toContain('Falling back to keyword search');
+expect(message).toContain('CHROMA_SYNC');
+expect(message).not.toContain('Install uv');
+});
+});
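A sketch of the formatter the new `formatChromaFailureMessage` tests describe. Only the asserted substrings are fixed by the tests; the surrounding wording and the exact diagnostic hint are assumptions. Note the tests call it statically on `ResultFormatter`.

```ts
interface ChromaFailure {
  message: string;
  isConnectionError: boolean;
}

export class ResultFormatter {
  // Sketch: only the substrings asserted above are known; the rest of the
  // wording is an assumption.
  static formatChromaFailureMessage(failure: ChromaFailure): string {
    if (failure.isConnectionError) {
      return [
        'Semantic search is offline.',
        `Chroma MCP unreachable: ${failure.message}.`,
        'Falling back to keyword search.',
      ].join(' ');
    }
    return [
      `Semantic search failed: ${failure.message}.`,
      'Falling back to keyword search.',
      'Check CHROMA_SYNC diagnostics for details.',
    ].join(' ');
  }
}
```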
@@ -150,16 +150,19 @@ describe('SearchOrchestrator', () => {
 expect(mockChromaSync.queryChroma).toHaveBeenCalled();
 });
-it('should fall back to SQLite when Chroma fails', async () => {
+it('should throw ChromaUnavailableError (HTTP 503) when Chroma fails', async () => {
 mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma unavailable')));
-const result = await orchestrator.search({
-query: 'test query'
+// Fail-fast: Chroma errors propagate as ChromaUnavailableError
+// (HTTP 503 via the AppError status code) rather than silently
+// falling back to SQLite.
+await expect(
+orchestrator.search({ query: 'test query' })
+).rejects.toMatchObject({
+name: 'ChromaUnavailableError',
+statusCode: 503,
+code: 'CHROMA_UNAVAILABLE'
+});
-// Chroma failed, should have fallen back
-expect(result.fellBack).toBe(true);
-expect(result.usedChroma).toBe(false);
 });
 it('should normalize comma-separated concepts', async () => {
@@ -296,7 +299,8 @@ describe('SearchOrchestrator', () => {
 const formatted = orchestrator.formatSearchResults(results, 'test', true);
 expect(formatted).toContain('Vector search failed');
 expect(formatted).toContain('Semantic search');
+expect(formatted).toContain('Falling back to keyword search');
 });
 });
 });
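The `toMatchObject` above fixes the error's name/statusCode/code triple. A sketch of a matching error type, assuming an `AppError` base roughly like the one the "HTTP 503 via the AppError status code" comment implies:

```ts
// AppError base shape is an assumption; the name/statusCode/code triple is
// exactly what the test above matches on.
class AppError extends Error {
  constructor(
    message: string,
    public readonly statusCode: number,
    public readonly code: string,
  ) {
    super(message);
    this.name = new.target.name; // 'ChromaUnavailableError' for the subclass
  }
}

export class ChromaUnavailableError extends AppError {
  constructor(cause: Error) {
    super(`Chroma unavailable: ${cause.message}`, 503, 'CHROMA_UNAVAILABLE');
  }
}

// Orchestrator wrapping (illustrative):
//   try { return await this.chromaStrategy.search(opts); }
//   catch (err) { throw new ChromaUnavailableError(err as Error); }
```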
@@ -130,7 +130,6 @@ describe('ChromaSearchStrategy', () => {
 const result = await strategy.search(options);
 expect(result.usedChroma).toBe(true);
-expect(result.fellBack).toBe(false);
 expect(result.strategy).toBe('chroma');
 });
@@ -310,23 +309,18 @@ describe('ChromaSearchStrategy', () => {
 expect(mockSessionStore.getObservationsByIds).not.toHaveBeenCalled();
 });
-it('should handle Chroma errors gracefully (returns usedChroma: false)', async () => {
+it('should propagate Chroma errors (fail-fast, no silent fallback)', async () => {
 mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma connection failed')));
 const options: StrategySearchOptions = {
 query: 'test query'
 };
-const result = await strategy.search(options);
-expect(result.usedChroma).toBe(false);
-expect(result.fellBack).toBe(false);
-expect(result.results.observations).toHaveLength(0);
-expect(result.results.sessions).toHaveLength(0);
-expect(result.results.prompts).toHaveLength(0);
+// Fail-fast: the orchestrator wraps this into a ChromaUnavailableError (HTTP 503).
+await expect(strategy.search(options)).rejects.toThrow('Chroma connection failed');
 });
-it('should handle SQLite hydration errors gracefully', async () => {
+it('should propagate SQLite hydration errors (fail-fast)', async () => {
 mockSessionStore.getObservationsByIds = mock(() => {
 throw new Error('SQLite error');
 });
@@ -336,10 +330,7 @@ describe('ChromaSearchStrategy', () => {
 searchType: 'observations'
 };
-const result = await strategy.search(options);
-expect(result.usedChroma).toBe(false); // Error occurred
-expect(result.results.observations).toHaveLength(0);
+await expect(strategy.search(options)).rejects.toThrow('SQLite error');
 });
 it('should correctly align IDs with metadatas when Chroma returns duplicate sqlite_ids (multiple docs per observation)', async () => {
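The same fail-fast shape applies inside the strategy itself: the old catch-and-degrade wrapper goes away so rejections reach the orchestrator untouched. A minimal sketch, with `queryChroma` and the hydration step as illustrative stand-ins for the real ChromaSync surface:

```ts
interface StrategySearchOptions { query?: string; limit?: number; searchType?: string; }
interface SearchHits { observations: unknown[]; sessions: unknown[]; prompts: unknown[]; }

class ChromaSearchStrategySketch {
  constructor(
    private queryChroma: (q: string, n: number) => Promise<number[]>,
    private hydrate: (ids: number[]) => SearchHits,
  ) {}

  async search(options: StrategySearchOptions) {
    // Fail-fast: a rejected queryChroma (or a throwing hydrate) propagates to
    // the caller; there is no catch block returning empty results any more.
    const ids = await this.queryChroma(options.query ?? '', options.limit ?? 10);
    return { usedChroma: true, strategy: 'chroma' as const, results: this.hydrate(ids) };
  }
}
```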
@@ -198,7 +198,6 @@ describe('HybridSearchStrategy', () => {
 expect(mockSessionSearch.findByConcept).toHaveBeenCalledWith('test-concept', expect.any(Object));
 expect(mockChromaSync.queryChroma).toHaveBeenCalledWith('test-concept', expect.any(Number));
 expect(result.usedChroma).toBe(true);
-expect(result.fellBack).toBe(false);
 expect(result.strategy).toBe('hybrid');
 });
@@ -251,18 +250,16 @@ describe('HybridSearchStrategy', () => {
 expect(mockChromaSync.queryChroma).not.toHaveBeenCalled(); // Should short-circuit
 });
-it('should fall back to metadata-only on Chroma error', async () => {
+it('should propagate Chroma error (fail-fast, no silent fallback)', async () => {
 mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma failed')));
 const options: StrategySearchOptions = {
 limit: 10
 };
-const result = await strategy.findByConcept('test-concept', options);
-expect(result.usedChroma).toBe(false);
-expect(result.fellBack).toBe(true);
-expect(result.results.observations).toHaveLength(3); // All metadata results
+await expect(
+strategy.findByConcept('test-concept', options)
+).rejects.toThrow('Chroma failed');
 });
 });
@@ -307,18 +304,16 @@ describe('HybridSearchStrategy', () => {
 expect(result.results.observations[0].id).toBe(2);
 });
-it('should fall back on Chroma error', async () => {
+it('should propagate Chroma error (fail-fast, no silent fallback)', async () => {
 mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma unavailable')));
 const options: StrategySearchOptions = {
 limit: 10
 };
-const result = await strategy.findByType('bugfix', options);
-expect(result.usedChroma).toBe(false);
-expect(result.fellBack).toBe(true);
-expect(result.results.observations.length).toBeGreaterThan(0);
+await expect(
+strategy.findByType('bugfix', options)
+).rejects.toThrow('Chroma unavailable');
 });
 it('should return empty when no metadata matches', async () => {
@@ -394,18 +389,16 @@ describe('HybridSearchStrategy', () => {
 expect(result.sessions).toHaveLength(1);
 });
-it('should fall back on Chroma error', async () => {
+it('should propagate Chroma error (fail-fast, no silent fallback)', async () => {
 mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma down')));
 const options: StrategySearchOptions = {
 limit: 10
 };
-const result = await strategy.findByFile('/path/to/file.ts', options);
-expect(result.usedChroma).toBe(false);
-expect(result.observations.length).toBeGreaterThan(0);
-expect(result.sessions).toHaveLength(1);
+await expect(
+strategy.findByFile('/path/to/file.ts', options)
+).rejects.toThrow('Chroma down');
 });
 });
@@ -116,7 +116,6 @@ describe('SQLiteSearchStrategy', () => {
 const result = await strategy.search(options);
 expect(result.usedChroma).toBe(false);
-expect(result.fellBack).toBe(false);
 expect(result.strategy).toBe('sqlite');
 expect(result.results.observations).toHaveLength(1);
 expect(result.results.sessions).toHaveLength(1);
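Across all four strategy suites the assertions that survive read only `usedChroma`, `strategy`, and `results`, which implies a slimmed result shape roughly like this sketch (field types beyond those names are assumptions):

```ts
// With silent fallback gone, `fellBack` has no remaining meaning, so the
// strategy result slims down to what the updated assertions still read.
type StrategyName = 'chroma' | 'hybrid' | 'sqlite';

interface StrategySearchResult {
  usedChroma: boolean;    // which engine actually answered
  strategy: StrategyName; // asserted via result.strategy in each suite
  results: {
    observations: unknown[];
    sessions: unknown[];
    prompts: unknown[];
  };
  // fellBack: boolean;   // deleted — fail-fast means no silent fallback state
}
```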
@@ -1,251 +0,0 @@
/**
* Tests for Issue #1590: Session lifecycle guards to prevent runaway API spend
*
* Validates three lifecycle safety mechanisms:
* 1. SIGTERM detection: externally-killed processes must NOT trigger crash recovery
* 2. Wall-clock age limit: sessions older than MAX_SESSION_WALL_CLOCK_MS must be terminated
* 3. Duplicate process prevention: a new spawn for a session kills any existing process first
*/
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { EventEmitter } from 'events';
import {
registerProcess,
unregisterProcess,
getProcessBySession,
getActiveCount,
getActiveProcesses,
createPidCapturingSpawn,
} from '../../src/services/worker/ProcessRegistry.js';
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
function createMockProcess(overrides: { exitCode?: number | null; killed?: boolean } = {}) {
const emitter = new EventEmitter();
const mock = Object.assign(emitter, {
pid: Math.floor(Math.random() * 100_000) + 10_000,
exitCode: overrides.exitCode ?? null,
killed: overrides.killed ?? false,
stdin: null as null,
stdout: null as null,
stderr: null as null,
kill(signal?: string) {
mock.killed = true;
setTimeout(() => {
mock.exitCode = 0;
mock.emit('exit', mock.exitCode, signal || 'SIGTERM');
}, 10);
return true;
},
on: emitter.on.bind(emitter),
once: emitter.once.bind(emitter),
off: emitter.off.bind(emitter),
});
return mock;
}
function clearRegistry() {
for (const p of getActiveProcesses()) {
unregisterProcess(p.pid);
}
}
// ---------------------------------------------------------------------------
// 1. SIGTERM detection — does NOT trigger crash recovery
// ---------------------------------------------------------------------------
describe('SIGTERM detection (Issue #1590)', () => {
it('should classify "code 143" as a SIGTERM error', () => {
const errorMsg = 'Claude Code process exited with code 143';
const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
expect(isSigterm).toBe(true);
});
it('should classify "signal SIGTERM" as a SIGTERM error', () => {
const errorMsg = 'Process terminated with signal SIGTERM';
const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
expect(isSigterm).toBe(true);
});
it('should NOT classify ordinary errors as SIGTERM', () => {
const errorMsg = 'Invalid API key';
const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
expect(isSigterm).toBe(false);
});
it('should NOT classify code 1 (normal error) as SIGTERM', () => {
const errorMsg = 'Claude Code process exited with code 1';
const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
expect(isSigterm).toBe(false);
});
it('aborting the controller should mark wasAborted=true, preventing crash recovery', () => {
// Simulate what the catch handler does: abort when SIGTERM detected
const abortController = new AbortController();
expect(abortController.signal.aborted).toBe(false);
// SIGTERM arrives — we abort the controller
abortController.abort();
// By the time .finally() runs, wasAborted should be true
const wasAborted = abortController.signal.aborted;
expect(wasAborted).toBe(true);
});
it('should NOT abort the controller for non-SIGTERM crash errors', () => {
const abortController = new AbortController();
const errorMsg = 'FOREIGN KEY constraint failed';
// Non-SIGTERM: do NOT abort
const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
if (isSigterm) {
abortController.abort();
}
expect(abortController.signal.aborted).toBe(false);
});
});
// ---------------------------------------------------------------------------
// 2. Wall-clock age limit
// ---------------------------------------------------------------------------
describe('Wall-clock age limit (Issue #1590)', () => {
const MAX_SESSION_WALL_CLOCK_MS = 4 * 60 * 60 * 1000; // 4 hours (matches SessionRoutes)
it('should NOT terminate a session started < 4 hours ago', () => {
const startTime = Date.now() - 30 * 60 * 1000; // 30 minutes ago
const sessionAgeMs = Date.now() - startTime;
expect(sessionAgeMs).toBeLessThan(MAX_SESSION_WALL_CLOCK_MS);
});
it('should NOT terminate a session started exactly 4 hours ago (strict >)', () => {
// Production uses strict `>` (not `>=`), so exactly 4h is still alive.
const startTime = Date.now() - MAX_SESSION_WALL_CLOCK_MS;
const sessionAgeMs = Date.now() - startTime;
// At exactly the boundary, sessionAgeMs === MAX, and `>` is false → no termination.
expect(sessionAgeMs).toBeLessThanOrEqual(MAX_SESSION_WALL_CLOCK_MS);
});
it('should terminate a session started more than 4 hours ago', () => {
const startTime = Date.now() - MAX_SESSION_WALL_CLOCK_MS - 1;
const sessionAgeMs = Date.now() - startTime;
expect(sessionAgeMs).toBeGreaterThan(MAX_SESSION_WALL_CLOCK_MS);
});
it('should terminate a session started 13+ hours ago (the issue scenario)', () => {
const startTime = Date.now() - 13 * 60 * 60 * 1000; // 13 hours ago
const sessionAgeMs = Date.now() - startTime;
expect(sessionAgeMs).toBeGreaterThan(MAX_SESSION_WALL_CLOCK_MS);
});
it('aborting + draining pending queue should prevent respawn', () => {
// Simulate the wall-clock termination sequence:
// 1. Abort controller (stops active generator)
// 2. Mark pending messages abandoned (no work to restart for)
// 3. Remove session from map
const abortController = new AbortController();
let pendingAbandoned = 0;
let sessionRemoved = false;
// Simulate abort
abortController.abort();
expect(abortController.signal.aborted).toBe(true);
// Simulate markAllSessionMessagesAbandoned
pendingAbandoned = 3; // Pretend 3 messages were abandoned
// Simulate removeSessionImmediate
sessionRemoved = true;
expect(pendingAbandoned).toBeGreaterThanOrEqual(0);
expect(sessionRemoved).toBe(true);
});
});
// ---------------------------------------------------------------------------
// 3. Duplicate process prevention in createPidCapturingSpawn
// ---------------------------------------------------------------------------
describe('Duplicate process prevention (Issue #1590)', () => {
beforeEach(() => {
clearRegistry();
});
afterEach(() => {
clearRegistry();
});
it('should detect a duplicate when a live process already exists for the session', () => {
const proc = createMockProcess();
registerProcess(proc.pid, 42, proc as any);
const existing = getProcessBySession(42);
expect(existing).toBeDefined();
expect(existing!.process.exitCode).toBeNull(); // Still alive
});
it('should NOT detect a duplicate when the existing process has already exited', () => {
const proc = createMockProcess({ exitCode: 0 });
registerProcess(proc.pid, 42, proc as any);
const existing = getProcessBySession(42);
expect(existing).toBeDefined();
// exitCode is set — process is already done, NOT a live duplicate
expect(existing!.process.exitCode).not.toBeNull();
});
it('should kill existing process and unregister before spawning', () => {
const existingProc = createMockProcess();
registerProcess(existingProc.pid, 99, existingProc as any);
expect(getActiveCount()).toBe(1);
// Simulate the duplicate-kill logic:
const duplicate = getProcessBySession(99);
if (duplicate && duplicate.process.exitCode === null) {
try { duplicate.process.kill('SIGTERM'); } catch { /* already dead */ }
unregisterProcess(duplicate.pid);
}
expect(getActiveCount()).toBe(0);
expect(getProcessBySession(99)).toBeUndefined();
});
it('should leave registry empty after killing duplicate so new process can register', () => {
const oldProc = createMockProcess();
registerProcess(oldProc.pid, 77, oldProc as any);
expect(getActiveCount()).toBe(1);
// Kill duplicate
const dup = getProcessBySession(77);
if (dup && dup.process.exitCode === null) {
try { dup.process.kill('SIGTERM'); } catch { /* ignore */ }
unregisterProcess(dup.pid);
}
expect(getActiveCount()).toBe(0);
// New process can now register cleanly
const newProc = createMockProcess();
registerProcess(newProc.pid, 77, newProc as any);
expect(getActiveCount()).toBe(1);
const found = getProcessBySession(77);
expect(found!.pid).toBe(newProc.pid);
});
it('should not interfere when no existing process is registered', () => {
expect(getProcessBySession(55)).toBeUndefined();
// Duplicate-kill logic: should be a no-op
const dup = getProcessBySession(55);
if (dup && dup.process.exitCode === null) {
unregisterProcess(dup.pid);
}
// Registry should still be empty — no side effects
expect(getActiveCount()).toBe(0);
});
});