perf: streamline worker startup and consolidate database connections (#2122)

* docs: pathfinder refactor corpus + Node 20 preflight

Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 01 — data integrity

Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.

- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
  started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
  UNIQUE(memory_session_id, content_hash) on observations; dedup
  duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
  worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
  and the 60-s stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
  observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
  2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
  path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
  CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
  Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
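
The Phase 3 claim predicate can be sketched in-memory (the real query
runs in SQLite; the row shape and helper names here are illustrative
stand-ins, not the actual schema):

```typescript
// Sketch of the Plan 01 Phase 3 self-healing claim: a row is claimable
// when it is unclaimed, or when the worker that claimed it is dead.
// Row shape and names are illustrative, not the real schema.
interface PendingRow {
  id: number;
  status: 'pending' | 'processing';
  workerPid: number | null;
}

// Equivalent of `worker_pid NOT IN live_worker_pids` — liveness replaces
// the timestamp threshold, so STALE_PROCESSING_THRESHOLD_MS can go.
function claimNextMessage(
  rows: PendingRow[],
  livePids: Set<number>,
): PendingRow | undefined {
  return rows.find(
    (r) =>
      r.status === 'pending' ||
      (r.status === 'processing' &&
        r.workerPid !== null &&
        !livePids.has(r.workerPid)),
  );
}
```

A row claimed by a crashed worker is reclaimed on the next call, with no
periodic stale-reset pass.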

Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/01-data-integrity.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 02 — process lifecycle

OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).

- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
  canonical registry at src/supervisor/process-registry.ts is the
  sole survivor; SDK spawn site consolidated into it via new
  createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
  ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
  ['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
  process.kill(-pgid, signal) on Unix when pgid is recorded;
  Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
  staleSessionReaperInterval setInterval (including the co-located
  WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
  WAL growth without an app-level timer), killIdleDaemonChildren,
  killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
  detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
  constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
  via generatorPromise.finally() already lives in worker-service
  startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
  SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
  for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
  via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
  lazy-spawn — consults isWorkerPortAlive (which gates
  captureProcessStartToken for PID-reuse safety via commit
  99060bac), then spawns detached with unref(), then
  waitForWorkerPort({ attempts: 3, backoffMs: 250 }) with hand-rolled
  exponential backoff (250→500→1000 ms). No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
  idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
  only on external SIGTERM via supervisor signal handlers.
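
The Phase 8 delay schedule reduces to a pure function (the name and
signature are illustrative; the real helper also probes the port between
delays):

```typescript
// Sketch of the Phase 8 retry schedule: the base delay doubles per
// attempt, yielding 250 → 500 → 1000 ms for three attempts.
function backoffSchedule(attempts: number, backoffMs: number): number[] {
  return Array.from({ length: attempts }, (_, i) => backoffMs * 2 ** i);
}
```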

Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.

All 10 verification greps return 0. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast

Phases 3, 5, 6 only. Plan-doc inaccuracies for phases 1/2/4/7/8/9
deferred for plan reconciliation:
  - Phase 1/2: ObservationRow type doesn't exist; the four
    "formatters" operate on three incompatible types.
  - Phase 4: RECENCY_WINDOW_MS already imported from
    SEARCH_CONSTANTS at every call site.
  - Phase 7: getExistingChromaIds is NOT @deprecated and has an
    active caller in ChromaSync.backfillMissingSyncs.
  - Phase 8: estimateTokens already consolidated.
  - Phase 9: knowledge-corpus rewrite blocked on PG-3
    prompt-caching cost smoke test.

Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.

Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicit-
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
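
The Phase 5 error can be sketched as follows (the AppError base here is
a minimal stand-in for the project's real class; only the 503 status and
the CHROMA_UNAVAILABLE code are taken from the text above):

```typescript
// Minimal stand-in for the project's AppError base class.
class AppError extends Error {
  constructor(
    public readonly status: number,
    public readonly code: string,
    message?: string,
  ) {
    super(message ?? code);
    this.name = new.target.name;
  }
}

class ChromaUnavailableError extends AppError {
  constructor(message = 'Chroma vector store is unreachable') {
    super(503, 'CHROMA_UNAVAILABLE', message);
  }
}

// Fail-fast: a runtime Chroma failure throws instead of silently
// falling back to SQLite results.
function search(chromaUp: boolean): string[] {
  if (!chromaUp) throw new ChromaUnavailableError();
  return [];
}
```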

Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).

Tests updated (Principle 7 — delete in same PR):
  - search-orchestrator.test.ts: "fall back to SQLite" rewritten
    as "throw ChromaUnavailableError (HTTP 503)".
  - chroma/hybrid/sqlite-search-strategy tests: rewritten to
    rejects.toThrow; removed fellBack assertions.

Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 03 — ingestion path

Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.

- Phase 0: Created src/services/worker/http/shared.ts exporting
  ingestObservation/ingestPrompt/ingestSummary as direct
  in-process functions plus ingestEventBus (Node EventEmitter,
  reusing existing pattern — no third event bus introduced).
  setIngestContext wires the SessionManager dependency from
  worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
  returning { valid:true; kind: 'observation'|'summary'; data }
  | { valid:false; reason: string }. Inspects root element;
  <skip_summary reason="…"/> is a first-class summary case
  with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
  branches on the discriminated union. On invalid → markFailed
  + logger.warn(reason). On observation → ingestObservation.
  On summary → ingestSummary then emit summaryStoredEvent
  { sessionId, messageId } (consumed by Plan 05's blocking
  /api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
  (ResponseProcessor + SessionManager + worker-types) and
  MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
  guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
  replaced with fs.watch(transcriptsRoot, { recursive: true,
  persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
  Map deleted. tool_use rows insert with INSERT OR IGNORE on
  UNIQUE(session_id, tool_use_id) (added by Plan 01). New
  pairToolUsesByJoin query in PendingMessageStore for read-time
  pairing (UNIQUE INDEX provides idempotency; explicit consumer
  not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
  direct ingestObservation call. maybeParseJson silent-passthrough
  rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
  collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
  class) deleted. The active extractLastMessage at
  src/shared/transcript-parser.ts:41-144 is the sole survivor.
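
The Phase 1 result shape can be sketched like this (the regexes are
illustrative stand-ins for the real root-element inspection; only the
discriminated union and the skip_summary handling follow the text
above):

```typescript
// Sketch of the Phase 1 discriminated union: one parse entry point,
// one result shape, no undefined, no coercion between kinds.
type ParseResult =
  | { valid: true; kind: 'observation' | 'summary'; data: { skipped?: boolean; body?: string } }
  | { valid: false; reason: string };

function parseAgentXml(xml: string): ParseResult {
  const trimmed = xml.trim();
  // <skip_summary reason="…"/> is a first-class summary case.
  if (/^<skip_summary\b[^>]*\/>$/.test(trimmed)) {
    return { valid: true, kind: 'summary', data: { skipped: true } };
  }
  const root = trimmed.match(/^<(observation|summary)>([\s\S]*)<\/\1>$/);
  if (!root) return { valid: false, reason: 'unrecognized root element' };
  return {
    valid: true,
    kind: root[1] as 'observation' | 'summary',
    data: { body: root[2] },
  };
}
```

Callers branch once on `valid`, exactly as Phase 2 describes for
ResponseProcessor.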

Tests updated (Principle 7 — same-PR delete):
  - tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
    to assert discriminated-union shape; coercion-specific
    scenarios collapse into { valid:false } assertions.
  - tests/worker/agents/response-processor.test.ts: circuit-breaker
    describe block skipped; non-XML/empty-response tests assert
    fail-fast markFailed behavior.

Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.

Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.

Plan: PATHFINDER-2026-04-22/03-ingestion-path.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 05 — hook surface

Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.

- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
  1..20; do curl -sf .../health && break; sleep 0.1; done` shell
  retry wrappers deleted. Hook commands invoke their bun entry
  point directly.
- Phase 2: src/shared/worker-utils.ts — added
  executeWithWorkerFallback<T>(url, method, body) returning
  T | { continue: true; reason?: string }. All 8 hook handlers
  (observation, session-init, context, file-context, file-edit,
  summarize, session-complete, user-message) rewritten to use
  it instead of duplicating the ensureWorkerRunning →
  workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
  using validateBody + sessionEndSchema (z.object({sessionId})).
  One-shot ingestEventBus.on('summaryStoredEvent') listener,
  30 s timer, req.aborted handler — all share one cleanup so
  the listener cannot leak. summarize.ts polling loop, plus
  MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
  memoizes SettingsDefaultsManager.loadFromFile per process.
  Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
  check entry; isProjectExcluded no longer referenced from
  src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
  (the claude-code, cursor, raw, gemini-cli, and windsurf
  adapters). New AdapterRejectedInput error in
  src/cli/adapters/errors.ts. Handler-level isValidCwd checks
  deleted from file-edit.ts and observation.ts. hook-command.ts
  catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
  initAgent is idempotent. tests/hooks/context-reinjection-guard
  test (validated the deleted conditional) deleted in same PR
  per Principle 7.
- Phase 8: fail-loud counter at
  ~/.claude-mem/state/hook-failures.json. Atomic write via .tmp +
  rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD setting (default 3).
  On consecutive worker-unreachable ≥ N: process.exit(2). On
  success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
  wrapping ensureWorkerRunning. executeWithWorkerFallback calls
  the memoized version.
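
The Phase 8 counter can be sketched as below (a temp directory stands in
for ~/.claude-mem/state, and the JSON file format is an assumption; the
threshold default and the tmp-then-rename write come from the text
above):

```typescript
import { existsSync, mkdtempSync, readFileSync, renameSync, writeFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD default per the commit text.
const THRESHOLD = 3;

// Temp dir stands in for ~/.claude-mem/state in this sketch.
const stateDir = mkdtempSync(join(tmpdir(), 'hook-failures-'));
const stateFile = join(stateDir, 'hook-failures.json');

function readFailures(): number {
  if (!existsSync(stateFile)) return 0;
  return JSON.parse(readFileSync(stateFile, 'utf8')).consecutiveFailures ?? 0;
}

// Atomic write: write a sibling .tmp file, then rename over the target,
// so a crash mid-write never leaves a torn JSON file behind.
function writeFailures(n: number): void {
  const tmp = stateFile + '.tmp';
  writeFileSync(tmp, JSON.stringify({ consecutiveFailures: n }));
  renameSync(tmp, stateFile);
}

// Returns true when the caller should process.exit(2). NOT a retry.
function recordWorkerUnreachable(): boolean {
  const n = readFailures() + 1;
  writeFailures(n);
  return n >= THRESHOLD;
}

function recordSuccess(): void {
  writeFailures(0);
}
```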

Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.

Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.

Plan: PATHFINDER-2026-04-22/05-hook-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 06 — API surface

One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted.
Failure-marking consolidated to one helper.

- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
  in src/services/worker/http/middleware/validateBody.ts —
  safeParse → 400 { error: 'ValidationError', issues: [...] }
  on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
  route file. 24 POST endpoints across SessionRoutes,
  CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
  LogsRoutes, SettingsRoutes now wrap with validateBody().
  /api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
  along with every call site. Inline coercion helpers
  (coerceStringArray, coercePositiveInteger) and inline
  if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
  from src/services/worker/http/middleware.ts. Worker binds
  127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
  via fs.readFileSync; served as Buffer with text/html content
  type. SKILL.md + per-operation .md files cached in
  Server.ts as Map<string, string>; loadInstructionContent
  helper deleted. NO fs.watch, NO TTL — process restart is the
  cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
  — /api/pending-queue (GET), /api/pending-queue/process (POST),
  /api/pending-queue/failed (DELETE), /api/pending-queue/all
  (DELETE). Helper methods that ONLY served them
  (getQueueMessages, getStuckCount, getRecentlyProcessed,
  clearFailed, clearAll) deleted from PendingMessageStore.
  KEPT: /api/processing-status (observability), /health
  (used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
  GracefulShutdown now calls getSupervisor().stop() directly.
  Two functions retained with clear roles:
    - performGracefulShutdown — worker-side 6-step shutdown
    - runShutdownCascade — supervisor-side child teardown
      (process.kill(-pgid), Windows tree-kill, PID-file cleanup)
  Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
  failure-marking path on PendingMessageStore. Old methods
  markSessionMessagesFailed and markAllSessionMessagesAbandoned
  deleted along with all callers (worker-service,
  SessionCompletionHandler, tests/zombie-prevention).

Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.

Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.

Plan: PATHFINDER-2026-04-22/06-api-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 07 — dead code sweep

ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.

Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
  isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
  abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
  zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
  command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments

Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
  builders, ParsedObservation, ParsedSummary, ParseResult,
  SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
  via dynamic await import('../../../context-generator.js') in
  worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
  — used via dynamic await import in npx-cli/install.ts +
  uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
  ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
  orphan-recovery caller in worker-service.ts plus
  zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
  in same file.
- All Database.ts barrel re-exports — used downstream.

Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
  is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
  the methods are not thin wrappers but ~900 LoC of bodies, and
  two methods are documented as intentional mirrors so the
  context-generator.cjs bundle stays schema-consistent without
  pulling MigrationRunner. Deserves its own plan, not a sweep.

Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.

Plan: PATHFINDER-2026-04-22/07-dead-code.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: remove residual ProcessRegistry comment reference

Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile review (P1 + 2× P2)

P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
  - Added optional timeoutMs to executeWithWorkerFallback,
    forwarded to workerHttpRequest.
  - summarize.ts call site now passes 35_000 (5 s above server
    hold window).

P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
  - ResponseProcessor now calls ingestSummary({ kind: 'parsed',
    sessionDbId, messageId, contentSessionId, parsed }) so the
    event-emission path is single-sourced.
  - ingestSummary's requireContext() resolution moved inside the
    'queue' branch (the only branch that needs sessionManager /
    dbManager). 'parsed' is a pure event-bus emission and
    doesn't need worker-internal context — fixes mocked
    ResponseProcessor unit tests that don't call
    setIngestContext.

P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
  - Added a Symbol.for('claude-mem/worker-fallback') brand to
    WorkerFallback. isWorkerFallback now checks the brand, not
    a duck-typed property name.
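
The brand fix can be sketched like this (the WorkerFallback shape is
taken from the text above; Symbol.for returns the same symbol from the
global registry in every module, so the brand survives bundling):

```typescript
// Brand symbol shared via the global symbol registry.
const FALLBACK_BRAND = Symbol.for('claude-mem/worker-fallback');

interface WorkerFallback {
  continue: true;
  reason?: string;
  [key: symbol]: unknown;
}

function makeWorkerFallback(reason?: string): WorkerFallback {
  return { continue: true, reason, [FALLBACK_BRAND]: true };
}

// Brand check: a legitimate API response that happens to include
// { continue: true } no longer false-positives.
function isWorkerFallback(value: unknown): value is WorkerFallback {
  return (
    typeof value === 'object' &&
    value !== null &&
    (value as Record<symbol, unknown>)[FALLBACK_BRAND] === true
  );
}
```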

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 2 (P1 + P2)

P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.

  - Gate ingestSummary call on (parsed.data.skipped ||
    session.lastSummaryStored). Skipped summaries are an explicit
    no-op bypass and still confirm; real summaries only confirm
    when storage actually wrote a row.
  - Non-skipped + summaryId === null path logs a warn and lets
    the server-side timeout (504) surface to the hook instead of
    a false ok:true.

P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 1). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.

  - Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
    log instead of the misleading ENQUEUED line. No behavior
    change — the duplicate is still correctly suppressed by the
    DB (Principle 3); only the log surface is corrected.
  - confirmProcessed is never called with the enqueue() return
    value (it operates on session.processingMessageIds[] from
    claimNextMessage), so no caller is broken; the visibility
    fix prevents future misuse.

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 3 (P1 + 2× P2)

- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
  context after SessionRoutes is constructed. setIngestContext runs
  before routes exist, so transcript-watcher observations queued via
  ingestObservation() had no way to auto-start the SDK generator.
  Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
  /api/session/end calls register one listener each and clean up on
  completion, so the default limit of 10 would trigger spurious
  leak warnings under normal load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
  ingestObservation() instead of duplicating skip-tool / meta /
  privacy / queue logic. Single helper, matching the Plan 03 goal.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)

- processor.handleToolResult: restore in-memory tool-use→tool-result
  pairing via session.pendingTools for schemas (e.g. Codex) whose
  tool_result events carry only tool_use_id + output. Without this,
  neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
  of throwing. Previously a single malformed JSON-shaped field caused
  handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
  for purely-glob inputs so the caller skips the watch instead of
  anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
  log on the returned id; the SessionManager branches on id === 0.

* fix: forward tool_use_id through ingestObservation (Greptile iter 5)

P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.

- shared.ingestObservation: forward payload.toolUseId to
  queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
  tool_use_id (HTTP convention) and toolUseId (JS convention) from
  req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
  validator doesn't rely on .passthrough() alone.
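
Why the dropped toolUseId defeated dedup: SQLite treats NULLs as
distinct for UNIQUE purposes, so only non-null (session, tool_use_id)
pairs ever collide. A Set-based stand-in for INSERT OR IGNORE makes the
behavior concrete (names are illustrative):

```typescript
// In-memory stand-in for INSERT OR IGNORE against the unique
// (session, tool_use_id) pair. NULL never participates in the
// constraint, so null-keyed rows always insert — the pre-fix behavior.
const seen = new Set<string>();

// Returns true when the row lands, false when the unique pair
// suppresses a duplicate.
function insertOrIgnore(sessionId: string, toolUseId: string | null): boolean {
  if (toolUseId === null) return true; // NULLs are distinct in SQLite UNIQUE
  const key = `${sessionId}\u0000${toolUseId}`;
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}
```

Forwarding the real toolUseId is what moves replayed lines from the
always-insert branch into the suppressed branch.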

* fix: drop dead pairToolUsesByJoin, close session-end listener race

- PendingMessageStore: delete pairToolUsesByJoin. The method was never
  called and its self-join semantics are structurally incompatible
  with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
  collapses any second row with the same pair, so a self-join can
  only ever match a row to itself. In-memory pendingTools in
  processor.ts remains the pairing path for split-event schemas.

- IngestEventBus: retain a short-lived (60s) recentStored map keyed
  by sessionId. Populated on summaryStoredEvent emit, evicted on
  consume or TTL.

- handleSessionEnd: drain the recent-events buffer before attaching
  the listener. Closes the register-after-emit race where the summary
  can persist between the hook's summarize POST and its session/end
  POST — previously that window returned 504 after the 30s timeout.
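
The buffer-then-drain shape can be sketched as below (method names and
the payload are illustrative reductions of the real bus; the 60 s TTL
and evict-on-consume behavior follow the text above):

```typescript
import { EventEmitter } from 'node:events';

const TTL_MS = 60_000;

// Emits are mirrored into a short-lived per-session buffer so a
// listener that attaches after the emit can still observe the event.
class IngestEventBus extends EventEmitter {
  private recentStored = new Map<string, { messageId: number; at: number }>();

  emitSummaryStored(sessionId: string, messageId: number): void {
    this.recentStored.set(sessionId, { messageId, at: Date.now() });
    this.emit('summaryStoredEvent', { sessionId, messageId });
  }

  takeRecentSummaryStored(sessionId: string): { messageId: number } | undefined {
    const hit = this.recentStored.get(sessionId);
    if (!hit || Date.now() - hit.at > TTL_MS) return undefined;
    this.recentStored.delete(sessionId); // evict on consume
    return { messageId: hit.messageId };
  }
}
```

handleSessionEnd calls takeRecentSummaryStored first and only attaches
its one-shot listener when the buffer is empty.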

* chore: merge origin/main into vivacious-teeth

Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).

Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
  kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
  loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
  POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
  summaryStoredEvent supersedes main's SessionCompletionHandler DI
  refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
  reason; generator .finally() Stop-hook self-clean is a guard for a
  path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
  security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
  #2084) while preserving our Zod validateBody schema.

Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings

1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
   in wrapHandler — synchronous exceptions would hang the client rather
   than surfacing as 500s. Wrap it like every other handler.

2) processor.handleToolResult only consumed the session.pendingTools
   entry when the tool_result arrived without a toolName. In the
   split-schema path where tool_result carries both toolName and toolId,
   the entry was never deleted and the map grew for the life of the
   session. Consume the entry whenever toolId is present.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: typing cleanup and viewer tsconfig split for PR feedback

- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings (iter 2)

- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
  the unscoped-drain branch that would nuke every pending/processing
  row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
  cached event until TTL eviction so a retried Stop hook's second
  /api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
  already tailed (JSONL appends fire on every line; only unknown
  paths warrant a rescan).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: call finalizeSession in terminal session paths (Greptile iter 3)

terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.

Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: GC failed pending_messages rows at startup (Greptile iter 4)

Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.

Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)

1. startSessionProcessor success branch now calls completionHandler.
   finalizeSession before removeSessionImmediate. Hooks-disabled installs
   (and any Stop hook that fails before POST /api/sessions/complete) no
   longer leave sdk_sessions rows as status='active' forever. Idempotent
   — a subsequent /api/sessions/complete is a no-op.

2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
   closures that reference it (TDZ safety; safe at runtime today but
   fragile if timeout ever shrinks).

3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
   instead of constructing its own — prevents silent divergence if the
   handler ever becomes stateful.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: stop runaway crash-recovery loop on dead sessions

Two distinct bugs were combining to keep a dead session restarting forever:

Bug 1 (uncaught "The operation was aborted."):
  child_process.spawn emits 'error' asynchronously for ENOENT/EACCES/abort
  signal aborts. spawnSdkProcess() never attached an 'error' listener, so
  any async spawn failure became uncaughtException and escaped to the
  daemon-level handler. Attach an 'error' listener immediately after spawn,
  before the !child.pid early-return, so async spawn errors are logged
  (with errno code) and swallowed locally.
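The fix can be sketched as follows; `spawnSdkProcessSafe` and its callback parameter are illustrative names, not the real signature:

```ts
import { spawn, type ChildProcess } from "node:child_process";

// Sketch (assumed names): attach the 'error' listener immediately after spawn,
// before any pid check, so async failures (ENOENT, EACCES, abort) are logged
// and swallowed instead of escaping as uncaughtException.
export function spawnSdkProcessSafe(
  command: string,
  args: string[],
  onSpawnError: (err: NodeJS.ErrnoException) => void,
): ChildProcess {
  const child = spawn(command, args, { stdio: ["pipe", "pipe", "pipe"] });
  // Attach FIRST: 'error' fires asynchronously, so a listener added after an
  // early return would be too late and the error would hit the daemon handler.
  child.on("error", (err: NodeJS.ErrnoException) => {
    onSpawnError(err); // e.g. log errno code; do NOT rethrow
  });
  return child;
}
```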

Bug 2 (sliding-window limiter never trips on slow restart cadence):
  RestartGuard tripped only when restartTimestamps.length exceeded
  MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
  exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
  session stuck in an 8s fail-restart cycle would loop forever
  (consecutiveRestarts climbing past 30+ in observed logs). Add a
  consecutiveFailures counter that increments on every restart and resets
  only on recordSuccess(). Trip when consecutive failures exceed
  MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
  processing in between proves the session is dead. Both guards now run in
  parallel: tight loops still trip the windowed cap; slow loops trip the
  consecutive-failure cap.
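The dual-guard logic can be sketched like this (constants from the commit; the class shape itself is an assumption):

```ts
// Sketch of the dual restart guards (constants per the commit; shape assumed).
const RESTART_WINDOW_MS = 60_000;
const MAX_WINDOWED_RESTARTS = 10;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuard {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  /** Returns true when either guard trips and the session must stop restarting. */
  recordRestart(now = Date.now()): boolean {
    this.restartTimestamps.push(now);
    this.restartTimestamps = this.restartTimestamps.filter(
      (t) => now - t <= RESTART_WINDOW_MS,
    );
    this.consecutiveFailures += 1;
    // Tight loops trip the windowed cap; slow 8s-backoff loops trip the
    // consecutive cap, which only a real success can reset.
    return (
      this.restartTimestamps.length > MAX_WINDOWED_RESTARTS ||
      this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES
    );
  }

  recordSuccess(): void {
    this.consecutiveFailures = 0; // successful processing proves the session is alive
  }
}
```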

Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* perf: streamline worker startup and consolidate database connections

1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
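The fallback in point 4 can be sketched with injected search functions; `searchWithFallback` and the `SearchHit` shape are hypothetical, not the real `SearchManager` API:

```ts
// Sketch (assumed names): try Chroma-backed semantic search first; on any
// failure fall back to a SQLite FTS5 MATCH query.
type SearchHit = { id: number; snippet: string };

async function searchWithFallback(
  query: string,
  chromaSearch: (q: string) => Promise<SearchHit[]>,
  ftsSearch: (q: string) => SearchHit[],
): Promise<{ hits: SearchHit[]; backend: "chroma" | "fts5" }> {
  try {
    return { hits: await chromaSearch(query), backend: "chroma" };
  } catch {
    // e.g. SELECT rowid AS id, snippet(...) FROM observations_fts
    //      WHERE observations_fts MATCH ?
    return { hits: ftsSearch(query), backend: "fts5" };
  }
}
```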

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)

* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations

Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.

- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
  before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
  when shouldTrackProject(cwd) is false, so the observer's own hooks
  cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
  boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
  on observations) inline so bundled artifacts (worker-service.cjs,
  context-generator.cjs) stay schema-consistent — without it, the
  ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
  supervisor can actually feed the observer's stdin.

Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.

* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)

Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
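A minimal sketch of the walk-back; the 256 KiB cap comes from the earlier commit, and the function name is illustrative:

```ts
// Sketch: truncate a UTF-8 buffer without splitting a codepoint.
const MAX_USER_PROMPT_BYTES = 256 * 1024;

function truncateToUtf8Boundary(buf: Buffer, maxBytes = MAX_USER_PROMPT_BYTES): string {
  if (buf.length <= maxBytes) return buf.toString("utf8");
  let end = maxBytes;
  // Continuation bytes look like 0b10xxxxxx; step back over them until the
  // cut sits on a lead byte (or ASCII), excluding any partial sequence.
  while (end > 0 && (buf[end] & 0b1100_0000) === 0b1000_0000) end--;
  return buf.subarray(0, end).toString("utf8");
}
```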

* fix: cross-platform observer-dir containment; clarify SDK stdin pipe

claude-review feedback on PR #2124.

- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
  hard-coded a POSIX separator and missed Windows backslash paths plus any
  trailing-slash variance. Switched to a path.relative-based isWithin()
  helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
  SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
  consumes that pipe; 'ignore' would null it and the null-check below
  would tear the child down on every spawn.

* fix: make Stop hook fire-and-forget; remove dead /api/session/end

The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed). Followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.

The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.

- summarize.ts: drop the /api/session/end long-poll and the trailing
  /api/sessions/complete await; ~40 lines removed; unused
  SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
  SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
  route registration. Drop the now-unused ingestEventBus and
  SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
  comments that referenced the dead endpoint. The IngestEventBus is
  left in place dormant (no listeners) for follow-up cleanup so this
  PR stays focused on the blocker.

Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.

Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* deps: bump all dependencies to latest including majors

Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.

Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
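The race-free pattern can be sketched as follows (names assumed, not the project's actual server module):

```ts
import http from "node:http";

// Sketch: attach both 'error' and 'listening' BEFORE calling listen(), and
// settle a promise on whichever fires first, so port conflicts reject
// instead of escaping as an unhandled 'error' event.
function listenSafely(server: http.Server, port: number): Promise<void> {
  return new Promise((resolve, reject) => {
    server.once("error", reject); // e.g. EADDRINUSE
    server.once("listening", () => {
      server.removeListener("error", reject);
      resolve();
    });
    server.listen(port);
  });
}
```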

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: surface real chroma errors and add deep status probe

Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.

Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.

Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: rebuild worker-service bundle to match merged src

Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: address coderabbit feedback on PLAN-fix-mcp-search.md

- replace machine-specific /Users/alexnewman absolute paths with portable
  <repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
Alex Newman
2026-04-25 13:37:40 -07:00
committed by GitHub
parent 8ace1d9c84
commit 94d592f212
159 changed files with 18091 additions and 5843 deletions
@@ -0,0 +1,529 @@
# Implementation Plan: session-lifecycle-management
**Flowchart**: PATHFINDER-2026-04-21/05-clean-flowcharts.md § 3.8 ("session-lifecycle-management (clean) — BIGGEST CULL")
**Before-state**: PATHFINDER-2026-04-21/01-flowcharts/session-lifecycle-management.md
**Scope** (revised 2026-04-22: zero-timer model): delete all three repeating background timers in the worker layer — no `ReaperTick` replacement, no `sqliteHousekeepingInterval`. Replace each recurring check with one of: (a) the `child.on('exit')` handlers already wired at `ProcessRegistry.ts:479` (SDK) and `worker-service.ts:530` (MCP), (b) the per-iterator 3-min idle `setTimeout` already wired at `SessionQueueProcessor.ts:6` (covers hung-generator case on its own), (c) a per-session `setTimeout(deleteSession, 15min)` scheduled on last-generator-completion and cleared on new activity (covers abandoned-session case), (d) a boot-once reconciliation block that calls the existing `killSystemOrphans()` + `supervisor.pruneDeadEntries()` + `recoverStuckProcessing()` + `clearFailedOlderThan(1h)` once at worker startup. Delete the worker-level `ProcessRegistry` facade (528 LoC). Inline the SIGTERM→SIGKILL ladder. Implement blocking `POST /api/session/end`.
**Target LoC**: process-lifecycle ~900 → ~400.
**Target repeating-timer count in `src/services/worker/` + `worker-service.ts`**: 3 → **0**. (The only `setTimeout` calls that remain are the per-operation escalation ladder, per-session idle, per-session abandonment, and the generator-exit race — all non-repeating, all correct.)
---
## Dependencies
### Upstream (must land first)
- **01-privacy-tag-filtering** — defines shared `stripMemoryTags(text)` in `src/utils/tag-stripping.ts`. Phase 1 of THIS plan introduces `ingestObservation` / `ingestPrompt` / `ingestSummary` helpers that call that function. If 01 has not landed, Phase 1 here imports the existing wrappers, but the ingest-helper location (`src/services/ingest/`) is authoritative and 01 rewires its call-sites into these helpers.
- **02-sqlite-persistence** — owns the boot-recovery section of `sqlite-persistence (clean)` (§ 3.3 bottom box `BootOnce`). V19 per-claim 60-s reset (`PendingMessageStore.ts:99-145`) is deleted by Phase 5 of THIS plan and replaced with a single `PendingMessageStore.recoverStuckProcessing()` called once in worker boot. 02 codifies the broader schema-recovery ordering; Phase 5 slots `recoverStuckProcessing()` into that boot sequence.
- **03-response-parsing-storage** — defines `ResponseProcessor` + `session.recordFailure()` contract. Phase 7 (blocking `/api/session/end`) awaits the `summary_stored` flag that `ResponseProcessor` sets after a successful summary commit. The "summary_stored OR 110s timeout" integration point lives inside this plan (Phase 7) but depends on 03 wiring the flag.
### Downstream (this plan enables)
- **09-lifecycle-hooks** — hook layer consumes the blocking `POST /api/session/end` built in Phase 7 (replaces the current 500-ms polling loop in `src/cli/handlers/summarize.ts:117-150`). That plan's hook simplification is blocked until Phase 7 ships.
---
## Concrete findings from live code
### `src/services/worker/ProcessRegistry.ts` (527 lines — entire file slated for deletion)
Exposed surface (every export → supervisor-registry method it should hit directly):
| Worker export | File:line | Replacement |
|---|---|---|
| `registerProcess(pid, sessionDbId, process)` | `:57-65` | `getSupervisor().registerProcess(id, info, procRef)` — already the body of this function |
| `unregisterProcess(pid)` | `:70-79` | `getSupervisor().getRegistry().getByPid(pid)` + `getSupervisor().unregisterProcess(record.id)` — already the body |
| `getProcessBySession(sessionDbId)` | `:85-94` | Move to free helper `findSessionProcess(id)` in `src/services/worker/process-spawning.ts`; body iterates `getRegistry().getAll()` + filters by `type==='sdk'` (same as `getTrackedProcesses` helper at `:34-52`) |
| `getActiveCount()` | `:99-101` | Direct: `getSupervisor().getRegistry().getAll().filter(r => r.type==='sdk').length` |
| `waitForSlot(max, timeout, evict)` | `:122-167` | Pool-slot bookkeeping is worker-scoped, **not** a supervisor concern. Keep as free function in `process-spawning.ts`. The `slotWaiters` array (`:104`) stays module-local. |
| `notifySlotAvailable()` (internal) | `:109-112` | Stays module-local in `process-spawning.ts`; called from the `exit` event handler inside `createPidCapturingSpawn`. Under the zero-timer model, `exit` is the sole runtime trigger, so slot notification happens directly from the handler that already owns subprocess-death semantics. No scanner involved. |
| `getActiveProcesses()` | `:172-179` | Free helper in `process-spawning.ts` (still used for stats / debug endpoints). |
| `ensureProcessExit(tracked, timeoutMs=5000)` | `:185-229` | **Inline** into `deleteSession` (SessionManager.ts:406-413) as a 12-line block: check `exitCode`, `Promise.race([once('exit'), setTimeout])`, SIGKILL, race again. Per audit item #9 and anti-pattern guard A. |
| `killIdleDaemonChildren()` | `:244-309` | **Delete**. Its runtime role (cleaning up our own idle daemons) is covered by the `child.on('exit')` handler at `ProcessRegistry.ts:479` which already calls `unregisterProcess(pid)`, combined with the per-iterator 3-min idle `setTimeout` at `SessionQueueProcessor.ts:6` that aborts hung generators. Ppid=1 leftovers from a prior worker crash are caught by boot-once `killSystemOrphans()` (see next row). |
| `killSystemOrphans()` | `:315-344` | **Keep function body; move call from interval to boot-once.** Ppid=1 Claude processes can only exist because a *previous* worker crashed without reaping them — during the current worker's lifetime, `exit` handlers catch subprocess death. So one call at worker startup covers the full scope. Called from worker boot init (Phase 3), never scheduled. |
| `reapOrphanedProcesses(activeSessionIds)` | `:349-382` | **Delete**. Runtime component: covered by `exit` handlers. Cross-restart component: covered by boot-once `supervisor.pruneDeadEntries()` which walks the registry and drops entries whose PIDs are no longer in the OS. |
| `createPidCapturingSpawn(sessionDbId)` | `:393-502` | Move verbatim to `process-spawning.ts` as a free function. It already wires `child.on('exit')``unregisterProcess(pid)` at `:479-486` — keep that path; it's the sole runtime subprocess-death signal under the zero-timer model. |
| `startOrphanReaper(getActiveSessionIds, intervalMs=30_000)` | `:508-527` | **Delete**; no replacement timer. |
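The inlined exit-wait described for `ensureProcessExit` above (check `exitCode`, race `once('exit')` against a timeout, SIGKILL, race again) could look roughly like this. A sketch under those assumptions, not the exact block the plan will land:

```ts
import { once } from "node:events";
import { setTimeout as delay } from "node:timers/promises";
import type { ChildProcess } from "node:child_process";

// Sketch of the inline replacing ensureProcessExit (names assumed).
// The caller has already sent SIGTERM; this waits, then escalates to SIGKILL.
async function waitForExit(child: ChildProcess, timeoutMs = 5000): Promise<void> {
  if (child.exitCode !== null || child.signalCode !== null) return; // already gone
  const exited = once(child, "exit"); // resolves when the child actually exits
  const sawExit = await Promise.race([exited.then(() => true), delay(timeoutMs, false)]);
  if (sawExit) return;
  child.kill("SIGKILL"); // escalation ladder: TERM already sent; now KILL
  await Promise.race([exited, delay(1000)]);
}
```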
Caller fan-out (every `from '.../ProcessRegistry'` site must be re-pointed):
- `src/services/worker/SessionManager.ts:17` — imports `getProcessBySession, ensureProcessExit`. Rewrite: import from `./process-spawning.js` (findSessionProcess), and inline the exit wait in `deleteSession`.
- `src/services/worker/SDKAgent.ts:24` — imports `createPidCapturingSpawn, getProcessBySession, ensureProcessExit, waitForSlot`. Rewrite: import from `./process-spawning.js`. The `ensureProcessExit` call-site (search inside SDKAgent) goes away when we route through `deleteSession`.
- `src/services/worker-service.ts:109` — imports `startOrphanReaper, reapOrphanedProcesses, getProcessBySession, ensureProcessExit`. After Phase 3, imports shrink to `{ getActiveProcesses }` from `./process-spawning.js`. `startOrphanReaper` + `reapOrphanedProcesses` delete. The `ensureProcessExit` at `worker-service.ts:786` inlines.
### `src/supervisor/process-registry.ts` (408 lines — authoritative, stays as-is)
Relevant API (no changes needed):
- `class ProcessRegistry` at `:175``register`, `unregister`, `getAll`, `getBySession`, `getByPid`, `getRuntimeProcess`, `pruneDeadEntries` (`:269-285`, uses `isPidAlive`), `reapSession(sessionId)` (`:292-385`, implements SIGTERM → wait 5 s → SIGKILL → wait 1 s).
- `isPidAlive(pid)` at `:28-45` — reused directly by boot-once `supervisor.pruneDeadEntries()` (Phase 3 Mechanism C) and by the inlined `killSystemOrphans()` body, both called exactly once per worker boot. Not called by any repeating timer.
- `getSupervisor().getRegistry()` — how worker code reaches this class (verified in worker/ProcessRegistry.ts:39, 71, 353).
### `src/services/worker/worker-service.ts`
- Line `109`: import site that must shrink.
- Line `174`: `private staleSessionReaperInterval: ReturnType<typeof setInterval> | null = null;` — delete field.
- Line `537`: `this.stopOrphanReaper = startOrphanReaper(() => { ... });` — delete outright, no replacement timer. Runtime subprocess death is handled by `child.on('exit')` handlers; cross-restart orphans are handled by boot-once `killSystemOrphans()` + `supervisor.pruneDeadEntries()`.
- Line `547`: `this.staleSessionReaperInterval = setInterval(async () => { ... }, 2*60*1000)`**delete the entire block** (outer wrapper + body). Disposition of the three things it did under the zero-timer model:
- `reapStaleSessions()` → deleted (no replacement timer). Hung-generator case is covered by the per-iterator idle `setTimeout` at `SessionQueueProcessor.ts:6`; no-generator abandonment is covered by the per-session `abandonedTimer` (Phase 3 Mechanism B).
- `clearFailedOlderThan(1h)` → moved to boot-once (Phase 3 Mechanism C step 4, co-owned with plan 02).
- `PRAGMA wal_checkpoint(PASSIVE)` → deleted outright. SQLite's default `wal_autocheckpoint=1000` pages is the contract (confirmed at `Database.ts:162-168` — no override).
- Line `786`: `await ensureProcessExit(trackedProcess, 5000)` — inline.
- Lines `1108-1110`: shutdown path clears `staleSessionReaperInterval`. **Delete both shutdown clauses outright** — there is nothing to clear since no `setInterval` remains in the worker layer.
### `src/services/worker/SessionManager.ts`
- `MAX_GENERATOR_IDLE_MS = 5*60*1000` at `:23`**delete**. Hung-generator detection is now owned by `SessionQueueProcessor.ts:6` (`IDLE_TIMEOUT_MS = 3*60*1000`) at the stream level. The 5-min worker-layer threshold is redundant with the 3-min per-iterator threshold, and the old split created two sources of truth.
- `MAX_SESSION_IDLE_MS = 15*60*1000` at `:26` — keep; now consumed by the per-session `scheduleAbandonedCheck()` method (Phase 3 Mechanism B).
- `detectStaleGenerator(session, proc, now)` at `:59-84`**delete**. Its consumer (`reapStaleSessions`) is being deleted; its logic (compare `lastGeneratorActivity` against a threshold) is superseded by the per-iterator idle `setTimeout` in `SessionQueueProcessor.ts`, which resets on every chunk and fires `onIdleTimeout``abortController.abort()` at the stream level, not from a scanner.
- `deleteSession(sessionDbId)` at `:381-446` — inline `ensureProcessExit` at `:412`; additionally, clear `session.abandonedTimer` at the top of this method if set (per Phase 3 Mechanism B wiring).
- `reapStaleSessions()` at `:516-568`**delete method**, no replacement closure. The two branches:
- Generator-active branch at `:520-549`: replaced by the per-iterator idle `setTimeout` at `SessionQueueProcessor.ts:6` which aborts the controller when the stream is silent ≥3 min. The subprocess's `exit` handler then unregisters.
- No-generator branch at `:550-561`: replaced by the per-session `abandonedTimer` `setTimeout` scheduled on last-generator-completion and cleared on new activity (Phase 3 Mechanism B).
- `queueSummarize(sessionDbId, lastAssistantMessage)` at `:329-377` — unchanged; Phase 7's blocking endpoint calls this first, then awaits.
### `src/services/worker/SDKAgent.ts`
- Line `24` imports.
- The iterator pattern uses `session.abortController` (established in `SessionManager.initializeSession`); Phase 7's `/api/session/end` calls `session.abortController.abort()` after awaiting summary_stored. No change to SDKAgent body needed for abort semantics — the AbortSignal flows through the SDK query already (confirmed by SessionManager.ts:390 existing abort path).
### `src/services/sqlite/PendingMessageStore.ts`
- `STALE_PROCESSING_THRESHOLD_MS = 60_000` at `:6`.
- `claimNextMessage(sessionDbId)` at `:99-145` — the transaction body currently does both self-heal (`:103-116`) and claim (`:118-140`). Phase 5: keep the transaction, delete lines `103-116`, add a new public method `recoverStuckProcessing(): number` that runs the same UPDATE **unscoped by session id** once at worker boot.
- No behavior regression: the only functional change is timing. Crashed sessions are recovered on next worker boot (correct crash-recovery semantic), not on every claim call (polling anti-pattern).
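A sketch of what `recoverStuckProcessing` could look like. Column names follow the plan's `pending_messages` shape, but the exact SQL is an assumption:

```ts
// Sketch (assumed SQL): the same UPDATE that claimNextMessage ran per-claim,
// now unscoped by session id and run exactly once at worker boot.
interface DbLike {
  run(sql: string): { changes: number };
}

function recoverStuckProcessing(db: DbLike): number {
  // Any row still 'processing' at boot belonged to a worker that crashed;
  // flip it back to 'pending' so it can be claimed again.
  return db.run(
    "UPDATE pending_messages SET status = 'pending', worker_pid = NULL WHERE status = 'processing'",
  ).changes;
}
```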
### Blocking `POST /api/session/end` (Phase 7) — current state
- Existing endpoints (to consolidate):
- `POST /api/sessions/summarize` at `SessionRoutes.ts:387` → handler `handleSummarizeByClaudeId` → calls `queueSummarize` (`:705`) and returns immediately.
- `POST /api/sessions/complete` at `SessionRoutes.ts:753` → clears active session map.
- `GET /api/sessions/status?contentSessionId=...` at hook-side polling (`src/cli/handlers/summarize.ts:123`) — returns `{queueLength, summaryStored}`.
- `session.lastSummaryStored` is already written inside `ResponseProcessor` (see `SessionRoutes.ts:747` where it is read). This is the flag Phase 7 awaits.
- Phase 7 delivers: `POST /api/session/end` — body `{sessionDbId, last_assistant_message}`. Server-side: call `queueSummarize`, then `await` a `Promise` that resolves when `session.lastSummaryStored` flips, with a hard 110 000 ms timeout, then `session.abortController.abort()`, then `deleteSession`. Returns `{summaryId or null}`.
- Hook simplification (in 09-lifecycle-hooks plan) replaces the 220-iteration 500-ms poll loop at `summarize.ts:117-150` with one POST.
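The "summary_stored OR 110 s timeout" wait can be sketched as a single promise (names illustrative; in the plan the flag lives on the session object and `ResponseProcessor` flips it):

```ts
// Sketch: resolve true when the summary-stored signal fires, false on the
// 110_000 ms hard timeout. The notify parameter stands in for however the
// real code subscribes to the "summary stored" event.
function waitForSummaryStored(
  isStored: () => boolean,
  notify: (wake: () => void) => void,
  timeoutMs = 110_000,
): Promise<boolean> {
  return new Promise((resolve) => {
    if (isStored()) return resolve(true); // fast path: already committed
    const timer = setTimeout(() => resolve(false), timeoutMs);
    notify(() => {
      clearTimeout(timer);
      resolve(true);
    });
  });
}
```

After this resolves, the endpoint would call `session.abortController.abort()` and then `deleteSession`, per the Phase 7 sequence above.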
---
## Copy-ready snippet locations — event-driven + boot-once + per-session timers (revised 2026-04-22)
No new file. No `reaper.ts`. No `ReaperTick`. Three mechanisms, spread across existing modules:
### Mechanism A — `child.on('exit')` handlers (already wired; verify and keep)
- SDK spawn: `ProcessRegistry.ts:475-486` → moves to `process-spawning.ts:createPidCapturingSpawn` in Phase 2. The `on('exit', ...)` at `:479` must continue to call `unregisterProcess(child.pid)` at `:484`. Do not modify.
- MCP spawn: `worker-service.ts:523-532`. The `once('exit', ...)` at `:530` must continue to call `getSupervisor().unregisterProcess('mcp-server')` at `:531`. Do not modify.
- Per-iterator 3-min idle timeout: `SessionQueueProcessor.ts:6` (`IDLE_TIMEOUT_MS`), resets at `:51-52, :62-63`, fires `onIdleTimeout` at `:93-104``SessionManager.ts:651-655``session.abortController.abort()` → the abort signal reaches the spawn at `ProcessRegistry.ts:463` → child exits → `exit` handler unregisters. This chain already exists and covers the hung-generator case entirely.
**No code edit** — this mechanism is the verification target, not the change target. Phase 3 verification greps confirm these handlers are still in place after Phase 2's extraction.
### Mechanism B — Per-session abandoned-session `setTimeout` (new, replaces `reapAbandonedSessions`)
Goal: when a session has no generator running and no pending messages for 15 min, delete it. Detected at the session itself rather than by a global scanner.
Add to `SessionManager.ts`:
```ts
// In ActiveSession interface — add:
abandonedTimer?: ReturnType<typeof setTimeout>;

// New private method on SessionManager:
private scheduleAbandonedCheck(sessionDbId: number): void {
  const session = this.sessions.get(sessionDbId);
  if (!session) return;
  if (session.abandonedTimer) clearTimeout(session.abandonedTimer);
  session.abandonedTimer = setTimeout(() => {
    const s = this.sessions.get(sessionDbId);
    if (!s) return;
    if (s.generatorPromise !== null) return; // still working — drop the timer silently
    if (this.pendingStore.getPendingCount(sessionDbId) > 0) {
      this.scheduleAbandonedCheck(sessionDbId); // work arrived while we waited — reschedule
      return;
    }
    void this.deleteSession(sessionDbId); // truly abandoned — clean up
  }, MAX_SESSION_IDLE_MS);
}

// In every code path that marks "work finished" — call scheduleAbandonedCheck.
// In every code path that marks "new work arrived" — call clearTimeout(session.abandonedTimer).
```
Call-sites (derived from `SessionManager.ts`):
- Schedule (work finished): after `generatorPromise` resolves at `SessionManager.ts:~335` (`queueSummarize` fire-and-forget completion) and after `iterator` exits at `SessionManager.ts:~648` (the for-await loop exit).
- Clear (new work arrived): at the top of `initializeSession()` when a pending message lands; inside `queueSummarize()`; inside any `ingestObservation` path that sets `lastActivity`.
The timer is per-session, not repeating. When it fires it either deletes the session or reschedules itself if new work snuck in — no drift, no thundering-herd scan.
### Mechanism C — Boot-once reconciliation block (new helper in `worker-service.ts`)
Goal: at worker startup, in ONE sequential block, reconcile all state that event handlers cannot catch (i.e., state that can only have been orphaned by a previous worker instance).
Add to `worker-service.ts` boot init, immediately after `resetStaleProcessingMessages(0)` at `:424`:
```ts
// Boot-once reconciliation — runs exactly ONCE per worker process lifetime.
// Catches state orphaned by a previous (possibly crashed) worker instance.
await this.reconcileWorkerStartup();

// private method:
private async reconcileWorkerStartup(): Promise<void> {
  // 1. Kill ppid=1 Claude processes leftover from a crashed prior worker.
  //    (Copy body of killSystemOrphans from ProcessRegistry.ts:315-344 into
  //    process-spawning.ts as a free helper before Phase 2 deletes the file.)
  await killSystemOrphans();

  // 2. Prune registry entries whose PID is no longer in the OS (crash recovery).
  getSupervisor().getRegistry().pruneDeadEntries();

  // 3. pending_messages stuck on 'processing' from a crashed worker.
  //    (Moved from the per-claim 60-s reset — see Phase 5.)
  this.sessionManager.getPendingMessageStore().recoverStuckProcessing();

  // 4. SQLite housekeeping (moved from the deleted stale-reaper interval).
  //    (Covered by plan 02's boot-once SQLite housekeeping phase — this
  //    plan assumes 02 has landed; if it has not, copy the call here.)
  this.sessionManager.getPendingMessageStore().clearFailedOlderThan(60 * 60 * 1000);
}
```
No `setInterval` anywhere in this block. Each step runs exactly once. Explicit `PRAGMA wal_checkpoint` is **not** in this block because SQLite's default `wal_autocheckpoint=1000` pages (`Database.ts:162-168` sets no override) is the contract — see plan 02.
### What's deleted outright (no replacement)
- `src/services/worker/reaper.ts` (never created in this revision).
- `startReaperTick` export (never created).
- `staleSessionReaperInterval` (`worker-service.ts:174, :547`).
- `startOrphanReaper` (`ProcessRegistry.ts:508-527`, `worker-service.ts:537-544`).
- `reapStaleSessions` (`SessionManager.ts:516-568`).
- `reapOrphanedProcesses` (`ProcessRegistry.ts:349-382`).
- `killIdleDaemonChildren` as a runtime sweep (`ProcessRegistry.ts:244-309`) — function deleted entirely; its role is already covered by `exit` handlers + per-iterator idle timeout.
- Periodic `PRAGMA wal_checkpoint(PASSIVE)` call at `worker-service.ts:~581` — SQLite default covers it.
- Periodic `clearFailedOlderThan(1h)` call at `worker-service.ts:~567` — moved to boot-once (Mechanism C step 4).
---
## Phases
Every phase must satisfy: (a) a precise "Copy from …" pointer, (b) doc citations, (c) verification, (d) anti-pattern guards (A: inventing supervisor API; B: polling; D: facade-over-facade).
### Phase 1 — Introduce ingest helpers (`ingestObservation` / `ingestPrompt` / `ingestSummary`)
(a) **Implement**:
- Create `src/services/ingest/index.ts` (new module). Three exports:
- `ingestObservation(payload: ObservationPayload): { id: number; skipped: boolean }`
- `ingestPrompt(payload: PromptPayload): { id: number; skipped: boolean }`
- `ingestSummary(payload: SummaryPayload): { id: number; skipped: boolean }`
- Each helper: `stripMemoryTags` all user-facing text fields → `PrivacyCheckValidator.validate(operationType)` (existing at `src/services/worker/validation/PrivacyCheckValidator.ts:17-24`) → `INSERT pending_messages` via `PendingMessageStore.enqueue`.
- Copy from: current HTTP-boundary strip + validate + enqueue sequence in `SessionRoutes.ts:696-705` (summarize branch) and the observation-queue path in `SessionManager.ts:276`. Consolidate.
(b) **Docs**:
- 05 § 3.8 — "`POST /api/session/observation``ingestObservation(payload) strip → validate → INSERT pending_messages` → emit 'message' event"
- 05 Part 2 D1 ("One observation ingest path")
- 05 § 3.2 call-site list (`C1` ingestObservation, `C2` ingestPrompt, `C3` ingestSummary — **C3 closes the summary privacy gap**)
- 06 cites `src/services/worker/validation/PrivacyCheckValidator.ts:17-24`
- Live: `src/services/worker/http/routes/SessionRoutes.ts:696-705`, `src/services/worker/SessionManager.ts:276`
(c) **Verification**:
- Grep `stripMemoryTags` usage: exactly 3 call-sites (one per helper) + unit test imports.
- Unit test: `ingestSummary({ last_assistant_message: "<private>secret</private> clean text" })` → DB row's `last_assistant_message` field does not contain "secret" (closes P1).
- `POST /api/sessions/summarize` call-path routes through `ingestSummary` (no direct strip call in `SessionRoutes.ts` anymore).
(d) **Guards**:
- A: do **not** add a fourth "`ingestAny(type, payload)`" dispatcher; the three shapes have different required fields and privacy rules. Separate functions → explicit failure modes.
- D: do **not** keep the old HTTP-boundary strip calls as a "belt-and-suspenders" second pass. Edge-processing only.
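Phase 1's strip → validate → enqueue contract for one helper can be sketched with injected dependencies. All names below are assumptions for illustration, not the real signatures of `stripMemoryTags`, `PrivacyCheckValidator`, or `PendingMessageStore`:

```ts
// Sketch of one ingest helper under the Phase 1 contract (all names assumed).
type ObservationPayload = { sessionDbId: number; text: string; contentHash: string };

function makeIngestObservation(deps: {
  stripMemoryTags: (text: string) => string;
  validate: (operationType: string) => boolean;
  enqueue: (sessionDbId: number, text: string, contentHash: string) => number;
}) {
  return function ingestObservation(p: ObservationPayload): { id: number; skipped: boolean } {
    const clean = deps.stripMemoryTags(p.text); // edge-processing: strip ONCE, here
    if (!deps.validate("observation")) return { id: -1, skipped: true };
    return { id: deps.enqueue(p.sessionDbId, clean, p.contentHash), skipped: false };
  };
}
```

Per guard A, `ingestPrompt` and `ingestSummary` would be separate functions with their own payload shapes rather than a shared dispatcher.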
### Phase 2 — Delete `src/services/worker/ProcessRegistry.ts`; extract spawn helpers
(a) **Implement**:
- Create `src/services/worker/process-spawning.ts`:
- `createPidCapturingSpawn(sessionDbId)` — copy verbatim from `ProcessRegistry.ts:393-502`.
- `findSessionProcess(sessionDbId): TrackedProcess | undefined` — copy from `ProcessRegistry.ts:85-94` (`getProcessBySession` renamed for clarity).
- `getActiveProcesses()` — copy from `:172-179`.
- `getActiveProcessCount()` — copy from `:99-101`.
- `waitForSlot(max, timeoutMs, evict)` + `notifySlotAvailable()` + `slotWaiters` array + `TOTAL_PROCESS_HARD_CAP` — copy from `:104-167`.
- `TrackedProcess` interface — copy from `:27-32`.
- Inline helper `getTrackedProcesses()` — copy from `:34-52`.
- Rewire imports in:
- `SessionManager.ts:17``{ findSessionProcess }` from `./process-spawning.js`.
- `SDKAgent.ts:24``{ createPidCapturingSpawn, findSessionProcess, waitForSlot }`.
- `worker-service.ts:109``{ getActiveProcesses }`.
- Delete `src/services/worker/ProcessRegistry.ts`.
(b) **Docs**:
- 05 § 3.8 "Deleted: `src/services/worker/ProcessRegistry.ts` (facade, 528 lines) — supervisor registry is source of truth"
- 05 Part 1 item #4
- 06 Phase 5 "Delete worker ProcessRegistry facade" (Phase 5 :246-280)
- V5, V6
- Live: `ProcessRegistry.ts:1-527`, `worker-service.ts:109, 537, 786`, `SessionManager.ts:17, 412`, `SDKAgent.ts:24`
(c) **Verification**:
- `test -f src/services/worker/ProcessRegistry.ts` → false.
- `grep -rn "worker/ProcessRegistry" src/` → 0.
- `npx tsc --noEmit` clean.
- Manual: spawn SDK subprocess, kill with `kill -TERM <pid>`; subprocess exits; the boot-once `pruneDeadEntries()` drops the dead PID on next worker start (Phase 3 verifies the prune).
(d) **Guards**:
- D: no compat shim re-exporting deleted symbols.
- A: do **not** invent new methods on `supervisor/process-registry.ts` — use its existing public API (`register`, `unregister`, `getByPid`, `getBySession`, `getAll`, `pruneDeadEntries`, `reapSession`, `getRuntimeProcess`).
### Phase 3 — Wire event-driven cleanup + boot-once reconciliation + per-session abandoned-session timer (revised 2026-04-22)
**Previously proposed:** build a new `reaper.ts` module exporting a `ReaperTick` with three skippable checks on a 30-s interval; additionally introduce a dedicated `sqliteHousekeepingInterval` for `clearFailedOlderThan` + `wal_checkpoint`. Both were rejected as band-aids by investigation 2026-04-22 — see `08-reconciliation.md` Part 4 revision. This phase is now a **three-part change with zero new `setInterval`s.**
(a) **Implement — Part 1 (Mechanism A: verify existing event handlers survive Phase 2's extraction)**:
After Phase 2 moved `createPidCapturingSpawn` from `ProcessRegistry.ts:393-502` to `process-spawning.ts`, verify the subprocess `exit` handler still:
- Exists: `child.on('exit', ...)` is present at its new home in `process-spawning.ts` (formerly `ProcessRegistry.ts:479`).
- Calls `unregisterProcess(child.pid)` on exit (formerly `:484`).
- Also calls `notifySlotAvailable()` inside the same handler (keeps pool bookkeeping correct without a scanner).
No code change beyond what Phase 2 already did — the handler was already correct; this phase is where it *becomes load-bearing* because the sweeper it was backing up is being deleted.
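The handler shape being verified can be sketched in isolation. The stubs below stand in for the real `unregisterProcess` / `notifySlotAvailable` (those live in `process-spawning.ts`); only the event wiring is the point — a minimal sketch, not the actual spawn code:

```typescript
import { spawn } from 'node:child_process';

// Stubs standing in for the real registry/pool bookkeeping.
const unregistered: number[] = [];
let slotNotifications = 0;
const unregisterProcess = (pid: number) => { unregistered.push(pid); };
const notifySlotAvailable = () => { slotNotifications++; };

// A short-lived child; in the real code this is the SDK subprocess.
const child = spawn(process.execPath, ['-e', 'process.exit(0)']);

child.on('exit', () => {
  if (child.pid !== undefined) unregisterProcess(child.pid); // registry stays clean
  notifySlotAvailable();                                     // pool slot freed, no scanner needed
});

await new Promise<void>(resolve => child.once('exit', () => resolve()));
console.log(unregistered.length, slotNotifications); // 1 1
```

Because `exit` fires for every subprocess death (clean exit, SIGTERM, SIGKILL), this single handler covers all the cases the deleted sweeper was re-checking on a timer.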
(a) **Implement — Part 2 (Mechanism B: per-session abandoned-session `setTimeout`)**:
In `SessionManager.ts`:
1. Add `abandonedTimer?: ReturnType<typeof setTimeout>` to `ActiveSession` interface.
2. Add private `scheduleAbandonedCheck(sessionDbId: number): void` per the Copy-ready snippet section (Mechanism B). Threshold: `MAX_SESSION_IDLE_MS = 15*60*1000` (re-home from the module-level const at `:26` to a `thresholds` object, or leave it in place and reference it from the method).
3. Wire schedule-on-idle call-sites:
- Inside `queueSummarize()`'s fire-and-forget completion handler (around `:335` — the `.finally` branch on the generator promise): `this.scheduleAbandonedCheck(sessionDbId)`.
- At the for-await iterator exit in the `getMessageIterator()` consumer (around `:648`): `this.scheduleAbandonedCheck(sessionDbId)`.
4. Wire clear-on-activity call-sites:
- Top of `initializeSession()`: if `sessions.has(id)` and `session.abandonedTimer`, `clearTimeout(session.abandonedTimer)` + `session.abandonedTimer = undefined`.
- Inside `queueSummarize()` at entry: same clear.
- Inside observation enqueue path (wherever `ingestObservation` bumps `lastActivity`): same clear.
5. Inside `deleteSession()`: `if (session.abandonedTimer) clearTimeout(session.abandonedTimer)`. (Prevents firing after deletion.)
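The five steps above can be sketched end-to-end with an injected clock so the 15-minute threshold is testable — the same fake-clock technique the verification section's unit tests call for. All names here are illustrative; the real fields live on `ActiveSession` / `SessionManager`:

```typescript
type TimerId = number;

// Minimal manual clock: lets a test "advance 15 min" deterministically.
class FakeClock {
  private now = 0;
  private nextId = 1;
  private timers = new Map<TimerId, { at: number; fn: () => void }>();
  setTimeout(fn: () => void, ms: number): TimerId {
    const id = this.nextId++;
    this.timers.set(id, { at: this.now + ms, fn });
    return id;
  }
  clearTimeout(id: TimerId): void { this.timers.delete(id); }
  advance(ms: number): void {
    this.now += ms;
    for (const [id, t] of [...this.timers]) {
      if (t.at <= this.now) { this.timers.delete(id); t.fn(); }
    }
  }
}

const MAX_SESSION_IDLE_MS = 15 * 60 * 1000;

interface Session { id: number; pendingCount: number; abandonedTimer?: TimerId; }

class SessionTable {
  readonly deleted: number[] = [];
  private sessions = new Map<number, Session>();
  constructor(private clock: FakeClock) {}
  add(id: number): void { this.sessions.set(id, { id, pendingCount: 0 }); }
  deleteSession(id: number): void {                       // step 5: clear on delete
    const s = this.sessions.get(id);
    if (s?.abandonedTimer !== undefined) this.clock.clearTimeout(s.abandonedTimer);
    this.sessions.delete(id);
    this.deleted.push(id);
  }
  scheduleAbandonedCheck(id: number): void {              // step 2: one timer per session
    const s = this.sessions.get(id);
    if (!s) return;
    if (s.abandonedTimer !== undefined) this.clock.clearTimeout(s.abandonedTimer);
    s.abandonedTimer = this.clock.setTimeout(() => {
      s.abandonedTimer = undefined;
      if (s.pendingCount > 0) { this.scheduleAbandonedCheck(id); return; } // work raced in: reschedule
      this.deleteSession(id);
    }, MAX_SESSION_IDLE_MS);
  }
  onActivity(id: number): void {                          // step 4: clear on activity
    const s = this.sessions.get(id);
    if (s?.abandonedTimer !== undefined) {
      this.clock.clearTimeout(s.abandonedTimer);
      s.abandonedTimer = undefined;
    }
  }
}

const clock = new FakeClock();
const table = new SessionTable(clock);
table.add(1); table.add(2);
table.scheduleAbandonedCheck(1);        // step 3: schedule on idle
table.scheduleAbandonedCheck(2);
clock.advance(14 * 60 * 1000);
table.onActivity(2);                    // activity at 14 min clears session 2's timer
clock.advance(60 * 1000);               // 15 min total: session 1's timer fires
console.log(table.deleted);             // session 1 deleted; session 2 survived
```

The production code uses the real `setTimeout`; the injected clock exists only so the unit tests in the verification section can advance time without waiting.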
(a) **Implement — Part 3 (Mechanism C: boot-once reconciliation in `worker-service.ts`)**:
In `worker-service.ts`, replace the deleted blocks at lines `537-544` (`startOrphanReaper`) and `547-589` (stale reaper + WAL + failed-purge) with the boot-once call per the Copy-ready snippet section (Mechanism C). Insertion point: immediately after the existing `resetStaleProcessingMessages(0)` at `:424`.
Move the body of `killSystemOrphans` out of the doomed `ProcessRegistry.ts` **before** Phase 2 deletes that file. Two options:
- Land Phase 3 before Phase 2 and keep a direct import until Phase 2 runs; then move the function along with `createPidCapturingSpawn` into `process-spawning.ts` and re-export. (Chosen — the function travels with Phase 2's extraction, so `worker-service.ts` never carries an inline copy.)
- Copy the body inline into `worker-service.ts` boot helper. (Fallback if circular-import issues arise.)
`supervisor.getRegistry().pruneDeadEntries()` is used directly — no new method on the supervisor, per anti-pattern guard A.
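With no intervals, Mechanism C reduces to a plain helper called exactly once after DB init. A sketch with recorded stubs — the real implementations come from `process-spawning.ts`, the supervisor registry, and `PendingMessageStore`; only the once-at-boot shape and ordering are the point:

```typescript
const ran: string[] = [];

// Stubs recording the call order; real bodies live elsewhere per the plan.
const deps = {
  killSystemOrphans: () => { ran.push('killSystemOrphans'); },      // ppid=1 leftovers from a crashed worker
  pruneDeadEntries: () => { ran.push('pruneDeadEntries'); },        // supervisor registry, existing public API
  recoverStuckProcessing: () => { ran.push('recoverStuckProcessing'); return 0; }, // Phase 5 boot recovery
  clearFailedOlderThan: (ms: number) => { ran.push(`clearFailedOlderThan(${ms})`); },
};

function bootOnceReconciliation(d: typeof deps): void {
  // Runs exactly once, immediately after DB init — never on an interval.
  d.killSystemOrphans();
  d.pruneDeadEntries();
  d.recoverStuckProcessing();
  d.clearFailedOlderThan(60 * 60 * 1000); // 1 h, matching the old periodic purge window
}

bootOnceReconciliation(deps);
console.log(ran);
```

Since the helper runs before any session is accepted, it needs no locking and nothing to `clearInterval` on shutdown.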
(b) **Docs**:
- 05 § 3.8 revised subgraph "Event-driven cleanup — no repeating timers" and "Worker startup — boot-once reconciliation".
- 05 Part 2 **D3** ("Zero repeating background timers").
- 05 Part 4 timer census ("Repeating background timers: 3 → 0") — revision 2026-04-22.
- 08-reconciliation.md Part 4 (revised) — zero-timer model rationale + invariants.
- V6 (register ownership), V19 (stale-reset relocation to boot-once).
- Live: `ProcessRegistry.ts:315-344, 475-486, 479-484`, `worker-service.ts:421-427, 523-532, 537-589`, `SessionManager.ts:26, 59-84, 516-568, 648-656, 651-655`, `SessionQueueProcessor.ts:6, 51-52, 62-63, 93-104`, `supervisor/process-registry.ts` (pruneDeadEntries).
(c) **Verification**:
- **Zero `setInterval` in the worker layer**:
```
grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts
```
Expected: **0** matches. No exclusions, no parenthetical carve-outs.
- **Zero references to the deleted sweeper names**:
```
grep -rn "ReaperTick\|startReaperTick\|startOrphanReaper\|staleSessionReaperInterval\|reapStaleSessions\|reapOrphanedProcesses\|killIdleDaemonChildren\|sqliteHousekeepingInterval" src/
```
Expected: **0**.
- **`killSystemOrphans` is called exactly once per worker boot**:
```
grep -rn "killSystemOrphans" src/
```
Expected: 2 matches — the definition and a single call site inside the boot-once helper. No call site inside any handler or interval.
- **Abandoned-session timer**:
- Unit test: initialize a session, fire-and-forget resolve its generator, advance a fake clock 15 min — assert `deleteSession` was called exactly once.
- Unit test: initialize a session, let it go idle for 14 min, then enqueue an observation — assert `abandonedTimer` was cleared and nothing was deleted.
- Unit test: initialize a session, idle 15 min, timer fires, but `pendingStore.getPendingCount()` returns > 0 at the moment of firing — assert timer reschedules and no delete occurs.
- **Hung-generator path**:
- Integration test: spawn an SDK session, freeze its stream (SIGSTOP the subprocess); after 3 min the per-iterator idle timeout at `SessionQueueProcessor.ts` fires, `abortController.abort()` fires, the child exits, the `exit` handler unregisters. No background scanner involved.
- **Boot-once reconciliation**:
- Integration test: before starting the worker, spawn a detached Claude subprocess whose ppid is `1` (simulate a crashed prior worker). Boot the worker. Within 1 s of boot completion, that process is SIGKILLed. Registry is clean.
- Integration test: seed `pending_messages` with a row in `status='processing'` from a prior (fake-crashed) worker; boot; assert the row is reset to `status='pending'` within 1 s.
- **Subprocess crash-recovery during runtime**:
- Integration test: while the worker is running, `kill -9` an active SDK subprocess. Within 500 ms the `exit` handler fires, `unregisterProcess` is called, pool slot is released. No timer involved.
(d) **Guards**:
- **B (no polling, no new interval)**: the definitive grep. `grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts` must return **0**. Any hit is a regression — the fix is to either remove the call or convert it to an event-driven / per-session pattern.
- **A (no invented supervisor API)**: `pruneDeadEntries`, `getByPid`, `getBySession`, `getAll`, `reapSession`, `getRuntimeProcess`, `unregisterProcess`, `registerProcess` are the full public surface — any other method name in a diff is an invented API and must be reverted.
- **D (no facade-over-facade)**: the per-session abandoned-session timer lives on `ActiveSession` as a field — no new `AbandonedSessionManager` class, no `SessionTimeoutScheduler` abstraction. If a second per-session timer needs to be added later, *then* extract.
- **E (one code path per concern)**: the only subprocess-death signal at runtime is `child.on('exit')`. Do not add a second redundant signal (no `pid-alive` poller, no "heartbeat check").
### Phase 4 — Delete `staleSessionReaperInterval` + `startOrphanReaper` + periodic SQLite housekeeping (revised 2026-04-22)
(a) **Implement**:
- Delete `src/services/worker/worker-service.ts:174` field declaration (`private staleSessionReaperInterval`).
- Delete `worker-service.ts:537-544` (startOrphanReaper call + `this.stopOrphanReaper` wiring).
- Delete `worker-service.ts:547-589` (entire stale-reaper block, including its embedded `clearFailedOlderThan` and `PRAGMA wal_checkpoint(PASSIVE)` calls). **Do not** create a new `setInterval` in their place. `clearFailedOlderThan` has moved to boot-once (Phase 3 Mechanism C step 4, co-owned with plan 02). `wal_checkpoint` is deleted outright — SQLite's default `wal_autocheckpoint=1000` pages covers it (`Database.ts:162-168` sets no override; the default is active).
- Delete shutdown clauses at `worker-service.ts:1108-1110` (both `clearInterval(this.staleSessionReaperInterval)` and `this.stopOrphanReaper?.()`). The boot-once block has nothing to clear on shutdown.
- Delete `startOrphanReaper` export from `ProcessRegistry.ts` (already removed by Phase 2's file deletion).
- Delete `SessionManager.reapStaleSessions()` method entirely (`SessionManager.ts:516-568`). No stub; no replacement — both of its branches are covered by the per-iterator idle timeout (hung-generator branch) and the per-session abandoned-session timer from Phase 3 (no-generator branch).
- Keep module-level `MAX_SESSION_IDLE_MS` in `SessionManager.ts:26` — it is now consumed by `scheduleAbandonedCheck()` (Phase 3 Mechanism B). Delete `detectStaleGenerator` (`:59-84`) and `MAX_GENERATOR_IDLE_MS` (`:23`) — the per-iterator idle timeout in `SessionQueueProcessor.ts` is the single source of truth for a silent generator.
(b) **Docs**:
- 05 § 3.8 Deleted list (`staleSessionReaperInterval`, `startOrphanReaper`, `reapStaleSessions`, periodic `clearFailedOlderThan`, periodic `wal_checkpoint`).
- 05 Part 1 items #5, #6, #7.
- 05 Part 4 timer census (revised 2026-04-22 — 3 → 0).
- 05 Part 2 **D3** (zero repeating background timers).
- 08-reconciliation.md Part 4 revised + C7 revised (no `sqliteHousekeepingInterval`).
- V6.
- Live: `worker-service.ts:174, 537, 547-589, 1108`, `SessionManager.ts:516-568`, `Database.ts:162-168` (auto-checkpoint confirmation).
(c) **Verification**:
- `grep -rn "staleSessionReaperInterval\|startOrphanReaper\|reapStaleSessions\|sqliteHousekeepingInterval" src/` → **0** (tests included).
- `grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts` → **0**. No carve-outs, no exclusions. If any match appears, the fix is to delete or convert to event-driven, never to add an exclusion comment.
- `grep -rn "wal_checkpoint" src/services/worker-service.ts` → **0**. (A `PRAGMA wal_autocheckpoint` read at boot for observability — which this grep does not match — is fine if introduced by plan 02.)
- `grep -rn "clearFailedOlderThan" src/` → 2 matches: the definition in `PendingMessageStore.ts` and a single call site inside the boot-once reconciliation block.
(d) **Guards**:
- D: no "deprecated stub" left behind for `reapStaleSessions`; no shim for `startOrphanReaper`; no renamed variant of `sqliteHousekeepingInterval`.
- B: no `setInterval` added anywhere in the worker layer — the grep above is the canonical check.
### Phase 5 — Move `PendingMessageStore` 60-s reset to one-shot boot recovery
(a) **Implement**:
- In `src/services/sqlite/PendingMessageStore.ts`:
- Delete lines `103-116` (self-heal UPDATE inside `claimNextMessage` transaction).
- Add a new public method:
```ts
recoverStuckProcessing(): number {
  const stmt = this.db.prepare(`
    UPDATE pending_messages
    SET status = 'pending', started_processing_at_epoch = NULL
    WHERE status = 'processing'
  `);
  const result = stmt.run();
  if (result.changes > 0) {
    logger.info('QUEUE', `BOOT_RECOVERY | recovered ${result.changes} stuck processing message(s)`);
  }
  return result.changes;
}
```
- Note the one-shot version is **unscoped by session** and **unscoped by threshold** — on boot, any `processing` row is by definition stuck (worker was not running a moment ago), so the 60-s guard is not needed. This is cleaner than copying the threshold logic.
- Delete `STALE_PROCESSING_THRESHOLD_MS` constant (line 6) — no remaining caller.
- In `src/services/worker-service.ts`, call `pendingStore.recoverStuckProcessing()` once during boot as part of the boot-once reconciliation block (Phase 3 Mechanism C step 3), after DB initialization. (Co-owned with 02-sqlite-persistence; that plan may also call it — this plan guarantees the call exists.)
(b) **Docs**:
- 05 § 3.3 bottom box "BootOnce → Recover" (authoritative).
- 05 Part 1 item #16.
- 05 § 3.8 bottom "Worker startup → UPDATE pending_messages status processing → pending".
- 06 Phase 6 task 3.
- V19.
- Live: `src/services/sqlite/PendingMessageStore.ts:6, 99-145`.
(c) **Verification**:
- `grep -n "STALE_PROCESSING_THRESHOLD_MS" src/` → 0.
- Integration test: insert `pending_messages` row with `status='processing', started_processing_at_epoch=now-2*3600*1000`; start worker; assert row flips to `pending` before first `claimNextMessage` is called.
- Unit test: `claimNextMessage` is now a pure SELECT+UPDATE transaction; a row with `started_processing_at_epoch = now - 120000` (well past the old 60 s threshold) is **not** reset on claim — confirms boot-only recovery.
(d) **Guards**:
- B: `claimNextMessage` no longer mutates on read path.
- A: `recoverStuckProcessing` is a method on `PendingMessageStore`, not a new table / migration.
### Phase 6 — Inline SIGTERM → wait 5 s → SIGKILL
(a) **Implement**:
- In `SessionManager.deleteSession` (`:381-446`), replace the call at `:412` (`await ensureProcessExit(tracked, 5000)`) with the inlined ladder. The block must *send* SIGTERM first (the old `ensureProcessExit` did), then wait 5 s, then escalate:
```ts
if (tracked.process.exitCode === null) {
  try { tracked.process.kill('SIGTERM'); } catch { /* already dead */ }
  const exited = new Promise<void>(resolve => tracked.process.once('exit', () => resolve()));
  const timed = new Promise<void>(resolve => setTimeout(resolve, 5000));
  await Promise.race([exited, timed]);
  if (tracked.process.exitCode === null) {
    try { tracked.process.kill('SIGKILL'); } catch { /* dead */ }
    const killed = new Promise<void>(resolve => tracked.process.once('exit', () => resolve()));
    const killTimed = new Promise<void>(resolve => setTimeout(resolve, 1000));
    await Promise.race([killed, killTimed]);
  }
}
// unregister via supervisor
for (const rec of getSupervisor().getRegistry().getByPid(tracked.pid)) {
  if (rec.type === 'sdk') getSupervisor().unregisterProcess(rec.id);
}
notifySlotAvailable();
```
- Do the same inline at `worker-service.ts:786` (other call-site).
- Delete `ensureProcessExit` (already removed with `ProcessRegistry.ts` in Phase 2; this phase also removes its re-export if any temporary shim existed).
(b) **Docs**:
- 05 Part 1 item #9 ("Keep SIGTERM → SIGKILL, delete the ladder framework — inline it").
- 05 § 3.8 Deleted list.
- 06 Phase 5 task 1 ("`ensureProcessExit` → keep as free function... Remove the ladder-framework packaging").
- Live: `ProcessRegistry.ts:185-229`, `SessionManager.ts:412`, `worker-service.ts:786`.
(c) **Verification**:
- `grep -n "ensureProcessExit" src/` → 0.
- Manual: spawn subprocess that ignores SIGTERM (`trap '' TERM; sleep 60`); call `deleteSession`; observe SIGKILL 5 s after the abort.
(d) **Guards**:
- A: no new `EscalationLadder` class, no `ProcessControl` wrapper.
### Phase 7 — Blocking `POST /api/session/end`
(a) **Implement**:
- Add new route in `src/services/worker/http/routes/SessionRoutes.ts`:
```ts
app.post('/api/session/end', this.handleSessionEnd.bind(this));
```
- Handler body (copy and simplify from `handleSummarizeByClaudeId` at `:663-720` + the hook-side wait at `summarize.ts:117-150`):
1. Resolve `session = sessionManager.getSession(sessionDbId)`; if missing, try to init from DB (same pattern `queueSummarize` uses at `SessionManager.ts:332-334`).
2. `sessionManager.queueSummarize(sessionDbId, last_assistant_message)`. Also call `ensureGeneratorRunning(sessionDbId, 'summarize')` (same helper used at `SessionRoutes.ts:500, 708`).
3. Await `session.lastSummaryStored` flag flipping (currently written by `ResponseProcessor` — see 03-response-parsing-storage). Implementation: expose an `awaitSummary(sessionDbId, timeoutMs)` helper on `SessionManager` that returns a `Promise<{ summaryId: number | null; timedOut: boolean }>`. Internally: subscribe to the existing `sessionQueues` EventEmitter for a `summary-stored` event, OR fall back to polling `session.lastSummaryStored` once per 200 ms. *Recommendation: add a `session.summaryStoredEvent = new EventEmitter()` field and have `ResponseProcessor` emit `'stored'` with the summary id; `awaitSummary` uses `events.once(emitter, 'stored')` raced against `setTimeout(110_000)`.*
4. After the promise resolves (or times out): `session.abortController.abort()`. Wait briefly (≤1 s) for generator, then `sessionManager.deleteSession(sessionDbId)` (which runs the inline SIGTERM→SIGKILL from Phase 6 + supervisor `reapSession`).
5. **(Preflight edit 2026-04-22 — reconciliation B2)** Return `{ summaryId, timedOut }` with **HTTP 200 on both success and timeout**. Do NOT return 504 on timeout — that status was rejected in reconciliation. Windows Terminal closes tabs only when the hook exits with code 0; hook 09 Phase 3 maps HTTP 200 → exit 0 unconditionally. If the endpoint returns any non-200, the hook must fall through to exit 1 which accumulates Windows Terminal tabs per CLAUDE.md. Contract: timeout path response is `{ summaryId: null, timedOut: true }` with status 200; success path is `{ summaryId: <number>, timedOut: false }` with status 200. Only programmer errors (400 invalid body, 404 missing session) use non-200.
6. **(Preflight edit 2026-04-22 — reconciliation C6)** Initialize `session.summaryStoredEvent = new EventEmitter()` when an `ActiveSession` is created in `SessionManager` (likely the `initializeSession` method). The emitter is consumed by `awaitSummary` above and produced by `ResponseProcessor` per plan 03 Phase 2 step 5. Field addition on `ActiveSession` shape: `summaryStoredEvent?: EventEmitter`. Use `events.once(session.summaryStoredEvent, 'stored')` raced against `setTimeout(110_000)` inside `awaitSummary`.
- Delete after hook 09 lands: `POST /api/sessions/complete` (`:753`) and `GET /api/sessions/status` consumers in hooks (the hook-side poll loop at `summarize.ts:117-150`). Keep the status endpoint for the viewer UI short-term.
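The `awaitSummary` helper from step 3 can be sketched with `events.once` raced against a timeout via `AbortController`. Names follow the plan (`summaryStoredEvent`, the `'stored'` event); the 110 s timeout is shortened here for the demo, and the emitters stand in for real sessions:

```typescript
import { EventEmitter, once } from 'node:events';

// Sketch: resolve with the stored summary id, or time out — never reject.
// Both outcomes map to HTTP 200 per the reconciliation-B2 contract.
async function awaitSummary(
  emitter: EventEmitter,
  timeoutMs: number,
): Promise<{ summaryId: number | null; timedOut: boolean }> {
  const ac = new AbortController();
  const timer = setTimeout(() => ac.abort(), timeoutMs);
  try {
    // ResponseProcessor (plan 03) emits 'stored' with the summary id.
    const [summaryId] = (await once(emitter, 'stored', { signal: ac.signal })) as [number];
    return { summaryId, timedOut: false };
  } catch {
    return { summaryId: null, timedOut: true };  // abort ⇒ timeout path
  } finally {
    clearTimeout(timer);
  }
}

// Success path: the emitter fires before the deadline.
const e1 = new EventEmitter();
setTimeout(() => e1.emit('stored', 42), 10);
console.log(await awaitSummary(e1, 1000));  // { summaryId: 42, timedOut: false }

// Timeout path: nothing is ever emitted.
const e2 = new EventEmitter();
console.log(await awaitSummary(e2, 20));    // { summaryId: null, timedOut: true }
```

Because the helper swallows the abort into `{ timedOut: true }`, the route handler can return 200 on both branches without any try/catch of its own.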
(b) **Docs**:
- 05 § 3.8 `End → queueSummarize → await summary_stored OR 110s → abortController.abort → delete` (authoritative).
- 05 § 3.1 (STOP box: "BLOCKS until summary written or 110s timeout").
- 05 Part 1 item #11 ("`/api/sessions/summarize` blocks until done... Hook waits on one call").
- 05 Part 2 D6.
- Live: `src/cli/handlers/summarize.ts:25, 89, 117-150`, `src/services/worker/http/routes/SessionRoutes.ts:379-720, 747-753`, `src/services/worker/SessionManager.ts:329-377`, `src/services/worker/agents/ObservationBroadcaster.ts:43-55`.
(c) **Verification**:
- Hook-less integration test: POST `/api/session/end` with a valid sessionDbId that has queued work; response arrives only after the summary row exists in `session_summaries`; **HTTP 200** with `{ summaryId: <number>, timedOut: false }`; total latency <5 s in happy path.
- Timeout test: POST with a session whose SDK is hung; response at 110 s with **HTTP 200** and `{ summaryId: null, timedOut: true }`; subprocess is killed (verify PID gone from registry). Assert status code is 200, not 504 — this is a Windows Terminal contract gate (preflight edit B2).
- Hook 09 plan's verification runs one POST (no 500-ms loop) and asserts hook exit 0 on both the success and timeout paths.
(d) **Guards**:
- B: no 500-ms polling loop in the server handler either — use the event emitter or single 200-ms fall-back.
- D: do not keep `/api/sessions/complete` as a "safety net" — one endpoint owns session termination.
- A: do not extend `SessionRoutes` with a seventh summary endpoint; route-count goal is shrink, not grow.
### Phase 8 — Verification
(a) **Run**:
- `grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts` → **0** matches. No repeating intervals in the worker layer at all.
- `wc -l src/services/worker/ProcessRegistry.ts 2>/dev/null || echo DELETED` → DELETED.
- `wc -l src/services/worker/process-spawning.ts` → ~150 LoC (contains `createPidCapturingSpawn`, `findSessionProcess`, `getActiveProcesses`, `waitForSlot`, `notifySlotAvailable`, `killSystemOrphans` as free helpers). No `reaper.ts` exists.
- Session-lifecycle total: `SessionManager.ts` (~570 after deleting `reapStaleSessions` + `detectStaleGenerator` + `MAX_GENERATOR_IDLE_MS`, adding `scheduleAbandonedCheck` + `abandonedTimer` wiring) + `process-spawning.ts` (~150) + worker-service boot-once block (~40 added, ~55 removed from the deleted stale-reaper block) + `supervisor/process-registry.ts` (unchanged 408) ≈ **~450 LoC reduction** from today's ~900 in worker-layer lifecycle code.
(b) **Regression suite**:
- Subprocess crash recovery: kill SDK subprocess → within ~500 ms the `child.on('exit')` handler fires at `process-spawning.ts` (copied from `ProcessRegistry.ts:479`) and calls `unregisterProcess(pid)`. No scanner involved.
- Hung-generator kill: SDK subprocess frozen (SIGSTOP) → after 3 min of stream silence the per-iterator idle `setTimeout` at `SessionQueueProcessor.ts:6` fires `onIdleTimeout` → `SessionManager.ts:651-655` → `abortController.abort()` → child exits → `exit` handler unregisters. No scanner involved.
- Abandoned-session cleanup: session with no generator and no pending for 15 min → the per-session `abandonedTimer` (scheduled on last-generator-completion) fires, calls `deleteSession(id)`. If new work arrived first, the timer was cleared on activity. No scanner involved.
- Cross-restart orphans: ppid=1 Claude processes from a previously crashed worker are cleaned up exactly once, at the next worker's boot, by `killSystemOrphans()` in the boot-once reconciliation block. No repeating sweep.
- PID reuse: supervisor `isPidAlive` + `verifyPidFileOwnership` (already at `supervisor/process-registry.ts:28-172`) catches PID reuse — no behavior change.
- Privacy gap closed: end-to-end test with `<private>` tag in `last_assistant_message` — not persisted to `session_summaries`.
- Blocking `/api/session/end`: one request, ≤110 s, returns summary id or null.
(c) **Doc-driven coverage check**: every item in 05 § 3.8 "Deleted" list corresponds to a Phase and a grep-based verification.
(d) **Guards audit**: no new timers, no new classes over 5 LoC, no supervisor-registry surface extension.
---
## Confidence + gaps
### High confidence
- Worker-layer `ProcessRegistry.ts` (527 LoC) is a pure facade over `supervisor/process-registry.ts`: every method body I audited (`:34-52`, `:57-65`, `:70-79`, `:85-94`, `:99-101`, `:349-382`) already delegates via `getSupervisor().getRegistry()`. Deletion is mechanical.
- `reapStaleSessions` (SessionManager.ts:516-568) has two independent branches that map cleanly onto existing mechanisms: the generator-active branch is already covered by `SessionQueueProcessor.ts:6` (per-iterator 3-min idle `setTimeout` that resets on every chunk and aborts the controller — then `child.on('exit')` unregisters); the no-generator branch is covered by the new per-session `abandonedTimer` `setTimeout` (Phase 3 Mechanism B). `detectStaleGenerator` (`:59-84`) is deleted along with `reapStaleSessions` — the per-iterator timer at the stream level is the single source of truth for "silent generator."
- Supervisor `reapSession` (`supervisor/process-registry.ts:292-385`) already implements SIGTERM → 5 s → SIGKILL; the worker-layer `ensureProcessExit` (`ProcessRegistry.ts:185-229`) duplicates this for the ChildProcess reference. Inlining the worker version keeps per-process escalation while supervisor-level reap handles the session-wide sweep on `deleteSession`.
- Cadence math (applies only to the rejected interim design): a 30 s tick × 4 = 2 min would have matched the current `staleSessionReaperInterval` cadence at `worker-service.ts:589`; under the revised zero-timer model cleanup is event-driven, so there is no cadence to regress.
### Gaps / open integration points
1. **`summary_stored` wiring (Phase 7)** — the cleanest implementation needs `ResponseProcessor` (03-response-parsing-storage) to emit a per-session event on successful summary write. Today `session.lastSummaryStored` is written (referenced at `SessionRoutes.ts:747`) but there is no event — only a polled read. **Blocking coordinate point: 09-lifecycle-hooks cannot simplify its hook until Phase 7 is wired, and Phase 7 cannot wire `awaitSummary` cleanly until 03 exposes an emitter.** Concrete ask from 03: add `session.summaryStoredEvent = new EventEmitter()` populated inside `ResponseProcessor` after the commit (approx. location: `src/services/worker/agents/ResponseProcessor.ts:228` region where `broadcastSummary` is already called). Fallback if 03 can't accommodate: Phase 7 polls `session.lastSummaryStored` at 200 ms with the 110 s timeout — still one HTTP call from the hook's perspective, still blocking server-side, just internally polled. Degrades cleanly.
2. **SQLite housekeeping in `worker-service.ts:547-589`** (resolved 2026-04-22) — the stale-reaper block today also runs `clearFailedOlderThan(1h)` and `PRAGMA wal_checkpoint`. Under the zero-timer model: `clearFailedOlderThan` moves to boot-once (co-owned with plan 02's boot-once SQLite housekeeping phase); `wal_checkpoint` explicit calls are deleted outright because `Database.ts:162-168` sets no `wal_autocheckpoint` override, so SQLite's default of 1000 pages is the active policy. This plan's Phase 4 deletes all three items together — no transient "two `setInterval` hits" in the diff.