perf: streamline worker startup and consolidate database connections (#2122)

* docs: pathfinder refactor corpus + Node 20 preflight

Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 01 — data integrity

Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.

- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
  started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
  UNIQUE(memory_session_id, content_hash) on observations; dedup
  duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to a self-healing query using
  worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
  and the 60-second stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
  observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
  2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
  path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
  CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
  Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
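The Phase 3 self-healing claim can be sketched as follows (row shape and helper names are illustrative; in the real code this is a single SQL query inside claimNextMessage, not an in-memory scan):

```typescript
// Illustrative row shape; the real predicate lives in SQL.
interface PendingRow {
  id: number;
  status: "pending" | "processing" | "failed";
  worker_pid: number | null;
}

// A row is claimable when it is pending, or when it was marked
// processing by a worker that is no longer alive — self-healing,
// with no stale-timestamp threshold needed.
function isClaimable(row: PendingRow, livePids: Set<number>): boolean {
  if (row.status === "pending") return true;
  return (
    row.status === "processing" &&
    row.worker_pid !== null &&
    !livePids.has(row.worker_pid)
  );
}

function claimNext(
  rows: PendingRow[],
  livePids: Set<number>,
): PendingRow | undefined {
  return rows.find((r) => isClaimable(r, livePids));
}
```

The point of the rewrite: liveness is derived from the current pid set at claim time, so no timer ever has to guess how long "too long" is.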

Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/01-data-integrity.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 02 — process lifecycle

OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).

- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
  canonical registry at src/supervisor/process-registry.ts is the
  sole survivor; SDK spawn site consolidated into it via new
  createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
  ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
  ['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
  process.kill(-pgid, signal) on Unix when pgid is recorded;
  Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
  staleSessionReaperInterval setInterval (including the co-located
  WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
  WAL growth without an app-level timer), killIdleDaemonChildren,
  killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
  detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
  constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
  via generatorPromise.finally() already lives in worker-service
  startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
  SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
  for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
  via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
  lazy-spawn — consults isWorkerPortAlive (which gates
  captureProcessStartToken for PID-reuse safety via commit
  99060bac), then spawns detached with unref(), then
  waitForWorkerPort({ attempts: 3, backoffMs: 250 }) hand-rolled
  exponential backoff 250→500→1000ms. No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
  idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
  only on external SIGTERM via supervisor signal handlers.
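The Phase 2/3 pattern can be sketched as below (helper names assumed): a detached child leads its own process group on Unix, so one negative-pid kill tears down the child and every grandchild, replacing all the reaper intervals.

```typescript
import { spawn } from "node:child_process";

// Detached spawn: the child becomes its own process-group leader.
function spawnDetached(cmd: string, args: string[]) {
  const child = spawn(cmd, args, {
    detached: true,
    stdio: ["ignore", "pipe", "pipe"],
  });
  // For a detached child the pgid equals the child pid on Unix.
  return { child, pgid: child.pid ?? -1 };
}

// Negative pid addresses the entire process group, so grandchildren
// die with the child — no orphan reaper needed.
function killGroup(pgid: number, signal = "SIGTERM") {
  process.kill(-pgid, signal);
}
```

Unix-only, as in the commit; the Windows path stays on tree-kill/taskkill.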

Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.

All 10 verification greps return 0. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast

Phases 3, 5, 6 only. Phases 1/2/4/7/8/9 are deferred pending plan
reconciliation because of plan-doc inaccuracies:
  - Phase 1/2: ObservationRow type doesn't exist; the four
    "formatters" operate on three incompatible types.
  - Phase 4: RECENCY_WINDOW_MS already imported from
    SEARCH_CONSTANTS at every call site.
  - Phase 7: getExistingChromaIds is NOT @deprecated and has an
    active caller in ChromaSync.backfillMissingSyncs.
  - Phase 8: estimateTokens already consolidated.
  - Phase 9: knowledge-corpus rewrite blocked on PG-3
    prompt-caching cost smoke test.

Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.

Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicit-
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
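A hedged sketch of the Phase 5 error type — AppError here is a minimal stand-in for the codebase's real class; only the (503, 'CHROMA_UNAVAILABLE') shape is taken from the commit:

```typescript
// Minimal stand-in for the real AppError base class.
class AppError extends Error {
  constructor(
    public readonly statusCode: number,
    public readonly code: string,
    message?: string,
  ) {
    super(message ?? code);
    this.name = new.target.name;
  }
}

class ChromaUnavailableError extends AppError {
  constructor(detail?: string) {
    super(503, "CHROMA_UNAVAILABLE", detail);
  }
}

// Fail-fast: a runtime Chroma failure surfaces as a 503 instead of
// silently degrading to SQLite results.
async function searchOrThrow<T>(run: () => Promise<T>): Promise<T> {
  try {
    return await run();
  } catch (err) {
    throw new ChromaUnavailableError(
      err instanceof Error ? err.message : String(err),
    );
  }
}
```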

Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).

Tests updated (Principle 7 — delete in same PR):
  - search-orchestrator.test.ts: "fall back to SQLite" rewritten
    as "throw ChromaUnavailableError (HTTP 503)".
  - chroma/hybrid/sqlite-search-strategy tests: rewritten to
    rejects.toThrow; removed fellBack assertions.

Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 03 — ingestion path

Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.

- Phase 0: Created src/services/worker/http/shared.ts exporting
  ingestObservation/ingestPrompt/ingestSummary as direct
  in-process functions plus ingestEventBus (Node EventEmitter,
  reusing existing pattern — no third event bus introduced).
  setIngestContext wires the SessionManager dependency from
  worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
  returning { valid:true; kind: 'observation'|'summary'; data }
  | { valid:false; reason: string }. Inspects root element;
  <skip_summary reason="…"/> is a first-class summary case
  with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
  branches on the discriminated union. On invalid → markFailed
  + logger.warn(reason). On observation → ingestObservation.
  On summary → ingestSummary then emit summaryStoredEvent
  { sessionId, messageId } (consumed by Plan 05's blocking
  /api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
  (ResponseProcessor + SessionManager + worker-types) and
  MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
  guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
  replaced with fs.watch(transcriptsRoot, { recursive: true,
  persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
  Map deleted. tool_use rows insert with INSERT OR IGNORE on
  UNIQUE(session_id, tool_use_id) (added by Plan 01). New
  pairToolUsesByJoin query in PendingMessageStore for read-time
  pairing (UNIQUE INDEX provides idempotency; explicit consumer
  not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
  direct ingestObservation call. maybeParseJson silent-passthrough
  rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
  collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
  class) deleted. The active extractLastMessage at
  src/shared/transcript-parser.ts:41-144 is the sole survivor.
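The Phase 1 contract can be sketched as below. The discriminated-union return type is taken from the commit; the root-element matching and the `data` payload are simplified stand-ins for the real parser's structured extraction:

```typescript
type ParseResult =
  | { valid: true; kind: "observation" | "summary"; data: { skipped?: boolean; xml: string } }
  | { valid: false; reason: string };

// Inspect the root element, never return undefined, never coerce one
// kind into the other. (`data.xml` is a placeholder payload.)
function parseAgentXml(xml: string): ParseResult {
  const root = /^\s*<\s*([a-z_]+)/i.exec(xml)?.[1];
  if (root === "observation") return { valid: true, kind: "observation", data: { xml } };
  if (root === "summary") return { valid: true, kind: "summary", data: { xml } };
  if (root === "skip_summary")
    // <skip_summary reason="..."/> is a first-class summary case
    return { valid: true, kind: "summary", data: { skipped: true, xml } };
  return { valid: false, reason: `unrecognized root element: ${root ?? "(none)"}` };
}
```

Callers branch once on `valid`, then on `kind` — exactly the shape ResponseProcessor consumes in Phase 2.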

Tests updated (Principle 7 — same-PR delete):
  - tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
    to assert discriminated-union shape; coercion-specific
    scenarios collapse into { valid:false } assertions.
  - tests/worker/agents/response-processor.test.ts: circuit-breaker
    describe block skipped; non-XML/empty-response tests assert
    fail-fast markFailed behavior.

Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.

Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.

Plan: PATHFINDER-2026-04-22/03-ingestion-path.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 05 — hook surface

Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.

- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
  1..20; do curl -sf .../health && break; sleep 0.1; done` shell
  retry wrappers deleted. Hook commands invoke their bun entry
  point directly.
- Phase 2: src/shared/worker-utils.ts — added
  executeWithWorkerFallback<T>(url, method, body) returning
  T | { continue: true; reason?: string }. All 8 hook handlers
  (observation, session-init, context, file-context, file-edit,
  summarize, session-complete, user-message) rewritten to use
  it instead of duplicating the ensureWorkerRunning →
  workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
  using validateBody + sessionEndSchema (z.object({sessionId})).
  One-shot ingestEventBus.on('summaryStoredEvent') listener,
  30 s timer, req.aborted handler — all share one cleanup so
  the listener cannot leak. summarize.ts polling loop, plus
  MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
  memoizes SettingsDefaultsManager.loadFromFile per process.
  Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
  check entry; isProjectExcluded no longer referenced from
  src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
  (all six adapters, including claude-code, cursor, raw,
  gemini-cli, and windsurf). New AdapterRejectedInput error in
  src/cli/adapters/errors.ts. Handler-level isValidCwd checks
  deleted from file-edit.ts and observation.ts. hook-command.ts
  catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
  initAgent is idempotent. tests/hooks/context-reinjection-guard
  test (validated the deleted conditional) deleted in same PR
  per Principle 7.
- Phase 8: fail-loud counter at
  ~/.claude-mem/state/hook-failures.json. Atomic write via .tmp +
  rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD setting (default 3).
  On consecutive worker-unreachable count ≥ N: process.exit(2). On
  success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
  wrapping ensureWorkerRunning. executeWithWorkerFallback calls
  the memoized version.
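The Phase 3 one-cleanup shape can be sketched as follows (names assumed; the real handler also wires a req.aborted handler through the same cleanup):

```typescript
import { EventEmitter } from "node:events";

// One listener, one timer, both funneled through a single cleanup() so
// the listener cannot leak whichever path fires first.
function waitForSummaryStored(
  bus: EventEmitter,
  sessionId: string,
  timeoutMs: number,
): Promise<{ ok: boolean; reason?: string }> {
  return new Promise((resolve) => {
    const onStored = (evt: { sessionId: string }) => {
      if (evt.sessionId !== sessionId) return; // not ours; keep waiting
      cleanup();
      resolve({ ok: true });
    };
    const timer = setTimeout(() => {
      cleanup();
      resolve({ ok: false, reason: "timeout" });
    }, timeoutMs);
    // Declared as a function declaration so it is hoisted above both
    // closures that reference it.
    function cleanup() {
      clearTimeout(timer);
      bus.removeListener("summaryStoredEvent", onStored);
    }
    bus.on("summaryStoredEvent", onStored);
  });
}
```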

Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.

Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.

Plan: PATHFINDER-2026-04-22/05-hook-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 06 — API surface

One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted. Failure-
marking consolidated to one helper.

- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
  in src/services/worker/http/middleware/validateBody.ts —
  safeParse → 400 { error: 'ValidationError', issues: [...] }
  on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
  route file. 24 POST endpoints across SessionRoutes,
  CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
  LogsRoutes, SettingsRoutes now wrap with validateBody().
  /api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
  along with every call site. Inline coercion helpers
  (coerceStringArray, coercePositiveInteger) and inline
  if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
  from src/services/worker/http/middleware.ts. Worker binds
  127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
  via fs.readFileSync; served as Buffer with text/html content
  type. SKILL.md + per-operation .md files cached in
  Server.ts as Map<string, string>; loadInstructionContent
  helper deleted. NO fs.watch, NO TTL — process restart is the
  cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
  — /api/pending-queue (GET), /api/pending-queue/process (POST),
  /api/pending-queue/failed (DELETE), /api/pending-queue/all
  (DELETE). Helper methods that ONLY served them
  (getQueueMessages, getStuckCount, getRecentlyProcessed,
  clearFailed, clearAll) deleted from PendingMessageStore.
  KEPT: /api/processing-status (observability), /health
  (used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
  GracefulShutdown now calls getSupervisor().stop() directly.
  Two functions retained with clear roles:
    - performGracefulShutdown — worker-side 6-step shutdown
    - runShutdownCascade — supervisor-side child teardown
      (process.kill(-pgid), Windows tree-kill, PID-file cleanup)
  Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
  failure-marking path on PendingMessageStore. Old methods
  markSessionMessagesFailed and markAllSessionMessagesAbandoned
  deleted along with all callers (worker-service,
  SessionCompletionHandler, tests/zombie-prevention).

Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.

Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.

Plan: PATHFINDER-2026-04-22/06-api-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 07 — dead code sweep

ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.

Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
  isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
  abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
  zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
  command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments

Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
  builders, ParsedObservation, ParsedSummary, ParseResult,
  SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
  via dynamic await import('../../../context-generator.js') in
  worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
  — used via dynamic await import in npx-cli/install.ts +
  uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
  ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
  orphan-recovery caller in worker-service.ts plus
  zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
  in same file.
- All Database.ts barrel re-exports — used downstream.

Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
  is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
  the methods are not thin wrappers but ~900 LoC of bodies, and
  two methods are documented as intentional mirrors so the
  context-generator.cjs bundle stays schema-consistent without
  pulling MigrationRunner. Deserves its own plan, not a sweep.

Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.

Plan: PATHFINDER-2026-04-22/07-dead-code.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: remove residual ProcessRegistry comment reference

Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile review (P1 + 2× P2)

P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
  - Added optional timeoutMs to executeWithWorkerFallback,
    forwarded to workerHttpRequest.
  - summarize.ts call site now passes 35_000 (5 s above server
    hold window).

P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
  - ResponseProcessor now calls ingestSummary({ kind: 'parsed',
    sessionDbId, messageId, contentSessionId, parsed }) so the
    event-emission path is single-sourced.
  - ingestSummary's requireContext() resolution moved inside the
    'queue' branch (the only branch that needs sessionManager /
    dbManager). 'parsed' is a pure event-bus emission and
    doesn't need worker-internal context — fixes mocked
    ResponseProcessor unit tests that don't call
    setIngestContext.

P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
  - Added a Symbol.for('claude-mem/worker-fallback') brand to
    WorkerFallback. isWorkerFallback now checks the brand, not
    a duck-typed property name.
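The brand check can be sketched as follows (the Symbol.for key is from the commit; the factory and type names are illustrative):

```typescript
const WORKER_FALLBACK_BRAND = Symbol.for("claude-mem/worker-fallback");

interface WorkerFallback {
  continue: true;
  reason?: string;
}

function makeWorkerFallback(reason?: string): WorkerFallback {
  // The brand rides along as a symbol-keyed property, invisible to
  // JSON serialization and to structural typing.
  return Object.assign(
    { continue: true as const, reason },
    { [WORKER_FALLBACK_BRAND]: true },
  );
}

function isWorkerFallback(value: unknown): value is WorkerFallback {
  // Check the brand, never a duck-typed property name: an API response
  // that legitimately contains { continue: true } will not match.
  return (
    typeof value === "object" &&
    value !== null &&
    (value as Record<PropertyKey, unknown>)[WORKER_FALLBACK_BRAND] === true
  );
}
```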

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 2 (P1 + P2)

P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.

  - Gate ingestSummary call on (parsed.data.skipped ||
    session.lastSummaryStored). Skipped summaries are an explicit
    no-op bypass and still confirm; real summaries only confirm
    when storage actually wrote a row.
  - Non-skipped + summaryId === null path logs a warn and lets
    the server-side timeout (504) surface to the hook instead of
    a false ok:true.

P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 1). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.

  - Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
    log instead of the misleading ENQUEUED line. No behavior
    change — the duplicate is still correctly suppressed by the
    DB (Principle 3); only the log surface is corrected.
  - confirmProcessed is never called with the enqueue() return
    value (it operates on session.processingMessageIds[] from
    claimNextMessage), so no caller is broken; the visibility
    fix prevents future misuse.
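The log-surface fix can be sketched as below (names assumed; in bun:sqlite an INSERT OR IGNORE that suppresses a duplicate reports zero changed rows, which is what the real enqueue maps to messageId 0):

```typescript
// Stand-in for a bun:sqlite run() result.
interface InsertResult { changes: number; lastInsertRowid: number }

function logEnqueueResult(
  result: InsertResult,
  log: (line: string) => void,
): number {
  // changes === 0 means the UNIQUE constraint collapsed the row;
  // lastInsertRowid would be stale in that case, so report 0.
  const messageId = result.changes === 0 ? 0 : result.lastInsertRowid;
  if (messageId === 0) {
    log("DUP_SUPPRESSED"); // duplicate handled by the DB; nothing inserted
  } else {
    log(`ENQUEUED messageId=${messageId}`);
  }
  return messageId;
}
```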

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 3 (P1 + 2× P2)

- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
  context after SessionRoutes is constructed. setIngestContext runs
  before routes exist, so transcript-watcher observations queued via
  ingestObservation() had no way to auto-start the SDK generator.
  Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
  /api/session/end calls register one listener each and clean up on
  completion, so the default-10 warning fires spuriously under normal
  load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
  ingestObservation() instead of duplicating skip-tool / meta /
  privacy / queue logic. Single helper, matching the Plan 03 goal.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)

- processor.handleToolResult: restore in-memory tool-use→tool-result
  pairing via session.pendingTools for schemas (e.g. Codex) whose
  tool_result events carry only tool_use_id + output. Without this,
  neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
  of throwing. Previously a single malformed JSON-shaped field caused
  handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
  for purely-glob inputs so the caller skips the watch instead of
  anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
  log on the returned id; the SessionManager branches on id === 0.
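The lenient maybeParseJson fix can be sketched as follows (the name is from the commit; the JSON-shape heuristic here is an assumption):

```typescript
// Return the raw string on parse failure so one malformed JSON-shaped
// field cannot cause the caller's outer catch to discard the entire
// transcript line.
function maybeParseJson(value: string): unknown {
  const trimmed = value.trim();
  if (!trimmed.startsWith("{") && !trimmed.startsWith("[")) return value;
  try {
    return JSON.parse(trimmed);
  } catch {
    return value; // malformed: pass through instead of throwing
  }
}
```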

* fix: forward tool_use_id through ingestObservation (Greptile iter 5)

P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.

- shared.ingestObservation: forward payload.toolUseId to
  queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
  tool_use_id (HTTP convention) and toolUseId (JS convention) from
  req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
  validator doesn't rely on .passthrough() alone.

* fix: drop dead pairToolUsesByJoin, close session-end listener race

- PendingMessageStore: delete pairToolUsesByJoin. The method was never
  called and its self-join semantics are structurally incompatible
  with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
  collapses any second row with the same pair, so a self-join can
  only ever match a row to itself. In-memory pendingTools in
  processor.ts remains the pairing path for split-event schemas.

- IngestEventBus: retain a short-lived (60s) recentStored map keyed
  by sessionId. Populated on summaryStoredEvent emit, evicted on
  consume or TTL.

- handleSessionEnd: drain the recent-events buffer before attaching
  the listener. Closes the register-after-emit race where the summary
  can persist between the hook's summarize POST and its session/end
  POST — previously that window returned 504 after the 30s timeout.
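The recent-events buffer can be sketched as below (class and method names assumed; injected clock for testability). As of this commit the entry is evicted on consume:

```typescript
// Emits are cached per session for a short TTL so a listener that
// attaches after the event fired can still observe it, closing the
// register-after-emit race.
class RecentStoredBuffer {
  private recent = new Map<string, { at: number }>();
  constructor(private ttlMs = 60_000, private now = () => Date.now()) {}

  // Called on summaryStoredEvent emit.
  record(sessionId: string): void {
    this.recent.set(sessionId, { at: this.now() });
  }

  // Called before attaching the session/end listener: true if the
  // summary already landed within the TTL. Evicts on consume or expiry.
  take(sessionId: string): boolean {
    const hit = this.recent.get(sessionId);
    if (!hit) return false;
    this.recent.delete(sessionId);
    return this.now() - hit.at <= this.ttlMs;
  }
}
```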

* chore: merge origin/main into vivacious-teeth

Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).

Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
  kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
  loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
  POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
  summaryStoredEvent supersedes main's SessionCompletionHandler DI
  refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
  reason; generator .finally() Stop-hook self-clean is a guard for a
  path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
  security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
  #2084) while preserving our Zod validateBody schema.

Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings

1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
   in wrapHandler — synchronous exceptions would hang the client rather
   than surfacing as 500s. Wrap it like every other handler.

2) processor.handleToolResult only consumed the session.pendingTools
   entry when the tool_result arrived without a toolName. In the
   split-schema path where tool_result carries both toolName and toolId,
   the entry was never deleted and the map grew for the life of the
   session. Consume the entry whenever toolId is present.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: typing cleanup and viewer tsconfig split for PR feedback

- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings (iter 2)

- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
  the unscoped-drain branch that would nuke every pending/processing
  row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
  cached event until TTL eviction so a retried Stop hook's second
  /api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
  already tailed (JSONL appends fire on every line; only unknown
  paths warrant a rescan).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: call finalizeSession in terminal session paths (Greptile iter 3)

terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.

Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: GC failed pending_messages rows at startup (Greptile iter 4)

Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.

Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.
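The retention predicate can be modeled as below (column names assumed; the real method issues a single DELETE rather than filtering in memory):

```typescript
const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

interface FailedRow { id: number; status: string; failed_at_epoch: number }

// Equivalent SQL (assumed schema):
//   DELETE FROM pending_messages
//   WHERE status = 'failed' AND failed_at_epoch < :cutoff
function clearFailedOlderThan(
  rows: FailedRow[],
  retentionMs: number,
  nowEpoch: number,
): FailedRow[] {
  const cutoff = nowEpoch - retentionMs;
  return rows.filter((r) => !(r.status === "failed" && r.failed_at_epoch < cutoff));
}
```

Run once at startup with `retentionMs = SEVEN_DAYS_MS`: one-shot and idempotent, not a reaper.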

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)

1. startSessionProcessor success branch now calls completionHandler.
   finalizeSession before removeSessionImmediate. Hooks-disabled installs
   (and any Stop hook that fails before POST /api/sessions/complete) no
   longer leave sdk_sessions rows as status='active' forever. Idempotent
   — a subsequent /api/sessions/complete is a no-op.

2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
   closures that reference it (TDZ safety; safe at runtime today but
   fragile if timeout ever shrinks).

3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
   instead of constructing its own — prevents silent divergence if the
   handler ever becomes stateful.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: stop runaway crash-recovery loop on dead sessions

Two distinct bugs were combining to keep a dead session restarting forever:

Bug 1 (uncaught "The operation was aborted."):
  child_process.spawn emits 'error' asynchronously for ENOENT/EACCES/abort
  signal aborts. spawnSdkProcess() never attached an 'error' listener, so
  any async spawn failure became uncaughtException and escaped to the
  daemon-level handler. Attach an 'error' listener immediately after spawn,
  before the !child.pid early-return, so async spawn errors are logged
  (with errno code) and swallowed locally.
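The fix can be sketched as below. The real spawnSdkProcess wires stdio and env differently; the point is that the 'error' listener is registered before the early return, so async spawn failures are handled locally:

```ts
import { spawn } from 'node:child_process';

// Sketch only: names and logging are assumptions, not the actual code.
function spawnSdkProcessSketch(cmd: string, args: string[]) {
  const child = spawn(cmd, args);

  // Attach immediately: 'error' fires asynchronously, after spawn() returns,
  // so waiting until after other checks leaves a window for uncaughtException.
  child.on('error', (err: Error) => {
    const e = err as NodeJS.ErrnoException;
    console.warn(`[sdk] spawn failed (${e.code ?? 'unknown'}): ${e.message}`);
  });

  if (!child.pid) return null; // synchronous failure path (early return)
  return child;
}
```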

Bug 2 (sliding-window limiter never trips on slow restart cadence):
  RestartGuard tripped only when restartTimestamps.length exceeded
  MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
  exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
  session failing and restarting on 8s cycles would loop forever
  (consecutiveRestarts climbing past 30 in observed logs). Add a
  consecutiveFailures counter that increments on every restart and resets
  only on recordSuccess(). Trip when consecutive failures reach
  MAX_CONSECUTIVE_FAILURES (5) — five restarts with zero successful
  processing in between prove the session is dead. Both guards now run in
  parallel: tight loops still trip the windowed cap; slow loops trip the
  consecutive-failure cap.
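The dual-guard logic can be sketched as follows. The constants come from the commit message; the class shape and method names are assumptions about the real RestartGuard:

```ts
const MAX_WINDOWED_RESTARTS = 10;
const RESTART_WINDOW_MS = 60_000;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuard {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  recordRestart(now = Date.now()): void {
    this.restartTimestamps.push(now);
    this.consecutiveFailures++;
  }

  recordSuccess(): void {
    // Only successful processing resets the consecutive counter.
    this.consecutiveFailures = 0;
  }

  shouldTrip(now = Date.now()): boolean {
    // Windowed cap: catches tight restart loops.
    this.restartTimestamps = this.restartTimestamps.filter(
      (t) => now - t < RESTART_WINDOW_MS
    );
    const windowed = this.restartTimestamps.length > MAX_WINDOWED_RESTARTS;
    // Consecutive cap: catches slow (e.g. 8s-backoff) loops the window misses.
    const consecutive = this.consecutiveFailures >= MAX_CONSECUTIVE_FAILURES;
    return windowed || consecutive;
  }
}
```

Both checks run on every trip decision, so neither loop shape escapes.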

Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* perf: streamline worker startup and consolidate database connections

1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
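The shared-connection pattern in item 1 amounts to a lazy singleton. In this sketch `Db` stands in for bun:sqlite's `Database`, and the real code injects the shared handle into DatabaseManager, SessionStore, and SessionSearch rather than using a module-level global:

```ts
type Db = { readonly path: string };

let shared: Db | null = null;

function openDb(path: string): Db {
  return { path }; // placeholder for `new Database(path)`
}

function getSharedDb(path: string): Db {
  // One connection per process: every later caller receives the same
  // handle, so only a single file descriptor is held on the SQLite file.
  if (!shared) shared = openDb(path);
  return shared;
}
```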

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)

* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations

Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.

- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
  before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
  when shouldTrackProject(cwd) is false, so the observer's own hooks
  cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
  boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
  on observations) inline so bundled artifacts (worker-service.cjs,
  context-generator.cjs) stay schema-consistent — without it, the
  ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
  supervisor can actually feed the observer's stdin.

Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.

* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)

Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
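The walk-back can be sketched as follows (the helper name is an assumption; MAX_USER_PROMPT_BYTES is the 256 KiB cap introduced in #2124):

```ts
function truncateAtUtf8Boundary(buf: Buffer, maxBytes: number): Buffer {
  if (buf.length <= maxBytes) return buf;
  let end = maxBytes;
  // 0b10xxxxxx marks a UTF-8 continuation byte. Walking back past any
  // continuation bytes at the cut point drops the partial codepoint
  // entirely, so the decoder never emits U+FFFD.
  while (end > 0 && (buf[end] & 0b11000000) === 0b10000000) end--;
  return buf.subarray(0, end);
}
```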

* fix: cross-platform observer-dir containment; clarify SDK stdin pipe

claude-review feedback on PR #2124.

- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
  hard-coded a POSIX separator and missed Windows backslash paths plus any
  trailing-slash variance. Switched to a path.relative-based isWithin()
  helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
  SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
  consumes that pipe; 'ignore' would null it and the null-check below
  would tear the child down on every spawn.
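The containment check can be sketched as below; the real isWithin helper inside shouldTrackProject may normalize its inputs first:

```ts
import path from 'node:path';

function isWithin(parent: string, child: string): boolean {
  const rel = path.relative(parent, child);
  // '' means child === parent; a leading '..' or an absolute result means
  // child escapes the parent tree. path.relative absorbs trailing-slash
  // variance and, on Windows, backslash separators and drive letters.
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}
```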

* fix: make Stop hook fire-and-forget; remove dead /api/session/end

The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed), followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests that it be.

The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.

- summarize.ts: drop the /api/session/end long-poll and the trailing
  /api/sessions/complete await; ~40 lines removed; unused
  SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
  SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
  route registration. Drop the now-unused ingestEventBus and
  SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
  comments that referenced the dead endpoint. The IngestEventBus is
  left in place dormant (no listeners) for follow-up cleanup so this
  PR stays focused on the blocker.

Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.

Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* deps: bump all dependencies to latest including majors

Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.

Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
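The corrected startup sequence looks roughly like this (helper name is an assumption). Both handlers are registered before listen() is invoked, so an EADDRINUSE 'error' always finds a listener and rejects the promise instead of escaping:

```ts
import http from 'node:http';

function listenAsync(handler: http.RequestListener, port: number): Promise<http.Server> {
  return new Promise((resolve, reject) => {
    const server = http.createServer(handler);
    server.once('error', reject);             // attached BEFORE listen()
    server.once('listening', () => {
      server.removeListener('error', reject); // startup-only error handling
      resolve(server);
    });
    server.listen(port);
  });
}
```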

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: surface real chroma errors and add deep status probe

Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.

Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.
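The deep probe's shape, sketched against a hypothetical client interface (the real code talks to chroma-mcp over stdio; only the method name probeSemanticSearch comes from the commit):

```ts
interface ChromaClient {
  listCollections(): Promise<string[]>;                        // chroma_list_collections
  queryDocuments(collection: string, q: string): Promise<unknown[]>; // chroma_query_documents
}

async function probeSemanticSearch(
  client: ChromaClient
): Promise<{ ok: boolean; error?: string }> {
  try {
    const collections = await client.listCollections();
    if (collections.length === 0) return { ok: false, error: 'no collections' };
    await client.queryDocuments(collections[0], 'probe');
    return { ok: true };
  } catch (err) {
    // Surface the real exception text instead of a canned "install uv" message.
    return { ok: false, error: err instanceof Error ? err.message : String(err) };
  }
}
```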

Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: rebuild worker-service bundle to match merged src

Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: address coderabbit feedback on PLAN-fix-mcp-search.md

- replace machine-specific /Users/alexnewman absolute paths with portable
  <repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Commit 94d592f212 (parent 8ace1d9c84) by Alex Newman, committed via GitHub on 2026-04-25 13:37:40 -07:00
159 changed files with 18091 additions and 5843 deletions
@@ -0,0 +1,433 @@
# Plan 01 — privacy-tag-filtering (foundation)
**Target design**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` section 3.2 ("privacy-tag-filtering (clean)")
**Before-state diagram**: `PATHFINDER-2026-04-21/01-flowcharts/privacy-tag-filtering.md`
**Author date**: 2026-04-22
**Execution order slot**: Part 6 steps 1 and 2 (U6 `stripMemoryTags` + U1 summary privacy gap). First plan in the series.
## Dependencies
- **Upstream (must land before this)**: **none** — this is the foundation plan for the v6.5.0 brutal-audit refactor.
- **Downstream (depends on this)**:
- `07-session-lifecycle-management.md` — introduces `ingestObservation` / `ingestPrompt` / `ingestSummary` helpers that wrap `stripMemoryTags`. Plan 01 must land first so those helpers have a single strip function to call.
- `08-transcript-watcher-integration.md` — calls `ingestObservation` directly (dropping the HTTP loopback). Needs the ingest helpers introduced downstream, which in turn need `stripMemoryTags`.
- `09-lifecycle-hooks.md` — the new `POST /api/session/observation`, `/api/session/prompt`, `/api/session/end` paths must all run stripping; they will route through the downstream ingest helpers.
---
## Sources Consulted
| Source | Lines | What it gave us |
|---|---|---|
| `PATHFINDER-2026-04-21/05-clean-flowcharts.md` | 19, 20, 21, 47, 127-156, 534-558, 564-584 | Part 1 items #1, #2, #3, #29; section 3.2 authoritative clean design; Part 5 deletion ledger row "stripMemoryTagsFromPrompt / FromJson wrappers" (-60/+15 = -45) + summary-path privacy-gap fix row (+3); Part 6 execution steps 1-3 |
| `PATHFINDER-2026-04-21/06-implementation-plan.md` | 22-47 (Phase 0 verified findings V1-V4), 69-111 (Phase 1 tasks), 114-151 (Phase 2 context on ingest helpers), 59-66 (anti-pattern guards A-E) | Verified findings that correct the audit (V1: summary strips ZERO tags not just `<system-reminder>`; V2: `handleObservations` is at line 464, not 378; V3+V4: wrapper + call-site inventory) |
| `PATHFINDER-2026-04-21/01-flowcharts/privacy-tag-filtering.md` | 1-86 | Before-state: three ingress paths (prompt, observation, summary) with partial/missing strip coverage on the summary path |
| `src/utils/tag-stripping.ts` | 1-91 (full file) | Current implementation: `stripTagsInternal` (line 51) + 6 sequential `.replace()` (lines 63-69) + two public wrappers (`stripMemoryTagsFromJson` line 79, `stripMemoryTagsFromPrompt` line 89), `SYSTEM_REMINDER_REGEX` export (line 24), `MAX_TAG_COUNT=100` ReDoS guard (line 31) |
| `src/services/worker/http/routes/SessionRoutes.ts` | 11 (import), 376-389 (route map), 464-485 (`handleObservations` legacy), 491-506 (`handleSummarize` legacy), 560-660 (`handleObservationsByClaudeId` with strip at 629/633), 669-710 (`handleSummarizeByClaudeId` — NO strip), 814-895 (`handleSessionInitByClaudeId` with strip at 862) | Every call site; confirmed every audit line number against live code |
| `src/cli/handlers/summarize.ts` | 19, 59-68, 84-97 | Hook extracts `last_assistant_message` via `extractLastMessage(transcriptPath, 'assistant', true)` (line 64; the `true` strips `<system-reminder>` at read-time only), then POSTs it raw to `/api/sessions/summarize` (line 89). The hook itself does NOT run `stripMemoryTags`; it relies on the worker. Today the worker doesn't strip either — that is the P1 bug. |
| `tests/utils/tag-stripping.test.ts` | 1-80 (413 total lines) | Existing tests import `stripMemoryTagsFromPrompt` + `stripMemoryTagsFromJson` by name; these imports must change. |
## Concrete Findings
1. **Wrappers are identical**. `stripMemoryTagsFromJson(content)` and `stripMemoryTagsFromPrompt(content)` both call `stripTagsInternal(content)` with no behavioural difference (`src/utils/tag-stripping.ts:80` and `:90`). Confirms audit item #1.
2. **Six sequential `.replace()` calls** at `src/utils/tag-stripping.ts:64-69`, one per tag type, each scanning the full string. Confirms audit item #3.
3. **Summary paths strip ZERO tags, not just "`<system-reminder>` only"** — this is the V1 correction to the before-state audit:
- `handleSummarize` (`SessionRoutes.ts:491`): receives `last_assistant_message`, passes it untouched to `this.sessionManager.queueSummarize(sessionDbId, last_assistant_message)` at `:497`.
- `handleSummarizeByClaudeId` (`SessionRoutes.ts:669`): same — raw body → `queueSummarize(sessionDbId, last_assistant_message)` at `:705`.
- The hook-side `extractLastMessage(..., true)` at `summarize.ts:64` only strips `<system-reminder>` via `SYSTEM_REMINDER_REGEX` during transcript parsing; it does nothing for `<private>`, `<claude-mem-context>`, etc.
- **Result**: a `<private>secret</private>` inside an assistant message persists to `pending_messages` and then to `session_summaries`. This is the P1 security gap audit item #2 claims to close.
4. **Legacy `handleObservations` is at line 464, not 378** (V2). It has NO strip — it calls `queueObservation(sessionDbId, {tool_input, tool_response, ...})` directly at `:470`.
5. **Call-site inventory (grep-verified, V4)**:
| File | Line | Function called | Text stripped |
|---|---|---|---|
| `src/utils/tag-stripping.ts` | 79 | declaration `stripMemoryTagsFromJson` | — |
| `src/utils/tag-stripping.ts` | 89 | declaration `stripMemoryTagsFromPrompt` | — |
| `src/services/worker/http/routes/SessionRoutes.ts` | 11 | import both wrappers | — |
| `src/services/worker/http/routes/SessionRoutes.ts` | 629 | `stripMemoryTagsFromJson(JSON.stringify(tool_input))` | observation |
| `src/services/worker/http/routes/SessionRoutes.ts` | 633 | `stripMemoryTagsFromJson(JSON.stringify(tool_response))` | observation |
| `src/services/worker/http/routes/SessionRoutes.ts` | 862 | `stripMemoryTagsFromPrompt(prompt)` | prompt |
| `tests/utils/tag-stripping.test.ts` | 13 | import both wrappers | — (test) |
**No other call sites exist**. The summary path (`:491`, `:669`), the legacy observation path (`:464`), and the hook side of summarize (`summarize.ts`) never touch a strip function.
6. **ReDoS guard & trim already correct**. `countTags` at `tag-stripping.ts:37` + `MAX_TAG_COUNT=100` check at `:54`; `.trim()` at `:70`. Keep both.
7. **`SYSTEM_REMINDER_REGEX` is exported** (`tag-stripping.ts:24`) and used by `src/shared/transcript-parser.ts:84` and `:128` to strip system-reminder at transcript-read-time (the `stripSystemReminders=true` path in `extractLastMessage`). That external use is **not** a memory-strip call site — it is a read-time sanitation of raw transcript JSON. Section 3.2 of 05 keeps that behaviour (it operates before text ever enters our pipeline). **Keep `SYSTEM_REMINDER_REGEX` as an export.**
## Copy-Ready Snippet Locations
`/do` runs can copy verbatim from these locations:
| Copy from | Into | Purpose |
|---|---|---|
| `src/utils/tag-stripping.ts:31` (`MAX_TAG_COUNT = 100`) | New `src/utils/tag-stripping.ts` (rewritten) | ReDoS constant — preserve exact value |
| `src/utils/tag-stripping.ts:37-45` (`countTags`) | New `src/utils/tag-stripping.ts` | Tag-count helper — preserve exact body (one-regex version still needs a count for the warn path) |
| `src/utils/tag-stripping.ts:54-61` (ReDoS guard with `logger.warn`) | New `stripMemoryTags` body | Preserve the warn-but-continue semantics |
| `src/utils/tag-stripping.ts:24` (`SYSTEM_REMINDER_REGEX` export) | New `src/utils/tag-stripping.ts` | External callers (`transcript-parser.ts:84`, `:128`) still import this — must keep export |
| Section 3.2 alternation regex at `05-clean-flowcharts.md:132` | New `stripMemoryTags` body | `/<(private\|claude-mem-context\|system_instruction\|system-instruction\|persisted-output\|system-reminder)>[\s\S]*?<\/\1>/g` |
| `SessionRoutes.ts:629-634` (existing call shape `JSON.stringify(tool_input)`) | Replacement lines at `:629` and `:633` | Same two arguments, new function name |
| `SessionRoutes.ts:862` (existing `stripMemoryTagsFromPrompt(prompt)`) | Replacement line | Same text, new function name |
## Confidence + Gaps
**High confidence**
- Every source line number verified against live code on 2026-04-22.
- The P1 security gap is reproducible: inserting `<private>secret</private>` into an assistant message today writes through to `session_summaries.last_assistant_message` untouched.
- `SYSTEM_REMINDER_REGEX` external usage is real — if Phase 1 deletes it, `transcript-parser.ts` breaks. Keep the export.
**Gaps / unverified**
- I did not measure the ReDoS cost of the alternation regex vs. six sequential `replace()` on pathological inputs. Section 3.2 and audit item #3 claim the single regex is net-faster; that is plausible but untested. Phase 1 includes a micro-benchmark test to confirm before/after.
- Phase 1 assumes `queueObservation` and `queueSummarize` accept arbitrary strings. Confirmed by reading `SessionRoutes.ts:470` and `:497, :705` but not by reading `SessionManager.queueSummarize` itself. If `queueSummarize` does any parsing of `last_assistant_message`, stripping before the call may or may not change that behaviour — Phase 3 verifies with a targeted integration test.
- The hook-side `summarize.ts:64` call to `extractLastMessage(..., true)` leaves `<system-reminder>` stripped *before* the raw message hits the wire. After this plan lands, the worker also runs `stripMemoryTags` on it. That is a double-strip on `<system-reminder>`, which is idempotent (first pass removes it, second pass is a no-op). **Noted; not a bug.**
---
## Phase 1 — Rewrite `src/utils/tag-stripping.ts` to a single `stripMemoryTags`
### (a) What to implement
Replace the entire contents of `src/utils/tag-stripping.ts` with a new version that exports:
1. `SYSTEM_REMINDER_REGEX` (unchanged — external callers depend on it).
2. `stripMemoryTags(text: string): string` — single public function using one alternation regex with back-reference.
Copy `MAX_TAG_COUNT = 100` from current `src/utils/tag-stripping.ts:31`.
Copy `countTags` body from current `src/utils/tag-stripping.ts:37-45` (keep call-site warn semantics).
Copy the `logger.warn('SYSTEM', 'tag count exceeds limit', ...)` block from current `:54-61`.
Copy the alternation regex pattern from `PATHFINDER-2026-04-21/05-clean-flowcharts.md:132`:
```ts
const MEMORY_TAG_NAMES = [
  'private',
  'claude-mem-context',
  'system_instruction',
  'system-instruction',
  'persisted-output',
  'system-reminder',
] as const;

const STRIP_REGEX = new RegExp(
  `<(${MEMORY_TAG_NAMES.join('|')})>[\\s\\S]*?<\\/\\1>`,
  'g'
);

export function stripMemoryTags(text: string): string {
  if (!text) return text;
  const tagCount = countTags(text);
  if (tagCount > MAX_TAG_COUNT) {
    logger.warn('SYSTEM', 'tag count exceeds limit', undefined, {
      tagCount,
      maxAllowed: MAX_TAG_COUNT,
      contentLength: text.length,
    });
    // Still process but log the anomaly (preserves current behaviour)
  }
  return text.replace(STRIP_REGEX, '').trim();
}
```
Delete `stripTagsInternal`, `stripMemoryTagsFromJson`, `stripMemoryTagsFromPrompt`.
### (b) Documentation references
- `05-clean-flowcharts.md:127-156` (section 3.2 authoritative design)
- `05-clean-flowcharts.md:19` (audit item #1 — wrapper collapse)
- `05-clean-flowcharts.md:21` (audit item #3 — one-regex alternation)
- `05-clean-flowcharts.md:47` (audit item #29 — strip-on-raw-string, no stringify/parse dance — already how callers pass arguments, so no change needed here)
- `06-implementation-plan.md:30` (V3 verified inventory)
- `06-implementation-plan.md:81-87` (Phase 1 task 1 exact prescription)
- Live file: `src/utils/tag-stripping.ts:1-91`
### (c) Verification checklist
Run from repo root:
```bash
# No stray wrappers survive
grep -rn "stripMemoryTagsFromPrompt\|stripMemoryTagsFromJson\|stripTagsInternal" src/
# Expected: 0 matches
# The new function exists exactly once as a declaration
grep -n "export function stripMemoryTags\b" src/utils/tag-stripping.ts
# Expected: 1 match, on a single line
# SYSTEM_REMINDER_REGEX export preserved
grep -n "export const SYSTEM_REMINDER_REGEX" src/utils/tag-stripping.ts
# Expected: 1 match
# TypeScript compiles
npx tsc --noEmit
# Expected: exit 0 (no errors in tag-stripping.ts; SessionRoutes.ts will still error until Phase 2 — that is expected)
```
Tests: not yet — the test file still imports the old wrappers. Phase 4 updates the test file; Phase 1 leaves it broken.
### (d) Anti-pattern guards
- **A (invent APIs)**: do not add `stripMemoryTagsV2`, `stripMemoryTagsAsync`, `stripTagsSafe`, or any other variant. One public function.
- **C (silent fallbacks)**: the ReDoS guard continues to *warn and process*, not *warn and return empty*. Copy the `logger.warn` call verbatim.
- **D (facades that pass through)**: do not leave `stripMemoryTagsFromPrompt` / `stripMemoryTagsFromJson` as deprecated re-exports calling `stripMemoryTags`. Delete the names.
- **E (two code paths for same data)**: the new file has exactly one strip implementation. No branch on "is JSON" vs "is prompt".
---
## Phase 2 — Replace existing `stripMemoryTagsFromJson` / `FromPrompt` call sites
### (a) What to implement
Edit `src/services/worker/http/routes/SessionRoutes.ts` in exactly three places:
1. **Line 11** — change import:
- From: `import { stripMemoryTagsFromJson, stripMemoryTagsFromPrompt } from '../../../../utils/tag-stripping.js';`
- To: `import { stripMemoryTags } from '../../../../utils/tag-stripping.js';`
2. **Line 629** — rename only:
- From: `? stripMemoryTagsFromJson(JSON.stringify(tool_input))`
- To: `? stripMemoryTags(JSON.stringify(tool_input))`
3. **Line 633** — rename only:
- From: `? stripMemoryTagsFromJson(JSON.stringify(tool_response))`
- To: `? stripMemoryTags(JSON.stringify(tool_response))`
4. **Line 862** — rename only:
- From: `const cleanedPrompt = stripMemoryTagsFromPrompt(prompt);`
- To: `const cleanedPrompt = stripMemoryTags(prompt);`
No logic changes. No reordering. Same arguments.
### (b) Documentation references
- `05-clean-flowcharts.md:127-156` (section 3.2)
- `06-implementation-plan.md:31` (V4 verified call-site inventory — "No call sites in summary, legacy observation, or summarize hook")
- `06-implementation-plan.md:88-90` (Phase 1 task 2 prescription)
- Live file: `src/services/worker/http/routes/SessionRoutes.ts:11, :629, :633, :862`
### (c) Verification checklist
```bash
# Old names gone from the only consumer
grep -n "stripMemoryTagsFromJson\|stripMemoryTagsFromPrompt" src/services/worker/http/routes/SessionRoutes.ts
# Expected: 0 matches
# New name present exactly three times in SessionRoutes (629, 633, 862) plus one import
grep -c "stripMemoryTags(" src/services/worker/http/routes/SessionRoutes.ts
# Expected: 3 (call sites; the import statement uses `stripMemoryTags` without trailing `(`)
grep -n "import .*stripMemoryTags" src/services/worker/http/routes/SessionRoutes.ts
# Expected: 1 match on line 11
# Compiles
npx tsc --noEmit
# Expected: exit 0 (SessionRoutes now uses the new API; summary + legacy obs paths still untouched — will pass)
```
No runtime tests yet — Phase 3 adds the new strip calls that unlock the regression test.
### (d) Anti-pattern guards
- **A (invent APIs)**: do not introduce `stripMemoryTagsAt(callerType, text)`; the single function is enough.
- **E (two code paths)**: after this phase all live strip call sites funnel through one function. Do not leave a "fast path" for prompts and a "JSON path" for observations.
---
## Phase 3 — ADD `stripMemoryTags` calls at summary-path and legacy-observation entry points (closes P1 per V1)
### (a) What to implement
Edit `src/services/worker/http/routes/SessionRoutes.ts` in three additional places. Each change **adds** a strip call before the existing queue call.
1. **`handleObservations` — line 464 handler** (V2 correction of audit's "line 378"):
- Before line 470 (`this.sessionManager.queueObservation(sessionDbId, {...})`), copy the pattern from `:628-634`:
```ts
const cleanedToolInput = tool_input !== undefined
  ? stripMemoryTags(JSON.stringify(tool_input))
  : '{}';
const cleanedToolResponse = tool_response !== undefined
  ? stripMemoryTags(JSON.stringify(tool_response))
  : '{}';
```
- Pass `cleanedToolInput` / `cleanedToolResponse` into `queueObservation` instead of `tool_input` / `tool_response`.
2. **`handleSummarize` — line 491 handler** (V1 security gap; audit had only described missing `<system-reminder>` but V1 confirms ZERO tags are stripped):
- Before line 497 (`this.sessionManager.queueSummarize(sessionDbId, last_assistant_message);`), insert:
```ts
const cleanedAssistantMessage = typeof last_assistant_message === 'string'
  ? stripMemoryTags(last_assistant_message)
  : '';
```
- Pass `cleanedAssistantMessage` into `queueSummarize`.
3. **`handleSummarizeByClaudeId` — line 669 handler** (same V1 gap, `/api/sessions/summarize` endpoint):
- Before line 705 (`this.sessionManager.queueSummarize(sessionDbId, last_assistant_message);`), insert the same cleaning block as #2.
- Pass `cleanedAssistantMessage` into `queueSummarize`.
No new wrappers, no new helper module. Inline call site.
### (b) Documentation references
- `05-clean-flowcharts.md:20` (audit item #2 — SECURITY BUG label)
- `05-clean-flowcharts.md:127-156` (section 3.2 — the `C3: ingestSummary` call site is the design that lands properly once the downstream ingest helper plan uses it; this plan inlines the strip at the route boundary in the interim)
- `05-clean-flowcharts.md:542` (Part 5 ledger row "Summary-path privacy gap fix: +3")
- `06-implementation-plan.md:28` (V1 — "Summary paths strip ZERO tags")
- `06-implementation-plan.md:29` (V2 — `handleObservations` is at line 464)
- `06-implementation-plan.md:91-93` (Phase 1 task 2 sub-bullets)
- Live file: `src/services/worker/http/routes/SessionRoutes.ts:464-485, :491-506, :669-710`
### (c) Verification checklist
```bash
# Every strip call site accounted for (grep -c counts matching lines)
grep -c "stripMemoryTags(" src/services/worker/http/routes/SessionRoutes.ts
# Expected: 7 (4 new lines from this phase + 3 preserved from Phase 2)
# Breakdown:
# :464-handler — 2 (input + response) NEW
# :491-handler — 1 (assistant message) NEW
# :565-handler — 2 (input + response) PHASE-2 RENAME
# :669-handler — 1 (assistant message) NEW
# :862-handler — 1 (prompt) PHASE-2 RENAME
# NOTE: grep -c counts lines, not occurrences; if a call wraps onto two
# lines the count drifts. Sanity-check against the breakdown above.
grep -n "queueSummarize(sessionDbId, last_assistant_message)" src/services/worker/http/routes/SessionRoutes.ts
# Expected: 0 — both sites should now pass cleanedAssistantMessage
grep -n "queueObservation(sessionDbId, {" src/services/worker/http/routes/SessionRoutes.ts
# Expected: 2 call sites, both using cleanedToolInput / cleanedToolResponse
# Regression test: insert <private>secret</private> into a summary
# - Start worker locally: npm run build-and-sync
# - POST /sessions/:id/summarize with body {"last_assistant_message":"ok <private>secret</private> done"}
# - SELECT last_assistant_message FROM session_summaries WHERE session_id = :id
# - Expected: "ok done" (trimmed, no "secret", no "<private>")
# - Repeat with POST /api/sessions/summarize and contentSessionId
# - Expected: same result
# Regression test: <persisted-output> in tool_response routed through /sessions/:id/observations
# - POST /sessions/:id/observations with body containing tool_response: "a <persisted-output>blob</persisted-output> b"
# - SELECT tool_response FROM observations WHERE session_id = :id
# - Expected: serialized JSON with "a b", no <persisted-output>, no "blob"
npx tsc --noEmit
# Expected: exit 0
```
### (d) Anti-pattern guards
- **A (invent APIs)**: do not add a `cleanMessageForSummary` or `sanitizeObservation` helper — a two-line inline strip is simpler than any new abstraction. A unified `ingestSummary` / `ingestObservation` helper IS planned, but in the downstream plan `07-session-lifecycle-management.md`, not here. This plan deliberately inlines to land the security fix fast (Part 6 step 2 — "3 lines to close P1, <1 hr").
- **C (silent fallbacks)**: if `last_assistant_message` is not a string, the strip returns `''`. `queueSummarize` then stores an empty summary. That is the explicit behaviour — do not silently coerce a non-string to `JSON.stringify(...)`.
- **E (two code paths for same data)**: `handleObservations` (line 464) and `handleObservationsByClaudeId` (line 565) still have mostly-duplicate bodies after this phase. The downstream `07-session-lifecycle-management.md` plan merges them via `ingestObservation`. Do NOT attempt that merge here — it is out of scope. This phase only adds the missing strip call into the legacy handler; the merge is the next plan's job.
---
## Phase 4 — Delete obsolete wrappers, tests, and dead exports
### (a) What to implement
1. **`src/utils/tag-stripping.ts`** already rewritten in Phase 1 — confirm the file no longer contains `stripMemoryTagsFromPrompt`, `stripMemoryTagsFromJson`, or `stripTagsInternal`.
2. **`tests/utils/tag-stripping.test.ts`** — rewrite to import the new API. Delete any `describe('stripMemoryTagsFromPrompt')` and `describe('stripMemoryTagsFromJson')` blocks; merge their cases into a single `describe('stripMemoryTags')` block. Keep every input assertion — the behaviour must be identical to today for all supported tags.
- Specifically: the test file at `tests/utils/tag-stripping.test.ts:13` imports `{ stripMemoryTagsFromPrompt, stripMemoryTagsFromJson }`. Change to `{ stripMemoryTags }`. Substitute every `stripMemoryTagsFromPrompt(` and `stripMemoryTagsFromJson(` with `stripMemoryTags(`.
3. **grep for any other importer** in `src/`:
- Expected (by V4): only `SessionRoutes.ts` and the test file import the old names. After Phase 2 + Phase 4 edits, no importer remains.
### (b) Documentation references
- `05-clean-flowcharts.md:149-150` (3.2 deletion list: the two wrapper files)
- `05-clean-flowcharts.md:541` (Part 5 ledger: -60/+15 = -45 net line delta)
- `06-implementation-plan.md:94` (Phase 1 task 3 — update tests)
- Live file: `tests/utils/tag-stripping.test.ts:13`, `:33-413`
### (c) Verification checklist
```bash
# No consumer of old names anywhere in tree
grep -rn "stripMemoryTagsFromPrompt\|stripMemoryTagsFromJson\|stripTagsInternal" src/ tests/
# Expected: 0 matches
# Test file compiles and uses the new API
grep -c "stripMemoryTags(" tests/utils/tag-stripping.test.ts
# Expected: >= number of old-wrapper call sites (current file has ~40 calls across the two wrappers; new file should have >= that count)
# Run the test suite
bun test tests/utils/tag-stripping.test.ts
# Expected: all tests green
# Full project typecheck
npx tsc --noEmit
# Expected: exit 0
```
### (d) Anti-pattern guards
- **D (facades that pass through)**: do not add `export const stripMemoryTagsFromPrompt = stripMemoryTags` for "backward compatibility". Callers are entirely internal; change them.
- **E (two code paths)**: the test file should have ONE describe block, not two. Do not leave parallel test suites.
---
## Phase 5 — Final verification (counts + regression + benchmark)
### (a) What to implement
This is a verification-only phase. No new code. Run the following checks and record results in the PR description.
1. **Grep census** (expected counts anchor the acceptance criteria):
| Command | Expected |
|---|---|
| `grep -rn "stripMemoryTagsFromPrompt\|stripMemoryTagsFromJson\|stripTagsInternal" src/ tests/` | `0` matches |
| `grep -rn "stripMemoryTags\b" src/ tests/` | exactly 1 declaration (`src/utils/tag-stripping.ts`) + 1 SessionRoutes.ts import + 7 SessionRoutes.ts call lines + 1 test import + however many test-body call sites exist |
| `grep -c "stripMemoryTags(" src/services/worker/http/routes/SessionRoutes.ts` | `7` (3 Phase-2 rename sites + 4 Phase-3 added sites: 2 legacy-observation lines + 2 summary handlers, counting each tool_input/tool_response line separately) |
| `grep -rn "queueSummarize(sessionDbId, last_assistant_message\b" src/` | `0` (both sites now pass `cleanedAssistantMessage`) |
| `grep -rn "SYSTEM_REMINDER_REGEX" src/` | `>= 3` (export in `tag-stripping.ts`, imports in `transcript-parser.ts:84` and `:128`) |
2. **End-to-end regression: `<private>` in summary path**
- Insert `<private>SHOULD_NOT_APPEAR</private>` into an assistant message via the transcript used by the summarize hook.
- Trigger `Stop` hook. Wait for `/api/sessions/summarize` blocking response.
- `SELECT last_assistant_message FROM session_summaries ORDER BY id DESC LIMIT 1;`
- Expected: no occurrence of `SHOULD_NOT_APPEAR` and no `<private>`.
3. **End-to-end regression: `<persisted-output>` in tool_response**
- POST a sample observation via hook path with a `tool_response` containing `<persisted-output>LARGE</persisted-output>`.
- `SELECT tool_response FROM observations ORDER BY id DESC LIMIT 1;`
- Expected: `LARGE` absent, `<persisted-output>` absent.
4. **Micro-benchmark** (informational, not blocking):
- New single-regex alternation should be no worse than the old six-sequential `.replace()` on a 1 MB input with 50 tags. Record ms/op.
- If the new version is >2× slower, escalate — but the audit claim is that one regex is faster.
5. **Build sanity**: `npm run build-and-sync` succeeds; worker restarts cleanly.
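The micro-benchmark in item 4 can be sketched as below. This is a hypothetical harness: the tag list is illustrative (only `<private>` and `<persisted-output>` are named in this plan; the other names are placeholders, not the real set in `tag-stripping.ts`), and `stripCombined` stands in for `stripMemoryTags`.

```typescript
// Hypothetical benchmark sketch: one alternation regex vs. six sequential
// .replace() passes on a ~1 MB input with 50 tags, per the plan's shape.
const TAGS = ['private', 'persisted-output', 'tag-a', 'tag-b', 'tag-c', 'tag-d'];

// New shape: one combined alternation with a backreference on the closing tag.
const combined = new RegExp(`<(${TAGS.join('|')})>[\\s\\S]*?</\\1>`, 'g');
const stripCombined = (s: string) => s.replace(combined, '');

// Old shape: one regex pass per tag, applied sequentially.
const singles = TAGS.map(t => new RegExp(`<${t}>[\\s\\S]*?</${t}>`, 'g'));
const stripSequential = (s: string) => singles.reduce((acc, re) => acc.replace(re, ''), s);

// Build a ~1 MB input containing 50 tagged spans.
const filler = 'x'.repeat(20_000);
const input = Array.from({ length: 50 }, (_, i) =>
  `<${TAGS[i % TAGS.length]}>secret</${TAGS[i % TAGS.length]}>${filler}`).join('');

for (const [name, fn] of [['combined', stripCombined], ['sequential', stripSequential]] as const) {
  const t0 = performance.now();
  const out = fn(input);
  console.log(`${name}: ${(performance.now() - t0).toFixed(2)} ms/op, stripped=${!out.includes('secret')}`);
}
```

Record both ms/op figures in the PR description; the escalation threshold in item 4 (combined slower than 2× sequential) applies to these numbers.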
### (b) Documentation references
- `05-clean-flowcharts.md:155` (3.2 closes: "P1 security gap (private content reaching `session_summaries`)")
- `05-clean-flowcharts.md:538-558` (Part 5 — deletion totals for this row: -45 lines wrappers + -3 lines partial strip + +3 lines new summary-path strip)
- `06-implementation-plan.md:96-101` (Phase 1 verification checklist template)
### (c) Verification checklist
Already enumerated in (a).
### (d) Anti-pattern guards
- **A**: do not add a wrapper "for the benchmark" — measure by timing `stripMemoryTags` directly.
- **C**: if the regression test finds stripped content leaking to the DB, the fix is to call `stripMemoryTags` — not to add a post-strip "second pass" to the consumer. The ingress is the only place to strip.
---
## Line-count summary (this plan only)
Referencing Part 5 of `05-clean-flowcharts.md`:
| Change | Lines deleted | Lines added | Source row |
|---|---|---|---|
| Wrappers + six regex passes collapse to one | -60 | +15 | 05 Part 5 row "stripMemoryTagsFromPrompt / FromJson wrappers" |
| Summary-path privacy gap fix (V1) | 0 | +3 | 05 Part 5 row "Summary-path privacy gap fix" |
| Legacy-observation privacy gap fix (V2, not in 05 ledger) | 0 | +6 | V2 correction (two strip calls in `handleObservations`) |
| Test file rewrites | ~-5 | ~+5 | Phase 4 |
| **Net** | **≈ -65** | **≈ +29** | **≈ -36 net** |
Net code delta is small; the load-bearing outcome is **closing P1** (private content no longer reaches `session_summaries` or the legacy observation path).
# Plan 02 — sqlite-persistence (clean)
**Target**: claude-mem v6.5.0 brutal-audit refactor, flowchart 3.3.
**Design authority**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` section **3.3**.
**Corrections authority**: `PATHFINDER-2026-04-21/06-implementation-plan.md` Phase 0 verified-findings **V12, V13, V14, V15, V19**.
**Date**: 2026-04-22.
---
## Dependencies
- **Upstream (must land before this plan):** none. This is a leaf plan.
- **Downstream (blocked on this plan):**
- `03-response-parsing-storage` — depends on `UNIQUE(session_id, tool_use_id)` + `ON CONFLICT DO NOTHING` added in **Phase 1** below (dedup gate moves from content-hash window to DB constraint).
- `04-vector-search-sync` — depends on the `chroma_synced INTEGER DEFAULT 0` column added in **Phase 2** below. 04's whole backfill simplification (`WHERE chroma_synced=0 LIMIT 1000`) cannot ship until that column exists.
- `07-session-lifecycle-management` — depends on the boot-once `recoverStuckProcessing()` extracted in **Phase 4** below (07 wires it into the worker startup sequence).
---
## Reporting block 1 — Sources consulted
1. `PATHFINDER-2026-04-21/05-clean-flowcharts.md` — full file (607 lines). **Section 3.3** is the canonical clean design for sqlite-persistence (lines 159-194). Part 1 items **#15** (30-s dedup window → UNIQUE constraint, line 33), **#16** (60-s claim stale-reset → boot recovery, line 34), **#27** (Python sqlite3 repair → `claude-mem repair`, line 45), **#28** (27 migrations → `schema.sql` + upgrade-only runner, line 46). Part 5 ledger rows for SQLite referenced in `06-implementation-plan.md` Phase 9.
2. `PATHFINDER-2026-04-21/06-implementation-plan.md` Phase 0 verified-findings:
- **V12** (line 39): audit claimed 27 migrations; reality is **19 private methods** in `MigrationRunner.runAllMigrations()` at `runner.ts:22-41`; highest `schema_versions.version` written is **27** (legacy system from `DatabaseManager` contributed ~5 more numbers). Plan target: "19 methods + legacy → `schema.sql` + N upgrade-only migrations".
- **V13** (line 40): Python sqlite3 subprocess **lives in production code** (`Database.ts:79-99`, not just tests). Test file exists at `tests/services/sqlite/schema-repair.test.ts` (253 lines). Phase 5 must delete from production; test file becomes a CLI test.
- **V14** (line 41): `DEDUP_WINDOW_MS = 30_000` at `observations/store.ts:13`. Dedup key is SHA-256 of `(memory_session_id, title, narrative)` at `:21-29` — **NOT** `tool_use_id`. The new UNIQUE is an **additive** gate (different key space); it does not automatically subsume every path the content-hash hit.
- **V15** (line 42): No `chroma_synced` column exists today; Phase 2 creates it.
- **V19** (line 46): `STALE_PROCESSING_THRESHOLD_MS = 60_000` at `PendingMessageStore.ts:6`; stale reset happens inside every `claimNextMessage()` call (lines 99-145).
- Phase 9 (lines 412-448) is prior scope draft — superseded where this plan differs.
3. `PATHFINDER-2026-04-21/01-flowcharts/sqlite-persistence.md` — "before" diagram (97 lines). Confirms: 27 migrations claim (V12 corrects), content-hash dedup with 30-s window, claim-confirm self-heal, Python schema repair at boot.
4. Live codebase:
- `src/services/sqlite/Database.ts` (359 lines). Python repair at `:37-109`, reopen wrapper at `:115-132`, PRAGMA block at `:163-168`, `MigrationRunner` invocation at `:171-172`.
- `src/services/sqlite/migrations/runner.ts` (1018 lines). 19 private methods listed at `:22-41`. Schema-version INSERTs write versions {4,5,6,7,8,9,10,11,16,17,19,20,21,22,23,24,25,27} — gaps (12-15, 18, 26) confirm the legacy `DatabaseManager` numbering V12 mentions.
- `src/services/sqlite/observations/store.ts` (108 lines). `DEDUP_WINDOW_MS` at `:13`, `computeObservationContentHash` at `:21-30`, `findDuplicateObservation` at `:36-46`, `storeObservation` at `:53-108`.
- `src/services/sqlite/PendingMessageStore.ts` (529 lines). `STALE_PROCESSING_THRESHOLD_MS` at `:6`, stale-reset block inside `claimNextMessage` transaction at `:99-145` (reset SQL at `:107-115`, peek at `:118-124`, mark-processing at `:129-134`).
- `tests/services/sqlite/schema-repair.test.ts` (253 lines) — Python script invoked via `execSync`, per V13.
- `tests/services/sqlite/migration-runner.test.ts` (361 lines) — existing migration regression tests; these must still pass after consolidation.
- **No** `src/services/sqlite/schema.sql` exists today (grep confirms). Phase 3 must create it.
5. `PATHFINDER-2026-04-21/07-plans/` — empty of dependency plans (this is the first plan written).
---
## Reporting block 2 — Concrete findings
| Claim | Verified? | Evidence |
|---|---|---|
| Migration method count is 22 (V12 audit) | **Partially** — actual is **19 private methods** enumerated in `runAllMigrations` at `runner.ts:22-41`. 27 is the highest `schema_versions.version` written (legacy `DatabaseManager` migrations 1-3, 12-15, 18, 26 contribute the gap). | `runner.ts:22-41` + grep of `schema_versions.*VALUES.*run(N)` lines. |
| Highest current schema version is 27 | **Yes** — last INSERT at `runner.ts:1015` writes version `27` for `addObservationSubagentColumns`. | `runner.ts:1015`. |
| `UNIQUE(session_id, tool_use_id)` exists today | **No** — zero references to `tool_use_id` anywhere under `src/services/sqlite/`. The identifier only appears in `src/types/transcript.ts` and `src/services/worker/SDKAgent.ts` (input payload shape). | Grep `tool_use_id` in `src/services/sqlite/` returns zero files. |
| Dedup is content-hash based, NOT `tool_use_id` | **Yes** — `computeObservationContentHash` hashes `(memory_session_id, title, narrative)` at `store.ts:21-29`. Subagent `agent_type`/`agent_id` intentionally excluded per the comment at `:18-19`. | `store.ts:13-46`. |
| `chroma_synced` column exists | **No** — no migration adds it; no reference in `runner.ts` or any store. | Grep confirms. |
| 60-s stale reset fires per-claim, not at boot | **Yes** — reset UPDATE lives **inside** the `claimTx` transaction at `PendingMessageStore.ts:107-115`, run every time `claimNextMessage()` is called. | `PendingMessageStore.ts:99-145`. |
| Python sqlite3 lives in production, not just tests | **Yes** — `execFileSync('python3', [scriptPath, dbPath, objectName], ...)` at `Database.ts:99` inside the production `repairMalformedSchema` function (`:37-109`). Test file at `tests/services/sqlite/schema-repair.test.ts` exercises that production code path. | `Database.ts:99`. |
| `schema.sql` file exists today | **No** — Phase 3 must create it. "HOW" is detailed below (dump current state from a clean fresh-install DB). | Glob `**/*.sql` under `src/` returns zero. |
**Net count correction propagated to every phase below:** "19 methods (not 22 or 27)" where migration count is cited.
---
## Reporting block 3 — Copy-ready snippet locations
| Destination | Source file:line | What to copy |
|---|---|---|
| `src/services/sqlite/migrations/2026-04-22_add_observations_tool_use_id.ts` (new upgrade migration) | Existing patterns from `runner.ts:658-842` (migration `addOnUpdateCascadeToForeignKeys`, idempotent ALTER) | The idempotent "check column via `PRAGMA table_info`, ALTER if missing, mark `schema_versions`" pattern. |
| `src/services/sqlite/observations/store.ts` (Phase 1 rewrite) | Existing INSERT shape at `store.ts:77-102` | Keep the 17-column INSERT layout; only change the body from "compute hash → check dup → INSERT" to "INSERT … ON CONFLICT (memory_session_id, tool_use_id) DO NOTHING RETURNING id". |
| `src/services/sqlite/migrations/2026-04-23_add_observations_chroma_synced.ts` (new upgrade migration) | Pattern from `addObservationContentHashColumn` at `runner.ts:844-864` | Exact template: `PRAGMA table_info` → `ALTER TABLE observations ADD COLUMN chroma_synced INTEGER DEFAULT 0` → record version. |
| `src/services/sqlite/schema.sql` (new — created in Phase 3) | `runner.ts:52-124` (initializeSchema block) + tables from migrations 5,6,8,9,10,11,16,17,19,20,21,22,23,24,25,27 | Run the current `MigrationRunner` end-to-end on a fresh `:memory:` DB, then dump via `SELECT sql FROM sqlite_master WHERE type IN ('table','index') ORDER BY rootpage` — this is the authoritative generator. Detail in Phase 3 tasks. |
| `src/services/sqlite/PendingMessageStore.ts` (Phase 4) | Stale-reset block at `PendingMessageStore.ts:107-115` | Copy the SQL verbatim into a new `recoverStuckProcessing()` method; delete the copy from inside `claimTx`. `claimNextMessage` keeps only `peek` (`:118-124`) + `mark-processing` (`:129-134`) inside its transaction. |
| `src/cli/handlers/repair.ts` (new — Phase 5) | `Database.ts:79-107` (Python script body + `execFileSync` call) | Move the whole Python-script-written-to-tempfile + `execFileSync` pattern into a user-invoked CLI command handler; remove boot-time auto-call. |
---
## Reporting block 4 — Confidence + gaps
**Confidence: HIGH** on:
- Phases 1, 2, 4, 6 — all reference existing, stable code (V14/V15/V19 are pinned to single-file call sites).
- Phase 5 — Python block is small (~70 lines of wrapper + embedded script at `Database.ts:37-109`) and test coverage already exists at `tests/services/sqlite/schema-repair.test.ts`.
**Confidence: MEDIUM** on:
- Phase 3 (schema.sql generation). `schema.sql` does not exist today. The mechanical path is: (a) spin up `:memory:` DB, (b) run current `MigrationRunner.runAllMigrations()` unchanged, (c) dump `SELECT sql FROM sqlite_master` in a stable order, (d) check the dump into the repo. Risk: FTS5 virtual tables and their implicit rowid-shadow tables may need hand-tuning because `sqlite_master` includes internal `*_content`/`*_idx` tables that must NOT be in `schema.sql` (they're auto-created by the `CREATE VIRTUAL TABLE USING fts5` statement). **The schema.sql generator must filter `name NOT LIKE '%_content' AND name NOT LIKE '%_segments' AND name NOT LIKE '%_segdir' AND name NOT LIKE '%_docsize' AND name NOT LIKE '%_config' AND name NOT LIKE '%_data' AND name NOT LIKE '%_idx'`** (the standard FTS3/4 and FTS5 shadow-table suffixes).
- Phase 1 ordering w.r.t. Phase 6. Dropping `DEDUP_WINDOW_MS` + `findDuplicateObservation` (Phase 6) ONLY after Phase 1 lands AND verification proves every observation-ingest path writes a `tool_use_id`. The **transcript-watcher ingest path** (`src/services/transcripts/watcher.ts`, referenced by downstream plan `07-session-lifecycle-management`) may emit observations where `tool_use_id` is derived from JSONL line parsing rather than the hook payload — if that path produces a non-unique or missing `tool_use_id`, the UNIQUE constraint will not cover it and the content-hash gate still provides value. **Phase 6 is gated by a concrete grep + runtime check that every call site into `storeObservation` supplies a real `tool_use_id`.**
**Top gaps:**
1. **`schema.sql` doesn't exist today — must be generated mechanically.** Phase 3 specifies the exact generator script so this is reproducible. The risk is that FTS5 shadow tables leak into the dump; the filter list above must be applied. If a future migration adds a `USING fts5` virtual table with a non-default suffix, the filter will need updating.
2. **Dedup semantics may differ across ingest paths.** V14 confirms the current dedup key (SHA of title+narrative) and V14's warning applies: the transcript watcher, `/api/sessions/observations` hook path, and `/sessions/:id/observations` legacy path may each derive `tool_use_id` differently. Phase 1 adds the UNIQUE constraint but Phase 6 (dedup-window removal) must verify all three paths supply a consistent `tool_use_id` BEFORE the content-hash fallback is deleted. If the transcript-watcher path uses synthetic IDs (e.g., `file:offset`) instead of the real Claude Code `tool_use_id`, that's a real gap to flag to the owner of plan `07-session-lifecycle-management` before both plans land.
---
## Phase contract — template applied below
Every phase specifies:
- **(a) What to implement** — framed as "Copy from `<file>:<line>` into `<dest>`".
- **(b) Documentation references** — 05 section + V-numbers + live file:line.
- **(c) Verification checklist** — concrete greps + tests.
- **(d) Anti-pattern guards** — A (invent migration methods), B (polling), C (silent fallback), E (two dedup paths).
---
## Phase 1 — Add `UNIQUE(session_id, tool_use_id)` and `ON CONFLICT DO NOTHING` INSERT
**Outcome**: Observations have a `tool_use_id` column; `(memory_session_id, tool_use_id)` is UNIQUE; `storeObservation` uses `INSERT ... ON CONFLICT DO NOTHING RETURNING id` (idempotent, constraint-based). Content-hash dedup still runs underneath (removed in Phase 6 after verification).
### (a) Tasks
1. **Create a new migration** in `src/services/sqlite/migrations/`: add a method `addObservationToolUseIdUnique` to `MigrationRunner.runAllMigrations`, placed immediately after `addObservationSubagentColumns` (line 41), assigning `schema_versions.version = 28`.
- Copy the idempotent pattern from `addObservationContentHashColumn` at `runner.ts:844-864`: `PRAGMA table_info(observations)` → if `tool_use_id` column missing, `ALTER TABLE observations ADD COLUMN tool_use_id TEXT`.
- Backfill legacy rows: `UPDATE observations SET tool_use_id = 'legacy:' || id WHERE tool_use_id IS NULL`. Legacy synthetic IDs must be unique across existing rows (row `id` is unique by PK) and prefixed so future real `tool_use_id` values never collide.
- Create unique partial index: `CREATE UNIQUE INDEX IF NOT EXISTS idx_observations_session_tool_use_id ON observations(memory_session_id, tool_use_id) WHERE tool_use_id IS NOT NULL`.
- Register version 28.
2. **Rewrite `src/services/sqlite/observations/store.ts:53-108`** (`storeObservation`):
- Add `tool_use_id: string` to `ObservationInput` (`src/services/sqlite/observations/types.ts`).
- Replace the INSERT at `:77-102` with:
```sql
INSERT INTO observations
(memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id,
content_hash, tool_use_id, created_at, created_at_epoch)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
ON CONFLICT(memory_session_id, tool_use_id) DO NOTHING
RETURNING id, created_at_epoch
```
- If `RETURNING` returns a row → new insert, return it.
- If no row returned → SELECT the existing row: `SELECT id, created_at_epoch FROM observations WHERE memory_session_id = ? AND tool_use_id = ?` and return.
- **Keep** `computeObservationContentHash` and `findDuplicateObservation` and the pre-INSERT dedup check **intact** in this phase. Phase 6 removes them. (Rationale: additive gate first, drop old gate only after confirming coverage — anti-pattern E avoidance.)
3. **Wire `tool_use_id` through every call site that creates an observation**. Grep: every `storeObservation(` caller must now pass `tool_use_id`. The three known ingest paths are (i) `/api/sessions/observations` HTTP route, (ii) `/sessions/:id/observations` legacy route, (iii) transcript-watcher ingest. Each must read `tool_use_id` from the incoming payload (hook sends it; transcript JSONL lines contain it).
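The migration body in task 1 can be sketched as below. This is a minimal sketch, not the real runner code: `MinimalDb` is an assumed stand-in for bun:sqlite's `Database` surface so the shape of the pattern (PRAGMA check → ALTER → backfill → partial unique index → record version) reads on its own.

```typescript
// Sketch of the Phase 1 migration body, following the idempotent pattern the
// plan copies from addObservationContentHashColumn. MinimalDb is an assumption
// standing in for bun:sqlite's Database.
interface MinimalDb {
  query(sql: string): { all(): Array<{ name: string }> };
  run(sql: string, ...params: unknown[]): void;
}

function addObservationToolUseIdUnique(db: MinimalDb): void {
  const cols = db.query('PRAGMA table_info(observations)').all();
  if (!cols.some(c => c.name === 'tool_use_id')) {
    db.run('ALTER TABLE observations ADD COLUMN tool_use_id TEXT');
    // Backfill legacy rows with synthetic, collision-proof IDs (PK id is unique).
    db.run("UPDATE observations SET tool_use_id = 'legacy:' || id WHERE tool_use_id IS NULL");
  }
  // Partial unique index: only rows with a real (or backfilled) tool_use_id participate.
  db.run(`CREATE UNIQUE INDEX IF NOT EXISTS idx_observations_session_tool_use_id
          ON observations(memory_session_id, tool_use_id)
          WHERE tool_use_id IS NOT NULL`);
  db.run('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)',
    28, new Date().toISOString());
}
```

Re-running the method is a no-op on the column (the `PRAGMA table_info` guard) and on the index (`IF NOT EXISTS`), which is the idempotency contract the existing runner methods already obey.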
### (b) Documentation references
- `05-clean-flowcharts.md` **section 3.3**, line 172 (`INSERT observations UNIQUE(session_id, tool_use_id)`) and line 188 (deletion ledger entry). Part 1 item **#15** at line 33.
- Verified-finding **V14** (`06-implementation-plan.md:41`).
- Live code: `observations/store.ts:13-108`, `runner.ts:844-864` (copy-from template).
### (c) Verification checklist
- [ ] Grep: `grep -n "tool_use_id" src/services/sqlite/` returns at least 3 hits (types, store INSERT, migration).
- [ ] Grep: `grep -n "tool_use_id" src/services/worker/http/routes/SessionRoutes.ts` confirms both observation route handlers read it from body.
- [ ] New unit test `tests/services/sqlite/observations/unique-constraint.test.ts`: insert two observations with same `(memory_session_id, tool_use_id)`; assert second returns the first's `id`; assert `SELECT COUNT(*) FROM observations` incremented by exactly 1.
- [ ] Existing `tests/services/sqlite/migration-runner.test.ts` (361 lines) still passes — no regressions on migrations 4-27.
- [ ] Fresh-install smoke: delete DB, boot worker, confirm `PRAGMA index_list(observations)` includes `idx_observations_session_tool_use_id`.
- [ ] Upgrade smoke: copy a v6.5.0 DB into place, boot worker, confirm legacy rows got `tool_use_id = 'legacy:<id>'` and new index exists.
### (d) Anti-pattern guards
- **A (invent migration methods)**: do NOT add any migration method besides `addObservationToolUseIdUnique` in this phase. Enumerate before adding.
- **C (silent fallback)**: `ON CONFLICT DO NOTHING` is **idempotent, not silent** — conflicts are expected and return the existing id. The route handler must not treat "no new row inserted" as an error; the caller gets the existing id back.
- **E (two dedup paths)**: both dedup gates are present in this phase **intentionally**. The old one is removed in Phase 6 after every path is verified.
### Blast radius
Schema change (one new column, one new index). Hook + route payload shapes gain `tool_use_id`. No runtime behavior change on happy path (first INSERT wins as before); conflict path now returns the existing id faster (no pre-check query, one INSERT round-trip).
---
## Phase 2 — Add `chroma_synced` column (blocks plan 04)
**Outcome**: `observations.chroma_synced INTEGER DEFAULT 0`, `session_summaries.chroma_synced INTEGER DEFAULT 0`, and `user_prompts.chroma_synced INTEGER DEFAULT 0` exist. Partial index on `chroma_synced = 0` for the backfill scan on all three tables. Plan `04-vector-search-sync` can now consume these.
> **Preflight edit 2026-04-22 (reconciliation C3)**: The original phase covered only `observations` + `session_summaries`. Reconciliation identified that plan 04 also backfills `user_prompts`, so this phase must add the column there too. Migration body below extends to all three tables.
### (a) Tasks
1. **Add migration method `addChromaSyncedColumns`** to `MigrationRunner.runAllMigrations` (between the new `addObservationToolUseIdUnique` from Phase 1 and end of list), assigning `schema_versions.version = 29`.
- Template: `addObservationContentHashColumn` at `runner.ts:844-864`.
- Body:
```ts
const obsInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
if (!obsInfo.some(c => c.name === 'chroma_synced')) {
this.db.run('ALTER TABLE observations ADD COLUMN chroma_synced INTEGER NOT NULL DEFAULT 0');
}
const sumInfo = this.db.query('PRAGMA table_info(session_summaries)').all() as TableColumnInfo[];
if (!sumInfo.some(c => c.name === 'chroma_synced')) {
this.db.run('ALTER TABLE session_summaries ADD COLUMN chroma_synced INTEGER NOT NULL DEFAULT 0');
}
const promptInfo = this.db.query('PRAGMA table_info(user_prompts)').all() as TableColumnInfo[];
if (!promptInfo.some(c => c.name === 'chroma_synced')) {
this.db.run('ALTER TABLE user_prompts ADD COLUMN chroma_synced INTEGER NOT NULL DEFAULT 0');
}
this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_chroma_synced ON observations(chroma_synced) WHERE chroma_synced = 0');
this.db.run('CREATE INDEX IF NOT EXISTS idx_summaries_chroma_synced ON session_summaries(chroma_synced) WHERE chroma_synced = 0');
this.db.run('CREATE INDEX IF NOT EXISTS idx_prompts_chroma_synced ON user_prompts(chroma_synced) WHERE chroma_synced = 0');
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(29, new Date().toISOString());
```
2. **Do NOT** modify `ChromaSync.ts` in this phase — that is plan 04's responsibility. This phase only lands the schema.
### (b) Documentation references
- `05-clean-flowcharts.md` **section 3.4** line 226 ("Adds: `chroma_synced` boolean column on `observations`. Schema migration.").
- Verified-finding **V15** (`06-implementation-plan.md:42`).
- Live code: `runner.ts:844-864` (copy template).
### (c) Verification checklist
- [ ] `PRAGMA table_info(observations)` on a fresh-boot DB includes `chroma_synced`.
- [ ] `PRAGMA table_info(session_summaries)` includes `chroma_synced`.
- [ ] `PRAGMA table_info(user_prompts)` includes `chroma_synced`.
- [ ] Partial indexes exist: `SELECT name FROM sqlite_master WHERE type='index' AND name LIKE '%chroma_synced%'` returns 3 rows.
- [ ] Upgrade smoke: on a pre-Phase-2 DB, all three ALTERs run exactly once; second boot is a no-op (idempotency gate).
- [ ] `migration-runner.test.ts` extended with a case asserting `schema_versions.version = 29` after fresh install.
### (d) Anti-pattern guards
- **A**: one method, one version. Do not add a backfill-on-migration step here (that's plan 04).
- **E**: do NOT touch `ChromaSync.ts` write path in this phase; keep concerns isolated so plans can land independently.
### Blast radius
Pure additive schema. Zero runtime behavior change until plan 04 starts writing to the column.
---
## Phase 3 — Consolidate 19 migrations into `schema.sql` + slim upgrade-only runner
**Outcome**: Fresh DBs execute `src/services/sqlite/schema.sql` in one shot and write `schema_versions.version = <current>`. Existing DBs continue running only upgrade-step migrations whose version is `> max(schema_versions.version)`. The 19 `CREATE TABLE IF NOT EXISTS` / `ALTER TABLE` idempotency bodies shrink dramatically since fresh-DB paths no longer traverse them.
### (a) Tasks
1. **Generate `src/services/sqlite/schema.sql`** by a reproducible script, not by hand:
- Write a one-shot generator at `scripts/dump-schema.ts`:
```ts
import { Database } from 'bun:sqlite';
import { MigrationRunner } from '../src/services/sqlite/migrations/runner.js';
import { writeFileSync } from 'fs';
const db = new Database(':memory:');
new MigrationRunner(db).runAllMigrations();
// Filter out FTS5 shadow tables — they're created automatically by CREATE VIRTUAL TABLE.
const rows = db.query(`
SELECT sql FROM sqlite_master
WHERE sql IS NOT NULL
AND name NOT LIKE 'sqlite_%'
AND name NOT LIKE '%_content'
AND name NOT LIKE '%_segments'
AND name NOT LIKE '%_segdir'
AND name NOT LIKE '%_docsize'
AND name NOT LIKE '%_config'
AND name NOT LIKE '%_data'
AND name NOT LIKE '%_idx'
ORDER BY
CASE type WHEN 'table' THEN 0 WHEN 'index' THEN 1 WHEN 'trigger' THEN 2 ELSE 3 END,
name
`).all() as { sql: string }[];
writeFileSync('src/services/sqlite/schema.sql',
rows.map(r => r.sql + ';').join('\n\n') + '\n');
```
- Run `bun run scripts/dump-schema.ts`, commit the resulting `schema.sql`.
- `schema.sql` must end with `INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (29, datetime('now'));` (where 29 = current max after Phases 1 and 2).
2. **Rewrite `Database.ts:171172`** to check for fresh DB:
- After PRAGMAs, query `SELECT COUNT(*) FROM sqlite_master WHERE type='table' AND name='schema_versions'`.
- If zero (true fresh DB): read `schema.sql` (bundled via `import.meta` or FS at a known path), execute via `db.exec(sql)`, done.
- Else: run `MigrationRunner` as today (it's already idempotent per-migration via `PRAGMA table_info` checks).
3. **DO NOT delete the 19 migration methods.** They remain as upgrade paths for existing DBs from v6.4.x or earlier. What shrinks is the fresh-install path cost (19 idempotent ALTER checks → 1 `db.exec(schema.sql)`).
4. **Add a CI check** in `tests/services/sqlite/schema-consistency.test.ts`: runs the dump-schema generator in-memory, diffs against the checked-in `schema.sql`; fails if they drift. This is the only way to keep `schema.sql` honest as new migrations land.
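The fresh-vs-upgrade boot branch from task 2 can be sketched as below. The `BootDb` interface and `bootstrapSchema` name are assumptions standing in for bun:sqlite and the real `Database.ts` wiring; the point is the shape of the decision, not its final home.

```typescript
// Sketch of the Database.ts boot branch: fresh DBs exec schema.sql in one shot,
// existing DBs keep the upgrade-only MigrationRunner path. Interfaces here are
// illustrative assumptions.
interface BootDb {
  query(sql: string): { get(): { n: number } };
  exec(sql: string): void;
}

function bootstrapSchema(
  db: BootDb,
  readSchemaSql: () => string,
  runMigrations: () => void,
): 'fresh' | 'upgrade' {
  const { n } = db.query(
    "SELECT COUNT(*) AS n FROM sqlite_master WHERE type='table' AND name='schema_versions'",
  ).get();
  if (n === 0) {
    // True fresh install: one exec, zero per-migration idempotency checks.
    // Anti-pattern C guard: if this throws, fail boot loudly; do NOT fall
    // back to running the migration runner from scratch.
    db.exec(readSchemaSql());
    return 'fresh';
  }
  runMigrations(); // existing DB: upgrade-only path, unchanged behavior
  return 'upgrade';
}
```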
### (b) Documentation references
- `05-clean-flowcharts.md` **section 3.3** lines 166-170 (Boot → Check → Fresh? → Execute `schema.sql` vs Migrate). Line 191 in the deletion ledger.
- Verified-finding **V12** (`06-implementation-plan.md:39`) — confirms 19 methods, not 27.
- Live code: `Database.ts:163-173` (boot sequence), `runner.ts:22-41` (method list).
- **Gap note from reporting block 4 (#1)**: the FTS5 shadow-table filter list in the generator is non-obvious; comment it inline with a link to the SQLite FTS5 docs section on shadow tables.
### (c) Verification checklist
- [ ] `ls src/services/sqlite/schema.sql` exists and is > 0 bytes.
- [ ] Fresh-install test: delete DB → boot → dump `sqlite_master` → byte-equal to `schema.sql` content (modulo the `schema_versions` INSERT).
- [ ] Upgrade test: copy a v6.4 fixture DB → boot → all 19 migration methods run → final schema matches `schema.sql`.
- [ ] `schema-consistency.test.ts` (new) passes on CI.
- [ ] `migration-runner.test.ts` (existing, 361 lines) still passes — upgrade path is unchanged.
- [ ] No FTS5 shadow table names appear in `schema.sql` (grep: `_content\|_segments\|_segdir\|_docsize\|_config\|_data\|_idx` returns zero).
### (d) Anti-pattern guards
- **A (invent migration methods)**: `schema.sql` is NOT a replacement for the runner's upgrade methods — it's a fresh-install fast-path. Don't invent a "migration framework". `db.exec()` + a list of functions is the whole system.
- **C (silent fallback)**: if `schema.sql` parsing throws on boot, **do not** fall back to running the runner from scratch — fail boot with a clear error. A fresh-DB schema failure is a shipped bug; users should see it.
### Blast radius
Fresh-install boot drops from ~19 idempotency checks to one `db.exec`. Existing DBs: identical behavior. Risk: `schema.sql` drift from runner — mitigated by the consistency test.
**Lines deleted estimate for this phase alone: 0 net from runner (methods stay for upgrades). Lines added: ~200 for `schema.sql`, ~30 for consistency test, ~15 for boot branch.**
---
## Phase 4 — Move all SQLite housekeeping to boot-once (revised 2026-04-22)
**Outcome**: zero repeating SQLite-related `setInterval`s anywhere in the worker. `PendingMessageStore.claimNextMessage()` becomes pure SELECT+UPDATE (no self-healing per call). Three boot-once jobs exist on `PendingMessageStore` / `Database`, called exactly once at worker startup:
1. `recoverStuckProcessing()` — resets `status='processing'` rows left by a crashed prior worker.
2. `clearFailedOlderThan(1h)` — prunes old failed rows that accumulated before this boot (no schema constraint requires periodic execution; see Reporting block 2).
3. Deletion of the periodic `PRAGMA wal_checkpoint(PASSIVE)` call — replaced by SQLite's native `wal_autocheckpoint` default (1000 pages). `Database.ts:162-168` sets no override so the default is already active; no new code is required.
**Why zero-timer** (authoritative rationale, supersedes any older plan text): SQLite auto-checkpoints when the WAL reaches 1000 pages of writes, which is the correct contract for a long-running worker. An explicit 2-min `PRAGMA wal_checkpoint(PASSIVE)` call accelerates checkpoints beyond that default but is not required for correctness — it was a band-aid layered on top of the stale-reaper interval (`worker-service.ts:547-589`). Similarly, `clearFailedOlderThan(1h)` running every 2 min purges rows that realistically accumulate at single-digit-per-hour rates; once-per-boot is sufficient and no `pending_messages` query cares about row count or stale-row presence. See `08-reconciliation.md` Part 4 revised cross-check (Invariant 4).
### (a) Tasks
1. **Add new method** `PendingMessageStore.recoverStuckProcessing()`:
- Copy the stale-reset SQL block from `PendingMessageStore.ts:106-115` **verbatim** into the new method:
```ts
recoverStuckProcessing(): number {
const staleCutoff = Date.now() - STALE_PROCESSING_THRESHOLD_MS;
const resetStmt = this.db.prepare(`
UPDATE pending_messages
SET status = 'pending', started_processing_at_epoch = NULL
WHERE status = 'processing' AND started_processing_at_epoch < ?
`);
const result = resetStmt.run(staleCutoff);
if (result.changes > 0) {
logger.info('QUEUE', `BOOT_RECOVERY | recovered ${result.changes} stale processing message(s)`);
}
return result.changes as number;
}
```
- Note: the SQL changes in exactly one respect. There is no `session_db_id = ?` predicate, because boot recovery is global across all sessions.
2. **Delete** `PendingMessageStore.ts:103-116` (the `staleCutoff` / `resetStmt` block inside `claimTx`). The transaction body shrinks to peek (lines 118-124) + mark-processing (lines 129-134).
3. **Confirm `clearFailedOlderThan()` is callable standalone.** Current signature at `PendingMessageStore.ts:486-495` accepts a `thresholdMs` number and runs a single-statement UPDATE/DELETE. No change to the method body; this phase only moves **where it is called from**. No new method is added for this — the existing one is sufficient.
4. **Delete the explicit `PRAGMA wal_checkpoint(PASSIVE)` call** from `worker-service.ts:~581` as part of plan 07 Phase 4's deletion of the stale-reaper block (`worker-service.ts:547-589`). This plan is the authority that it is safe to delete: `Database.ts:162-168` sets `journal_mode=WAL`, `synchronous=NORMAL`, `cache_size`, `mmap_size`, and leaves `wal_autocheckpoint` at SQLite's default (1000 pages). No override was ever introduced. Verification in (c) confirms.
5. **Wire the three boot calls** in the downstream plan `07-session-lifecycle-management` Phase 3 Mechanism C (boot-once reconciliation block). That plan's responsibility to place `pendingStore.recoverStuckProcessing()` and `pendingStore.clearFailedOlderThan(60 * 60 * 1000)` in the worker startup sequence. This plan **adds/confirms the methods** but does not modify `worker-service.ts` directly (single-responsibility per plan).
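The boot-once reconciliation block that plan 07 will wire up can be sketched as below; a minimal sketch under stated assumptions. The `PendingStore` shape mirrors the two methods this plan lands; `runBootHousekeeping` and the log wiring are illustrative names, not plan 07's final code.

```typescript
// Sketch of plan 07's boot-once housekeeping block, consuming the methods this
// plan adds/confirms. Names and the logger shape are illustrative assumptions.
interface PendingStore {
  recoverStuckProcessing(): number;        // resets crashed-worker 'processing' rows
  clearFailedOlderThan(thresholdMs: number): number; // prunes pre-boot failed rows
}

const ONE_HOUR_MS = 60 * 60 * 1000;

function runBootHousekeeping(store: PendingStore, log: (msg: string) => void): void {
  // Called exactly once at worker startup — never from a setInterval.
  const recovered = store.recoverStuckProcessing();
  const purged = store.clearFailedOlderThan(ONE_HOUR_MS);
  log(`boot housekeeping: recovered=${recovered} purged=${purged}`);
  // Intentionally no PRAGMA wal_checkpoint here: SQLite's wal_autocheckpoint
  // default (1000 pages) handles checkpointing for the life of the worker.
}
```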
### (b) Documentation references
- `05-clean-flowcharts.md` **section 3.3** lines 183-184 ("Worker startup ONCE (not on every claim) … crash recovery") and line 190 (deletion ledger).
- `05-clean-flowcharts.md` Part 2 **D3** (revised 2026-04-22 — zero repeating background timers).
- `05-clean-flowcharts.md` Part 4 timer census (revised — `clearFailedOlderThan` and `PRAGMA wal_checkpoint` explicit disposition).
- Part 1 item **#16** (line 34) and Part 2 decision on "Crash-recovery that solves a real OS-level problem … keep but consolidate".
- Verified-finding **V19** (`06-implementation-plan.md:46`).
- `08-reconciliation.md` Part 4 revised — Invariant 4 (SQLite auto-checkpoint default is active).
- Live code: `PendingMessageStore.ts:6` (threshold), `:99-145` (full `claimNextMessage`), `:486-495` (`clearFailedOlderThan`), `Database.ts:162-168` (PRAGMA block — confirms no `wal_autocheckpoint` override), `worker-service.ts:547-589` (stale-reaper block being deleted by plan 07 Phase 4).
### (c) Verification checklist
- [ ] Grep: `grep -n "STALE_PROCESSING_THRESHOLD_MS" src/services/sqlite/PendingMessageStore.ts` → 2 matches max (constant + `recoverStuckProcessing` body).
- [ ] Grep: `grep -n "status = 'processing'" src/services/sqlite/PendingMessageStore.ts` finds exactly one UPDATE that flips processing→pending (in `recoverStuckProcessing`), NOT in `claimNextMessage`.
- [ ] Inspect `claimNextMessage`: transaction body has no UPDATE-to-pending step.
- [ ] Grep: `grep -rn "clearFailedOlderThan" src/` → exactly 2 matches (the method definition in `PendingMessageStore.ts` and a single call site in the boot-once reconciliation block inside `worker-service.ts`). No call inside any `setInterval` or handler.
- [ ] Grep: `grep -rn "wal_checkpoint" src/services/worker/ src/services/worker-service.ts` → **0 matches** in `worker-service.ts`. If the codebase introduces an observability read of `PRAGMA wal_autocheckpoint` at boot for logging purposes, that is fine — but no explicit `PRAGMA wal_checkpoint(...)` execution anywhere.
- [ ] Grep: `grep -n "wal_autocheckpoint" src/services/sqlite/Database.ts` → 0 matches (confirms we are relying on SQLite's default of 1000 pages; any future non-zero override must be reviewed against this plan).
- [ ] Grep: `grep -rn "setInterval" src/services/sqlite/ src/services/worker-service.ts` → **0 matches** for SQLite-related intervals.
- [ ] New unit test `tests/services/sqlite/PendingMessageStore.boot-recovery.test.ts`:
- Insert a row with `status='processing'`, `started_processing_at_epoch = Date.now() - 2*60_000`.
- Call `recoverStuckProcessing()`; assert return = 1; assert `status='pending'` and `started_processing_at_epoch=NULL`.
- [ ] New unit test `tests/services/sqlite/PendingMessageStore.failed-purge.test.ts`:
- Insert three `status='failed'` rows with `updated_at_epoch` values `now-2h`, `now-30min`, `now-5min`.
- Call `clearFailedOlderThan(60 * 60 * 1000)`; assert exactly the `now-2h` row is removed; the other two remain.
- [ ] WAL-checkpoint regression test: with `wal_autocheckpoint` at SQLite default, write > 1000 pages to the DB in a loop; assert the WAL file size stabilizes (does not grow unbounded). Proves the default is sufficient without explicit `PRAGMA wal_checkpoint`.
- [ ] Existing `tests/services/sqlite/PendingMessageStore.test.ts` tests for `claimNextMessage` still pass, but the "self-healing" test case (if present) is rewritten against `recoverStuckProcessing` instead.
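The recovery predicate those tests pin down can be sketched as pure in-memory logic, assuming `STALE_PROCESSING_THRESHOLD_MS` is 60 s (the real tests run against SQLite; `Row` is a hypothetical stand-in for a `pending_messages` row):

```typescript
// Mirrors the WHERE clause of recoverStuckProcessing:
//   status = 'processing' AND started_processing_at_epoch < staleCutoff
// Note there is no session filter: recovery is global across sessions.
const STALE_PROCESSING_THRESHOLD_MS = 60_000; // assumed value of the constant

interface Row {
  status: string;
  started_processing_at_epoch: number | null;
}

function recoverStuck(rows: Row[], now: number): number {
  const staleCutoff = now - STALE_PROCESSING_THRESHOLD_MS;
  let changes = 0;
  for (const row of rows) {
    if (row.status === 'processing'
        && row.started_processing_at_epoch !== null
        && row.started_processing_at_epoch < staleCutoff) {
      row.status = 'pending';
      row.started_processing_at_epoch = null;
      changes++;
    }
  }
  return changes;
}
```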
### (d) Anti-pattern guards
- **B (no polling, no new interval)**: none of the three boot-once jobs may run on a timer, inside `claimNextMessage`, or inside any request handler. Boot-once is the contract. The canonical check is `grep -rn "setInterval" src/services/sqlite/ src/services/worker-service.ts` → **0**.
- **A (no invented abstractions)**: no `SqliteHousekeepingService` class, no `BootRecoveryOrchestrator`. The three calls live as three plain method invocations inside plan 07's boot-once reconciliation block. If a fourth housekeeping job appears later, *then* extract.
- **D (no facade-over-facade)**: `clearFailedOlderThan` is called directly on `PendingMessageStore` — do not add a `housekeepFailed()` wrapper that just forwards.
### Blast radius
`PendingMessageStore` (new method + deletion of in-transaction self-heal) and — through plan 07's boot block — `worker-service.ts` (deletion of the periodic `wal_checkpoint` + `clearFailedOlderThan` calls inside the stale-reaper interval). Downstream `07-session-lifecycle-management` adds the call sites; until that plan lands, `recoverStuckProcessing()` is dead code (acceptable — additive, doesn't break anything). Deleting the explicit `wal_checkpoint` call has no user-visible effect; the WAL grows slightly larger between auto-checkpoints, which is within SQLite's designed behavior.
---
## Phase 5 — Delete Python sqlite3 schema-repair; replace with user-facing `claude-mem repair`
**Outcome**: `Database.ts:37-132` (`repairMalformedSchema` + `repairMalformedSchemaWithReopen`) gone. Production boot never shells out to Python. A new CLI subcommand `claude-mem repair` exists (or is stubbed with a documented follow-up plan) for users hitting pre-v6.5 corruption.
### (a) Tasks
1. **Delete** `Database.ts:2-5` (imports: `execFileSync`, `fs` helpers, `tmpdir`, `path.join`) and `Database.ts:37-132` (both `repairMalformedSchema` functions and their reopen wrapper).
2. **Delete** `Database.ts:160` (the call to `repairMalformedSchemaWithReopen`) in the `ClaudeMemDatabase` constructor. PRAGMAs now execute directly after `new Database()`.
3. **Create CLI subcommand** `src/cli/handlers/repair.ts`:
- Copy the Python script body + `execFileSync` pattern from the deleted `Database.ts:81-99` verbatim.
- Expose via `src/cli/index.ts` (or wherever subcommand dispatch lives) as `claude-mem repair`.
- On success, print a human-readable summary: "Dropped N orphaned schema objects; reset migration versions. Restart the worker."
- On failure: exit code 1 with the Python error surfaced.
- **Acceptable alternative if CLI scaffolding is heavier than expected**: ship this phase as a **stub** handler that prints a "Feature scheduled — see follow-up plan [link]" message and register the follow-up plan explicitly. Do not leave the production Python path alive "until the CLI is ready" — the boot-time auto-repair must be deleted in this phase.
4. **Move the existing test** `tests/services/sqlite/schema-repair.test.ts` (253 lines) to exercise the CLI handler instead of the production boot path. If the stub route is taken, the test becomes a skipped/TODO stub with a reference to the follow-up plan.
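A hypothetical shape for the handler, with the Python subprocess injected as `runRepair` so only the control flow (summary on success, exit code 1 on failure) is sketched; the names, return type, and message wording here are assumptions, not the final code:

```typescript
// Hypothetical sketch of src/cli/handlers/repair.ts. The real handler copies
// the Python script + execFileSync pattern from the deleted Database.ts:81-99.
interface RepairResult {
  droppedObjects: number; // assumption: the script reports how much it dropped
}

function repairCommand(
  runRepair: () => RepairResult,   // wraps execFileSync('python3', ...) in the real handler
  log: (line: string) => void,
): number {
  try {
    const result = runRepair();
    log(`Dropped ${result.droppedObjects} orphaned schema objects; reset migration versions. Restart the worker.`);
    return 0;
  } catch (err) {
    // Surface the Python error instead of swallowing it (anti-pattern guard C).
    log(`repair failed: ${err instanceof Error ? err.message : String(err)}`);
    return 1;
  }
}
```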
### (b) Documentation references
- `05-clean-flowcharts.md` Part 1 item **#27** (line 45): "Users on malformed DBs from v<X run a one-shot `claude-mem repair` command manually."
- Section 3.3 deletion ledger line 187 (~120 lines estimate).
- Verified-finding **V13** (`06-implementation-plan.md:40`).
- Live code: `Database.ts:37-132` (delete), `tests/services/sqlite/schema-repair.test.ts` (repoint).
### (c) Verification checklist
- [ ] `grep -rn "execFileSync\|execSync" src/services/sqlite/` → zero hits.
- [ ] `grep -rn "python3" src/services/` → zero hits.
- [ ] `grep -rn "repairMalformedSchema" src/` → zero hits.
- [ ] `wc -l src/services/sqlite/Database.ts` shows ~100 fewer lines than today (359 → ~260).
- [ ] `claude-mem repair --help` prints usage (or stub message with follow-up-plan link).
- [ ] Fresh boot smoke: start worker with a healthy DB; confirm no Python process spawned (check `ps` or instrumentation log).
- [ ] Malformed-DB smoke: deliberately corrupt `sqlite_master`, boot worker → expect a clean error with instruction "run `claude-mem repair`" (not a silent auto-heal).
### (d) Anti-pattern guards
- **C (silent fallback)**: boot must not auto-recover from malformed schema. Surface the error. That's the whole point of V13's call-out.
- **A**: do not invent an `AutoRepairService`. One CLI handler, done.
- **E**: `claude-mem repair` is the ONE repair entry point. Delete everywhere else.
### Blast radius
Boot path simplifies. Users on corrupt DBs get a clear message instead of silent auto-fix. Risk: users accustomed to auto-repair will see hard failure — mitigated by the message pointing at `claude-mem repair`.
**Lines deleted estimate: ~100 from `Database.ts`.**
---
## Phase 6 — Delete `DEDUP_WINDOW_MS` + `findDuplicateObservation` (gated on Phase 1 verification)
**Outcome**: Content-hash dedup window removed. UNIQUE constraint is the sole dedup gate. `store.ts` drops to the single INSERT-with-conflict path.
**CRITICAL GATE**: this phase ONLY runs after the gap in reporting block 4 (#2) has been closed: every call site into `storeObservation` provably supplies a real, hook-or-transcript-sourced `tool_use_id`. Before running the `rm` commands below, execute the verification grep AND the integration test described.
### (a) Tasks
**Pre-phase gate (must pass before any deletion):**
- Run `grep -rn "storeObservation(" src/` → enumerate every caller.
- For each caller, trace the `tool_use_id` field back to its source. Must be either (i) the Claude Code hook payload (`tool_use_id` field from `PostToolUse`), (ii) a JSONL transcript line's `tool_use_id`, or (iii) a synthetic-but-stable identifier documented in the caller's comments.
- If any caller has no stable `tool_use_id`, **stop**. Flag to plan owner, keep content-hash fallback, exit this phase.
**If gate passes:**
1. **Delete from `observations/store.ts`**:
- Line 13 (`DEDUP_WINDOW_MS`).
- Lines 21-30 (`computeObservationContentHash` export) — **KEEP** the column and the value written into it for analytics, but the function itself is no longer a public export; inline the SHA computation inside `storeObservation` so the column still gets populated on INSERT. Alternative: keep `computeObservationContentHash` as a utility if any caller outside this file uses it (grep first; V14 implies it's only used here).
- Lines 36-46 (`findDuplicateObservation`).
- Lines 69-75 (the pre-INSERT dup check block).
2. **Simplify `storeObservation` body** to a single INSERT path (the one added in Phase 1).
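A sketch of the surviving path, assuming a SHA-256 content hash and illustrative column names (the real SQL comes from Phase 1's migration); the point is that the hash is computed inline so the analytics column stays populated while `ON CONFLICT DO NOTHING` is the only dedup gate:

```typescript
// Sketch only: column names and conflict target are illustrative. The inline
// hash replaces the deleted computeObservationContentHash export so the
// analytics column keeps getting populated on every INSERT.
import { createHash } from 'node:crypto';

function computeContentHashInline(title: string, narrative: string): string {
  return createHash('sha256').update(`${title}\n${narrative}`).digest('hex');
}

// ON CONFLICT DO NOTHING is now the sole dedup gate; no pre-INSERT SELECT.
const INSERT_OBSERVATION_SQL = `
  INSERT INTO observations (memory_session_id, tool_use_id, title, narrative, content_hash)
  VALUES (?, ?, ?, ?, ?)
  ON CONFLICT DO NOTHING
`;
```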
### (b) Documentation references
- `05-clean-flowcharts.md` section 3.3 lines 188-189 (deletion ledger).
- Verified-finding **V14** (`06-implementation-plan.md:41`).
- Gap #2 in reporting block 4 above — this phase's gate is the closure mechanism for that gap.
### (c) Verification checklist
- [ ] Grep: `grep -rn "DEDUP_WINDOW_MS\|findDuplicateObservation" src/` → zero hits.
- [ ] Grep: `grep -rn "computeObservationContentHash" src/services/sqlite/observations/` → limited to `store.ts` (inline) OR zero external callers.
- [ ] New integration test: simulate two PostToolUse hook payloads with the same content (title+narrative) but different `tool_use_id` → assert **both** observations are persisted (UNIQUE doesn't trigger, content-hash no longer blocks). This validates the coverage shift is correct behavior.
- [ ] New integration test: simulate two PostToolUse hook payloads with the same `(session, tool_use_id)` → assert only one row persists, both return the same id.
- [ ] End-to-end: run the full hook cycle; confirm observations land in DB and no dedup log lines from the deleted path appear.
### (d) Anti-pattern guards
- **E (two dedup paths)**: the WHOLE POINT of this phase. Grep must prove the old path is gone before merge.
- **C**: the UNIQUE constraint raises a conflict, which `ON CONFLICT DO NOTHING` converts to a no-op + SELECT-existing. That's **idempotent**, not silent — the caller gets the existing id. Do not introduce any `try/catch` that swallows the conflict differently.
### Blast radius
`observations/store.ts` shrinks to ~40 lines. If the gate fails and this phase is skipped, content-hash dedup survives harmlessly alongside the UNIQUE constraint (extra work per INSERT, no correctness loss).
**Lines deleted estimate: ~40 from `store.ts` (file goes from 108 → ~65 lines).**
---
## Phase 7 — Final verification
**Outcome**: All six phases above land; regression suite green; anti-pattern greps zero.
### (a) Tasks
1. **Run anti-pattern grep pass** (cite these exact patterns):
- `grep -rn "DEDUP_WINDOW_MS" src/` → zero (Phase 6).
- `grep -rn "findDuplicateObservation" src/` → zero (Phase 6).
- `grep -rn "repairMalformedSchema\|execFileSync.*python" src/services/` → zero (Phase 5).
- `grep -rn "STALE_PROCESSING_THRESHOLD_MS" src/` → 2 hits max: constant definition + `recoverStuckProcessing` body (Phase 4).
- `grep -n "status = 'processing'" src/services/sqlite/PendingMessageStore.ts` finds exactly one pending-flip UPDATE, inside `recoverStuckProcessing` (Phase 4).
- `grep -n "tool_use_id" src/services/sqlite/observations/store.ts` ≥ 2 hits (type + INSERT) (Phase 1).
- `grep -n "chroma_synced" src/services/sqlite/migrations/runner.ts` finds the Phase 2 migration (Phase 2).
- `ls src/services/sqlite/schema.sql` exists (Phase 3).
2. **Run tests**:
- `bun test tests/services/sqlite/` — all existing + new tests green.
- Specifically: `migration-runner.test.ts` (361 lines, unchanged test set must still pass), `PendingMessageStore.test.ts`, `schema-repair.test.ts` (retargeted to CLI), plus new: `unique-constraint.test.ts`, `boot-recovery.test.ts`, `schema-consistency.test.ts`.
3. **Run fresh-install smoke**:
- Delete `~/.claude-mem/claude-mem.db`.
- Boot worker via `npm run build-and-sync`.
- Assert: `schema.sql` path taken (no Python process, no 19 migration logs on fresh install).
- Assert: `schema_versions.version = 29` (or whatever the final version is after Phase 2's migration 29 lands).
4. **Run upgrade smoke**:
- Copy a v6.4.x fixture DB to the live path.
- Boot worker.
- Assert: all upgrade migrations through version 29 run; final schema matches `schema.sql`.
5. **Count deleted lines**: `git diff main -- src/services/sqlite/ | grep -c "^-"` should show:
- ~40 lines from `store.ts` (Phase 6).
- ~100 lines from `Database.ts` (Phase 5).
- ~15 lines from `PendingMessageStore.ts` (Phase 4 — net ~0 because `recoverStuckProcessing` is added).
- Net deletions: **~140 lines** (before counting Phase 3's `schema.sql` which is additive).
### (b) Documentation references
- `05-clean-flowcharts.md` section 3.3 (full).
- `06-implementation-plan.md` Phase 9 (lines 412-448) — superseded-but-aligned.
- `06-implementation-plan.md` Phase 15 (lines 631-655) — final-verification template.
### (c) Verification checklist
- [ ] All anti-pattern greps pass.
- [ ] All tests green.
- [ ] Fresh + upgrade smoke tests pass.
- [ ] Deleted-line count ≥ 140.
- [ ] Downstream plan owners (03, 04, 07) notified that their prerequisites (UNIQUE constraint, `chroma_synced` column, `recoverStuckProcessing`) are available.
### (d) Anti-pattern guards
- **A/B/C/E**: final grep pass is the enforcement.
---
## Summary
- **Phase count**: 7 (matches minimum expected set).
- **Net lines deleted** (estimate, source-only, excluding `schema.sql` which is added): **~140**, split:
- Phase 5: ~100 lines from `Database.ts` (Python repair).
- Phase 6: ~40 lines from `observations/store.ts` (dedup window + helper + call block).
- Phase 4: ~0 net (delete ~13, add ~15 for `recoverStuckProcessing`).
- Phase 3: 0 from source (migrations stay for upgrade path; `schema.sql` is new).
- Phases 1, 2: additive only (new migration methods + column + constraint).
- **Top gaps** (see reporting block 4):
1. `schema.sql` generator must filter FTS5 shadow tables; Phase 3 includes the exact NOT-LIKE filter list, but a new FTS5 virtual table with a non-default suffix in a future migration would break this — needs a convention-lock or a more general regex.
2. Phase 6 is **gated** by cross-path `tool_use_id` verification (Phase 1's UNIQUE must provably cover the transcript-watcher ingest path, owned by plan `07-session-lifecycle-management`). If transcript-watcher produces synthetic `tool_use_id`s (e.g., `file:offset`) that don't match hook-path IDs, the content-hash gate cannot be removed safely and Phase 6 must be deferred to a follow-up plan.
---
# 03 — response-parsing-storage (implementation plan)
> **Design authority**: `05-clean-flowcharts.md` §3.7 (clean diagram + deletion list at lines 295-317), Part 1 bullshit items #20-#23 (lines 38-41), Part 2 decision **D5** (line 77). This plan translates §3.7 into concrete edits. Where the audit disagrees with verified code, the live-file citations win and are called out.
## Dependencies
- **Upstream** — `02-sqlite-persistence`. The sibling plan introduces a `UNIQUE(session_id, tool_use_id)` constraint on `pending_messages` and replaces the 30 s in-memory dedup window with `INSERT … ON CONFLICT DO NOTHING`. *This plan does not touch `pending_messages` schema, but the sibling's `markFailed` contract (`UPDATE … SET status='failed'`) must remain intact — parser-level failure marking continues to go through `PendingMessageStore.markFailed(messageId)` at `src/services/sqlite/PendingMessageStore.ts:349`.* Cite: 02-sqlite-persistence Phase 2 (UNIQUE-constraint phase).
- **Downstream** — `07-session-lifecycle-management`. That plan owns `RestartGuard` evolution and the one-reaper timer. **Critical coupling**: today `RestartGuard` (`src/services/worker/RestartGuard.ts:12-70`) exposes only `recordRestart()`, `recordSuccess()`, and read-only counters — **there is no `recordFailure()` method**. The audit's D5 claim "RestartGuard already exists for repeated failures" is half-true: it covers process-restart loops, not per-message parse failures. Two legitimate options:
1. (preferred) Let parse-failure propagate via `PendingMessageStore.markFailed` only. Session exits through the existing idle path; on the next summarize or observation attempt the session is re-initialised. If parsing fails repeatedly enough to crash the SDK subprocess, `RestartGuard.recordRestart()` is the thing that trips — already wired via existing restart paths. No new RestartGuard surface area required.
2. (alt) Add `session.recordFailure(reason)` as a thin helper that logs + calls `markFailed` for each `processingMessageIds` entry. Still no RestartGuard API changes.
**This plan adopts option (1)**: no new methods on RestartGuard. The flowchart box "session.recordFailure()" from §3.7 resolves to the block of code that marks all `processingMessageIds` as `'failed'` in `pending_messages` — identical shape to today's non-XML early-fail branch at `ResponseProcessor.ts:102-106`, but reached through the single `parseAgentXml` return path. See the `07-session-lifecycle-management` plan for any RestartGuard API additions; do not add them here.
## Verified facts (pinned to files)
| # | Fact | Source |
|---|---|---|
| V7a | `coerceObservationToSummary` is a private fn used twice inside `parseSummary`. | `src/sdk/parser.ts:222` (def), `:152` + `:197` (call sites) |
| V7b | Non-XML early-fail branch lives at lines 87-108. | `src/services/worker/agents/ResponseProcessor.ts:87-108` |
| V7c | Consecutive-summary-failures circuit breaker lives at lines 176-200. | `src/services/worker/agents/ResponseProcessor.ts:176-200` |
| V7d | `consecutiveSummaryFailures` field on `ActiveSession`. | `src/services/worker-types.ts:53` |
| V7e | `consecutiveSummaryFailures` is also **read** by `SessionManager.queueSummarize` at line 340 to short-circuit. That site must be deleted too — the original Phase 3 draft in `06-implementation-plan.md` did not list it. | `src/services/worker/SessionManager.ts:340-346` |
| V7f | `MAX_CONSECUTIVE_SUMMARY_FAILURES` constant in `src/sdk/prompts.ts:21` is imported by both `ResponseProcessor.ts:16` and `SessionManager.ts` (via prompts import). Delete the constant and both imports. | `src/sdk/prompts.ts:21` |
| V7g | Pending-message FAILED state literal is **`'failed'`** (lowercase). CHECK constraint: `status IN ('pending','processing','processed','failed')`. `markFailed(messageId)` is the official API. | `src/services/sqlite/PendingMessageStore.ts:22`, `:349`, `:369`; `src/services/sqlite/migrations/runner.ts:533`; `src/services/sqlite/SessionStore.ts:565` |
| V7h | RestartGuard has no `recordFailure()` method. Public surface: `recordRestart()`, `recordSuccess()`, `restartsInWindow`, `windowMs`, `maxRestarts`. | `src/services/worker/RestartGuard.ts:1-70` |
| V7i | Prompts already mandate `<summary>` root tag for summary turns ("you MUST wrap your ENTIRE response in `<summary>...</summary>` tags", "The ONLY accepted root tag is `<summary>`"). `<skip_summary reason="..."/>` is recognised by the parser (`parser.ts:124`) but is **not** documented in `buildSummaryPrompt` as a valid alternative. Prompt must be updated (Phase 1b) so the D5 contract is actually printed to the agent. | `src/sdk/prompts.ts:153-174`; `src/sdk/parser.ts:124` |
| V7j | Atomic TX boundary is `sessionStore.storeObservations(...)` (single call, internal BEGIN/COMMIT). Do not split it. Today it wraps observations + optional summary in one transaction. | `src/services/worker/agents/ResponseProcessor.ts:149-164`, `src/services/sqlite/observations/store.ts` (module) |
| V7k | `parseSummary` accepts `coerceFromObservation: boolean = false`. All coercion is gated on this flag — it is `true` only when `summaryExpected` (derived from `SUMMARY_MODE_MARKER` substring match) is true. | `src/sdk/parser.ts:122`, `ResponseProcessor.ts:75-81` |
## Concrete target signatures
```ts
// src/sdk/parser.ts — replaces parseObservations + parseSummary + coerceObservationToSummary
export type ParseFailureReason = 'no_xml' | 'missing_summary' | 'malformed';
export interface ParsedAgentOutput {
observations: ParsedObservation[];
summary: ParsedSummary | null;
skipSummary: boolean;
}
export type ParseResult =
| { valid: true; data: ParsedAgentOutput }
| { valid: false; reason: ParseFailureReason };
export function parseAgentXml(
text: string,
opts: { requireSummary: boolean; correlationId?: string; sessionId?: number }
): ParseResult;
```
Failure semantics (no coercion, per D5):
- `text.trim()` is non-empty, no `<observation>`/`<summary>`/`<skip_summary` token → `{valid:false, reason:'no_xml'}`.
- `opts.requireSummary === true` and parse yields no `<summary>` and no `<skip_summary/>` → `{valid:false, reason:'missing_summary'}`.
- Any regex match with empty sub-tag payload where `requireSummary` → `{valid:false, reason:'malformed'}`.
- Otherwise → `{valid:true, data:{observations, summary|null, skipSummary}}`.
## Phases
### Phase 1 — Write `parseAgentXml` in `src/sdk/parser.ts`
**(a) What to implement**
1. Copy `extractField` from `src/sdk/parser.ts:267-276` and `extractArrayElements` from `:282-305` verbatim into the new module layout. These remain private helpers.
2. Copy the observation-extraction loop body (field extraction, type validation, ghost-obs filter) from `src/sdk/parser.ts:40-108` into a private `extractObservations(text, correlationId)` that returns `ParsedObservation[]`. No behaviour change.
3. Copy the summary-extraction happy path (skip_summary check at `:124-133`, `<summary>` regex at `:136-137`, field extraction at `:164-169`, false-positive guard at `:191-214`) into a private `extractSummary(text, sessionId)` that returns `{ summary: ParsedSummary | null; skipSummary: boolean; malformed: boolean }`. **Delete the two `coerceFromObservation` branches at `:151-158` and `:196-203` — they do not survive.**
4. Delete `coerceObservationToSummary` (`src/sdk/parser.ts:222-259`, 38 lines) outright.
5. Write the public `parseAgentXml(text, opts)` that:
- Computes `observations = extractObservations(text, opts.correlationId)`.
- Computes `{ summary, skipSummary, malformed } = extractSummary(text, opts.sessionId)`.
- Returns `{valid:false, reason:'no_xml'}` if `text.trim()` && `observations.length === 0` && `!summary` && `!skipSummary` && `!/<observation>|<summary>|<skip_summary\b/.test(text)`.
- Returns `{valid:false, reason:'missing_summary'}` if `opts.requireSummary` && `!summary` && `!skipSummary`.
- Returns `{valid:false, reason:'malformed'}` if `opts.requireSummary` && `malformed`.
- Returns `{valid:true, data:{observations, summary, skipSummary}}` otherwise.
6. Remove the old named exports `parseObservations` and `parseSummary` and their `coerceFromObservation` parameter. Keep `ParsedObservation`/`ParsedSummary` interfaces (`src/sdk/parser.ts:9-27`) — they're part of the public shape.
**(b) Docs** — `05-clean-flowcharts.md` §3.7 (clean diagram, lines 295-317), Part 1 #20/#21/#23 (lines 38-41), Part 2 D5 (line 77). V7a (parser.ts:222). V7i (prompt contract already mandates `<summary>`; skip-summary token recognised at parser.ts:124). V7k (coerceFromObservation gating on `summaryExpected`).
**(c) Verification**
- `grep -rn "coerceObservationToSummary" src/` → 0 hits.
- `grep -rn "parseObservations\|parseSummary\b" src/` → 0 hits outside `parser.ts` itself; inside `parser.ts` only the private helpers.
- Unit test: `parseAgentXml('', {requireSummary:false})``{valid:true, data:{observations:[], summary:null, skipSummary:false}}` (empty string is not `no_xml`; trim is empty).
- Unit test: `parseAgentXml('Error: auth token expired', {requireSummary:true})``{valid:false, reason:'no_xml'}`.
- Unit test: agent returns `<observation><type>x</type><title>t</title></observation>` with `requireSummary:true``{valid:false, reason:'missing_summary'}` (no coercion to summary).
- Unit test: `<skip_summary reason="no work"/>` with `requireSummary:true``{valid:true, data:{observations:[], summary:null, skipSummary:true}}`.
- Unit test: `<summary><request>r</request>…</summary>``{valid:true, data:{…, summary:{…}, skipSummary:false}}`.
**(d) Anti-pattern guards**
- **Guard C (silent fallback)**: Coercion is *deleted*, not relocated. `grep -n "coerce" src/sdk/parser.ts` → 0 hits.
- **Guard D (facades)**: `parseObservations` + `parseSummary` collapse to a single `parseAgentXml`. Two public fns → one.
- **Guard A (invent APIs)**: No new classes. Pure function returning a discriminated union. No `ParserValidator`, no `SummaryCoercer`, no base class.
---
### Phase 1b — Update agent contract in `src/sdk/prompts.ts`
**(a) What to implement** — Extend `buildSummaryPrompt()` at `src/sdk/prompts.ts:140-175` (the return-value template) so it explicitly permits `<skip_summary reason="..."/>` as an alternative when there is literally nothing to summarise. Current text says "The ONLY accepted root tag is `<summary>`" (`:155`), which is incompatible with the parser's `<skip_summary/>` recognition (`parser.ts:124`) and incompatible with the D5 contract ("`<summary>` or `<skip_summary/>`"). Proposed insertion, directly after the existing line `:173`:
```
• If (and ONLY if) there is no work to summarise, you may return
<skip_summary reason="..."/> as the sole root tag instead of <summary>.
Any other response is a protocol violation and the session will fail.
```
Also delete the export `MAX_CONSECUTIVE_SUMMARY_FAILURES` at `src/sdk/prompts.ts:21` and its JSDoc at `:17-20`. The constant is unused after Phase 2 + Phase 3.
**(b) Docs** — §3.7 deletion list ("agent must return `<summary>` or `<skip_summary/>`", line 311). Part 2 D5 (line 77). V7i.
**(c) Verification**
- `grep -rn "MAX_CONSECUTIVE_SUMMARY_FAILURES" src/` → 0 hits.
- Manual diff of generated summary prompt shows the skip-summary clause.
- Existing prompt-mandate text (`:153`, `:155`, `:173`) preserved so the normal-case contract stays strict.
**(d) Anti-pattern guards**
- **Guard C**: The contract is now self-describing — no silent downstream coercion needed because the agent is told the protocol explicitly.
---
### Phase 2 — Replace parse path in `ResponseProcessor.ts`
**(a) What to implement**
1. Replace the import at `src/services/worker/agents/ResponseProcessor.ts:15` with `import { parseAgentXml, type ParsedObservation, type ParsedSummary } from '../../../sdk/parser.js';`. Delete `MAX_CONSECUTIVE_SUMMARY_FAILURES` from the `:16` import (keep `SUMMARY_MODE_MARKER`).
2. Replace `processAgentResponse` body at `:69-108`:
- Keep `:62-67` (lastGeneratorActivity + conversationHistory append).
- Compute `summaryExpected` exactly as today (`:75-79`).
- Replace `:70` and `:81` (two separate parse calls) with a single call:
```ts
const parsed = parseAgentXml(text, {
requireSummary: summaryExpected,
correlationId: session.contentSessionId,
sessionId: session.sessionDbId,
});
```
- Replace the non-XML early-fail block `:83-108` (26 lines) with:
```ts
if (!parsed.valid) {
const preview = text.length > 200 ? `${text.slice(0, 200)}...` : text;
logger.warn('PARSER', `${agentName} returned invalid response (${parsed.reason}); marking messages as failed`, {
sessionId: session.sessionDbId,
reason: parsed.reason,
preview,
});
const pendingStore = sessionManager.getPendingMessageStore();
for (const messageId of session.processingMessageIds) {
pendingStore.markFailed(messageId);
}
session.processingMessageIds = [];
return;
}
const { observations, summary } = parsed.data;
```
- Everything at `:110-174` stays unchanged (normalize, ensureMemorySessionIdRegistered, STORING log, labeledObservations, atomic TX, STORED log, lastSummaryStored) — the single-TX invariant is preserved.
3. **Delete the circuit-breaker block `:176-200`** (25 lines) entirely. After deleting, `:202` (claim-confirm) runs immediately after `:174` (lastSummaryStored).
4. No changes to `:202-241` (claim-confirm, restartGuard.recordSuccess, Chroma sync, SSE broadcast, cleanup).
5. **(Preflight edit 2026-04-22 — reconciliation C6)** Emit `summaryStoredEvent` when a summary row is committed. After setting `session.lastSummaryStored` (unchanged from today), if `session.summaryStoredEvent` exists (initialized by `SessionManager` when the session is created, see plan 07 Phase 7), call `session.summaryStoredEvent.emit('stored', summaryId)`. This unblocks the blocking `/api/session/end` handler in plan 07 Phase 7 without polling. Contract: emit exactly once per summary commit; `summaryId` is the newly inserted row id from the atomic TX.
```ts
// inside the block that sets session.lastSummaryStored (around :170-174)
session.lastSummaryStored = true;
session.summaryStoredEvent?.emit('stored', summaryRowId);
```
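The emit contract in step 5 can be sketched with Node's `EventEmitter`; the concrete type on `ActiveSession` is owned by plan 07 Phase 7, so the shape below is an assumption:

```typescript
// Sketch of the summaryStoredEvent contract: emit exactly once per summary
// commit, and degrade to a no-op when the emitter has not been initialised.
import { EventEmitter } from 'node:events';

interface SessionLike {
  lastSummaryStored: boolean;
  summaryStoredEvent?: EventEmitter; // absent until plan 07 initialises it
}

function onSummaryCommitted(session: SessionLike, summaryRowId: number): void {
  session.lastSummaryStored = true;
  // Optional chaining keeps sessions created before plan 07 lands safe.
  session.summaryStoredEvent?.emit('stored', summaryRowId);
}
```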
**(b) Docs** — §3.7 clean diagram (B→C→D→{Fail | Store}→Confirm→…, lines 299-308). Part 1 #21 (line 39), #22 (line 40). Part 2 D5 (line 77). V7b (`:87-108`), V7c (`:176-200`), V7g (`'failed'` + `markFailed`).
**(c) Verification**
- `grep -n "parseObservations\|parseSummary\|coerceObservationToSummary\|consecutiveSummaryFailures" src/services/worker/agents/ResponseProcessor.ts` → 0 hits.
- `grep -n "MAX_CONSECUTIVE_SUMMARY_FAILURES" src/services/worker/agents/ResponseProcessor.ts` → 0 hits.
- Integration test A — malformed input: send `"Service temporarily unavailable"` as `text`, assert (i) no row inserted in `observations` table, (ii) no row in `session_summaries`, (iii) every id in `session.processingMessageIds` has `status='failed'` in `pending_messages` after the call returns, (iv) `session.processingMessageIds === []`.
- Integration test B — observation-without-summary when summary expected: `summaryExpected=true`, text is `<observation><type>code</type><title>x</title></observation>`, assert (i) no row in `session_summaries`, (ii) no row in `observations` (contract failure fails the whole batch — no partial write), (iii) pending messages marked `failed`. This is **the critical regression test** — today the coerce path would have written a coerced summary row.
- Integration test C — valid obs + summary: single atomic TX still commits both rows together (pre-existing behaviour, no regression).
**(d) Anti-pattern guards**
- **Guard C**: No coercion, no "close-enough" branch. Every `parsed.valid === false` path leads to `markFailed` and `return`.
- **Guard D**: One parse call (`parseAgentXml`) replaces two (`parseObservations` + `parseSummary`). No wrapper facade.
- **Guard A**: No new method on `RestartGuard`, no new class, no new helper file. Direct calls to the existing `PendingMessageStore.markFailed`.
---
### Phase 3 — Remove `consecutiveSummaryFailures` from `ActiveSession` + its consumer
**(a) What to implement**
1. Delete `src/services/worker-types.ts:51–53` (the three lines: JSDoc + `consecutiveSummaryFailures: number;` field). Field name must vanish from the type.
2. Delete `src/services/worker/SessionManager.ts:336–346` (the 11-line circuit-breaker check in `queueSummarize`). The method body goes straight from the auto-initialize check (`:331–334`) to the `// CRITICAL: Persist to database FIRST` comment (`:348`). **This deletion was omitted from the original Phase 3 draft at `06-implementation-plan.md:155–204` — V7e is the new citation.**
3. Delete the initialiser `consecutiveSummaryFailures: 0,` at `SessionManager.ts:232` (inside `initializeSession`).
4. Delete the `MAX_CONSECUTIVE_SUMMARY_FAILURES` import in `SessionManager.ts` (if present). Use `grep -n "MAX_CONSECUTIVE_SUMMARY_FAILURES" src/services/worker/SessionManager.ts` first; remove the line.
5. No schema changes. No new `RestartGuard` API (see Dependencies above — option (1)).
**(b) Docs** — §3.7 deletion bullet "consecutiveSummaryFailures counter + circuit-breaker logic (RestartGuard covers this already)" (line 314). Part 1 #22 (line 40). Part 2 D5 (line 77). V7d, V7e, V7f.
**(c) Verification**
- `grep -rn "consecutiveSummaryFailures" src/` → 0 hits.
- `grep -rn "MAX_CONSECUTIVE_SUMMARY_FAILURES" src/` → 0 hits (constant, its JSDoc, all imports gone).
- TypeScript compile succeeds (removing a field and all references is mechanical; no union fallout expected).
- Behavioural test: call `sessionManager.queueSummarize(sessionDbId)` five times in rapid succession with intentionally failing agent output; assert every call enqueues to `pending_messages` (no silent drop) and each failed attempt marks that message `'failed'`. The old circuit breaker would have swallowed calls 4–5; the new contract doesn't.
- Behavioural test: existing `RestartGuard` still trips after the configured restart count (`MAX_WINDOWED_RESTARTS = 10`, `RESTART_WINDOW_MS = 60_000`) — prove that repeated parse failures + subsequent subprocess restarts still converge to guard-tripped within the window. Covered by `07-session-lifecycle-management` tests; no duplication here.
**(d) Anti-pattern guards**
- **Guard A**: No new `RestartGuard.recordFailure()` invented. The class stays at 70 lines, public API unchanged. Dependency coupling to `07-session-lifecycle-management` is documentation-only.
- **Guard C**: Removing the circuit breaker means failures flow to queue-level `'failed'` state — a single, visible, DB-backed failure signal. No silent swallow.
---
### Phase 4 — Verification sweep
**(a) What to implement** — Grep audit + targeted regression tests. No new code.
**(b) Docs** — §3.7 full deletion list (lines 310–315), Phase 3 verification block in `06-implementation-plan.md:189–195`.
**(c) Verification — must all return 0 matches**
- `grep -rn "coerceObservationToSummary" src/` → 0.
- `grep -rn "consecutiveSummaryFailures" src/` → 0.
- `grep -rn "MAX_CONSECUTIVE_SUMMARY_FAILURES" src/` → 0.
- `grep -rn "parseObservations\|parseSummary" src/ | grep -v "src/sdk/parser.ts"` → 0 (the only survivors are private helpers inside `parser.ts` itself; if you named them without the `parse` prefix this grep is also 0).
- `grep -rn "coerceFromObservation" src/` → 0.
**(c-cont) Regression tests — must all pass**
- Parser fuzz: feed 1 000 synthetic agent outputs mixing valid/invalid XML + present/absent `<summary>`; assert `valid:false` paths never write to `observations` or `session_summaries`. Must be 0 coerced summary rows.
- Atomic-TX sanity: inject a DB error on `INSERT INTO session_summaries`; assert `storeObservations` rolls back so `observations` for that batch also revert. (Pre-existing invariant; we didn't touch it, but prove it.)
- Idempotency of failure: double-delivery of the same malformed response (e.g., via worker crash + retry) results in the same `pending_messages` row in `'failed'` status; second attempt does not create a duplicate observation. Relies on upstream `02-sqlite-persistence` `UNIQUE(session_id, tool_use_id)` — cross-check with that plan.
- End-to-end: Stop-hook summarize path exercises `parseAgentXml({requireSummary:true})`. With a mocked agent returning garbage, assert the hook receives the 110 s timeout path (no silent summary write), the pending message is `'failed'`, and SessionManager does NOT short-circuit subsequent summarize enqueues (circuit breaker is gone).
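The double-delivery test above leans entirely on the `UNIQUE(session_id, tool_use_id)` + `ON CONFLICT DO NOTHING` invariant from Plan 02. A toy in-memory model of that invariant (real enforcement is SQLite's; this only illustrates why redelivery cannot duplicate a row):

```typescript
// Toy stand-in for the pending_messages table keyed on the UNIQUE columns.
type PendingRow = { sessionId: number; toolUseId: string; status: string };

function insertOrIgnore(table: Map<string, PendingRow>, row: PendingRow): boolean {
  const key = `${row.sessionId}:${row.toolUseId}`;
  if (table.has(key)) return false; // conflict → DO NOTHING, first row wins
  table.set(key, row);
  return true; // row inserted
}
```

Redelivering the same `(session_id, tool_use_id)` pair is a no-op, so the row's existing `'failed'` status survives the retry.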
**(d) Anti-pattern guards** — All four grep checks enforce Guards A/C/D structurally.
---
## Blast radius
**Files modified**:
- `src/sdk/parser.ts` — full rewrite of public surface; private helpers preserved.
- `src/sdk/prompts.ts` — two-edit surgical change (skip-summary clause, constant delete).
- `src/services/worker/agents/ResponseProcessor.ts` — replace lines 15–16 imports, 69–108 parse block, delete 176–200 circuit breaker.
- `src/services/worker-types.ts` — delete 3 lines.
- `src/services/worker/SessionManager.ts` — delete 11 lines (queueSummarize guard) + 1 line initialiser + maybe 1 import.
**Files not touched**: `src/services/sqlite/observations/store.ts` (atomic TX lives here and is preserved). `src/services/worker/RestartGuard.ts` (API unchanged — see Dependencies option 1). `src/services/worker/agents/SessionCleanupHelper.ts`. `ObservationBroadcaster.ts`. Any Chroma sync module.
**Schema changes**: none.
**Estimated lines deleted**:
- `coerceObservationToSummary` body + JSDoc: ~43 lines
- `coerceFromObservation` branches in `parseSummary`: ~16 lines
- `parseSummary` / `parseObservations` wrapper deduplication: ~15 lines (after collapse into `parseAgentXml`)
- Non-XML early-fail block in `ResponseProcessor.ts:83–108`: ~26 lines (replaced by ~12 lines → net 14)
- Circuit breaker in `ResponseProcessor.ts:176–200`: ~25 lines
- `consecutiveSummaryFailures` field + initialiser + SessionManager guard: ~15 lines
- `MAX_CONSECUTIVE_SUMMARY_FAILURES` constant + JSDoc + imports: ~8 lines
**Net**: ~135 lines deleted, ~35 lines added → **~100 LoC net reduction**.
## Confidence + gaps
**High confidence**:
- Parser rewrite is mechanical (extract three private fns, compose them, add the discriminated-union return).
- `'failed'` status string + `markFailed` API are verified.
- Circuit-breaker + field removals are pure deletion once call sites are enumerated (V7e catches the missed site).
**Gaps**:
1. **RestartGuard contract claim in D5 is overstated.** D5 says "RestartGuard already exists for repeated failures — delete the separate counter". RestartGuard today only handles **process-restart** loops, not per-message parse failures. This plan adopts the narrower interpretation (parse failure → `markFailed`; existing RestartGuard handles the subprocess-restart side effects unchanged). If the `07-session-lifecycle-management` plan decides to add `RestartGuard.recordFailure()`, callers here can start using it in a follow-up — no churn to this plan. **Flag for `07-session-lifecycle-management` author**: confirm the RestartGuard surface they want.
2. **Prompt updates assumed in-scope.** The audit implies the agent contract "already states `<summary>` or `<skip_summary/>`". Verified: prompts enforce `<summary>` strictly but never mention `<skip_summary/>`. Phase 1b adds the missing clause. If the team prefers to keep `<skip_summary/>` as a *recognised-but-undocumented* escape hatch, Phase 1b can be dropped — but then the parser should be stricter too (reason `missing_summary` when only skip-summary is emitted without prompt permission). Flag for product owner.
@@ -0,0 +1,314 @@
# Plan 04 — vector-search-sync
**Design authority**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` **section 3.4** (lines 197–229). Bullshit ledger items **#24, #25, #26** (lines 42–44 of `05-clean-flowcharts.md` Part 1). Implementation-plan anchor: `06-implementation-plan.md` **Phase 10** (lines 452–486) and Phase 0 verified findings **V15, V16, V17** (lines 42–44).
**Dependency — upstream (blocker)**: Plan `02-sqlite-persistence` **Phase 2** (`07-plans/02-sqlite-persistence.md:154–190`) adds `observations.chroma_synced INTEGER DEFAULT 0`, `session_summaries.chroma_synced INTEGER DEFAULT 0`, and partial indexes `idx_observations_chroma_synced` / `idx_summaries_chroma_synced`. This plan ASSUMES that column and indexes exist. Do not start Phase 1 here until Plan 02 Phase 2 is merged and migrated on dev.
**Dependency — downstream (consumer)**: Plan `06-hybrid-search-orchestration` consumes this plan's write-path contract "Chroma down at write time → row committed to SQLite with `chroma_synced=0`, logger.warn, no throw", and the read-path contract "search with Chroma disabled returns 503 `chroma_unavailable`, no silent drop" (see `05-clean-flowcharts.md` section 3.6, lines 270–272, bullshit item #32 line 50). Keep both contracts stable.
---
## Sources consulted
- `PATHFINDER-2026-04-21/05-clean-flowcharts.md:197–229` — section 3.4 clean flowchart + deletion ledger.
- `PATHFINDER-2026-04-21/05-clean-flowcharts.md:42–44` — bullshit items #24 #25 #26.
- `PATHFINDER-2026-04-21/05-clean-flowcharts.md:547–548` — Part 5 deletion totals for Chroma (160 + 160 lines; +60 +40 added).
- `PATHFINDER-2026-04-21/06-implementation-plan.md:42–44` — verified findings V15, V16, V17.
- `PATHFINDER-2026-04-21/06-implementation-plan.md:452–486` — Phase 10 outcome, tasks, verification.
- `PATHFINDER-2026-04-21/01-flowcharts/vector-search-sync.md:1–102` — before-state flowchart.
- `PATHFINDER-2026-04-21/07-plans/02-sqlite-persistence.md:154–190` — chroma_synced migration (Phase 2).
- `src/services/sync/ChromaSync.ts:125–187` — `formatObservationDocs` (granular, multi-doc).
- `src/services/sync/ChromaSync.ts:193–256` — `formatSummaryDocs` (granular, multi-doc).
- `src/services/sync/ChromaSync.ts:262–333` — `addDocuments` + delete-then-add conflict handler.
- `src/services/sync/ChromaSync.ts:339–420` — `syncObservation` / `syncSummary`.
- `src/services/sync/ChromaSync.ts:479–545` — `getExistingChromaIds` metadata scan.
- `src/services/sync/ChromaSync.ts:554–592` — `ensureBackfilled` + `runBackfillPipeline`.
- `src/services/sync/ChromaSync.ts:864–890` — static `backfillAllProjects`.
- `src/services/sync/ChromaSync.ts:903–956` — `updateMergedIntoProject` (kept; uses `chroma_update_documents`).
- `src/services/worker/agents/ResponseProcessor.ts:286–308` — observation call site (fire-and-forget).
- `src/services/worker/agents/ResponseProcessor.ts:380–405` — summary call site (fire-and-forget).
- `src/services/worker-service.ts:470` — boot-time `ChromaSync.backfillAllProjects()` fire-and-forget.
## Concrete findings
- **CRITICAL — no `chroma_upsert_documents` tool exists in the codebase.** Grep of `ChromaSync.ts` for `upsert` returns zero hits. Available MCP tools used today: `chroma_add_documents` (line 284), `chroma_delete_documents` (line 297), `chroma_update_documents` (lines 899, 942, used only for metadata patching in `updateMergedIntoProject`), `chroma_get_documents` (lines 499, 918), `chroma_query_documents`. `chroma_update_documents` *silently ignores missing IDs* (confirmed by the comment at `ChromaSync.ts:293–294`). Therefore a single-call upsert is not available via the current MCP surface.
- **Fallback strategy (documented)**: Replace the write path with "try `chroma_add_documents` first; on `"already exists"` error, call `chroma_delete_documents` then `chroma_add_documents` for that single ID (not the whole batch)." Because the new ID scheme is stable (`obs:<rowid>`, `sum:<rowid>`), conflicts can only occur on legitimate resync — never on organic dedup as before. Keep the branch but collapse it into one helper. Flag: if chroma-mcp ever exposes `chroma_upsert_documents`, replace the add-or-delete+add branch with a single call. Track as a TODO in the code.
- **Write-path is already fire-and-forget** at `ResponseProcessor.ts:286–308` and `:380–405` (`.then().catch()` with `logger.error`, no await). Do not make it blocking. The `chroma_synced=1` UPDATE must run inside the `.then()` arm; the `logger.warn` + leave-flag-0 must run inside the `.catch()` arm.
- **Granularity today**: an observation with narrative + 3 facts = **4** Chroma docs (`narrative` + `text` + `fact_0..fact_N`). A summary with 6 fields populated = **6** docs. Target: 1 doc per row (2 collections, one per doc_type).
- **`getExistingChromaIds` scans *all* metadata for a project** via paged `chroma_get_documents`. On large corpora this is expensive and happens on every worker boot. Replace with `WHERE chroma_synced=0 LIMIT 1000` scan of SQLite.
- **`updateMergedIntoProject` (lines 903–956)** uses `chroma_update_documents` for metadata patching during worktree adoption. That code path is **unrelated** to this plan and must not be touched.
- **Boot-time backfill** is fire-and-forget at `worker-service.ts:470` via static `ChromaSync.backfillAllProjects()`. Swap with instance method `startupBackfillUnsynced()` but keep fire-and-forget.
## Copy-ready snippet locations
| What to copy / cut | From | To |
|---|---|---|
| Replace multi-doc formatter body | `ChromaSync.ts:125–187` (`formatObservationDocs`) | One `formatObservationAsDoc` returning single doc; id `obs:${id}`, text `title + "\n\n" + narrative + "\n\n" + facts.join("\n")`, metadata block kept from lines 134–157. |
| Replace multi-doc formatter body | `ChromaSync.ts:193–256` (`formatSummaryDocs`) | One `formatSummaryAsDoc` returning single doc; id `sum:${id}`, text = all six fields joined with `"\n\n"`, metadata from lines 196–204. |
| Rewrite write path | `ChromaSync.ts:262–333` (`addDocuments` body) | `upsertDoc(doc)` helper: try `chroma_add_documents` with single id; on `"already exist"` call `chroma_delete_documents` then `chroma_add_documents` for that one id. No batch branch; callers pass a single doc. |
| Replace `syncObservation` tail | `ChromaSync.ts:369–377` (`formatObservationDocs` + `addDocuments`) | `const doc = this.formatObservationAsDoc(stored); await this.upsertDoc(doc); await markObservationSynced(observationId);` |
| Replace `syncSummary` tail | `ChromaSync.ts:411–419` (`formatSummaryDocs` + `addDocuments`) | `const doc = this.formatSummaryAsDoc(stored); await this.upsertDoc(doc); await markSummarySynced(summaryId);` |
| Wrap call sites with flag update | `ResponseProcessor.ts:286–308` and `:380–405` | Move `UPDATE observations SET chroma_synced=1 WHERE id=?` inside the helper (Phase 3), not in the call site. Leave the call site's `.catch()` as-is; it already logs. |
| Delete — static full-project scanner | `ChromaSync.ts:864–890` (`backfillAllProjects`) | Replace with instance method `startupBackfillUnsynced()` that does one SELECT LIMIT 1000 and iterates. |
| Delete — metadata scanner | `ChromaSync.ts:479–545` (`getExistingChromaIds`) | Remove entirely after Phase 6 verification passes. |
| Delete — pipeline + per-type backfill | `ChromaSync.ts:554–592` (`ensureBackfilled`, `runBackfillPipeline`) + `backfillObservations/Summaries/Prompts` blocks | Remove after `startupBackfillUnsynced()` replaces them. |
| Boot-site swap | `worker-service.ts:470` (`ChromaSync.backfillAllProjects().then(...)`) | `const sync = this.dbManager.getChromaSync(); sync?.startupBackfillUnsynced().then(...)`. Keep fire-and-forget. |
## Confidence + gaps
- **High**: Flag column exists via Plan 02 Phase 2. Write path is fire-and-forget at call sites. Stable ID scheme is trivial. Granular formatter lines (125–256) can be excised cleanly. `updateMergedIntoProject` is decoupled from this refactor.
- **Medium**: The `"already exists"` string is the only signal of ID conflict today (lines 292–295). If chroma-mcp changes the error message, the delete-then-add branch will silently fall through to the generic error path. Mitigation: match on both `"already exist"` substring and error code if chroma-mcp exposes one (grep on landing day).
- **Gap — unverified MCP assumption**: No `chroma_upsert_documents` tool. Plan commits to delete-then-add fallback. If chroma-mcp adds native upsert post-landing, collapse `upsertDoc` into one call. Flag as TODO at the helper.
- **Gap — prompts**: `backfillPrompts` (`ChromaSync.ts:701`+) and `formatUserPromptDoc` (`:426–438`) already produce one doc per prompt. Keep them; this plan only restructures obs + summary. Verify in Phase 4 that prompt backfill is folded into `startupBackfillUnsynced()` using a `user_prompts.chroma_synced` column (add to Plan 02 Phase 2 or skip — see Phase 4 note below).
---
## Phase 1 — One doc per row: rewrite formatters
### (a) What to implement
- Copy metadata block from `src/services/sync/ChromaSync.ts:134–157` into a new `formatObservationAsDoc(stored): ChromaDocument` that returns exactly one document.
- Copy metadata block from `src/services/sync/ChromaSync.ts:196–204` into a new `formatSummaryAsDoc(stored): ChromaDocument` that returns exactly one document.
- Replace `private formatObservationDocs` (lines 125–187) and `private formatSummaryDocs` (lines 193–256) with these single-doc versions. Delete the `field_type`, per-fact, per-field, and `obs_${id}_narrative` / `obs_${id}_text` / `summary_${id}_request` ID variants.
Observation doc shape:
```ts
{
id: `obs:${stored.id}`,
document: [stored.title, stored.narrative, facts.join("\n")]
.filter(Boolean)
.join("\n\n"),
metadata: /* existing baseMetadata block */
}
```
Summary doc shape: id `sum:${stored.id}`, document = `[request, investigated, learned, completed, next_steps, notes].filter(Boolean).join("\n\n")`.
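Both doc shapes share the same text-assembly rule: drop blank fields, join the rest with blank lines, and refuse an all-blank document (an empty string would otherwise become a meaningless embedding). A sketch with a hypothetical helper name:

```typescript
// Hypothetical shared helper for the single-doc text rule.
// An all-blank input throws rather than producing an empty vector.
function buildDocText(parts: Array<string | null | undefined>): string {
  const text = parts.filter(Boolean).join("\n\n");
  if (text === "") throw new Error("refusing to embed an empty document");
  return text;
}
```

Whether this lives as a real helper or stays inlined in the two formatters is an implementation choice; the throw-on-empty behavior is what matters.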
### (b) Docs
- `05-clean-flowcharts.md` section 3.4 (line 203 `Format` node) and deletion ledger line 223.
- Bullshit item **#26** (`05-clean-flowcharts.md:44`).
- Verified finding **V16** (`06-implementation-plan.md:43`).
- Live code: `src/services/sync/ChromaSync.ts:125–256`.
### (c) Verification
- `grep -n "obs_\${" src/services/sync/ChromaSync.ts` → zero.
- `grep -n "summary_\${" src/services/sync/ChromaSync.ts` → zero.
- `grep -nE "field_type|fact_\\\$\\{" src/services/sync/ChromaSync.ts` → zero.
- Unit test: given an observation with narrative + 3 facts, `formatObservationAsDoc` returns 1 doc whose `document` string contains title, narrative, and each fact, separated by `\n\n`, and `id === "obs:<rowid>"`.
### (d) Anti-pattern guards
- **A (Inventing APIs)**: do not add a new class for the single-doc shape — reuse the existing `ChromaDocument` type (already defined at top of `ChromaSync.ts`).
- **C (Silent fallbacks)**: if title is empty AND narrative is empty AND facts is empty, throw — do not produce an empty vector.
- **E (Two code paths)**: delete the multi-doc branches, do not leave them behind a feature flag.
---
## Phase 2 — Replace delete-then-add with upsert-or-fallback
### (a) What to implement
- Cut `private async addDocuments(documents[])` at `src/services/sync/ChromaSync.ts:262–333`.
- Replace with `private async upsertDoc(doc: ChromaDocument): Promise<void>` that:
1. `await this.ensureCollectionExists();`
  2. Sanitizes metadata (keep the `filter(([_, v]) => v !== null && v !== undefined && v !== '')` pattern from lines 277–281).
3. Calls `chroma_add_documents` with a single-id payload.
4. On thrown error whose message matches `/already exist/i`: call `chroma_delete_documents` with `[doc.id]`, then retry `chroma_add_documents`. Log at `info` level.
5. On any other error: rethrow. The caller (the `.then()`/`.catch()` in Phase 3 or the `ResponseProcessor` fire-and-forget path) logs and sets the flag.
- TODO comment at top of `upsertDoc`: `// TODO: Replace delete+add fallback with chroma_upsert_documents when MCP exposes it.`
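Steps 1–5 sketched end to end. This is a hedged model, not the shipped helper: `callTool` is injected here purely for illustration (the real code calls the Chroma MCP client directly inside `ChromaSync`), `ChromaDocument` is a local stand-in for the existing type, and step 1's `ensureCollectionExists` is elided:

```typescript
// Local stand-in for the existing ChromaDocument type.
type ChromaDocument = { id: string; document: string; metadata: Record<string, unknown> };

async function upsertDoc(
  doc: ChromaDocument,
  callTool: (tool: string, args: object) => Promise<void>,
): Promise<void> {
  // Step 2: sanitize metadata (same filter pattern as today).
  const metadata = Object.fromEntries(
    Object.entries(doc.metadata).filter(([, v]) => v !== null && v !== undefined && v !== ""),
  );
  const payload = { ids: [doc.id], documents: [doc.document], metadatas: [metadata] };
  try {
    await callTool("chroma_add_documents", payload); // step 3: single-id add
  } catch (err) {
    if (!/already exist/i.test(String(err))) throw err; // step 5: everything else propagates
    // Step 4: ID conflict → delete-then-add for this one id only, logged at info.
    await callTool("chroma_delete_documents", { ids: [doc.id] });
    await callTool("chroma_add_documents", payload);
  }
}
```

The single `!/already exist/i` guard is the whole branch structure: one recognized conflict shape gets the fallback, all other failures reach the caller's `.catch()`.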
### (b) Docs
- `05-clean-flowcharts.md` section 3.4 line 204 (`Upsert` node) and deletion ledger line 222.
- Bullshit item **#25** (`05-clean-flowcharts.md:43`).
- Verified finding **V17** (`06-implementation-plan.md:44`).
- Live code to cut: `src/services/sync/ChromaSync.ts:262–333`.
### (c) Verification
- `grep -nE "chroma_upsert_documents|upsertDoc" src/services/sync/ChromaSync.ts` → `upsertDoc` appears; `chroma_upsert_documents` absent unless chroma-mcp has shipped it.
- Behavioral test: call `upsertDoc({id:"obs:9999", ...})` twice in a row against a live Chroma. Expect: no error, `chroma_count_documents WHERE metadata.sqlite_id=9999` returns 1.
- Behavioral test: force the add to fail (e.g., make the collection read-only or point at a nonexistent collection), call `upsertDoc`. Expect: error propagates, caller's `.catch()` fires.
### (d) Anti-pattern guards
- **A**: do not add a `ChromaUpsertStrategy` class. One helper function.
- **C**: if delete succeeds but re-add fails, rethrow — do not swallow the error and return silently. The caller's `.catch()` path will leave `chroma_synced=0`, and the backfill will retry.
- **D (Facades that pass through)**: do not wrap `chromaMcp.callTool('chroma_add_documents', ...)` in a `ChromaClient.add()` method — call `callTool` directly inside `upsertDoc`.
---
## Phase 3 — Write path sets `chroma_synced=1` on success
### (a) What to implement
- In `SessionStore` (or nearest matching store file — grep for `prepareStatement('UPDATE observations SET ')` to confirm location before editing), add two 1-line helpers: `markObservationSynced(id: number)``UPDATE observations SET chroma_synced=1 WHERE id=?`; and `markSummarySynced(id: number)` likewise against `session_summaries`. Use `db.prepare().run(id)` pattern already used by the store.
- In `ChromaSync.syncObservation` (`ChromaSync.ts:339–378`), replace the existing tail (`formatObservationDocs` + `addDocuments`) with:
```ts
const doc = this.formatObservationAsDoc(stored);
await this.upsertDoc(doc);
markObservationSynced(observationId);
```
  Wrap the above in a `try`: on throw, `logger.warn('CHROMA_SYNC', 'obs sync failed, flag stays 0', {id: observationId}, err)` and **rethrow** so the `ResponseProcessor.ts:286–308` `.catch()` still fires (it logs at error level — do not lose that log).
- Same pattern for `syncSummary` (`ChromaSync.ts:384–420`) with `markSummarySynced`.
- Leave the `ResponseProcessor` call site alone — the existing `.then()/.catch()` is correct.
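The two store helpers are one prepared statement each, following the `db.prepare().run(id)` pattern the store already uses. A sketch with a stubbed handle (`Db` is a stand-in; the real statements live in whichever store file the grep above identifies):

```typescript
// Minimal stand-in for the store's better-sqlite3-style handle.
interface Db { prepare(sql: string): { run(id: number): void } }

function markObservationSynced(db: Db, id: number): void {
  db.prepare("UPDATE observations SET chroma_synced=1 WHERE id=?").run(id);
}

function markSummarySynced(db: Db, id: number): void {
  db.prepare("UPDATE session_summaries SET chroma_synced=1 WHERE id=?").run(id);
}
```

One UPDATE per table, no batching, no state machine — matching guard (d)'s "boolean column is enough" stance.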
### (b) Docs
- `05-clean-flowcharts.md` section 3.4 lines 205–209 (OK branch → `Mark`; fail branch → `LogFail`).
- Bullshit item **#24** (`05-clean-flowcharts.md:42`).
- Phase 10 task 3 (`06-implementation-plan.md:467`).
- Anti-pattern **C** (`06-implementation-plan.md:63`): "On Chroma failure at write time, do not throw — leave flag 0".
- Live call sites: `src/services/worker/agents/ResponseProcessor.ts:286–308` (obs) and `:380–405` (summary).
### (c) Verification
- Functional test: Chroma enabled, worker running, send one observation → after 1 s, `SELECT chroma_synced FROM observations WHERE id=<new>` returns `1`.
- Functional test: Stop Chroma subprocess (kill chroma-mcp), send one observation → SQLite row commits, `chroma_synced=0`, `logger.warn` line emitted. No 500 to the hook.
- Start Chroma again, restart worker. Phase 4's `startupBackfillUnsynced()` upserts the row; flag flips to `1`.
- `grep -rn "chroma_synced=1\\|chroma_synced = 1" src/services/` → finds only the two new `mark*Synced` statements.
### (d) Anti-pattern guards
- **C (Silent fallbacks)**: the `logger.warn` call must include `obsId`, `project`, and the error message — never a bare "sync failed".
- **E**: do not set the flag inside the `.then()` arm at the call site. The store update lives in `ChromaSync`, one place.
- **A**: no `SyncStateMachine`, no `ChromaSyncResult` enum. Boolean column + throw-on-fail is enough.
---
## Phase 4 — Replace backfill trio with `startupBackfillUnsynced()`
### (a) What to implement
- Add instance method on `ChromaSync`:
```ts
async startupBackfillUnsynced(limit = 1000): Promise<void> {
const db = new SessionStore();
try {
const obsRows = db.db.prepare(
'SELECT id FROM observations WHERE chroma_synced = 0 LIMIT ?'
).all(limit) as { id: number }[];
for (const { id } of obsRows) { /* load, formatObservationAsDoc, upsertDoc, markObservationSynced — swallow per-row errors */ }
const sumRows = db.db.prepare(
'SELECT id FROM session_summaries WHERE chroma_synced = 0 LIMIT ?'
).all(limit) as { id: number }[];
for (const { id } of sumRows) { /* same pattern */ }
} finally {
db.close();
}
}
```
- Per-row `try/catch`: a single failed upsert must not abort the whole backfill. Logger.warn per failure; leave flag 0.
- In `src/services/worker-service.ts:470`, replace `ChromaSync.backfillAllProjects().then(...)` with `this.dbManager.getChromaSync()?.startupBackfillUnsynced().then(...).catch(...)`. Keep fire-and-forget.
- Delete `static async backfillAllProjects()` (`ChromaSync.ts:864–890`), `ensureBackfilled` (`:554–573`), `runBackfillPipeline` (`:575–592`), `backfillObservations`, `backfillSummaries`, `backfillPrompts`.
- **Prompts note**: if `user_prompts.chroma_synced` column is not added by Plan 02 Phase 2, then either (a) extend Plan 02 Phase 2 to include it, or (b) keep `formatUserPromptDoc`-based one-shot backfill for prompts only and mark as a follow-up. Do not block Phase 4 on this — flag it and continue.
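The per-row error policy above, as a generic sketch (hypothetical `backfillRows` name — the real loop is inlined in `startupBackfillUnsynced`):

```typescript
// Per-row isolation: one failed upsert warns and leaves chroma_synced=0;
// the loop continues, so a single bad row cannot abort the whole backfill.
async function backfillRows<T extends { id: number }>(
  rows: T[],
  syncOne: (row: T) => Promise<void>,
  warn: (msg: string, id: number) => void,
): Promise<number> {
  let synced = 0;
  for (const row of rows) {
    try {
      await syncOne(row);
      synced++;
    } catch {
      warn("backfill row failed, flag stays 0", row.id); // row retried on next boot
    }
  }
  return synced;
}
```

The returned count feeds the one `"startup backfill complete"` info line per boot.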
### (b) Docs
- `05-clean-flowcharts.md` section 3.4 lines 211–212 (`BootOnce` → `CheckUnsync` → `LoopBackfill`).
- Deletion ledger lines 220, 224.
- Phase 10 task 4 (`06-implementation-plan.md:468`).
- Live code to cut: `src/services/sync/ChromaSync.ts:554–592`, `:864–890`, and `backfillObservations/Summaries/Prompts` helper bodies (currently inside the 600–860 range).
- Boot call site: `src/services/worker-service.ts:470`.
### (c) Verification
- `grep -rn "backfillAllProjects\|ensureBackfilled\|runBackfillPipeline" src/` → zero.
- Functional test: Insert 5 observations while Chroma is down. Restart worker. Within 10 s, all 5 rows have `chroma_synced=1` and Chroma collection shows 5 docs with ids `obs:<id>`.
- Functional test: Set 1001 rows to `chroma_synced=0`. Restart worker. Exactly 1000 rows flip to `1` after boot backfill; the 1001st stays `0` until next boot (LIMIT 1000 is intentional — document this).
- Log check: `CHROMA_SYNC` logger emits one `"startup backfill complete"` info line per boot with counts.
### (d) Anti-pattern guards
- **A**: no `BackfillScheduler`, no `cron`, no second setInterval. One boot call, fire-and-forget.
- **B (Polling where events exist)**: the existing 5-s rescan or per-startup metadata scan are the exact pollers being removed — do not add a retry timer here.
- **E**: `startupBackfillUnsynced` must use `upsertDoc` and `formatObservationAsDoc` from Phases 1–2. Do not write a parallel fast path.
---
## Phase 5 — Delete `getExistingChromaIds` metadata scan
### (a) What to implement
- Delete `private async getExistingChromaIds(projectOverride?: string)` at `src/services/sync/ChromaSync.ts:479–545` and every call site (the only call today is from the now-deleted `ensureBackfilled`).
- **Precondition**: Phase 4 must be landed and its verification passing. This phase is the cleanup sweep.
- **Do NOT delete** in the same PR as Phase 4 unless the targeted `WHERE chroma_synced=0` backfill has been proven in staging to cover missing-doc recovery. Keep `getExistingChromaIds` dead-code-fenced with an `@deprecated` JSDoc for one release if there is any concern.
### (b) Docs
- `05-clean-flowcharts.md:221` ("`getExistingChromaIds` metadata index scan (~80 lines)").
- Verified finding **V17** (`06-implementation-plan.md:44`).
- Live code to cut: `src/services/sync/ChromaSync.ts:479–545`.
### (c) Verification
- `grep -rn "getExistingChromaIds" src/` → zero.
- No change in functional behavior vs. end of Phase 4 — this is a pure deletion.
- Re-run Phase 4 functional tests; all pass.
### (d) Anti-pattern guards
- **D (Facades that pass through)**: confirm no caller besides `ensureBackfilled` existed (grep both `ChromaSync.ts` and test files).
- **A**: do not replace with a `getSyncedIds` helper. The SQLite flag is source of truth now.
---
## Phase 6 — Verification gates
### (a) What to implement
Pure test/verification phase. No source edits.
1. **Chroma doc-count = one per obs row**:
- Fresh DB + Chroma. Insert 20 observations. Wait for sync.
- `SELECT COUNT(*) FROM observations WHERE chroma_synced=1` → 20.
   - `chroma_count_documents(cm__claude-mem)` → 20 (not 60–100 as before).
2. **Idempotent re-sync**:
- For existing observation id 42 (`chroma_synced=1`): call `syncObservation(42, ...)` again (simulate worktree adoption touch-up).
- Expect: no error, Chroma still has exactly one doc with id `obs:42`, SQLite flag still `1`.
3. **Chroma-down write path**:
- Stop chroma-mcp subprocess. Insert 5 observations via hook.
- SQLite rows commit, `chroma_synced=0` for all 5, `logger.warn` emitted 5 times.
- Restart Chroma, restart worker. Within 10 s: 5 rows flip to `1`, Chroma has 5 docs with ids `obs:<id>`.
4. **Downstream contract smoke** (for Plan 06):
- With Chroma disabled (`CLAUDE_MEM_CHROMA_ENABLED=false`), new observations commit with `chroma_synced=0` and no warn spam.
- Search path (Plan 06's 503 contract): not tested here — plan 06 owns that test.
5. **Grep gates** (all must return zero):
   - `grep -rnE "formatObservationDocs|formatSummaryDocs" src/`
   - `grep -rnE "backfillAllProjects|ensureBackfilled|runBackfillPipeline|getExistingChromaIds" src/`
   - `grep -rnE "obs_\\\$\\{|summary_\\\$\\{|field_type" src/services/sync/`
   - `grep -rn "addDocuments" src/services/sync/` → zero (the write path survives only under the `upsertDoc` name).
### (b) Docs
- `06-implementation-plan.md:473–476` (Phase 10 verification list).
- `05-clean-flowcharts.md:228` (effect: ~70% index shrink).
### (c) Verification
- All grep gates green.
- All four functional tests pass in CI.
- Chroma on-disk size (`du -sh ~/.claude-mem/chroma`) drops vs. pre-landing baseline (expected ~70% reduction after a full reindex; partial if tests only rebuild a fraction).
### (d) Anti-pattern guards
- **C**: the idempotent re-sync test catches silent divergence (doc count != row count).
- **E**: the grep gates catch any stray code path left behind.
---
## Blast radius
- **Index regenerates under new doc shape**: users on an upgrade path see the old index until `startupBackfillUnsynced()` catches up. On a large corpus (10k+ observations) with a 1000-row limit per boot, full reindex takes ~10 worker restarts or a one-time `claude-mem reindex` CLI (out of scope for this plan — file follow-up).
- **Breaking ID change** (`obs_42_narrative` → `obs:42`): any caller that had hard-coded the old ID scheme (there are none in this repo — grep) would break. Third-party search tools reading Chroma directly would also break; document in changelog.
- **Metadata field removal**: `field_type` and `fact_index` disappear from Chroma metadata. If the viewer UI or search filters depend on these, Plan 06 must absorb the change. Grep `src/` for `field_type` and `fact_index` before merging.
## Estimated deletion
Matches the Part-5 ledger entry "Chroma silent-fallback + 90-day filter + granular docs + delete-then-add" (`-220 +60`) plus "Chroma backfill full-project scan" (`-200 +40`). Net for this plan alone: **~-320 lines** (not counting test churn).
@@ -0,0 +1,308 @@
# Plan 05 — context-injection-engine (U2 unified renderObservations)
**Date**: 2026-04-22
**Flowchart**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` section **3.5** (context-injection-engine clean)
**Before-state**: `PATHFINDER-2026-04-21/01-flowcharts/context-injection-engine.md`
**Design authority**: `05-clean-flowcharts.md` Part 1 item #34, Part 2 Decision **D4**, Part 3 section **3.5**.
---
## Dependencies
**Upstream**: none direct. This plan *introduces* **U2 `renderObservations(obs, strategy)`** — the single traversal that all four existing formatters become strategy configs for.
**Downstream**:
- `06-hybrid-search-orchestration` — `SearchResultStrategy` is a `renderObservations` strategy (05 section 3.6 arrow `Fmt -->|markdown| M["renderObservations(results, SearchResultStrategy)"]`).
- `10-knowledge-corpus-builder` — `CorpusDetailStrategy` is a `renderObservations` strategy (05 section 3.11 arrow `D --> E["renderObservations(obs, CorpusDetailStrategy)"]`).
- `09-lifecycle-hooks` — consumes the single `GET /api/session/start` endpoint introduced in 05 section 3.1; that endpoint returns `{sessionDbId, contextMarkdown, semanticMarkdown}` in one payload (Phase 6 below).
**Note on `06-implementation-plan.md`**: Phase 8 of the implementation plan covers the same renderer unification and owns the verification-findings list (V1–V20). **There is no V-number for `renderObservations` itself** — the audit's item #34 is the sole design reference. Cited here explicitly so downstream agents don't look for a V-number that doesn't exist.
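The U2 surface named above can be sketched as one traversal plus a strategy interface. Hedged sketch: the `RenderStrategy` fields and the `agentLike` config are illustrative only — the real strategy shape is defined by this plan, not copied from code:

```typescript
// Illustrative strategy shape: a bag of per-section render functions.
interface RenderStrategy {
  header(count: number): string;
  row(obs: { type: string; title: string }): string;
  footer(): string;
}

// One traversal; every formatter (Agent/Human/SearchResult/CorpusDetail)
// becomes a strategy config instead of its own render loop.
function renderObservations(
  obs: { type: string; title: string }[],
  strategy: RenderStrategy,
): string {
  return [strategy.header(obs.length), ...obs.map((o) => strategy.row(o)), strategy.footer()].join("\n");
}

// e.g. a markdown-ish config in the spirit of AgentFormatter:
const agentLike: RenderStrategy = {
  header: (n) => `# Observations (${n})`,
  row: (o) => `- [${o.type}] ${o.title}`,
  footer: () => "---",
};
```

The real strategies will carry far more sections (legend, economics, day headers, etc.), but the shape is the same: data traversal in one place, formatting decisions in the config.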
---
## Sources consulted
1. `PATHFINDER-2026-04-21/05-clean-flowcharts.md` — full file (607 lines). Section 3.5 at lines 232-258; Part 1 item #34 at line 52; Decision D4 at line 75; deletion ledger row for this refactor at line 543 (-600 lines formatters → +320 renderer + 4 strategies = **-280 net**).
2. `PATHFINDER-2026-04-21/06-implementation-plan.md` — Phase 8 at lines 368-408. No V-number for renderObservations.
3. `PATHFINDER-2026-04-21/01-flowcharts/context-injection-engine.md` — before diagram; documents the existing two-path surface (`/api/context/inject` GET for SQLite context + `/api/context/semantic` POST for Chroma injection) and the HeaderRenderer/TimelineRenderer/SummaryRenderer/FooterRenderer fan-out.
4. Live codebase — file:line table below.
5. Existing 07-plans/ — directory empty at planning time; this is the first plan file.
### Live file:line inventory (the four formatters + orchestration)
| Concern | File | Lines | Key symbols |
|---|---|---|---|
| **AgentFormatter** (LLM markdown) | `src/services/context/formatters/AgentFormatter.ts` | 227 | `renderAgentHeader` :36, `renderAgentLegend` :46, `renderAgentContextEconomics` :75, `renderAgentDayHeader` :103, `renderAgentTableRow` :127, `renderAgentFullObservation` :142, `renderAgentSummaryItem` :177, `renderAgentSummaryField` :189, `renderAgentPreviouslySection` :197, `renderAgentFooter` :214, `renderAgentEmptyState` :225, private `compactTime` :120, private `formatHeaderDateTime` :21 |
| **HumanFormatter** (ANSI terminal) | `src/services/context/formatters/HumanFormatter.ts` | 238 | `renderHumanHeader` :35, `renderHumanLegend` :47, `renderHumanColumnKey` :60, `renderHumanContextIndex` :72, `renderHumanContextEconomics` :87, `renderHumanDayHeader` :116, `renderHumanFileHeader` :126, `renderHumanTableRow` :135, `renderHumanFullObservation` :155, `renderHumanSummaryItem` :186, `renderHumanSummaryField` :200, `renderHumanPreviouslySection` :208, `renderHumanFooter` :225, `renderHumanEmptyState` :236, private `formatHeaderDateTime` :20 |
| **ResultFormatter** (search markdown, class) | `src/services/worker/search/ResultFormatter.ts` | 301 | `class ResultFormatter` :21, `formatSearchResults` :25 (the top-level walker), `combineResults` :115, `formatSearchTableHeader` :141, `formatTableHeader` :149, `formatObservationSearchRow` :157, `formatSessionSearchRow` :178, `formatPromptSearchRow` :199, `formatObservationIndex` :221, `formatSessionIndex` :237, `formatPromptIndex` :250, `estimateReadTokens` :264, `formatChromaFailureMessage` :275, `formatSearchTips` :288 |
| **CorpusRenderer** (corpus detail, class) | `src/services/worker/knowledge/CorpusRenderer.ts` | 133 | `class CorpusRenderer` :10, `renderCorpus` :14 (the top-level walker), `renderObservation` :39 (private, the per-obs detail renderer), `estimateTokens` :90, `generateSystemPrompt` :97 |
| Orchestrator | `src/services/context/ContextBuilder.ts` | 186 | `generateContext` :130, `buildContextOutput` :80, `initializeDatabase` :49, `renderEmptyState` :73 (calls both empty-state functions) |
| Day-grouping walker (shared today) | `src/services/context/sections/TimelineRenderer.ts` | 183 | `groupTimelineByDay` :21, `renderTimeline` :168, `renderDayTimeline` :151 (forHuman branch :159), `renderDayTimelineAgent` :56, `renderDayTimelineHuman` :97, private `getDetailField` :46 |
| Section dispatch (forHuman branching) | `src/services/context/sections/HeaderRenderer.ts` | 61 | `renderHeader` :15 (branches forHuman for 5 sub-sections) |
| Section dispatch | `src/services/context/sections/SummaryRenderer.ts` | 65 | `shouldShowSummary` :15, `renderSummaryFields` :46 (branches forHuman) |
| Section dispatch | `src/services/context/sections/FooterRenderer.ts` | 42 | `renderPreviouslySection` :15 (branches forHuman), `renderFooter` :28 (branches forHuman) |
| Token economics (KEEP) | `src/services/context/TokenCalculator.ts` | 78 | `calculateTokenEconomics`, `formatObservationTokenDisplay`, `shouldShowContextEconomics` |
| Mode filtering (KEEP) | `src/services/domain/ModeManager.ts` | 266 | `ModeManager.getInstance()`, `getActiveMode`, `getTypeIcon`, `getWorkEmoji` |
| HTTP caller (today) | `src/services/worker/http/routes/SearchRoutes.ts` | — | `handleContextInject` :209 (GET, dynamically imports `context-generator.generateContext`), `handleSemanticContext` :258 (POST, inlines its own formatter at :286-293) |
**Top-level LoC of the four formatters**: 227 + 238 + 301 + 133 = **899 lines**. Section dispatch files (Header/Summary/Footer/Timeline) add another 61 + 65 + 42 + 183 = **351 lines of forHuman branching** that collapse once strategies own the shape.
### Copy-ready: the shared "walk" all four formatters share
Every formatter does some subset of the same four-step traversal. The invariants below become the body of `renderObservations`:
1. **Optional header**: project/title/date line + legend + economics. Today: `HeaderRenderer.renderHeader` (`HeaderRenderer.ts:15`) + `ResultFormatter.formatSearchResults` :53 + `CorpusRenderer.renderCorpus` :17. → Strategy flag: `header: 'context' | 'search' | 'corpus' | 'none'`.
2. **Group and iterate** — the core walk. Today: `groupTimelineByDay` (`TimelineRenderer.ts:21`) for agent/human paths; `groupByDate` (`shared/timeline-formatting.ts`) + file-bucketing at `ResultFormatter.ts:56-72` for search; flat iteration for corpus at `CorpusRenderer.ts:28-31`. → Strategy flag: `grouping: 'by-day' | 'by-day-then-file' | 'none'`.
3. **Per-observation row** — either compact line or full-detail block. Today: `renderAgentTableRow`/`renderAgentFullObservation`, `renderHumanTableRow`/`renderHumanFullObservation`, `formatObservationSearchRow`/`formatObservationIndex`, `CorpusRenderer.renderObservation`. → Strategy flag: `density: 'compact' | 'table' | 'full-detail'` + `colorize: boolean` + `columns: [...]` + `showTokens: {read, work}`.
4. **Optional tail**: summary fields + previously section + footer tips. Today: `SummaryRenderer.renderSummaryFields`, `FooterRenderer.renderPreviouslySection`, `FooterRenderer.renderFooter`, `ResultFormatter.formatSearchTips`. → Strategy flag: `tail: 'context' | 'search-tips' | 'corpus-stats' | 'none'`.
The **five constants** all four share: `ModeManager.getTypeIcon(type)` for the type emoji, `formatTime(epoch)` / `formatDate` / `formatDateTime` from `shared/timeline-formatting.ts`, `extractFirstFile` for file extraction, `parseJsonArray` for facts parsing, and the title-fallback rule `obs.title || 'Untitled'`. These move unchanged into the renderer.
### Confidence + gaps
**High confidence**:
- File inventory, LoC, and symbol-level API of the four formatters.
- That all four read the same shape (`Observation` with `id/title/narrative/facts/type/created_at_epoch/files_modified/files_read`).
- Decision D4's four-strategy ceiling: **Agent, Human, SearchResult, CorpusDetail** — no others.
**Gaps / risks**:
- **ANSI-color preservation in `HumanContextStrategy` is a regression surface**. `HumanFormatter.ts` uses `colors.bright`, `colors.cyan`, `colors.gray`, `colors.dim`, `colors.yellow`, `colors.magenta`, `colors.green`, `colors.blue` imported from `../types.js`. Any divergence — including trailing spaces around ANSI wrappers, padding in `renderHumanTableRow` at :145 (`' '.repeat(time.length)` when `showTime=false`), and the `─`×60 separator at `:39` and `:237` — is a user-visible regression. Phase 8 fixtures must assert byte equality including escape sequences.
- **ResultFormatter has two row formats** (`formatSearchTableHeader` without `Work` column + `formatTableHeader` with `Work` column). `SearchResultStrategy` must support both, gated by a `columns` array — otherwise index-rendering callers (`formatObservationIndex` used elsewhere) regress silently. Grep during Phase 4 to enumerate callers before choosing defaults.
- Semantic-injection POST handler at `SearchRoutes.ts:286-293` implements **its own mini-formatter** (`## Relevant Past Work (semantic match)` header + `### title (date)` + narrative). Anti-pattern E forbids this post-refactor. Phase 6 folds it into a `SearchResultStrategy` variant; per Decision D4's four-total rule this lands as a strategy *flag* (e.g. `variant: 'injection'`), not a fifth `SemanticInjectStrategy`.
---
## Phase contract (applies to every phase)
Every phase below carries:
- **(a) What**: "Copy from …" instructions. The four existing formatters become four strategy configs feeding ONE `renderObservations`.
- **(b) Docs**: `05-clean-flowcharts.md` section 3.5 + Decision D4 + live file:line for each of the four formatters (table above).
- **(c) Verification**: unit tests per strategy against a fixed `Observation[]` fixture; **byte-for-byte match** against the old formatter's output for identical inputs.
- **(d) Anti-pattern guards**:
- **Guard A** (audit Part 2): only four strategies — `AgentContextStrategy`, `HumanContextStrategy`, `SearchResultStrategy`, `CorpusDetailStrategy`. Any fifth strategy fails review.
- **Guard E** (audit Part 2): single renderer path. No caller may implement its own walker. Grep check (Phase 8) enforces.
---
## Phase 1 — Extract common traversal into `renderObservations(obs, strategy)`
**(a) What**:
Create a new module `src/services/rendering/renderObservations.ts` (new folder `src/services/rendering/` so no caller is forced to import across feature boundaries). Copy the *walk* from the three existing walkers:
- Day grouping: from `TimelineRenderer.groupTimelineByDay` (`src/services/context/sections/TimelineRenderer.ts:21`).
- Day-then-file grouping: from `ResultFormatter.formatSearchResults` (`src/services/worker/search/ResultFormatter.ts:56-72`).
- Flat iteration: from `CorpusRenderer.renderCorpus` (`src/services/worker/knowledge/CorpusRenderer.ts:28-31`).
Signature:
```ts
export interface RenderStrategy {
  name: 'agent-context' | 'human-context' | 'search-result' | 'corpus-detail';
  header?: (ctx: HeaderCtx) => string[];
  grouping: 'by-day' | 'by-day-then-file' | 'none';
  renderGroupHeader?: (key: string) => string[];
  renderSubgroupHeader?: (key: string) => string[]; // e.g., file within day
  renderSummaryItem?: (s: SummaryItem, time: string) => string[];
  renderRow: (obs: Observation, ctx: RowCtx) => string;
  renderFullObservation?: (obs: Observation, ctx: RowCtx) => string[];
  tail?: (ctx: TailCtx) => string[];
  emptyState?: (ctx: HeaderCtx) => string;
}

export function renderObservations(
  items: Array<Observation | SummaryItem>,
  strategy: RenderStrategy,
  ctx: RenderContext,
): string;
```
The orchestrator owns: (1) token budget enforcement (from `calculateTokenEconomics`, `TokenCalculator.ts:25`), (2) mode filtering (from `ModeManager.getActiveMode()`, `ModeManager.ts:15`), (3) full-vs-compact selection (from `getFullObservationIds` in `ObservationCompiler.ts`). Strategies **do not** re-implement any of this.
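A minimal sketch of the traversal body behind that signature, collapsed to the four-step walk from the copy-ready section above. Types are simplified stand-ins (SummaryItem rows, subgrouping for `by-day-then-file`, and `RowCtx` are elided), and the `dayKey` helper is illustrative — the real renderer reuses `formatDate` from `shared/timeline-formatting.ts`:

```typescript
// Simplified stand-ins; the live shapes carry more fields.
interface Observation { id: number; title: string; created_at_epoch: number; }

interface RenderStrategy {
  grouping: 'by-day' | 'by-day-then-file' | 'none';
  header?: () => string[];
  renderGroupHeader?: (key: string) => string[];
  renderRow: (obs: Observation) => string;
  tail?: () => string[];
  emptyState?: () => string;
}

// Illustrative day key (the real code shares formatDate with all strategies).
const dayKey = (o: Observation) => new Date(o.created_at_epoch).toISOString().slice(0, 10);

export function renderObservations(items: Observation[], strategy: RenderStrategy): string {
  if (items.length === 0) return strategy.emptyState?.() ?? '';
  const lines: string[] = [...(strategy.header?.() ?? [])]; // step 1: optional header
  if (strategy.grouping === 'none') {
    for (const obs of items) lines.push(strategy.renderRow(obs));
  } else {
    // step 2: group, preserving first-seen order (by-day-then-file adds one more level)
    const groups = new Map<string, Observation[]>();
    for (const obs of items) {
      const key = dayKey(obs);
      if (!groups.has(key)) groups.set(key, []);
      groups.get(key)!.push(obs);
    }
    for (const [key, group] of groups) {
      lines.push(...(strategy.renderGroupHeader?.(key) ?? []));
      for (const obs of group) lines.push(strategy.renderRow(obs)); // step 3: per-row
    }
  }
  lines.push(...(strategy.tail?.() ?? [])); // step 4: optional tail
  return lines.join('\n');
}
```

The point of the sketch: all format-specific bytes live behind the callbacks; the walk itself never branches on agent/human/search/corpus.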
**(b) Docs**: 05 section 3.5 lines 234-251; Decision D4 line 75. File:line for all four formatters per inventory table.
**(c) Verification**:
- Unit tests: `tests/services/rendering/renderObservations.test.ts` — three tests, one per `grouping` mode, with a synthetic `Observation[]` of 5 items across 2 days and 3 files.
- Build check: `npm run build-and-sync` passes after new module is in place (not yet wired).
**(d) Anti-pattern guards**: A — stop at four strategy names (compile-time `name` union enforces). E — module is the single renderer; callers will switch to it in Phase 6, Phase 7 deletes the old paths.
---
## Phase 2 — `AgentContextStrategy` from `AgentFormatter`
**(a) What**: Create `src/services/context/strategies/AgentContextStrategy.ts` and copy the output-shape bytes from `AgentFormatter.ts` into strategy callbacks:
- `header` ← `renderAgentHeader` (:36) + `renderAgentLegend` (:46) + `renderAgentColumnKey` (:61, no-op) + `renderAgentContextIndex` (:68, no-op) + `renderAgentContextEconomics` (:75) composed in order per `HeaderRenderer.renderHeader` :15.
- `grouping: 'by-day'`; `renderGroupHeader` ← `renderAgentDayHeader` (:103).
- `renderSummaryItem` ← `renderAgentSummaryItem` (:177).
- `renderRow` ← `renderAgentTableRow` (:127); `renderFullObservation` ← `renderAgentFullObservation` (:142).
- `tail` ← `renderAgentSummaryField` (:189) for each of the four fields + `renderAgentPreviouslySection` (:197) + `renderAgentFooter` (:214).
- `emptyState` ← `renderAgentEmptyState` (:225).
The shared `formatHeaderDateTime` (:21) and `compactTime` (:120) move into `src/services/rendering/render-helpers.ts` or stay inline in the strategy (two callers — no DRY pressure yet).
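As a shape sketch, the strategy is a plain config object. The callback bodies below are placeholders — the real ones are the `renderAgent*` function bodies copied verbatim from the `:line` offsets above — and the `Observation`/`RowCtx` stand-ins are trimmed:

```typescript
interface Observation { id: number; title: string; type: string; }
interface RowCtx { isFull: boolean; time: string; }

// Placeholder strings only; the actual phase copies the exact output bytes.
const AgentContextStrategy = {
  name: 'agent-context' as const,
  grouping: 'by-day' as const,
  header: () => ['# Recent project activity'],       // ← renderAgentHeader + legend + economics
  renderGroupHeader: (day: string) => [`## ${day}`], // ← renderAgentDayHeader
  renderRow: (obs: Observation, ctx: RowCtx) =>      // ← renderAgentTableRow
    `| ${obs.id} | ${ctx.time} | ${obs.type} | ${obs.title || 'Untitled'} |`,
  tail: () => ['_End of context._'],                 // ← summary fields + previously + footer
  emptyState: () => 'No observations yet.',          // ← renderAgentEmptyState
};
```

The `as const` on `name` is what lets Phase 1's union type reject a fifth strategy at compile time (Guard A).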
**(b) Docs**: 05 section 3.5 arrow `Strategy -->|AgentContextStrategy| AgentOut["Compact markdown for LLM"]` (line 244); inventory row for `AgentFormatter.ts` above.
**(c) Verification**: snapshot test — feed the same `Observation[]` fixture to (i) the old `buildContextOutput(..., forHuman=false)` and (ii) `renderObservations(items, AgentContextStrategy, ctx)`; assert string equality. Zero-tolerance: LLM context is consumed by models — any whitespace change shifts KV-cache and can surface as behavioral regressions.
**(d) Anti-pattern guards**: A — strategy file defines the config object only, no walker. E — no custom grouping code; reuse Phase 1's `by-day` grouping.
---
## Phase 3 — `HumanContextStrategy` from `HumanFormatter` (preserves ANSI)
**(a) What**: Create `src/services/context/strategies/HumanContextStrategy.ts`. Copy output-shape bytes from `HumanFormatter.ts`:
- `header` ← `renderHumanHeader` (:35) + `renderHumanLegend` (:47) + `renderHumanColumnKey` (:60) + `renderHumanContextIndex` (:72) + `renderHumanContextEconomics` (:87).
- `grouping: 'by-day-then-file'`; `renderGroupHeader` ← `renderHumanDayHeader` (:116); `renderSubgroupHeader` ← `renderHumanFileHeader` (:126).
- `renderSummaryItem` ← `renderHumanSummaryItem` (:186).
- `renderRow` ← `renderHumanTableRow` (:135) — **preserves `colors.dim`, `colors.cyan`, `colors.bright`, `colors.reset` escapes and the `' '.repeat(time.length)` padding for `showTime=false`** (see HumanFormatter.ts:145).
- `renderFullObservation` ← `renderHumanFullObservation` (:155).
- `tail` ← `renderHumanSummaryField` (:200) per field (with its per-field ANSI color from `SummaryRenderer.ts:52-56` — `blue/yellow/green/magenta`) + `renderHumanPreviouslySection` (:208) + `renderHumanFooter` (:225).
- `emptyState` ← `renderHumanEmptyState` (:236) — note the literal `─`×60 separator and the `\n` layout.
ANSI `colors` import from `src/services/context/types.js` stays inside this strategy only. The renderer core is ANSI-agnostic.
**(b) Docs**: 05 section 3.5 arrow `Strategy -->|HumanContextStrategy| HumanOut["ANSI-colored terminal"]` (line 245); inventory row for `HumanFormatter.ts`; Decision D4 is explicit about "columns/density/grouping" plus `colorize`, per the Phase 8 sketch in 06-implementation-plan.md line 385.
**(c) Verification**: snapshot test with explicit ANSI-escape comparison. Fixture MUST include: a no-time continuation row (to exercise the `' '.repeat(time.length)` padding at :145), a full-observation row with facts (exercises :167-177), and the empty-state path (exercises :237). Assert raw buffer equality — not stripped-ANSI equality. Confidence gap: this is the highest regression risk in the plan (see Gaps above).
**(d) Anti-pattern guards**: A — one human strategy. E — no duplicate ANSI wrapping helper; `colors` constants travel with the strategy.
---
## Phase 4 — `SearchResultStrategy` from `ResultFormatter`
**(a) What**: Create `src/services/worker/search/strategies/SearchResultStrategy.ts`. Copy from `ResultFormatter.ts`:
- `header` ← the `Found N result(s) matching "…"` line at :53 (parameterized on query + counts).
- `grouping: 'by-day-then-file'`; `renderGroupHeader` ← day label ``### ${day}`` (:57); `renderSubgroupHeader` ← `**${file}**` + `formatSearchTableHeader` :141 (the `| ID | Time | T | Title | Read |` header).
- `renderRow` dispatches on item kind: `formatObservationSearchRow` (:157), `formatSessionSearchRow` (:178), `formatPromptSearchRow` (:199). The `lastTime` threading for `"` continuation stays in the renderer's `RowCtx` (from Phase 1).
- `tail` ← `formatSearchTips` (:288) appended when not empty.
- `emptyState` ← `No results found matching "${query}"` (:38) / `formatChromaFailureMessage` (:275) gated by a new `ctx.chromaFailed` flag.
The index-column variant (`formatObservationIndex` :221 etc., with the `Work` column) becomes a strategy *option* `columns: ['id','time','type','title','read'] | ['id','time','type','title','read','work']`. Before choosing a default, grep for callers during Phase 4 to enumerate usages (confidence gap noted above).
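A sketch of how the `columns` option can gate the header row. The `LABELS` map is an assumption for illustration; the real labels must byte-match `formatSearchTableHeader` :141 and `formatTableHeader` :149:

```typescript
type Column = 'id' | 'time' | 'type' | 'title' | 'read' | 'work';

// Illustrative label map; real labels come from ResultFormatter's two headers.
const LABELS: Record<Column, string> = {
  id: 'ID', time: 'Time', type: 'T', title: 'Title', read: 'Read', work: 'Work',
};

// One function serves both the search variant (5 columns) and the index
// variant (6 columns, adds Work) — no second header formatter needed.
function renderTableHeader(columns: Column[]): string[] {
  return [
    `| ${columns.map((c) => LABELS[c]).join(' | ')} |`,
    `|${columns.map(() => '---').join('|')}|`,
  ];
}
```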
**(b) Docs**: 05 section 3.6 line 281 (`renderObservations(results, SearchResultStrategy)`); inventory row for `ResultFormatter.ts`. Cross-reference: `06-hybrid-search-orchestration` plan (downstream) will consume this strategy.
**(c) Verification**: feed the same `SearchResults` fixture to `ResultFormatter.formatSearchResults` and to `renderObservations(combined, SearchResultStrategy, ctx)`; assert byte equality including the date-group headers, file headers, table pipe characters, and trailing blank lines.
**(d) Anti-pattern guards**: A — single `SearchResultStrategy`; if semantic-injection handler at `SearchRoutes.ts:286-293` needs a different shape, it becomes a **flag** on this strategy (`variant: 'table' | 'injection'`), not a fifth strategy. E — delete any caller that still walks `results.observations.map(...)` by hand (Phase 7 grep).
---
## Phase 5 — `CorpusDetailStrategy` from `CorpusRenderer`
**(a) What**: Create `src/services/worker/knowledge/strategies/CorpusDetailStrategy.ts`. Copy from `CorpusRenderer.ts`:
- `header` ← `CorpusRenderer.renderCorpus` :14-26 (the `# Knowledge Corpus: …`, description, stats block, `---` divider). Parameterized on `CorpusFile.name/description/stats`.
- `grouping: 'none'` — corpus walks flat (:28-31).
- `renderFullObservation` ← `CorpusRenderer.renderObservation` (:39) — full narrative, facts list, concepts, files_read, files_modified. No compact row form; every observation renders at full detail (per CorpusRenderer.ts:5).
- `tail: undefined` — corpus has no tail beyond the trailing `---`.
`generateSystemPrompt` (:97) is **not** part of the strategy — it's a separate function on the corpus feature that stays where it is. `estimateTokens` (:90) needs no move: `shared/timeline-formatting.ts` already exports `estimateTokens` (see the `ResultFormatter.ts:17` import), so delete the duplicate at `CorpusRenderer.ts:90`.
**(b) Docs**: 05 section 3.11 line 457 (`renderObservations(obs, CorpusDetailStrategy)`); inventory row for `CorpusRenderer.ts`. Cross-reference: `10-knowledge-corpus-builder` plan (downstream) consumes this strategy.
**(c) Verification**: feed the same `CorpusFile` to `CorpusRenderer.renderCorpus` and to `renderObservations(corpus.observations, CorpusDetailStrategy, {corpus})`; assert byte equality. Important: corpus output is a *prompt* — whitespace divergence changes prompt-cache hit rate on the SDK side (see 05 section 3.11 cost note, line 476).
**(d) Anti-pattern guards**: A — single `CorpusDetailStrategy`. E — `KnowledgeAgent` and `CorpusBuilder` both route through it; no direct `CorpusRenderer` instantiation post-Phase 7.
---
## Phase 6 — Switch `ContextBuilder.generateContext` + `/api/session/start` handler to `renderObservations`
**(a) What**:
1. Rewrite `src/services/context/ContextBuilder.ts`:
- `buildContextOutput` :80 collapses to: resolve strategy = `forHuman ? HumanContextStrategy : AgentContextStrategy`, build `RenderContext` (economics, fullObservationIds, priorMessages, mostRecentSummary), call `renderObservations(timeline, strategy, ctx)`. The explicit `renderHeader`/`renderTimeline`/`renderSummaryFields`/`renderPreviouslySection`/`renderFooter` fan-out at :95-119 deletes in favor of strategy-owned `header`/`renderGroupHeader`/`renderRow`/`tail`.
- `renderEmptyState` :73 collapses to `strategy.emptyState?.(ctx)`.
- `generateContext` :130 signature is unchanged — external callers see identical input/output.
2. Add the new `/api/session/start` handler (per 05 section 3.1 line 95 `GET /api/session/start?project=…`). Owned by `lifecycle-hooks` plan (09); this plan lands the *renderer-facing* side: one call into `generateContext(forHuman:false)` for `contextMarkdown`, one call into `SearchOrchestrator.search(query, limit=5)` + `renderObservations(results, SearchResultStrategy, {variant:'injection'})` for `semanticMarkdown`. Both served from a single response body.
3. Delete the inline mini-formatter at `SearchRoutes.ts:286-293` (the `## Relevant Past Work …` block); route through `SearchResultStrategy`.
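The renderer-facing half of the handler, sketched with stubs in place of the real `ContextBuilder`/`SearchOrchestrator` calls. The stub bodies and wiring are illustrative only; 09-lifecycle-hooks owns the endpoint proper:

```typescript
// Stubs standing in for generateContext(forHuman:false) and
// SearchOrchestrator.search + renderObservations(…, {variant:'injection'}).
async function generateContext(opts: { forHuman: boolean }): Promise<string> {
  return '# Project context';
}
async function searchAndRender(query: string, limit: number): Promise<string> {
  return '## Relevant Past Work (semantic match)';
}

interface SessionStartResponse {
  sessionDbId: number;
  contextMarkdown: string;
  semanticMarkdown: string;
}

// One endpoint, one payload — replaces the GET /api/context/inject +
// POST /api/context/semantic pair.
async function handleSessionStart(sessionDbId: number, firstPrompt: string): Promise<SessionStartResponse> {
  const [contextMarkdown, semanticMarkdown] = await Promise.all([
    generateContext({ forHuman: false }),
    searchAndRender(firstPrompt, 5),
  ]);
  return { sessionDbId, contextMarkdown, semanticMarkdown };
}
```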
**(b) Docs**: 05 section 3.5 entry arrows lines 236-242; 05 section 3.1 lines 95 + 100 (one `/api/session/start` returns ctx + semantic); 06 plan Phase 8 lines 391-394.
**(c) Verification**:
- End-to-end byte-identity: capture the pre-refactor output of `GET /api/context/inject?projects=X&colors=true` and `…&colors=false` for a seeded DB; after the switch, curl the same and diff. Zero diff.
- New `/api/session/start` returns `{sessionDbId, contextMarkdown, semanticMarkdown}` (per 05 section 3.1 line 100) with the two markdown fields byte-matching the previous two-endpoint responses.
- `npm run build-and-sync` passes.
**(d) Anti-pattern guards**: A — no new strategies introduced. E — `SearchRoutes.handleSemanticContext` either deleted (covered by `/api/session/start`) or its body becomes a single `renderObservations(…, SearchResultStrategy, {variant:'injection'})` call — no more inline `lines.push('### …')`.
---
## Phase 7 — Delete the four old formatter files; update imports
**(a) What**:
1. `rm src/services/context/formatters/AgentFormatter.ts` (227 lines).
2. `rm src/services/context/formatters/HumanFormatter.ts` (238 lines).
3. `rm src/services/worker/search/ResultFormatter.ts` (301 lines).
4. `rm src/services/worker/knowledge/CorpusRenderer.ts` (133 lines).
5. Delete `src/services/context/sections/{HeaderRenderer,TimelineRenderer,SummaryRenderer,FooterRenderer}.ts` — their forHuman branching is now owned by strategies. `ObservationCompiler.ts` keeps the data-loading helpers (`queryObservations`, `buildTimeline`, `getFullObservationIds` — these feed the renderer, not part of the deletion).
6. Update imports at: `ContextBuilder.ts` (switch to `renderObservations` + strategies), `SearchManager.ts` / `SearchRoutes.ts` (switch to `SearchResultStrategy`), `KnowledgeAgent.ts` / `CorpusBuilder.ts` (switch to `CorpusDetailStrategy`). Grep for every `import … from '.*AgentFormatter|HumanFormatter|ResultFormatter|CorpusRenderer'` — expect zero after this phase.
**Net line impact**: deletes 227 + 238 + 301 + 133 + 61 + 183 + 65 + 42 = **1,250 lines**. Adds ~320 for `renderObservations` + 4 strategies + shared helpers. **Net ≈ -930 lines** — beats the audit's estimate at 05 line 543 (-280 net) because the forHuman branching in the section renderers was not counted there.
**(b) Docs**: 05 section 3.5 "Deleted" list lines 253-256; 06 plan Phase 8 verification line 397.
**(c) Verification**:
- `grep -rn "AgentFormatter\|HumanFormatter\|ResultFormatter\|CorpusRenderer" src/ tests/` → zero hits.
- `grep -rn "renderHeader\|renderTimeline\|renderSummaryFields\|renderPreviouslySection\|renderFooter" src/services/context/sections/` → zero hits (directory removed).
- `npx tsc --noEmit` passes.
- `npm run build-and-sync` passes.
**(d) Anti-pattern guards**: D — no compatibility shim re-exports old names. E — single walker; grep `for (const .* of .*observations)` in `src/services/worker/` and `src/services/context/` should only match inside `renderObservations.ts` (and test fixtures).
---
## Phase 8 — Verification: byte-identical output for all four paths
**(a) What**: Add four golden-file fixtures under `tests/fixtures/rendering/`:
- `agent-context.txt` — output of old `generateContext(input, forHuman=false)` captured before Phase 6.
- `human-context.ansi` — raw bytes including ANSI escapes from old `generateContext(input, forHuman=true)`.
- `search-result.md` — output of old `ResultFormatter.formatSearchResults(results, "test query")`.
- `corpus-detail.md` — output of old `CorpusRenderer.renderCorpus(corpus)`.
Capture on the branch tip *before* Phase 1 so the baseline is pre-refactor. Each phase's unit test (Phases 2-5) diffs against its golden file.
A final integration test runs the four renderers end-to-end against a seeded DB and diffs all four outputs simultaneously.
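A byte-level golden check can be this small. The helper name and temp-fixture path are assumptions for a self-contained demo; the real fixtures live under `tests/fixtures/rendering/` as described above:

```typescript
import { readFileSync, writeFileSync, mkdtempSync } from 'node:fs';
import { tmpdir } from 'node:os';
import { join } from 'node:path';

// Byte-level diff against a captured golden file. Returns the first differing
// byte offset, or -1 when the outputs are identical.
function diffAgainstGolden(goldenPath: string, actual: string): number {
  const golden = readFileSync(goldenPath); // raw bytes — never decode-then-compare
  const current = Buffer.from(actual);
  const len = Math.max(golden.length, current.length);
  for (let i = 0; i < len; i++) {
    if (golden[i] !== current[i]) return i;
  }
  return -1;
}

// Demo with a temp file standing in for tests/fixtures/rendering/agent-context.txt.
const dir = mkdtempSync(join(tmpdir(), 'golden-'));
const fixture = join(dir, 'agent-context.txt');
writeFileSync(fixture, '# Context\n| 1 | 09:14 | fix |\n');
```

Reporting the byte offset (instead of a boolean) makes whitespace and ANSI regressions fast to localize in a long fixture.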
**(b) Docs**: 06 plan Phase 8 verification lines 396-399 ("Snapshot tests: for each strategy, feed the same fixture `Observation[]` and assert output is byte-equal to the old formatter's output").
**(c) Verification**:
- All four snapshot tests green.
- Grep audit: `grep -rn "setInterval\|formatObservation\|renderObservation" src/ | grep -v renderObservations.ts | grep -v test` — zero hits outside the one renderer.
- SessionStart end-to-end: trigger a real Claude Code session with `npm run build-and-sync`; Agent context in the session + ANSI context in terminal both diff-clean against pre-refactor capture.
- Chroma corpus query test: build a corpus, query it 3× within 5 minutes, assert `cache_read_input_tokens > 0` on SDK response (proves corpus prompt bytes are stable, per 05 section 3.11 cost note).
**(d) Anti-pattern guards**: A — tests enforce the four-strategy ceiling by unioned `name` type. E — the grep audit above is the single-walker check.
---
## Constraints summary
- **Zero behavior change** for LLM (Agent) output bytes and human terminal ANSI bytes. Enforced by Phase 8 golden files.
- **Token-budget logic stays in the orchestrator** (`calculateTokenEconomics` at `TokenCalculator.ts:25`; `getFullObservationIds` at `ObservationCompiler.ts`). Strategies receive computed `RowCtx.isFull`, never re-decide.
- **Mode filtering stays in the orchestrator** (`ModeManager.getActiveMode()` at `ModeManager.ts:15`). Strategies receive filtered `Observation[]`.
- **ANSI color codes preserved**: all `colors.*` literals from `src/services/context/types.js` travel into `HumanContextStrategy` only. The renderer core is ANSI-agnostic.
- **Four strategies, no more**: `AgentContextStrategy`, `HumanContextStrategy`, `SearchResultStrategy`, `CorpusDetailStrategy`. Variants live as strategy config flags.
---
## Phase count
**8 phases.**
- Phase 1: extract renderer.
- Phase 2: `AgentContextStrategy`.
- Phase 3: `HumanContextStrategy` (ANSI).
- Phase 4: `SearchResultStrategy`.
- Phase 5: `CorpusDetailStrategy`.
- Phase 6: wire `ContextBuilder.generateContext` + `/api/session/start`.
- Phase 7: delete old formatters + section renderers.
- Phase 8: byte-identical verification.
---
## Blast radius + estimated LoC
- **Files deleted**: 8 (four formatters + four section renderers).
- **Files created**: ~6 (`renderObservations.ts` + 4 strategy files + shared helpers).
- **Lines deleted**: ~1,250 (AgentFormatter 227 + HumanFormatter 238 + ResultFormatter 301 + CorpusRenderer 133 + HeaderRenderer 61 + TimelineRenderer 183 + SummaryRenderer 65 + FooterRenderer 42).
- **Lines added**: ~320 (renderer + four strategies, per audit estimate at 05 line 543).
- **Net**: **-930 lines**, ~3.3× the audit's row-level estimate of -280, once the forHuman branching in `*Renderer.ts` section files is counted.
Risk: lowest of the cleanup plan (pure reorganization, no behavior change). Snapshot tests are the safety net.
@@ -0,0 +1,283 @@
# Plan 06 — hybrid-search-orchestration (clean)
> **Design authority**: `05-clean-flowcharts.md` section 3.6. This plan implements that diagram. When plan and audit disagree, the `06-implementation-plan.md` verified-findings (Phase 0, V11) take precedence.
## Dependencies
- **Upstream**: `07-plans/05-context-injection-engine.md` — introduces `renderObservations(obs, strategy)` and the `SearchResultStrategy` strategy config (derived from `ResultFormatter.ts`). This plan consumes that strategy; it does NOT create it. Hard blocker: Phase 6 below cannot land until Plan 05 Phase 4 lands.
- **Downstream**: `07-plans/10-knowledge-corpus-builder.md` — `CorpusBuilder.build` calls `SearchOrchestrator.search(params)`. Signature stability of `SearchOrchestrator.search` is the contract Plan 10 depends on. Do not rename. Do not change the shape of `StrategySearchResult`.
## Sources consulted
1. `PATHFINDER-2026-04-21/05-clean-flowcharts.md` — section 3.6 (lines 262-292); Part 1 bullshit items #30 #31 #32 #33 (lines 48-51).
2. `PATHFINDER-2026-04-21/06-implementation-plan.md` — Phase 0 V11 (line 38); Phase 4 (lines 208-242); anti-pattern guards C and D (lines 63-64).
3. `PATHFINDER-2026-04-21/01-flowcharts/hybrid-search-orchestration.md` — before-state; full 97 lines.
4. `src/services/worker/SearchManager.ts:1-2069` — full method inventory via grep; spot-read `:1-200`, `:1209-1310`.
5. `src/services/worker/search/SearchOrchestrator.ts:1-290` — confirmed `search(args: any): Promise<StrategySearchResult>` signature; `executeWithFallback` at `:81-121`; silent fallback branch at `:100-110`.
6. `src/services/worker/search/strategies/ChromaSearchStrategy.ts:1-247``filterByRecency` at `:196-217`; hard-coded 90-day cutoff via `SEARCH_CONSTANTS.RECENCY_WINDOW_MS` at `:200`.
7. `src/services/worker/search/strategies/SQLiteSearchStrategy.ts:1-132`, `HybridSearchStrategy.ts:1-240`, `SearchStrategy.ts:1-61` — strategy interface and existence confirmed.
8. `src/services/worker/search/types.ts:15-16``RECENCY_WINDOW_DAYS: 90` and `RECENCY_WINDOW_MS: 90 * 24 * 60 * 60 * 1000`.
9. `src/services/worker/http/routes/SearchRoutes.ts:1-303` — 14 search/context handlers, all delegating `await this.searchManager.<method>(req.query)`.
10. `PATHFINDER-2026-04-21/07-plans/05-context-injection-engine.md``SearchResultStrategy` signature & path (`src/services/worker/search/strategies/SearchResultStrategy.ts` per that plan's Phase 4).
## Concrete findings
### SearchManager method inventory (2069 lines)
Classifications per Decision D ("if body is `return this.other.method(args)`, delete it"):
| `:line` | Method | Classification | Notes |
|---|---|---|---|
| `:59` | `queryChroma` | **real-work (but @deprecated)** | Pre-Orchestrator; called only by `searchChromaForTimeline` and `findByConcept`/`findByFile` hybrid paths inside `SearchManager`. **DELETE** (item #30). |
| `:70` | `searchChromaForTimeline` | **real-work (but @deprecated)** | Bakes 90-day cutoff via `ninetyDaysAgo` param. Callers: only `timeline()` `:490`. **DELETE** (item #30). |
| `:103` | `normalizeParams` | **display-wrap helper** | SearchOrchestrator `:239` has an equivalent. This one adds `filePath→files`, `concept→concepts`, `isFolder` coercion. If we keep SearchManager display-wrap, keep this. Otherwise fold into SearchOrchestrator.normalizeParams and delete. |
| `:161` | `search` | **real-work (display-wrap)** | Lines 161-445: re-implements the whole decision tree + recency filter + categorization + markdown tables. Contains one of four 90-day filter copies (`:230-259`). This is the V11 "real work" method. **REFACTOR**: decision tree/execution deleted (already in Orchestrator); keep only the markdown combining → migrate to `renderObservations(combined, SearchResultStrategy)`. |
| `:450` | `timeline` | **real-work (display-wrap)** | Uses `searchChromaForTimeline` `:490` + 90-day cutoff `:488`. Delegates to `TimelineBuilder` for rendering. **REFACTOR**: strip 90-day cutoff; call `SearchOrchestrator` timeline helpers (`getTimeline`, `formatTimeline` at Orchestrator `:185-209`). |
| `:731` | `decisions` | **display-wrap** | Semantic shortcut; queries Chroma for "decision" observations, renders tables. Route could call `SearchOrchestrator.search({query:'decision', ...})` directly; keep the markdown wrap. |
| `:810` | `changes` | **display-wrap** | Same shape as `decisions`. |
| `:894` | `howItWorks` | **display-wrap** | Same shape. |
| `:951` | `searchObservations` | **pass-through** (with backward-compat shim) | `{type:'observations'}` preset + call through. **DELETE**; route calls `SearchOrchestrator.search({...req.query, type:'observations'})`. |
| `:1037` | `searchSessions` | **pass-through** | Same; `type:'sessions'`. **DELETE**. |
| `:1123` | `searchUserPrompts` | **pass-through** | Same; `type:'prompts'`. **DELETE**. |
| `:1209` | `findByConcept` | **real-work (display-wrap)** | Duplicates the two-phase hybrid logic that exists in `HybridSearchStrategy.findByConcept` at `HybridSearchStrategy.ts:74`. Pure duplication. **DELETE** execution; route calls `SearchOrchestrator.findByConcept(concept, args)` at `SearchOrchestrator.ts:126`. Keep markdown header/table rendering via `renderObservations(obs, SearchResultStrategy)`. |
| `:1277` | `findByFile` | **real-work (display-wrap)** | Same pattern — duplicates `HybridSearchStrategy.findByFile`. **DELETE** execution; route → `SearchOrchestrator.findByFile`. Keep render. |
| `:1399` | `findByType` | **real-work (display-wrap)** | Same pattern — duplicates `HybridSearchStrategy.findByType`. **DELETE** execution; route → `SearchOrchestrator.findByType`. Keep render. |
| `:1468` | `getRecentContext` | **real-work** | ContextBuilder territory, NOT search. Leave to Plan 05. |
| `:1596` | `getContextTimeline` | **real-work** | Same — ContextBuilder / Plan 05. Leave. |
| `:1810` | `getTimelineByQuery` | **real-work** | Contains a fourth copy of the 90-day filter at `:1840-1847`. Depends on `SearchOrchestrator.getTimeline` + `formatTimeline`. **REFACTOR**: strip 90-day; delegate. |
**Tally**: 3 pure pass-throughs to delete (`:951`, `:1037`, `:1123`); 2 `@deprecated` to delete (`:59`, `:70`); 6 real-work methods that keep only their rendering (`:161`, `:450`, `:1209`, `:1277`, `:1399`, `:1810`); 3 semantic shortcuts kept as display-wraps (`:731`, `:810`, `:894`); 2 ContextBuilder-owned methods left for Plan 05 (`:1468`, `:1596`). Every remaining "real-work" body becomes `orchestrator.X(args)` + `renderObservations(combined, SearchResultStrategy, ctx)` — no decision tree, no Chroma calls, no recency filter.
### Duplication vs facade distinction
The three hybrid methods (`findByConcept` `:1209`, `findByFile` `:1277`, `findByType` `:1399`) are not thin facades — they implement the same two-phase (SQLite metadata filter → Chroma semantic rank → intersect) algorithm that already lives in `HybridSearchStrategy.ts:26-240`. This is **parallel reimplementation**, not delegation. Phase 6 kills the in-file copy and routes through `SearchOrchestrator.findByConcept/File/Type` (`SearchOrchestrator.ts:126-180`), which already wraps `HybridSearchStrategy`.
### filterByRecency location
- **Canonical**: `src/services/worker/search/strategies/ChromaSearchStrategy.ts:196-217` → `private filterByRecency(chromaResults)`. Uses `SEARCH_CONSTANTS.RECENCY_WINDOW_MS` at `:200`. Called from `:119` inside `executeChromaSearch`.
- **Constant**: `src/services/worker/search/types.ts:15` → `RECENCY_WINDOW_DAYS: 90`; `:16` → `RECENCY_WINDOW_MS: 90 * 24 * 60 * 60 * 1000`.
- **Legacy copies in `SearchManager.ts`**: `:230`, `:247-259`, `:488`, `:978-985`, `:1064-1071`, `:1150-1157`, `:1840-1847`. All delete with the methods above or their refactors.
### Current Chroma-fail behavior (item #32 silent fallback)
`SearchOrchestrator.executeWithFallback` at `SearchOrchestrator.ts:93-110`:
```ts
const result = await this.chromaStrategy.search(options);
if (result.usedChroma) return result;
// Chroma failed - fall back to SQLite for filter-only
const fallbackResult = await this.sqliteStrategy.search({
  ...options,
  query: undefined // Remove query for SQLite fallback <-- DROPS query text silently
});
return { ...fallbackResult, fellBack: true };
```
And inside `ChromaSearchStrategy.search` at `:76-86`, a thrown error becomes `{ usedChroma: false, fellBack: false }` (swallowed). The Orchestrator's `usedChroma=false` branch then runs SQLite with the query text stripped. **This is the silent fallback from audit item #32**. The current behavior drops the query text and returns filter-only SQLite results — no 503, no error signal to the caller. Caller (SearchManager) flips a `chromaFailed` flag into the rendered markdown, but JSON callers (viewer UI, mem-search skill, CorpusBuilder) have no way to detect it.
### Route surface
`src/services/worker/http/routes/SearchRoutes.ts` declares 18 endpoints. Of those that invoke `this.searchManager.*`:
- Pass-through candidates (3): `/api/search/observations` `:98`, `/api/search/sessions` `:107`, `/api/search/prompts` `:116`.
- Route-to-Orchestrator-directly candidates (3): `/api/search/by-concept` `:125`, `/api/search/by-file` `:134`, `/api/search/by-type` `:143`.
- Display-wrap kept: `/api/search` `:53`, `/api/timeline` `:62`, `/api/decisions` `:71`, `/api/changes` `:80`, `/api/how-it-works` `:89`, `/api/timeline/by-query` `:303`, plus `/api/context/*` (Plan 05 territory).
## Copy-ready snippet locations
- Hybrid decision tree + 503 branch target: `SearchOrchestrator.ts:81-121`. Replace lines 100-110 with the 503 throw.
- 503 shape: follow anti-pattern guard C from `06-implementation-plan.md:63` — throw a typed `ChromaUnavailableError` (new class `src/services/worker/search/errors.ts`) with `code='chroma_unavailable'`; `SearchRoutes.wrapHandler` catches and maps to `res.status(503).json({error:'chroma_unavailable'})`.
- Render path: `renderObservations(combined, SearchResultStrategy, ctx)` from Plan 05 Phase 4 → new file `src/services/worker/search/strategies/SearchResultStrategy.ts`.
- Pass-through deletion ranges: `SearchManager.ts:951-1036` (`searchObservations`), `:1037-1122` (`searchSessions`), `:1123-1208` (`searchUserPrompts`).
- `filterByRecency` + callers to delete: `ChromaSearchStrategy.ts:196-217` + call site `:119`; `SEARCH_CONSTANTS.RECENCY_WINDOW_DAYS`/`_MS` at `types.ts:15-16`; plus the seven copies in `SearchManager.ts` listed above.
## Confidence + gaps
**High confidence**:
- SearchManager method classifications (grep-verified inventory; body-read for the three hybrid methods confirms exact duplication of `HybridSearchStrategy.*`).
- Current silent-fallback behavior (read in `SearchOrchestrator.ts:93-110`).
- 90-day default exists at exactly one shared constant (`types.ts:15-16`) plus seven in-file duplicate copies inside `SearchManager.ts`.
**Gaps**:
- Semantic-inject POST `/api/context/semantic` at `SearchRoutes.ts:270` calls `searchManager.search` with its own mini-formatter **post-render** (flagged by Plan 05 Phase 6). This plan does not touch that handler; Plan 05 owns it.
- `ResultFormatter.formatSearchResults` callers — need one grep pass during Phase 6 to confirm no other caller beyond `SearchManager.search` at `:321`, `formatSearchResults` routes, and `SearchOrchestrator.ts:214` (which also exposes it). Left as a Phase 6 checklist item.
- Exact JSON error body shape for 503 — two reasonable choices (`{error:'chroma_unavailable'}` vs `{error:{code:'chroma_unavailable', retryable:true}}`). Defer to Phase 4 decision; current plan uses the simpler shape.
---
## Phase 1 — Classify every `SearchManager` method
**(a) What**: Lock the method inventory above into the repo as a code comment in `SearchManager.ts` header (keeps future auditors honest). No behavior change.
**(b) Docs**: `05-clean-flowcharts.md` Part 1 item #31; `06-implementation-plan.md:38` (V11); live file `src/services/worker/SearchManager.ts:1-2069`.
**(c) Verification**:
- `grep -n "^\s*async \+[a-zA-Z]" src/services/worker/SearchManager.ts | wc -l` → 15 public async methods (matches inventory).
- `grep -n "@deprecated" src/services/worker/SearchManager.ts` → exactly one hit at `:57` (`queryChroma`). Confirm `searchChromaForTimeline` at `:70` is untagged but classified deprecated per `01-flowcharts/hybrid-search-orchestration.md:91`.
**(d) Anti-pattern guards**: Guard D — every method marked "pass-through" in the inventory must have a body that trivially forwards to `this.orchestrator.*` after reading. If a method claims pass-through but also does date filtering or recency windows, reclassify as real-work before later phases delete it.
---
## Phase 2 — Delete `@deprecated` methods
**(a) What**: Copy from `SearchManager.ts:59-97`**delete** both `queryChroma` and `searchChromaForTimeline`. Update `timeline()` at `:490` to call `SearchOrchestrator.getTimeline` / `formatTimeline` (`SearchOrchestrator.ts:185-209`) instead.
**(b) Docs**: `05-clean-flowcharts.md` Part 1 item #30 (line 48); `05-clean-flowcharts.md` §3.6 "Deleted" bullet 2 (line 286); `SearchManager.ts:57` @deprecated tag.
**(c) Verification**:
- `grep -rn "queryChroma\|searchChromaForTimeline" src/` → only hits are `chromaSync.queryChroma` (ChromaSync public method — do not touch) and `ChromaSearchStrategy.ts` calls to `chromaSync.queryChroma`.
- `grep -n "@deprecated" src/services/worker/SearchManager.ts` → zero hits.
- `npm run build` passes; `/api/timeline?query=x` still returns timeline.
**(d) Anti-pattern guards**: Guard D — no replacement shim; delete outright. Do not leave a `/** @deprecated */` stub calling the Orchestrator — that is the thin-facade anti-pattern returning.
---
## Phase 3 — Route `SearchRoutes` directly to `SearchOrchestrator` for pass-throughs
**(a) What**: In `src/services/worker/http/routes/SearchRoutes.ts`:
1. Inject `SearchOrchestrator` alongside `SearchManager` (or replace `SearchManager` prop entirely once Phase 6 lands). Copy constructor wiring shape from `SearchRoutes.ts:14-18`.
2. Rewire three handlers:
- `:98` `handleSearchObservations``await this.orchestrator.search({...req.query, type:'observations'})`
- `:107` `handleSearchSessions``await this.orchestrator.search({...req.query, type:'sessions'})`
- `:116` `handleSearchPrompts``await this.orchestrator.search({...req.query, type:'prompts'})`
3. Delete `searchObservations`, `searchSessions`, `searchUserPrompts` from `SearchManager.ts:951-1208`.
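The rewired pass-throughs share one shape, which can be sketched as follows — this assumes an express-style `(req, res)` handler; the factory name `makeTypedSearchHandler` and the minimal orchestrator/req/res interfaces are hypothetical, introduced only to keep the sketch self-contained:

```typescript
type SearchType = 'observations' | 'sessions' | 'prompts';

// Minimal shapes for the sketch — the real SearchOrchestrator and route
// wrapper types come from the codebase, not from here.
interface OrchestratorLike {
  search(args: Record<string, unknown>): Promise<unknown>;
}
interface ReqLike { query: Record<string, unknown> }
interface ResLike { json(body: unknown): void }

// No SearchManager shim: the route forwards straight to the orchestrator
// with the type preset merged into the query params (Guard D).
function makeTypedSearchHandler(orchestrator: OrchestratorLike, type: SearchType) {
  return async (req: ReqLike, res: ResLike): Promise<void> => {
    const results = await orchestrator.search({ ...req.query, type });
    res.json(results);
  };
}
```

Each of the three handlers then becomes a one-line registration (`makeTypedSearchHandler(this.orchestrator, 'observations')` etc.), leaving nothing in `SearchManager` to delete twice.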
**(b) Docs**: `05-clean-flowcharts.md` §3.6 diagram (line 267 `B --> C`); `06-implementation-plan.md:208-225` Phase 4 step 1; live file `src/services/worker/http/routes/SearchRoutes.ts:98-118` and `SearchManager.ts:951-1208`.
**(c) Verification**:
- `grep -n "this.searchManager.search\(Observations\|Sessions\|UserPrompts\)" src/` → zero hits.
- `curl localhost:37777/api/search/observations?query=x` returns the same JSON shape as before (snapshot test).
- Chroma-down test: stop the Chroma subprocess; call `/api/search/observations?query=x`**503 with `{error:'chroma_unavailable'}`** (contract established in Phase 4). Not an empty `observations:[]` array.
**(d) Anti-pattern guards**:
- Guard D — the deleted methods were ~85 lines each of wrapping; make sure the replacement route lines do NOT re-import a "for type consistency" shim from SearchManager.
- Guard C — if the old pass-through silently caught Chroma failures and returned `observations:[]`, the new direct route must propagate the 503 from Phase 4.
---
## Phase 4 — Replace silent Chroma-fail with 503 in `SearchOrchestrator`
**(a) What**: Copy from `SearchOrchestrator.ts:90-110`. Delete the fallback branch:
```ts
// DELETE these lines 100-110
const fallbackResult = await this.sqliteStrategy.search({...options, query: undefined});
return {...fallbackResult, fellBack: true};
```
Replace with:
```ts
throw new ChromaUnavailableError();
```
Add `src/services/worker/search/errors.ts` exporting `class ChromaUnavailableError extends Error { code = 'chroma_unavailable' }`.
Also update `ChromaSearchStrategy.ts:76-86` — the catch block currently swallows errors and returns `usedChroma:false`. Change to rethrow as `ChromaUnavailableError` so `executeWithFallback` sees it.
In `SearchRoutes.ts` `wrapHandler` (or `BaseRouteHandler`), catch `ChromaUnavailableError``res.status(503).json({error:'chroma_unavailable'})`.
Update `SearchOrchestrator.findByConcept`/`findByType`/`findByFile` (`:126-180`) — today they fall back to SQLite-only on no-hybrid. That fallback is **allowed** because concept/type/file filters are legitimate without Chroma. Only text-query paths get 503. Document this distinction inline.
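A minimal sketch of the error class and the route-layer mapping — the class name and 503 body follow the plan; the standalone `mapSearchError` helper is an assumption standing in for whatever `wrapHandler`/`BaseRouteHandler` actually does:

```typescript
// errors.ts — typed error so routes can discriminate without string-matching messages
class ChromaUnavailableError extends Error {
  readonly code = 'chroma_unavailable' as const;
  constructor() {
    super('Chroma vector store is unavailable');
    this.name = 'ChromaUnavailableError';
  }
}

// Hypothetical route-layer mapping; the real version lives in wrapHandler's catch.
function mapSearchError(err: unknown): { status: number; body: object } {
  if (err instanceof ChromaUnavailableError) {
    return { status: 503, body: { error: err.code } };
  }
  return { status: 500, body: { error: 'internal_error' } };
}
```

Because the error carries a typed `code`, JSON callers (viewer UI, mem-search skill, CorpusBuilder) get a machine-readable signal instead of the silent filter-only fallback.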
**(b) Docs**: `05-clean-flowcharts.md` Part 1 item #32 (line 50); `05-clean-flowcharts.md` §3.6 line 271 (`Return 503 error=chroma_unavailable (NO silent fallback)`); `06-implementation-plan.md:63` anti-pattern C; `06-implementation-plan.md:644` verification line (grep for `res.status(503)` + `chroma_unavailable`).
**(c) Verification**:
- Unit test: stub `ChromaSync.queryChroma` to throw → `SearchOrchestrator.search({query:'x'})` throws `ChromaUnavailableError`.
- Unit test: construct `SearchOrchestrator` with `chromaSync = null``search({query:'x'})` throws `ChromaUnavailableError` (today returns an empty result at `:115-120`; that branch also goes).
- Integration test: `curl localhost:37777/api/search?query=x` with Chroma disabled → `503` with body `{"error":"chroma_unavailable"}`.
- Integration test: `curl localhost:37777/api/search/by-concept?concept=x` with Chroma disabled → 200 with SQLite-only results. Concept/type/file filters remain functional without Chroma; only text-query paths hard-fail.
- `curl localhost:37777/api/search` (no query) with Chroma disabled → 200 with SQLite filter-only results (this path is legitimate per §3.6 line 272).
- `grep -rn "query: undefined" src/services/worker/search/` → zero hits (the silent-drop pattern).
- `grep -rn "fellBack" src/` → zero hits. The `fellBack` field on `StrategySearchResult` is obsolete once fallback is deleted; remove from `types.ts` as part of this phase.
**(d) Anti-pattern guards**:
- Guard C — primary target. Silent fallback deleted; explicit error class + HTTP status.
- Guard D — do not wrap the new throw behind a shim in `SearchManager`. The orchestrator throws; routes handle.
---
## Phase 5 — Delete `filterByRecency` and the 90-day default
**(a) What**:
1. Copy from `ChromaSearchStrategy.ts:196-217`**delete** `filterByRecency` method.
2. Delete its call site at `ChromaSearchStrategy.ts:119` (`const recentItems = this.filterByRecency(chromaResults);`). Replace with direct `chromaResults.ids` + `metadatas` join (preserve the metadata-by-id map logic from the old method's lines `:202-208` — that dedup IS real work; only the 90-day filter goes).
3. Delete `SEARCH_CONSTANTS.RECENCY_WINDOW_DAYS` and `RECENCY_WINDOW_MS` from `src/services/worker/search/types.ts:15-16`.
4. Delete the seven in-file copies in `SearchManager.ts` (lines 230-259, 488, 978-985, 1064-1071, 1150-1157, 1840-1847). Replaced by caller-supplied `dateRange` only — if caller wants recency, caller passes `dateRange: {start, end}`.
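The caller-supplied replacement can be sketched as a pure filter — `dateRange: {start, end}` comes from the plan; the helper name `applyDateRange` and the `createdAtEpoch` field are assumptions for illustration:

```typescript
interface DateRange { start?: number; end?: number } // epoch ms; both bounds optional

// Guard D in code form: a missing dateRange means "all history" —
// nothing re-applies a 90-day default behind the caller's back.
function applyDateRange<T extends { createdAtEpoch: number }>(
  rows: T[],
  dateRange?: DateRange
): T[] {
  if (!dateRange) return rows;
  const start = dateRange.start ?? -Infinity;
  const end = dateRange.end ?? Infinity;
  return rows.filter(r => r.createdAtEpoch >= start && r.createdAtEpoch <= end);
}
```

This mirrors the two integration tests below: an unfiltered query surfaces a 100-day-old observation, while an explicit `dateRange.start` of 60 days ago excludes it.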
**(b) Docs**: `05-clean-flowcharts.md` Part 1 item #33 (line 51); `05-clean-flowcharts.md` §3.6 "Deleted" bullet 4 (line 288); live `src/services/worker/search/strategies/ChromaSearchStrategy.ts:196-217`; `src/services/worker/search/types.ts:15-16`.
**(c) Verification**:
- `grep -rn "RECENCY_WINDOW\|filterByRecency\|ninetyDaysAgo\|90.day\|90 days" src/` → zero hits.
- Integration test: seed an observation dated 100 days ago; query by its text → it appears in results (would have been filtered out pre-deletion).
- Integration test: pass `dateRange.start` = 60 days ago; observation from 100 days ago is excluded. Explicit filter still works.
**(d) Anti-pattern guards**:
- Guard C — silent implicit filter replaced by explicit caller param.
- Guard D — no "convenience wrapper" that re-applies 90 days when `dateRange` is missing. Missing = all.
---
## Phase 6 — Keep display-wrap in `SearchManager`; switch to `renderObservations(results, SearchResultStrategy)`
**BLOCKED until**: Plan 05 Phase 4 lands and ships `src/services/worker/search/strategies/SearchResultStrategy.ts`.
**(a) What**:
1. In `SearchManager.ts:161-445` (`search`): delete everything from the `PATH 1` decision at `:177` through the categorization/hydration blocks at `:321`. The full decision tree is already in `SearchOrchestrator.search`. Replace body with:
```ts
async search(args: any): Promise<any> {
  const results = await this.orchestrator.search(args);
  if (args.format === 'json') return { content: [{ type: 'text', text: JSON.stringify(results) }] };
  const combined = combineResults(results.results);
  return { content: [{ type: 'text', text: renderObservations(combined, SearchResultStrategy, ctx) }] };
}
```
2. Apply same transformation to `timeline` `:450`, `findByConcept` `:1209`, `findByFile` `:1277`, `findByType` `:1399`, `getTimelineByQuery` `:1810`. Each becomes: call orchestrator → render via strategy. Keep the outer `{content:[{type:'text', ...}]}` MCP envelope; drop everything in between.
3. Keep `decisions`, `changes`, `howItWorks` `:731-950` as semantic-shortcut wrappers. They compute a preset query string, call `this.orchestrator.search({...args, query:'decision'})` (or equivalent), render via `renderObservations`. Body shrinks from ~70 lines each to ~10.
4. Delete or drop-in replace `normalizeParams` at `:103``SearchOrchestrator.normalizeParams` at `:239` is canonical. If the API-only coercions (`filePath→files`, `isFolder`) are missing there, **move them into** `SearchOrchestrator.normalizeParams` and delete the SearchManager copy. Guard: grep every caller to confirm the Orchestrator version covers all cases.
**(b) Docs**: `05-clean-flowcharts.md` §3.6 line 281 (`Fmt -->|markdown| M["renderObservations(results, SearchResultStrategy)"]`); `06-implementation-plan.md:220-225` (Phase 4 step 3 — keep the combine/group/table code as a `ResultRenderer` module); `07-plans/05-context-injection-engine.md:169-182` Phase 4 (SearchResultStrategy); live `src/services/worker/SearchManager.ts:161-445`.
**(c) Verification**:
- `wc -l src/services/worker/SearchManager.ts` → under 400 lines (from 2069).
- Snapshot test: fixture `SearchResults``renderObservations(combined, SearchResultStrategy, ctx)` output is byte-equal to the pre-refactor `ResultFormatter.formatSearchResults` output. Plan 05 Phase 4 owns this fixture; reuse it here.
- `grep -n "combineResults\|groupByDate\|groupByFile" src/services/worker/SearchManager.ts` → zero hits (now lives in SearchResultStrategy / renderObservations).
- Manual: viewer UI `http://localhost:37777` search results render identically.
**(d) Anti-pattern guards**:
- Guard D — SearchManager's remaining methods must each be ≤15 lines (orchestrator call + render envelope). If any method balloons back, it's re-implementing decision logic.
- Guard A (strategy count from Plan 05 audit Part 2) — don't invent a fifth strategy just for "semantic context injection". Plan 05 Phase 6 routes that handler through `SearchResultStrategy` with a flag.
---
## Phase 7 — Verification
Run all checks from phases 1-6 in one pass, plus:
1. **Behavior preservation**:
- All three search paths (filter-only, Chroma-semantic, hybrid concept/type/file) return results for representative queries.
- `?format=json` and default markdown both work on every search endpoint.
- `concept=`, `type=`, `obs_type=`, `files=`, `filePath=` filters all honored (grep-verify normalizeParams covers each).
- Timeline endpoint returns chronological groupings with anchor depth filtering intact.
2. **Chroma-down contract**:
- Stop Chroma subprocess. `curl /api/search?query=x` → 503 `{"error":"chroma_unavailable"}`. Not empty, not silent.
- `curl /api/search` (no query) → 200 with SQLite filter results.
- `curl /api/search/by-concept?concept=foo` → 200 with SQLite metadata results (per `SearchOrchestrator.ts:126-140`).
3. **Line-count targets**:
- `SearchManager.ts`: 2069 → under 400 lines (≥1600 deleted).
- `SearchOrchestrator.ts`: ~290 → ~280 (fallback branch removed, error class added).
- `ChromaSearchStrategy.ts`: 247 → ~215 (filterByRecency deleted).
- Net project delete target: ~1700 lines.
4. **Grep contract checks**:
- `grep -rn "query: undefined" src/services/worker/search/` → 0.
- `grep -rn "RECENCY_WINDOW\|filterByRecency\|ninetyDaysAgo" src/` → 0.
- `grep -rn "@deprecated" src/services/worker/SearchManager.ts` → 0.
- `grep -rn "this.searchManager.search\(Observations\|Sessions\|UserPrompts\)" src/` → 0.
- `grep -rn "res.status(503)" src/services/worker/http/` → at least one hit on the `chroma_unavailable` path.
5. **Downstream smoke** (Plan 10 contract):
- `CorpusBuilder.build` test — feed synthetic observations, confirm `SearchOrchestrator.search` signature unchanged and `StrategySearchResult` shape stable.
6. **Anti-pattern audit**:
- Guard C: no `catch { return empty }` patterns in `src/services/worker/search/`.
- Guard D: every method in `SearchManager.ts` either renders or shortcut-presets. No single-line `return this.orchestrator.x(args)` remains.
@@ -0,0 +1,529 @@
# Implementation Plan: session-lifecycle-management
**Flowchart**: PATHFINDER-2026-04-21/05-clean-flowcharts.md § 3.8 ("session-lifecycle-management (clean) — BIGGEST CULL")
**Before-state**: PATHFINDER-2026-04-21/01-flowcharts/session-lifecycle-management.md
**Scope** (revised 2026-04-22: zero-timer model): delete all three repeating background timers in the worker layer — no `ReaperTick` replacement, no `sqliteHousekeepingInterval`. Replace each recurring check with one of:
- (a) the `child.on('exit')` handlers already wired at `ProcessRegistry.ts:479` (SDK) and `worker-service.ts:530` (MCP);
- (b) the per-iterator 3-min idle `setTimeout` already wired at `SessionQueueProcessor.ts:6` (covers the hung-generator case on its own);
- (c) a per-session `setTimeout(deleteSession, 15min)` scheduled on last-generator-completion and cleared on new activity (covers the abandoned-session case);
- (d) a boot-once reconciliation block that calls the existing `killSystemOrphans()` + `supervisor.pruneDeadEntries()` + `recoverStuckProcessing()` + `clearFailedOlderThan(1h)` once at worker startup.

Additionally: delete the worker-level `ProcessRegistry` facade (528 LoC), inline the SIGTERM→SIGKILL ladder, and implement blocking `POST /api/session/end`.
**Target LoC**: process-lifecycle ~900 → ~400.
**Target repeating-timer count in `src/services/worker/` + `worker-service.ts`**: 3 → **0**. (The only `setTimeout` calls that remain are the per-operation escalation ladder, per-session idle, per-session abandonment, and the generator-exit race — all non-repeating, all correct.)
---
## Dependencies
### Upstream (must land first)
- **01-privacy-tag-filtering** — defines shared `stripMemoryTags(text)` in `src/utils/tag-stripping.ts`. Phase 1 of THIS plan introduces `ingestObservation` / `ingestPrompt` / `ingestSummary` helpers that call that function. If 01 has not landed, Phase 1 here imports the existing wrappers, but the ingest-helper location (`src/services/ingest/`) is authoritative and 01 rewires its call-sites into these helpers.
- **02-sqlite-persistence** — owns the boot-recovery section of `sqlite-persistence (clean)` (§ 3.3 bottom box `BootOnce`). V19 per-claim 60-s reset (`PendingMessageStore.ts:99-145`) is deleted by Phase 5 of THIS plan and replaced with a single `PendingMessageStore.recoverStuckProcessing()` called once in worker boot. 02 codifies the broader schema-recovery ordering; Phase 5 slots `recoverStuckProcessing()` into that boot sequence.
- **03-response-parsing-storage** — defines `ResponseProcessor` + `session.recordFailure()` contract. Phase 7 (blocking `/api/session/end`) awaits the `summary_stored` flag that `ResponseProcessor` sets after a successful summary commit. The "summary_stored OR 110s timeout" integration point lives inside this plan (Phase 7) but depends on 03 wiring the flag.
### Downstream (this plan enables)
- **09-lifecycle-hooks** — hook layer consumes the blocking `POST /api/session/end` built in Phase 7 (replaces the current 500-ms polling loop in `src/cli/handlers/summarize.ts:117-150`). That plan's hook simplification is blocked until Phase 7 ships.
---
## Concrete findings from live code
### `src/services/worker/ProcessRegistry.ts` (527 lines — entire file slated for deletion)
Exposed surface (every export → supervisor-registry method it should hit directly):
| Worker export | File:line | Replacement |
|---|---|---|
| `registerProcess(pid, sessionDbId, process)` | `:57-65` | `getSupervisor().registerProcess(id, info, procRef)` — already the body of this function |
| `unregisterProcess(pid)` | `:70-79` | `getSupervisor().getRegistry().getByPid(pid)` + `getSupervisor().unregisterProcess(record.id)` — already the body |
| `getProcessBySession(sessionDbId)` | `:85-94` | Move to free helper `findSessionProcess(id)` in `src/services/worker/process-spawning.ts`; body iterates `getRegistry().getAll()` + filters by `type==='sdk'` (same as `getTrackedProcesses` helper at `:34-52`) |
| `getActiveCount()` | `:99-101` | Direct: `getSupervisor().getRegistry().getAll().filter(r => r.type==='sdk').length` |
| `waitForSlot(max, timeout, evict)` | `:122-167` | Pool-slot bookkeeping is worker-scoped, **not** a supervisor concern. Keep as free function in `process-spawning.ts`. The `slotWaiters` array (`:104`) stays module-local. |
| `notifySlotAvailable()` (internal) | `:109-112` | Stays module-local in `process-spawning.ts`; called from the `exit` event handler inside `createPidCapturingSpawn`. Under the zero-timer model, `exit` is the sole runtime trigger, so slot notification happens directly from the handler that already owns subprocess-death semantics. No scanner involved. |
| `getActiveProcesses()` | `:172-179` | Free helper in `process-spawning.ts` (still used for stats / debug endpoints). |
| `ensureProcessExit(tracked, timeoutMs=5000)` | `:185-229` | **Inline** into `deleteSession` (SessionManager.ts:406-413) as 12-line block: check `exitCode`, `Promise.race([once('exit'), setTimeout])`, SIGKILL, race again. Per audit item #9 and anti-pattern guard A. |
| `killIdleDaemonChildren()` | `:244-309` | **Delete**. Its runtime role (cleaning up our own idle daemons) is covered by the `child.on('exit')` handler at `ProcessRegistry.ts:479` which already calls `unregisterProcess(pid)`, combined with the per-iterator 3-min idle `setTimeout` at `SessionQueueProcessor.ts:6` that aborts hung generators. Ppid=1 leftovers from a prior worker crash are caught by boot-once `killSystemOrphans()` (see next row). |
| `killSystemOrphans()` | `:315-344` | **Keep function body; move call from interval to boot-once.** Ppid=1 Claude processes can only exist because a *previous* worker crashed without reaping them — during the current worker's lifetime, `exit` handlers catch subprocess death. So one call at worker startup covers the full scope. Called from worker boot init (Phase 3), never scheduled. |
| `reapOrphanedProcesses(activeSessionIds)` | `:349-382` | **Delete**. Runtime component: covered by `exit` handlers. Cross-restart component: covered by boot-once `supervisor.pruneDeadEntries()` which walks the registry and drops entries whose PIDs are no longer in the OS. |
| `createPidCapturingSpawn(sessionDbId)` | `:393-502` | Move verbatim to `process-spawning.ts` as free function. It already wires `child.on('exit')``unregisterProcess(pid)` at `:479-486` — keep that path; it's the sole runtime subprocess-death signal under the zero-timer model. |
| `startOrphanReaper(getActiveSessionIds, intervalMs=30_000)` | `:508-527` | **Delete**; no replacement timer. |
Caller fan-out (every `from '.../ProcessRegistry'` site must be re-pointed):
- `src/services/worker/SessionManager.ts:17` — imports `getProcessBySession, ensureProcessExit`. Rewrite: import from `./process-spawning.js` (findSessionProcess), and inline the exit wait in `deleteSession`.
- `src/services/worker/SDKAgent.ts:24` — imports `createPidCapturingSpawn, getProcessBySession, ensureProcessExit, waitForSlot`. Rewrite: import from `./process-spawning.js`. The `ensureProcessExit` call-site (search inside SDKAgent) goes away when we route through `deleteSession`.
- `src/services/worker-service.ts:109` — imports `startOrphanReaper, reapOrphanedProcesses, getProcessBySession, ensureProcessExit`. After Phase 3, imports shrink to `{ getActiveProcesses }` from `./process-spawning.js`. `startOrphanReaper` + `reapOrphanedProcesses` delete. The `ensureProcessExit` at `worker-service.ts:786` inlines.
### `src/supervisor/process-registry.ts` (408 lines — authoritative, stays as-is)
Relevant API (no changes needed):
- `class ProcessRegistry` at `:175``register`, `unregister`, `getAll`, `getBySession`, `getByPid`, `getRuntimeProcess`, `pruneDeadEntries` (`:269-285`, uses `isPidAlive`), `reapSession(sessionId)` (`:292-385`, implements SIGTERM → wait 5 s → SIGKILL → wait 1 s).
- `isPidAlive(pid)` at `:28-45` — reused directly by boot-once `supervisor.pruneDeadEntries()` (Phase 3 Mechanism C) and by the inlined `killSystemOrphans()` body, both called exactly once per worker boot. Not called by any repeating timer.
- `getSupervisor().getRegistry()` — how worker code reaches this class (verified in worker/ProcessRegistry.ts:39, 71, 353).
### `src/services/worker/worker-service.ts`
- Line `109`: import site that must shrink.
- Line `174`: `private staleSessionReaperInterval: ReturnType<typeof setInterval> | null = null;` — delete field.
- Line `537`: `this.stopOrphanReaper = startOrphanReaper(() => { ... });` — delete outright, no replacement timer. Runtime subprocess death is handled by `child.on('exit')` handlers; cross-restart orphans are handled by boot-once `killSystemOrphans()` + `supervisor.pruneDeadEntries()`.
- Line `547`: `this.staleSessionReaperInterval = setInterval(async () => { ... }, 2*60*1000)`**delete the entire block** (outer wrapper + body). Disposition of the three things it did under the zero-timer model:
- `reapStaleSessions()` → deleted (no replacement timer). Hung-generator case is covered by the per-iterator idle `setTimeout` at `SessionQueueProcessor.ts:6`; no-generator abandonment is covered by the per-session `abandonedTimer` (Phase 3 Mechanism B).
- `clearFailedOlderThan(1h)` → moved to boot-once (Phase 3 Mechanism C step 4, co-owned with plan 02).
- `PRAGMA wal_checkpoint(PASSIVE)` → deleted outright. SQLite's default `wal_autocheckpoint=1000` pages is the contract (confirmed at `Database.ts:162-168` — no override).
- Line `786`: `await ensureProcessExit(trackedProcess, 5000)` — inline.
- Line `1108-1110`: shutdown path clears `staleSessionReaperInterval`. **Delete both shutdown clauses outright** — there is nothing to clear since no `setInterval` remains in the worker layer.
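The boot-once reconciliation (Mechanism C) that absorbs the deleted interval work can be sketched as a single startup call — the four function names come from the plan; their exact signatures and the dependency-injection shape here are assumptions:

```typescript
// Runs exactly once at worker startup, never on a timer. Order matters:
// kill crash orphans first, then drop their registry rows, then repair queue state.
interface BootDeps {
  killSystemOrphans: () => Promise<number>;     // ppid=1 Claude processes from a crashed worker
  pruneDeadEntries: () => number;               // supervisor registry rows whose PIDs are gone
  recoverStuckProcessing: () => number;         // processing -> pending, unscoped by session
  clearFailedOlderThan: (ms: number) => number; // moved here from the 2-min interval
}

async function reconcileOnBoot(deps: BootDeps): Promise<void> {
  await deps.killSystemOrphans();
  deps.pruneDeadEntries();
  deps.recoverStuckProcessing();
  deps.clearFailedOlderThan(60 * 60 * 1000); // 1h, same threshold the interval used
}
```

During the worker's own lifetime, `exit` handlers make every one of these checks redundant; that is why a single boot call covers the full scope.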
### `src/services/worker/SessionManager.ts`
- `MAX_GENERATOR_IDLE_MS = 5*60*1000` at `:23`**delete**. Hung-generator detection is now owned by `SessionQueueProcessor.ts:6` (`IDLE_TIMEOUT_MS = 3*60*1000`) at the stream level. The 5-min worker-layer threshold is redundant with the 3-min per-iterator threshold and the old split created two sources of truth.
- `MAX_SESSION_IDLE_MS = 15*60*1000` at `:26` — keep; now consumed by the per-session `scheduleAbandonedCheck()` method (Phase 3 Mechanism B).
- `detectStaleGenerator(session, proc, now)` at `:59-84`**delete**. Its consumer (`reapStaleSessions`) is being deleted; its logic (compare `lastGeneratorActivity` against a threshold) is superseded by the per-iterator idle `setTimeout` in `SessionQueueProcessor.ts`, which resets on every chunk and fires `onIdleTimeout``abortController.abort()` at the stream level, not from a scanner.
- `deleteSession(sessionDbId)` at `:381-446` — inline `ensureProcessExit` at `:412`; additionally, clear `session.abandonedTimer` at the top of this method if set (per Phase 3 Mechanism B wiring).
- `reapStaleSessions()` at `:516-568`**delete method**, no replacement closure. The two branches:
- Generator-active branch at `:520-549`: replaced by the per-iterator idle `setTimeout` at `SessionQueueProcessor.ts:6` which aborts the controller when the stream is silent ≥3 min. The subprocess's `exit` handler then unregisters.
- No-generator branch at `:550-561`: replaced by the per-session `abandonedTimer` `setTimeout` scheduled on last-generator-completion and cleared on new activity (Phase 3 Mechanism B).
- `queueSummarize(sessionDbId, lastAssistantMessage)` at `:329-377` — unchanged; Phase 7's blocking endpoint calls this first, then awaits.
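The per-session abandonment timer (Mechanism B) referenced above can be sketched as follows — `abandonedTimer` and the 15-min `MAX_SESSION_IDLE_MS` value come from the plan; the helper names `scheduleAbandonedCheck`/`clearAbandonedCheck` are assumptions:

```typescript
const MAX_SESSION_IDLE_MS = 15 * 60 * 1000;

interface TimedSession {
  abandonedTimer?: ReturnType<typeof setTimeout>;
}

// Scheduled on last-generator-completion; any new activity reschedules,
// so the callback only fires for a genuinely abandoned session.
function scheduleAbandonedCheck(
  session: TimedSession,
  onAbandoned: () => void,
  idleMs = MAX_SESSION_IDLE_MS
): void {
  clearAbandonedCheck(session);
  session.abandonedTimer = setTimeout(onAbandoned, idleMs);
  session.abandonedTimer.unref?.(); // an idle check must not hold the worker open
}

// Called on new activity and at the top of deleteSession.
function clearAbandonedCheck(session: TimedSession): void {
  if (session.abandonedTimer) {
    clearTimeout(session.abandonedTimer);
    session.abandonedTimer = undefined;
  }
}
```

In production `onAbandoned` would be `() => deleteSession(sessionDbId)`; the timer is non-repeating, so the zero-timer invariant holds.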
### `src/services/worker/SDKAgent.ts`
- Line `24` imports.
- The iterator pattern uses `session.abortController` (established in `SessionManager.initializeSession`); Phase 7's `/api/session/end` calls `session.abortController.abort()` after awaiting summary_stored. No change to SDKAgent body needed for abort semantics — the AbortSignal flows through the SDK query already (confirmed by SessionManager.ts:390 existing abort path).
### `src/services/sqlite/PendingMessageStore.ts`
- `STALE_PROCESSING_THRESHOLD_MS = 60_000` at `:6` — **delete** in Phase 5; no caller remains once the self-heal moves to boot-once recovery.
- `claimNextMessage(sessionDbId)` at `:99-145` — the transaction body currently does both self-heal (`:103-116`) and claim (`:118-140`). Phase 5: keep the transaction, delete lines `103-116`, add a new public method `recoverStuckProcessing(): number` that runs the same UPDATE **unscoped by session id** once at worker boot.
- No behavior regression: the only functional change is timing. Crashed sessions are recovered on next worker boot (correct crash-recovery semantic), not on every claim call (polling anti-pattern).
### Blocking `POST /api/session/end` (Phase 7) — current state
- Existing endpoints (to consolidate):
- `POST /api/sessions/summarize` at `SessionRoutes.ts:387` → handler `handleSummarizeByClaudeId` → calls `queueSummarize` (`:705`) and returns immediately.
- `POST /api/sessions/complete` at `SessionRoutes.ts:753` → clears active session map.
- `GET /api/sessions/status?contentSessionId=...` — polled by the hook (`src/cli/handlers/summarize.ts:123`); returns `{queueLength, summaryStored}`.
- `session.lastSummaryStored` is already written inside `ResponseProcessor` (see `SessionRoutes.ts:747` where it is read). This is the flag Phase 7 awaits.
- Phase 7 delivers: `POST /api/session/end` — body `{sessionDbId, last_assistant_message}`. Server-side: call `queueSummarize`, then `await` a `Promise` that resolves when `session.lastSummaryStored` flips, with a hard 110 s (`110_000` ms) timeout, then `session.abortController.abort()`, then `deleteSession`. Returns `{summaryId or null}`.
- Hook simplification (in 09-lifecycle-hooks plan) replaces the 220-iteration 500-ms poll loop at `summarize.ts:117-150` with one POST.
---
## Copy-ready snippet locations — event-driven + boot-once + per-session timers (revised 2026-04-22)
No new file. No `reaper.ts`. No `ReaperTick`. Three mechanisms, spread across existing modules:
### Mechanism A — `child.on('exit')` handlers (already wired; verify and keep)
- SDK spawn: `ProcessRegistry.ts:475-486` → moves to `process-spawning.ts:createPidCapturingSpawn` in Phase 2. The `on('exit', ...)` at `:479` must continue to call `unregisterProcess(child.pid)` at `:484`. Do not modify.
- MCP spawn: `worker-service.ts:523-532`. The `once('exit', ...)` at `:530` must continue to call `getSupervisor().unregisterProcess('mcp-server')` at `:531`. Do not modify.
- Per-iterator 3-min idle timeout: `SessionQueueProcessor.ts:6` (`IDLE_TIMEOUT_MS`), resets at `:51-52, :62-63`, fires `onIdleTimeout` at `:93-104` → `SessionManager.ts:651-655` → `session.abortController.abort()` → the abort signal reaches the spawn at `ProcessRegistry.ts:463` → child exits → `exit` handler unregisters. This chain already exists and covers the hung-generator case entirely.
**No code edit** — this mechanism is the verification target, not the change target. Phase 3 verification greps confirm these handlers are still in place after Phase 2's extraction.
### Mechanism B — Per-session abandoned-session `setTimeout` (new, replaces `reapAbandonedSessions`)
Goal: when a session has no generator running and no pending messages for 15 min, delete it. Detected at the session itself rather than by a global scanner.
Add to `SessionManager.ts`:
```ts
// In ActiveSession interface — add:
abandonedTimer?: ReturnType<typeof setTimeout>;
// New private method on SessionManager:
private scheduleAbandonedCheck(sessionDbId: number): void {
  const session = this.sessions.get(sessionDbId);
  if (!session) return;
  if (session.abandonedTimer) clearTimeout(session.abandonedTimer);
  session.abandonedTimer = setTimeout(() => {
    const s = this.sessions.get(sessionDbId);
    if (!s) return;
    if (s.generatorPromise !== null) return; // still working — drop the timer silently
    if (this.pendingStore.getPendingCount(sessionDbId) > 0) {
      this.scheduleAbandonedCheck(sessionDbId); // work arrived while we waited — reschedule
      return;
    }
    void this.deleteSession(sessionDbId); // truly abandoned — clean up
  }, MAX_SESSION_IDLE_MS);
}
// In every code path that marks "work finished" — call scheduleAbandonedCheck
// In every code path that marks "new work arrived" — call clearTimeout(session.abandonedTimer)
```
Call-sites (derived from `SessionManager.ts`):
- Schedule (work finished): after `generatorPromise` resolves at `SessionManager.ts:~335` (`queueSummarize` fire-and-forget completion) and after `iterator` exits at `SessionManager.ts:~648` (the for-await loop exit).
- Clear (new work arrived): at the top of `initializeSession()` when a pending message lands; inside `queueSummarize()`; inside any `ingestObservation` path that sets `lastActivity`.
The timer is per-session, not repeating. When it fires it either deletes the session or reschedules itself if new work snuck in — no drift, no thundering-herd scan.
### Mechanism C — Boot-once reconciliation block (new helper in `worker-service.ts`)
Goal: at worker startup, in ONE sequential block, reconcile all state that event handlers cannot catch (i.e., state that can only have been orphaned by a previous worker instance).
Add to `worker-service.ts` boot init, immediately after `resetStaleProcessingMessages(0)` at `:424`:
```ts
// Boot-once reconciliation — runs exactly ONCE per worker process lifetime.
// Catches state orphaned by a previous (possibly crashed) worker instance.
await this.reconcileWorkerStartup();
// private method:
private async reconcileWorkerStartup(): Promise<void> {
  // 1. Kill ppid=1 Claude processes leftover from a crashed prior worker.
  //    (Copy body of killSystemOrphans from ProcessRegistry.ts:315-344 into
  //    process-spawning.ts as a free helper before Phase 2 deletes the file.)
  await killSystemOrphans();
  // 2. Prune registry entries whose PID is no longer in the OS (crash-recovery).
  getSupervisor().getRegistry().pruneDeadEntries();
  // 3. pending_messages stuck on 'processing' from a crashed worker.
  //    (Moved from per-claim 60-s reset — see Phase 5.)
  this.sessionManager.getPendingMessageStore().recoverStuckProcessing();
  // 4. SQLite housekeeping (moved from the deleted stale-reaper interval).
  //    (Covered by plan 02's boot-once SQLite housekeeping phase — this
  //    plan assumes 02 has landed; if it has not, copy the call here.)
  this.sessionManager.getPendingMessageStore().clearFailedOlderThan(60 * 60 * 1000);
}
```
No `setInterval` anywhere in this block. Each step runs exactly once. Explicit `PRAGMA wal_checkpoint` is **not** in this block because SQLite's default of `wal_autocheckpoint=1000` pages (`Database.ts:162-168` sets no override) is the contract — see plan 02.
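For step 1, the real `killSystemOrphans` body is copied verbatim from `ProcessRegistry.ts:315-344`; the sketch below only illustrates the ppid=1 scan shape it implements. The `ps` invocation, the regex parameter, and the `Sketch` name are assumptions, not project code:

```typescript
import { execFileSync } from "node:child_process";

// Illustrative stand-in for killSystemOrphans: find processes that were
// reparented to init (ppid 1) and whose command line looks like an SDK child.
function killSystemOrphansSketch(match: RegExp): number {
  // POSIX `ps -eo pid=,ppid=,args=` prints one "PID PPID COMMAND" line per process.
  const out = execFileSync("ps", ["-eo", "pid=,ppid=,args="], { encoding: "utf8" });
  let killed = 0;
  for (const line of out.split("\n")) {
    const m = line.trim().match(/^(\d+)\s+(\d+)\s+(.*)$/);
    if (!m) continue;
    const [, pid, ppid, args] = m;
    // Orphaned by a crashed prior worker: parent is init AND the command matches.
    if (ppid === "1" && match.test(args)) {
      try { process.kill(Number(pid), "SIGKILL"); killed++; } catch { /* raced its own exit */ }
    }
  }
  return killed;
}
```

The ppid=1 test is what makes this safe to run once at boot: a live worker's children still have the worker as parent, so only crash leftovers match.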
### What's deleted outright (no replacement)
- `src/services/worker/reaper.ts` (never created in this revision).
- `startReaperTick` export (never created).
- `staleSessionReaperInterval` (`worker-service.ts:174, :547`).
- `startOrphanReaper` (`ProcessRegistry.ts:508-527`, `worker-service.ts:537-544`).
- `reapStaleSessions` (`SessionManager.ts:516-568`).
- `reapOrphanedProcesses` (`ProcessRegistry.ts:349-382`).
- `killIdleDaemonChildren` as a runtime sweep (`ProcessRegistry.ts:244-309`) — function deleted entirely; its role is already covered by `exit` handlers + per-iterator idle timeout.
- Periodic `PRAGMA wal_checkpoint(PASSIVE)` call at `worker-service.ts:~581` — SQLite default covers it.
- Periodic `clearFailedOlderThan(1h)` call at `worker-service.ts:~567` — moved to boot-once (Mechanism C step 4).
---
## Phases
Every phase must satisfy: (a) a precise "Copy from …" pointer, (b) doc citations, (c) verification, (d) anti-pattern guards (A: no invented supervisor API; B: no polling; D: no facade-over-facade).
### Phase 1 — Introduce ingest helpers (`ingestObservation` / `ingestPrompt` / `ingestSummary`)
(a) **Implement**:
- Create `src/services/ingest/index.ts` (new module). Three exports:
- `ingestObservation(payload: ObservationPayload): { id: number; skipped: boolean }`
- `ingestPrompt(payload: PromptPayload): { id: number; skipped: boolean }`
- `ingestSummary(payload: SummaryPayload): { id: number; skipped: boolean }`
- Each helper: `stripMemoryTags` all user-facing text fields → `PrivacyCheckValidator.validate(operationType)` (existing at `src/services/worker/validation/PrivacyCheckValidator.ts:17-24`) → `INSERT pending_messages` via `PendingMessageStore.enqueue`.
- Copy from: current HTTP-boundary strip + validate + enqueue sequence in `SessionRoutes.ts:696-705` (summarize branch) and the observation-queue path in `SessionManager.ts:276`. Consolidate.
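The strip → validate → enqueue shape can be sketched as follows. This is a hedged illustration, not the real module: only the `{ id, skipped }` return shape and the `<private>` stripping come from this plan; `ObservationPayload`, the injected `validate`, and the `Enqueuer` interface are stand-ins for the real `PrivacyCheckValidator.validate` and `PendingMessageStore.enqueue`:

```typescript
// Hypothetical payload shape — the real one carries more fields.
type ObservationPayload = { sessionDbId: number; text: string };

// Remove <private>…</private> spans before anything reaches the DB (closes P1).
function stripMemoryTags(text: string): string {
  return text.replace(/<private>[\s\S]*?<\/private>/g, "").trim();
}

interface Enqueuer {
  enqueue(sessionDbId: number, text: string): number; // returns pending_messages row id
}

function ingestObservation(
  payload: ObservationPayload,
  validate: (operationType: string) => boolean, // stand-in for PrivacyCheckValidator.validate
  store: Enqueuer, // stand-in for PendingMessageStore
): { id: number; skipped: boolean } {
  const clean = stripMemoryTags(payload.text); // 1. strip at the edge
  if (!validate("observation")) return { id: -1, skipped: true }; // 2. validate
  return { id: store.enqueue(payload.sessionDbId, clean), skipped: false }; // 3. enqueue
}
```

`ingestPrompt` and `ingestSummary` follow the same three-step pipeline with their own payload shapes; keeping them as three functions (guard A below) keeps each failure mode explicit.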
(b) **Docs**:
- 05 § 3.8 — "`POST /api/session/observation` → `ingestObservation(payload) strip → validate → INSERT pending_messages` → emit 'message' event"
- 05 Part 2 D1 ("One observation ingest path")
- 05 § 3.2 call-site list (`C1` ingestObservation, `C2` ingestPrompt, `C3` ingestSummary — **C3 closes the summary privacy gap**)
- 06 cites `src/services/worker/validation/PrivacyCheckValidator.ts:17-24`
- Live: `src/services/worker/http/routes/SessionRoutes.ts:696-705`, `src/services/worker/SessionManager.ts:276`
(c) **Verification**:
- Grep `stripMemoryTags` usage: exactly 3 call-sites (one per helper) + unit test imports.
- Unit test: `ingestSummary({ last_assistant_message: "<private>secret</private> clean text" })` → DB row's `last_assistant_message` field does not contain "secret" (closes P1).
- `POST /api/sessions/summarize` call-path routes through `ingestSummary` (no direct strip call in `SessionRoutes.ts` anymore).
(d) **Guards**:
- A: do **not** add a fourth "`ingestAny(type, payload)`" dispatcher; the three shapes have different required fields and privacy rules. Separate functions → explicit failure modes.
- D: do **not** keep the old HTTP-boundary strip calls as a "belt-and-suspenders" second pass. Edge-processing only.
### Phase 2 — Delete `src/services/worker/ProcessRegistry.ts`; extract spawn helpers
(a) **Implement**:
- Create `src/services/worker/process-spawning.ts`:
- `createPidCapturingSpawn(sessionDbId)` — copy verbatim from `ProcessRegistry.ts:393-502`.
- `findSessionProcess(sessionDbId): TrackedProcess | undefined` — copy from `ProcessRegistry.ts:85-94` (`getProcessBySession` renamed for clarity).
- `getActiveProcesses()` — copy from `:172-179`.
- `getActiveProcessCount()` — copy from `:99-101`.
- `waitForSlot(max, timeoutMs, evict)` + `notifySlotAvailable()` + `slotWaiters` array + `TOTAL_PROCESS_HARD_CAP` — copy from `:104-167`.
- `TrackedProcess` interface — copy from `:27-32`.
- Inline helper `getTrackedProcesses()` — copy from `:34-52`.
- Rewire imports in:
- `SessionManager.ts:17` → `{ findSessionProcess }` from `./process-spawning.js`.
- `SDKAgent.ts:24` → `{ createPidCapturingSpawn, findSessionProcess, waitForSlot }`.
- `worker-service.ts:109` → `{ getActiveProcesses }`.
- Delete `src/services/worker/ProcessRegistry.ts`.
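The slot-waiter trio being moved (`waitForSlot` / `notifySlotAvailable` / `slotWaiters`) implements a resolver-queue pattern. The sketch below shows that pattern in miniature; the names mirror the plan but the bodies are illustrative — the real ones are copied verbatim from `ProcessRegistry.ts:104-167`:

```typescript
// Module-level pool state, as in the extracted process-spawning.ts.
let activeCount = 0;
const slotWaiters: Array<() => void> = [];

async function waitForSlot(max: number, timeoutMs: number): Promise<boolean> {
  if (activeCount < max) {
    activeCount++; // fast path: a slot is free right now
    return true;
  }
  return new Promise<boolean>(resolve => {
    let timer: ReturnType<typeof setTimeout>;
    const wake = () => {
      clearTimeout(timer);
      activeCount++;
      resolve(true);
    };
    timer = setTimeout(() => {
      slotWaiters.splice(slotWaiters.indexOf(wake), 1); // give up our place in line
      resolve(false);
    }, timeoutMs);
    slotWaiters.push(wake);
  });
}

// Called from the child's 'exit' handler — event-driven bookkeeping, no scanner.
function notifySlotAvailable(): void {
  activeCount = Math.max(0, activeCount - 1);
  const next = slotWaiters.shift();
  if (next) next(); // hand the freed slot to the oldest waiter
}
```

This is why Mechanism A insists `notifySlotAvailable()` stays inside the `exit` handler: the pool balances itself on process death without any sweep.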
(b) **Docs**:
- 05 § 3.8 "Deleted: `src/services/worker/ProcessRegistry.ts` (facade, 528 lines) — supervisor registry is source of truth"
- 05 Part 1 item #4
- 06 Phase 5 "Delete worker ProcessRegistry facade" (Phase 5 :246-280)
- V5, V6
- Live: `ProcessRegistry.ts:1-527`, `worker-service.ts:109, 537, 786`, `SessionManager.ts:17, 412`, `SDKAgent.ts:24`
(c) **Verification**:
- `test -f src/services/worker/ProcessRegistry.ts` → false.
- `grep -rn "worker/ProcessRegistry" src/` → 0.
- `npx tsc --noEmit` clean.
- Manual: spawn SDK subprocess, kill with `kill -TERM <pid>`; subprocess exits; the `exit` handler unregisters it from the supervisor registry, and `pruneDeadEntries()` catches any crash-leftover entry at the next worker boot (Phase 3 verifies the prune).
(d) **Guards**:
- D: no compat shim re-exporting deleted symbols.
- A: do **not** invent new methods on `supervisor/process-registry.ts` — use its existing public API (`register`, `unregister`, `getByPid`, `getBySession`, `getAll`, `pruneDeadEntries`, `reapSession`, `getRuntimeProcess`).
### Phase 3 — Wire event-driven cleanup + boot-once reconciliation + per-session abandoned-session timer (revised 2026-04-22)
**Previously proposed:** build a new `reaper.ts` module exporting a `ReaperTick` with three skippable checks on a 30-s interval; additionally introduce a dedicated `sqliteHousekeepingInterval` for `clearFailedOlderThan` + `wal_checkpoint`. Both were rejected as band-aids by investigation 2026-04-22 — see `08-reconciliation.md` Part 4 revision. This phase is now a **three-part change with zero new `setInterval`s.**
(a) **Implement — Part 1 (Mechanism A: verify existing event handlers survive Phase 2's extraction)**:
After Phase 2 moved `createPidCapturingSpawn` from `ProcessRegistry.ts:393-502` to `process-spawning.ts`, verify the subprocess `exit` handler still:
- At `ProcessRegistry.ts:479` (now `process-spawning.ts` in its new location): `child.on('exit', ...)` is present.
- Calls `unregisterProcess(child.pid)` (line `:484` relative) on exit.
- Also calls `notifySlotAvailable()` inside the same handler (keeps pool bookkeeping correct without a scanner).
No code change beyond what Phase 2 already did — the handler was already correct; this phase is where it *becomes load-bearing* because the sweeper it was backing up is being deleted.
(a) **Implement — Part 2 (Mechanism B: per-session abandoned-session `setTimeout`)**:
In `SessionManager.ts`:
1. Add `abandonedTimer?: ReturnType<typeof setTimeout>` to `ActiveSession` interface.
2. Add private `scheduleAbandonedCheck(sessionDbId: number): void` per the Copy-ready snippet section (Mechanism B). Threshold: `MAX_SESSION_IDLE_MS = 15*60*1000` (re-home from the module-level const at `:26` to a `thresholds` object — or leave in place and import into the method).
3. Wire schedule-on-idle call-sites:
- Inside `queueSummarize()` fire-and-forget completion handler (around `:335` — the `.finally` branch on the generator promise): `this.scheduleAbandonedCheck(sessionDbId)`.
- Inside the for-await iterator exit in `getMessageIterator()` consumer (around `:648`): `this.scheduleAbandonedCheck(sessionDbId)`.
4. Wire clear-on-activity call-sites:
- Top of `initializeSession()`: if `sessions.has(id)` and `session.abandonedTimer`, `clearTimeout(session.abandonedTimer)` + `session.abandonedTimer = undefined`.
- Inside `queueSummarize()` at entry: same clear.
- Inside observation enqueue path (wherever `ingestObservation` bumps `lastActivity`): same clear.
5. Inside `deleteSession()`: `if (session.abandonedTimer) clearTimeout(session.abandonedTimer)`. (Prevents firing after deletion.)
(a) **Implement — Part 3 (Mechanism C: boot-once reconciliation in `worker-service.ts`)**:
In `worker-service.ts`, replace the deleted blocks at lines `537-544` (`startOrphanReaper`) and `547-589` (stale reaper + WAL + failed-purge) with the boot-once call per the Copy-ready snippet section (Mechanism C). Insertion point: immediately after the existing `resetStaleProcessingMessages(0)` at `:424`.
Move the body of `killSystemOrphans` out of the doomed `ProcessRegistry.ts` **before** Phase 2 deletes that file. Two options:
- Land Phase 3 before Phase 2 and keep a direct import until Phase 2 runs; then move the function along with `createPidCapturingSpawn` into `process-spawning.ts` and re-export. (Chosen — preserves Phase ordering.)
- Copy the body inline into `worker-service.ts` boot helper. (Fallback if circular-import issues arise.)
`supervisor.getRegistry().pruneDeadEntries()` is used directly — no new method on the supervisor, per anti-pattern guard A.
(b) **Docs**:
- 05 § 3.8 revised subgraph "Event-driven cleanup — no repeating timers" and "Worker startup — boot-once reconciliation".
- 05 Part 2 **D3** ("Zero repeating background timers").
- 05 Part 4 timer census ("Repeating background timers: 3 → 0") — revision 2026-04-22.
- 08-reconciliation.md Part 4 (revised) — zero-timer model rationale + invariants.
- V6 (register ownership), V19 (stale-reset relocation to boot-once).
- Live: `ProcessRegistry.ts:315-344, 475-486, 479-484`, `worker-service.ts:421-427, 523-532, 537-589`, `SessionManager.ts:26, 59-84, 516-568, 648-656, 651-655`, `SessionQueueProcessor.ts:6, 51-52, 62-63, 93-104`, `supervisor/process-registry.ts` (pruneDeadEntries).
(c) **Verification**:
- **Zero `setInterval` in the worker layer**:
```
grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts
```
Expected: **0** matches. No exclusions, no parenthetical carve-outs.
- **Zero references to the deleted sweeper names**:
```
grep -rn "ReaperTick\|startReaperTick\|startOrphanReaper\|staleSessionReaperInterval\|reapStaleSessions\|reapOrphanedProcesses\|killIdleDaemonChildren\|sqliteHousekeepingInterval" src/
```
Expected: **0**.
- **`killSystemOrphans` is called exactly once per worker boot**:
```
grep -rn "killSystemOrphans" src/
```
Expected: 2 matches — the definition and a single call site inside the boot-once helper. No call site inside any handler or interval.
- **Abandoned-session timer**:
- Unit test: initialize a session, fire-and-forget resolve its generator, advance a fake clock 15 min — assert `deleteSession` was called exactly once.
- Unit test: initialize a session, let it go idle for 14 min, then enqueue an observation — assert `abandonedTimer` was cleared and nothing was deleted.
- Unit test: initialize a session, idle 15 min, timer fires, but `pendingStore.getPendingCount()` returns > 0 at the moment of firing — assert timer reschedules and no delete occurs.
- **Hung-generator path**:
- Integration test: spawn an SDK session, freeze its stream (SIGSTOP the subprocess); after 3 min the per-iterator idle timeout at `SessionQueueProcessor.ts` fires, `abortController.abort()` fires, the child exits, the `exit` handler unregisters. No background scanner involved.
- **Boot-once reconciliation**:
- Integration test: before starting the worker, spawn a detached Claude subprocess whose ppid is `1` (simulate a crashed prior worker). Boot the worker. Within 1 s of boot completion, that process is SIGKILLed. Registry is clean.
- Integration test: seed `pending_messages` with a row in `status='processing'` from a prior (fake-crashed) worker; boot; assert the row is reset to `status='pending'` within 1 s.
- **Subprocess crash-recovery during runtime**:
- Integration test: while the worker is running, `kill -9` an active SDK subprocess. Within 500 ms the `exit` handler fires, `unregisterProcess` is called, pool slot is released. No timer involved.
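The three abandoned-timer unit tests above hinge on a controllable clock. A minimal fake-clock harness showing the advance/clear shape — `FakeClock` and the inline callbacks are stand-ins; the real suite would drive `SessionManager` through the project's test runner:

```typescript
type FakeTimer = { cb: () => void; at: number };

// Deterministic replacement for setTimeout/clearTimeout in unit tests.
class FakeClock {
  private now = 0;
  private timers: FakeTimer[] = [];
  setTimeout(cb: () => void, ms: number): FakeTimer {
    const t = { cb, at: this.now + ms };
    this.timers.push(t);
    return t;
  }
  clearTimeout(t: FakeTimer): void {
    this.timers = this.timers.filter(x => x !== t);
  }
  advance(ms: number): void {
    this.now += ms;
    const due = this.timers.filter(t => t.at <= this.now);
    this.timers = this.timers.filter(t => t.at > this.now);
    for (const t of due) t.cb(); // fire everything that came due
  }
}

const MIN = 60 * 1000;
const clock = new FakeClock();
let deletions = 0; // stand-in for "deleteSession was called"

// Test 2's shape: idle 14 min, then activity clears the timer — no delete.
let timer = clock.setTimeout(() => deletions++, 15 * MIN);
clock.advance(14 * MIN);
clock.clearTimeout(timer); // observation arrived
clock.advance(2 * MIN);

// Test 1's shape: truly idle for the full 15 min — delete fires exactly once.
timer = clock.setTimeout(() => deletions++, 15 * MIN);
clock.advance(15 * MIN);
```

Injecting the clock (or using the runner's fake timers) keeps the 15-min threshold testable in milliseconds.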
(d) **Guards**:
- **B (no polling, no new interval)**: the definitive grep. `grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts` must return **0**. Any hit is a regression — the fix is to either remove the call or convert it to an event-driven / per-session pattern.
- **A (no invented supervisor API)**: `pruneDeadEntries`, `getByPid`, `getBySession`, `getAll`, `reapSession`, `getRuntimeProcess`, `unregisterProcess`, `registerProcess` are the full public surface — any other method name in a diff is an invented API and must be reverted.
- **D (no facade-over-facade)**: the per-session abandoned-session timer lives on `ActiveSession` as a field — no new `AbandonedSessionManager` class, no `SessionTimeoutScheduler` abstraction. If a second per-session timer needs to be added later, *then* extract.
- **E (one code path per concern)**: the only subprocess-death signal at runtime is `child.on('exit')`. Do not add a second redundant signal (no `pid-alive` poller, no "heartbeat check").
### Phase 4 — Delete `staleSessionReaperInterval` + `startOrphanReaper` + periodic SQLite housekeeping (revised 2026-04-22)
(a) **Implement**:
- Delete `src/services/worker/worker-service.ts:174` field declaration (`private staleSessionReaperInterval`).
- Delete `worker-service.ts:537-544` (startOrphanReaper call + `this.stopOrphanReaper` wiring).
- Delete `worker-service.ts:547-589` (entire stale-reaper block, including its embedded `clearFailedOlderThan` and `PRAGMA wal_checkpoint(PASSIVE)` calls). **Do not** create a new `setInterval` in their place. `clearFailedOlderThan` has moved to boot-once (Phase 3 Mechanism C step 4, co-owned with plan 02). `wal_checkpoint` is deleted outright — SQLite's default of `wal_autocheckpoint=1000` pages covers it (`Database.ts:162-168` sets no override; the default is active).
- Delete shutdown clauses at `worker-service.ts:1108-1110` (both `clearInterval(this.staleSessionReaperInterval)` and `this.stopOrphanReaper?.()`). The boot-once block has nothing to clear on shutdown.
- Delete `startOrphanReaper` export from `ProcessRegistry.ts` (already removed by Phase 2's file deletion).
- Delete `SessionManager.reapStaleSessions()` method entirely (`SessionManager.ts:516-568`). No stub; no replacement — both of its branches are covered by the per-iterator idle timeout (hung-generator branch) and the per-session abandoned-session timer from Phase 3 (no-generator branch).
- Keep module-level `MAX_SESSION_IDLE_MS` in `SessionManager.ts:26` — it is now consumed by `scheduleAbandonedCheck()` (Phase 3 Mechanism B). Delete `MAX_GENERATOR_IDLE_MS` at `:23` along with its only consumer, `detectStaleGenerator` — the per-iterator `IDLE_TIMEOUT_MS` owns hung-generator detection now (see the symbol inventory above).
(b) **Docs**:
- 05 § 3.8 Deleted list (`staleSessionReaperInterval`, `startOrphanReaper`, `reapStaleSessions`, periodic `clearFailedOlderThan`, periodic `wal_checkpoint`).
- 05 Part 1 items #5, #6, #7.
- 05 Part 4 timer census (revised 2026-04-22 — 3 → 0).
- 05 Part 2 **D3** (zero repeating background timers).
- 08-reconciliation.md Part 4 revised + C7 revised (no `sqliteHousekeepingInterval`).
- V6.
- Live: `worker-service.ts:174, 537, 547-589, 1108`, `SessionManager.ts:516-568`, `Database.ts:162-168` (auto-checkpoint confirmation).
(c) **Verification**:
- `grep -rn "staleSessionReaperInterval\|startOrphanReaper\|reapStaleSessions\|sqliteHousekeepingInterval" src/` → **0** (tests included).
- `grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts` → **0**. No carve-outs, no exclusions. If any match appears, the fix is to delete or convert to event-driven, never to add an exclusion comment.
- `grep -rn "wal_checkpoint" src/` → 0 matches in `worker-service.ts`. (The `PRAGMA wal_autocheckpoint` read at boot for observability is fine if introduced by plan 02.)
- `grep -rn "clearFailedOlderThan" src/` → 2 matches: the definition in `PendingMessageStore.ts` and a single call site inside the boot-once reconciliation block.
(d) **Guards**:
- D: no "deprecated stub" left behind for `reapStaleSessions`; no shim for `startOrphanReaper`; no renamed variant of `sqliteHousekeepingInterval`.
- B: no `setInterval` added anywhere in the worker layer — the grep above is the canonical check.
### Phase 5 — Move `PendingMessageStore` 60-s reset to one-shot boot recovery
(a) **Implement**:
- In `src/services/sqlite/PendingMessageStore.ts`:
- Delete lines `103-116` (self-heal UPDATE inside `claimNextMessage` transaction).
- Add a new public method:
```ts
recoverStuckProcessing(): number {
  const stmt = this.db.prepare(`
    UPDATE pending_messages
    SET status = 'pending', started_processing_at_epoch = NULL
    WHERE status = 'processing'
  `);
  const result = stmt.run();
  if (result.changes > 0) {
    logger.info('QUEUE', `BOOT_RECOVERY | recovered ${result.changes} stuck processing message(s)`);
  }
  return result.changes;
}
```
- Note the one-shot version is **unscoped by session** and **unscoped by threshold** — on boot, any `processing` row is by definition stuck (worker was not running a moment ago), so the 60-s guard is not needed. This is cleaner than copying the threshold logic.
- Delete `STALE_PROCESSING_THRESHOLD_MS` constant (line 6) — no remaining caller.
- In `src/services/worker-service.ts`, call `pendingStore.recoverStuckProcessing()` once during boot as part of the boot-once reconciliation block (Phase 3 Mechanism C step 3), after DB initialization. (Co-owned with 02-sqlite-persistence; that plan may also call it — this plan guarantees the call exists.)
(b) **Docs**:
- 05 § 3.3 bottom box "BootOnce → Recover" (authoritative).
- 05 Part 1 item #16.
- 05 § 3.8 bottom "Worker startup → UPDATE pending_messages status processing → pending".
- 06 Phase 6 task 3.
- V19.
- Live: `src/services/sqlite/PendingMessageStore.ts:6, 99-145`.
(c) **Verification**:
- `grep -n "STALE_PROCESSING_THRESHOLD_MS" src/` → 0.
- Integration test: insert `pending_messages` row with `status='processing', started_processing_at_epoch=now-2*3600*1000`; start worker; assert row flips to `pending` before first `claimNextMessage` is called.
- Unit test: `claimNextMessage` is now a pure SELECT+UPDATE transaction; passing a row with `started_processing_at_epoch=now-10000` (stale by old threshold) is **not** reset — confirms boot-only recovery.
(d) **Guards**:
- B: `claimNextMessage` no longer mutates on read path.
- A: `recoverStuckProcessing` is a method on `PendingMessageStore`, not a new table / migration.
### Phase 6 — Inline SIGTERM → wait 5 s → SIGKILL
(a) **Implement**:
- In `SessionManager.deleteSession` (`:381-446`), replace the call at `:412` (`await ensureProcessExit(tracked, 5000)`) with the inlined ladder. 12-line block:
```ts
if (tracked.process.exitCode !== null) {
  // already exited
} else {
  try { tracked.process.kill('SIGTERM'); } catch { /* already dead */ }
  const exited = new Promise<void>(resolve => tracked.process.once('exit', () => resolve()));
  const timed = new Promise<void>(resolve => setTimeout(resolve, 5000));
  await Promise.race([exited, timed]);
  if (tracked.process.exitCode === null) {
    try { tracked.process.kill('SIGKILL'); } catch { /* dead */ }
    const killed = new Promise<void>(resolve => tracked.process.once('exit', () => resolve()));
    const killTimed = new Promise<void>(resolve => setTimeout(resolve, 1000));
    await Promise.race([killed, killTimed]);
  }
}
// unregister via supervisor
for (const rec of getSupervisor().getRegistry().getByPid(tracked.pid)) {
  if (rec.type === 'sdk') getSupervisor().unregisterProcess(rec.id);
}
notifySlotAvailable();
```
- Do the same inline at `worker-service.ts:786` (other call-site).
- Delete `ensureProcessExit` (already removed with `ProcessRegistry.ts` in Phase 2; this phase also removes its re-export if any temporary shim existed).
(b) **Docs**:
- 05 Part 1 item #9 ("Keep SIGTERM → SIGKILL, delete the ladder framework — inline it").
- 05 § 3.8 Deleted list.
- 06 Phase 5 task 1 ("`ensureProcessExit` → keep as free function... Remove the ladder-framework packaging").
- Live: `ProcessRegistry.ts:185-229`, `SessionManager.ts:412`, `worker-service.ts:786`.
(c) **Verification**:
- `grep -n "ensureProcessExit" src/` → 0.
- Manual: spawn subprocess that ignores SIGTERM (`trap '' TERM; sleep 60`); call `deleteSession`; observe SIGKILL 5 s after the abort.
(d) **Guards**:
- A: no new `EscalationLadder` class, no `ProcessControl` wrapper.
### Phase 7 — Blocking `POST /api/session/end`
(a) **Implement**:
- Add new route in `src/services/worker/http/routes/SessionRoutes.ts`:
```ts
app.post('/api/session/end', this.handleSessionEnd.bind(this));
```
- Handler body (copy and simplify from `handleSummarizeByClaudeId` at `:663-720` + the hook-side wait at `summarize.ts:117-150`):
1. Resolve `session = sessionManager.getSession(sessionDbId)`; if missing, try to init from DB (same pattern `queueSummarize` uses at `SessionManager.ts:332-334`).
2. `sessionManager.queueSummarize(sessionDbId, last_assistant_message)`. Also call `ensureGeneratorRunning(sessionDbId, 'summarize')` (same helper used at `SessionRoutes.ts:500, 708`).
3. Await `session.lastSummaryStored` flag flipping (currently written by `ResponseProcessor` — see 03-response-parsing-storage). Implementation: expose an `awaitSummary(sessionDbId, timeoutMs)` helper on `SessionManager` that returns a `Promise<{ summaryId: number | null; timedOut: boolean }>`. Internally: subscribe to the existing `sessionQueues` EventEmitter for a `summary-stored` event, OR fall back to polling `session.lastSummaryStored` once per 200 ms. *Recommendation: add a `session.summaryStoredEvent = new EventEmitter()` field and have `ResponseProcessor` emit `'stored'` with the summary id; `awaitSummary` uses `events.once(emitter, 'stored')` raced against `setTimeout(110_000)`.*
4. After the promise resolves (or times out): `session.abortController.abort()`. Wait briefly (≤1 s) for generator, then `sessionManager.deleteSession(sessionDbId)` (which runs the inline SIGTERM→SIGKILL from Phase 6 + supervisor `reapSession`).
5. **(Preflight edit 2026-04-22 — reconciliation B2)** Return `{ summaryId, timedOut }` with **HTTP 200 on both success and timeout**. Do NOT return 504 on timeout — that status was rejected in reconciliation. Windows Terminal closes tabs only when the hook exits with code 0; hook 09 Phase 3 maps HTTP 200 → exit 0 unconditionally. If the endpoint returns any non-200, the hook must fall through to exit 1 which accumulates Windows Terminal tabs per CLAUDE.md. Contract: timeout path response is `{ summaryId: null, timedOut: true }` with status 200; success path is `{ summaryId: <number>, timedOut: false }` with status 200. Only programmer errors (400 invalid body, 404 missing session) use non-200.
6. **(Preflight edit 2026-04-22 — reconciliation C6)** Initialize `session.summaryStoredEvent = new EventEmitter()` when an `ActiveSession` is created in `SessionManager` (likely the `initializeSession` method). The emitter is consumed by `awaitSummary` above and produced by `ResponseProcessor` per plan 03 Phase 2 step 5. Field addition on `ActiveSession` shape: `summaryStoredEvent?: EventEmitter`. Use `events.once(session.summaryStoredEvent, 'stored')` raced against `setTimeout(110_000)` inside `awaitSummary`.
- Delete after hook 09 lands: `POST /api/sessions/complete` (`:753`) and `GET /api/sessions/status` consumers in hooks (the hook-side poll loop at `summarize.ts:117-150`). Keep the status endpoint for the viewer UI short-term.
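The `awaitSummary` helper recommended in step 3 can be sketched as below — `events.once` on the per-session `summaryStoredEvent`, raced against the 110 s budget via an `AbortSignal`. The `'stored'` event name and the emitter follow the recommendation above; the rest is illustrative, not the final `SessionManager` method:

```typescript
import { EventEmitter, once } from "node:events";

async function awaitSummary(
  summaryStoredEvent: EventEmitter,
  timeoutMs = 110_000,
): Promise<{ summaryId: number | null; timedOut: boolean }> {
  const ac = new AbortController();
  const timer = setTimeout(() => ac.abort(), timeoutMs);
  try {
    // events.once resolves with the emit arguments; the abort signal rejects it.
    const [summaryId] = (await once(summaryStoredEvent, "stored", {
      signal: ac.signal,
    })) as [number];
    return { summaryId, timedOut: false };
  } catch {
    return { summaryId: null, timedOut: true }; // budget exhausted — still HTTP 200 per B2
  } finally {
    clearTimeout(timer);
  }
}
```

Both outcomes resolve (never reject), which keeps the handler's "HTTP 200 on success and timeout" contract a straight-line mapping.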
(b) **Docs**:
- 05 § 3.8 `End → queueSummarize → await summary_stored OR 110s → abortController.abort → delete` (authoritative).
- 05 § 3.1 (STOP box: "BLOCKS until summary written or 110s timeout").
- 05 Part 1 item #11 ("`/api/sessions/summarize` blocks until done... Hook waits on one call").
- 05 Part 2 D6.
- Live: `src/cli/handlers/summarize.ts:25, 89, 117-150`, `src/services/worker/http/routes/SessionRoutes.ts:379-720, 747-753`, `src/services/worker/SessionManager.ts:329-377`, `src/services/worker/agents/ObservationBroadcaster.ts:43-55`.
(c) **Verification**:
- Hook-less integration test: POST `/api/session/end` with a valid sessionDbId that has queued work; response arrives only after the summary row exists in `session_summaries`; **HTTP 200** with `{ summaryId: <number>, timedOut: false }`; total latency <5 s in happy path.
- Timeout test: POST with a session whose SDK is hung; response at 110 s with **HTTP 200** and `{ summaryId: null, timedOut: true }`; subprocess is killed (verify PID gone from registry). Assert status code is 200, not 504 — this is a Windows Terminal contract gate (preflight edit B2).
- Hook 09 plan's verification runs one POST (no 500-ms loop) and asserts hook exit 0 on both the success and timeout paths.
(d) **Guards**:
- B: no 500-ms polling loop in the server handler either — use the event emitter or single 200-ms fall-back.
- D: do not keep `/api/sessions/complete` as a "safety net" — one endpoint owns session termination.
- A: do not extend `SessionRoutes` with a seventh summary endpoint; route-count goal is shrink, not grow.
### Phase 8 — Verification
(a) **Run**:
- `grep -rn "setInterval" src/services/worker/ src/services/worker-service.ts` → **0** matches. No repeating intervals in the worker layer at all.
- `wc -l src/services/worker/ProcessRegistry.ts 2>/dev/null || echo DELETED` → DELETED.
- `wc -l src/services/worker/process-spawning.ts` → ~150 LoC (contains `createPidCapturingSpawn`, `findSessionProcess`, `getActiveProcesses`, `waitForSlot`, `notifySlotAvailable`, `killSystemOrphans` as free helpers). No `reaper.ts` exists.
- Session-lifecycle total: `SessionManager.ts` (~570 after deleting `reapStaleSessions` + `detectStaleGenerator` + `MAX_GENERATOR_IDLE_MS`, adding `scheduleAbandonedCheck` + `abandonedTimer` wiring) + `process-spawning.ts` (~150) + worker-service boot-once block (~40 added, ~55 removed from the deleted stale-reaper block) + `supervisor/process-registry.ts` (unchanged 408) ≈ **~450 LoC reduction** from today's ~900 in worker-layer lifecycle code.
(b) **Regression suite**:
- Subprocess crash recovery: kill SDK subprocess → within ~500 ms the `child.on('exit')` handler fires at `process-spawning.ts` (copied from `ProcessRegistry.ts:479`) and calls `unregisterProcess(pid)`. No scanner involved.
- Hung-generator kill: SDK subprocess frozen (SIGSTOP) → after 3 min of stream silence the per-iterator idle `setTimeout` at `SessionQueueProcessor.ts:6` fires `onIdleTimeout` → `SessionManager.ts:651-655` → `abortController.abort()` → child exits → `exit` handler unregisters. No scanner involved.
- Abandoned-session cleanup: session with no generator and no pending for 15 min → the per-session `abandonedTimer` (scheduled on last-generator-completion) fires, calls `deleteSession(id)`. If new work arrived first, the timer was cleared on activity. No scanner involved.
- Cross-restart orphans: ppid=1 Claude processes from a previously crashed worker are cleaned up exactly once, at the next worker's boot, by `killSystemOrphans()` in the boot-once reconciliation block. No repeating sweep.
- PID reuse: supervisor `isPidAlive` + `verifyPidFileOwnership` (already at `supervisor/process-registry.ts:28-172`) catches PID reuse — no behavior change.
- Privacy gap closed: end-to-end test with `<private>` tag in `last_assistant_message` — not persisted to `session_summaries`.
- Blocking `/api/session/end`: one request, ≤110 s, returns summary id or null.
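The abandoned-session case above can be sketched in a few lines. This is a minimal sketch of Mechanism B's per-session timer, assuming the plan's `scheduleAbandonedCheck` / `abandonedTimer` names; none of these identifiers are live code.

```ts
// Sketch of Mechanism B (per-session abandonedTimer), assuming the plan's
// names: schedule on last-generator-completion, clear on any new activity.
const ABANDONED_AFTER_MS = 15 * 60_000;

interface SessionLike {
  abandonedTimer?: ReturnType<typeof setTimeout>;
}

function scheduleAbandonedCheck(
  session: SessionLike,
  sessionId: number,
  deleteSession: (id: number) => void,
  afterMs = ABANDONED_AFTER_MS,
): void {
  clearTimeout(session.abandonedTimer);
  session.abandonedTimer = setTimeout(() => deleteSession(sessionId), afterMs);
  // An unref'd timer never keeps the worker process alive on its own.
  (session.abandonedTimer as { unref?: () => void }).unref?.();
}

function onSessionActivity(session: SessionLike): void {
  // New work arrived first: cancel the pending cleanup, matching the
  // "timer was cleared on activity" behavior in the regression case above.
  clearTimeout(session.abandonedTimer);
  session.abandonedTimer = undefined;
}
```

No scanner, no interval: one `setTimeout` per idle session, cleared on activity.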
(c) **Doc-driven coverage check**: every item in 05 § 3.8 "Deleted" list corresponds to a Phase and a grep-based verification.
(d) **Guards audit**: no new timers, no new classes over 5 LoC, no supervisor-registry surface extension.
---
## Confidence + gaps
### High confidence
- Worker-layer `ProcessRegistry.ts` (527 LoC) is a pure facade over `supervisor/process-registry.ts`: every method body I audited (`:34-52`, `:57-65`, `:70-79`, `:85-94`, `:99-101`, `:349-382`) already delegates via `getSupervisor().getRegistry()`. Deletion is mechanical.
- `reapStaleSessions` (SessionManager.ts:516-568) has two independent branches that map cleanly onto existing mechanisms: the generator-active branch is already covered by `SessionQueueProcessor.ts:6` (per-iterator 3-min idle `setTimeout` that resets on every chunk and aborts the controller — then `child.on('exit')` unregisters); the no-generator branch is covered by the new per-session `abandonedTimer` `setTimeout` (Phase 3 Mechanism B). `detectStaleGenerator` (`:59-84`) is deleted along with `reapStaleSessions` — the per-iterator timer at the stream level is the single source of truth for "silent generator."
- Supervisor `reapSession` (`supervisor/process-registry.ts:292-385`) already implements SIGTERM → 5 s → SIGKILL; the worker-layer `ensureProcessExit` (`ProcessRegistry.ts:185-229`) duplicates this for the ChildProcess reference. Inlining the worker version keeps per-process escalation while supervisor-level reap handles the session-wide sweep on `deleteSession`.
- Cadence math: 30 s tick × 4 = 2 min matches the current `staleSessionReaperInterval` cadence at `worker-service.ts:589`. Zero timing regression.
### Gaps / open integration points
1. **`summary_stored` wiring (Phase 7)** — the cleanest implementation needs `ResponseProcessor` (03-response-parsing-storage) to emit a per-session event on successful summary write. Today `session.lastSummaryStored` is written (referenced at `SessionRoutes.ts:747`) but there is no event — only a polled read. **Blocking coordinate point: 09-lifecycle-hooks cannot simplify its hook until Phase 7 is wired, and Phase 7 cannot wire `awaitSummary` cleanly until 03 exposes an emitter.** Concrete ask from 03: add `session.summaryStoredEvent = new EventEmitter()` populated inside `ResponseProcessor` after the commit (approx. location: `src/services/worker/agents/ResponseProcessor.ts:228` region where `broadcastSummary` is already called). Fallback if 03 can't accommodate: Phase 7 polls `session.lastSummaryStored` at 200 ms with the 110 s timeout — still one HTTP call from the hook's perspective, still blocking server-side, just internally polled. Degrades cleanly.
2. **SQLite housekeeping in `worker-service.ts:547-589`** (resolved 2026-04-22) — the stale-reaper block today also runs `clearFailedOlderThan(1h)` and `PRAGMA wal_checkpoint`. Under the zero-timer model: `clearFailedOlderThan` moves to boot-once (co-owned with plan 02's boot-once SQLite housekeeping phase); `wal_checkpoint` explicit calls are deleted outright because `Database.ts:162-168` sets no `wal_autocheckpoint` override, so SQLite's default of 1000 pages is the active policy. This plan's Phase 4 deletes all three items together — no transient "two `setInterval` hits" in the diff.
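The gap-1 fallback (polling `session.lastSummaryStored` at 200 ms under the 110 s timeout) could look roughly like this. All names are hypothetical; `getStoredSummaryId` stands in for the real session read.

```ts
// Hypothetical sketch of the gap-1 fallback: server-side poll of
// session.lastSummaryStored every 200 ms, bounded by the 110 s timeout.
async function awaitSummaryByPolling(
  getStoredSummaryId: () => number | null, // stand-in for reading session.lastSummaryStored
  timeoutMs = 110_000,
  pollMs = 200,
): Promise<{ summaryId: number | null; timedOut: boolean }> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const summaryId = getStoredSummaryId();
    if (summaryId !== null) return { summaryId, timedOut: false };
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
  // Timeout path: the route still answers HTTP 200, never 504.
  return { summaryId: null, timedOut: true };
}
```

From the hook's perspective this is still one blocking HTTP call; the polling stays server-side and disappears once 03 exposes the emitter.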
# Plan 08 — transcript-watcher-integration (clean)
**Feature scope**: `src/services/transcripts/*` + `src/cli/handlers/observation.ts` HTTP loopback.
**Source of truth (design)**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` § 3.12; Part 1 items #17, #18, #19.
**Phase-7 counterpart in 06**: `PATHFINDER-2026-04-21/06-implementation-plan.md` Phase 7 (Transcript watcher cleanup).
**Before-state**: `PATHFINDER-2026-04-21/01-flowcharts/transcript-watcher-integration.md`.
## Dependencies (must land first)
| Plan | Dependency | What this plan consumes |
|---|---|---|
| `07-plans/01-privacy-tag-filtering.md` | `stripMemoryTags(text)` (06 Phase 1) | Single call used inside `ingestObservation`. We never strip in the watcher. |
| `07-plans/07-session-lifecycle-management.md` | `ingestObservation(payload)` helper (06 Phase 2) + `SessionManager.initializeSession` / `endSession` direct API (06 § 3.8) | Watcher calls the helper **directly** (no `workerHttpRequest`, no `observationHandler.execute`). Session lifecycle routes `session_init` / `session_end` to `SessionManager` without HTTP. |
Downstream dependents: **none**.
## Dependency-verified facts (live-code citations)
- **V18 confirmed** (`06-implementation-plan.md:45`). All three artifacts still present:
- 5-s rescan timer — `src/services/transcripts/watcher.ts:124` (`rescanIntervalMs ?? 5000`) + `setInterval(...)` at `:125`.
- `pendingTools` map — `src/services/transcripts/processor.ts:23` (in `SessionState` interface) + `.set` at `:202`, `.get/.delete` at `:232-236`, `.clear` at `:317`.
- HTTP loopback — `src/cli/handlers/observation.ts:17` loops through `workerHttpRequest('/api/sessions/observations', ...)`. Chain: watcher.ts:221 → processor.ts:252 `observationHandler.execute` → observation.ts:17 `workerHttpRequest` back to the same worker. This is the "call the CLI handler from inside the worker, which HTTP-loops back to the worker" anti-pattern.
- **Schema list (exhaustive)**: only **one** JSONL transcript schema ships today: **Codex**, defined in `src/services/transcripts/config.ts:9` as `CODEX_SAMPLE_SCHEMA` (confirming `63472 — CODEX_SAMPLE_SCHEMA in config.ts is the source of truth`). The live config file is `transcript-watch.example.json` (lines 1-95), which registers only `codex` under `schemas.codex`. The `CodexCliInstaller.ts` is the only installer that merges JSONL schemas into `~/.claude-mem/transcript-watch.json` (`src/services/integrations/CodexCliInstaller.ts:97-99`).
- `CursorHooksInstaller.ts`, `OpenCodeInstaller.ts`, `GeminiCliHooksInstaller.ts` do **not** register JSONL transcript schemas — they install **PostToolUse hooks** that feed the CLI observation handler directly (same path as Claude Code's own hooks). They do not touch the transcript watcher.
- **The audit's "Cursor, OpenCode, Gemini-CLI" for transcript ingestion is accurate only at the user-facing-feature level (these agents' activity is captured), but the capture path for those three is the hook handler chain, not the JSONL watcher.** The watcher's only current JSONL client is Codex.
- **tool_use_id availability in Codex schema** (`src/services/transcripts/config.ts:47-77`):
- `tool-use` event: `toolId: 'payload.call_id'` — present on `function_call`, `custom_tool_call`, `web_search_call`, `exec_command`.
- `tool-result` event: `toolId: 'payload.call_id'` — present on `function_call_output`, `custom_tool_call_output`, `exec_command_output`.
- **Both sides always carry `call_id`** in the Codex schema. No fallback needed for Codex.
- **Schema-driven, not hard-coded**: the `toolId` field is part of the `SchemaEvent.fields` contract (`src/services/transcripts/types.ts:34`). Any future schema that wants to use the transcript watcher must set `fields.toolId` on both its tool_use and tool_result events, or pair them some other way. Phase 2 below documents this contract explicitly.
- **Watched parent dir per schema**: `~/.codex/sessions/**/*.jsonl` (`config.ts:95`, `transcript-watch.example.json:83`). The glob matches files recursively under `~/.codex/sessions/`. The parent dir to pass to `fs.watch(..., { recursive: true })` is the **glob-root**: `expandHomePath('~/.codex/sessions')` (everything before the first glob metachar). `resolveWatchFiles()` at `watcher.ts:143-163` already understands glob vs plain-dir vs plain-file — the new watch code will derive the root the same way.
- **fs.watch recursive support**: the `{ recursive: true }` option has worked on macOS and Windows since early Node; on Linux (`inotify`, kernel >= 2.6.36) Node only gained it in Node 20 via libuv. CI target: `package.json:58` declares `"node": ">=18.0.0"`. **Recursive fs.watch on Linux requires Node 20+**; we must bump the engines floor (see Gaps). Bun supports recursive `fs.watch` on all three platforms.
- **FileTailer location**: `src/services/transcripts/watcher.ts:15-81` (unchanged by this plan — lines already do the byte-offset-tail correctly; only the file-discovery layer changes).
## Phase contract (applies to every phase below)
- **(a) Copy from** `05-clean-flowcharts.md` § 3.12 (canonical flowchart).
- **(b) Docs** at the top of each phase: 05 section ref + 06 verified finding (V-number) + live file:line.
- **(c) Verification** is mechanical: a `grep` count, a runtime test, or a file existence check.
- **(d) Anti-pattern guards** — every phase cites (from `06:59-66`):
- **A** — no invented APIs. Grep for the method before using it.
- **B** — no polling; `fs.watch` events only (no rescan `setInterval`).
- **E** — one code path for observation ingest; watcher + CLI hook both call `ingestObservation`, never a second path.
---
## Phase 1 — Parent-directory recursive watch replaces per-file `fs.watch` + 5 s rescan
**Goal**: `fs.watch(parentDir, { recursive: true }, onFileEvent)` supplants both the per-file `fsWatch(filePath, ...)` in `FileTailer` and the `setInterval(..., rescanIntervalMs)` rescan in `TranscriptWatcher`.
### (a) What to implement — Copy from § 3.12
From the clean flowchart (`05-clean-flowcharts.md:484-500`):
```
Boot["Worker startup"] --> LoadCfg["loadTranscriptWatchConfig"]
LoadCfg --> ParentWatch["fs.watch(parent_dir, {recursive})
watches existing files AND new files"]
ParentWatch --> OnChange([File event])
OnChange --> ReadDelta["FileTailer.readNewBytes"]
```
**Code change (watcher.ts)**:
1. Delete the per-file watcher inside `FileTailer` (`src/services/transcripts/watcher.ts:16`, `:28-33`, `:35-38`). `FileTailer` becomes a pure byte-offset reader — no internal `fs.watch` subscription. Rename its `start()` to `readAvailable()` (one-shot tail) and drop the `close()` method (nothing to close now).
2. In `TranscriptWatcher.setupWatch` (`:110`), derive `glob-root` from `watch.path`:
- If `watch.path` has no glob metachars and is a file: watch `dirname(resolved)` non-recursively.
- Otherwise: walk the path tokens, stop at the first token containing a glob metachar, join the prefix — that's the root dir (e.g. `~/.codex/sessions/**/*.jsonl` → `~/.codex/sessions`). Use the new helper `getGlobRoot(inputPath): string`.
3. Replace `setInterval(async () => { ... }, rescanIntervalMs)` (`:124-132`) with:
```ts
fs.watch(globRoot, { recursive: true, persistent: true }, (eventType, filename) => {
  if (!filename) return;
  const absPath = path.resolve(globRoot, filename);
  if (!globMatches(absPath, resolvedPath)) return;
  // rename events fire when a new file is created (or renamed/deleted)
  if (!this.tailers.has(absPath) && existsSync(absPath)) {
    this.addTailer(absPath, watch, schema, false).catch(err =>
      logger.warn('TRANSCRIPT', 'addTailer failed on fs.watch event',
        { file: absPath, error: err instanceof Error ? err.message : String(err) }));
  }
  const tailer = this.tailers.get(absPath);
  tailer?.readAvailable().catch(() => undefined);
});
```
4. Update `TranscriptWatcher.stop()` (`:99-108`) to close the single parent watcher per target instead of iterating per-tailer `.close()` + `clearInterval` on the timer array. Delete the `rescanTimers: NodeJS.Timeout[]` field (`:87`).
5. Delete the `rescanIntervalMs?: number` field from `WatchTarget` (`src/services/transcripts/types.ts:61`). Update `CodexCliInstaller.ts` and `transcript-watch.example.json` if either still sets it (grep).
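The glob-root derivation in step 2 reduces to a small helper. This is a sketch under the stated rule (everything before the first glob metachar); `expandHomePath` exists in the codebase but is stubbed here so the example is self-contained.

```ts
import * as path from "node:path";
import * as os from "node:os";

// Sketch of the getGlobRoot helper from step 2: walk the path tokens and
// stop at the first segment containing a glob metachar.
const GLOB_CHARS = /[*?[\]{}]/;

// Stub of the codebase's expandHomePath, included only for self-containment.
function expandHomePath(p: string): string {
  return p.startsWith("~") ? path.join(os.homedir(), p.slice(1)) : p;
}

function getGlobRoot(inputPath: string): string {
  const expanded = expandHomePath(inputPath);
  const prefix: string[] = [];
  for (const part of expanded.split(path.sep)) {
    if (GLOB_CHARS.test(part)) break; // first glob token ends the root
    prefix.push(part);
  }
  return prefix.join(path.sep) || path.sep;
}
```

A path with no glob metachars comes back unchanged; the caller's file-vs-dir branch (step 2, first bullet) handles that case.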
### (b) Docs cited
- 05 § 3.12 lines 482-500 (clean flowchart).
- Part 1 item #19 (`05-clean-flowcharts.md:37`) — "5-s rescan timer for new transcript files".
- V18 (`06-implementation-plan.md:45`) — `rescanIntervalMs ?? 5000` at `watcher.ts:124`.
- Live: `src/services/transcripts/watcher.ts:28` (per-file `fsWatch`), `:124-133` (rescan interval + `setInterval`).
### (c) Verification
- `grep -rn "setInterval" src/services/transcripts/` → **zero** matches.
- `grep -rn "rescanIntervalMs" src/ transcript-watch.example.json` → **zero** matches.
- Runtime test: start worker against an empty temp dir `T`; wait 1 s; `touch T/new-session.jsonl` then `echo '{"type":"session_meta","payload":{"id":"test","cwd":"/tmp"}}' >> T/new-session.jsonl`; assert a `TRANSCRIPT Watching transcript file` log line appears within **100 ms** of the write (not within the old 5 s window). Follow up with a tool_use line and assert `pending_messages` row appears within another 100 ms.
- `grep -rn "new FileTailer.*filePath.*offset.*onLine" src/services/transcripts/` → still exactly one call site in `addTailer` (signature preserved for byte-offset state).
### (d) Anti-pattern guards
- **A**: do not invent a "glob walker" class. A single `getGlobRoot(path: string): string` top-level function is enough.
- **B**: **no** fallback `setInterval` "in case fs.watch misses events". The parent-recursive watch is the contract; missed-event scenarios fall under the Gaps section (Node-version requirement).
### Blast radius
Single file rewrite: `src/services/transcripts/watcher.ts`. Small touch: `types.ts` (drop `rescanIntervalMs`). One touch to `CodexCliInstaller.ts` or `transcript-watch.example.json` only if they reference that deleted option.
---
## Phase 2 — Delete `pendingTools` map; match `tool_use` + `tool_result` by `tool_use_id` at parse time
**Goal**: `SessionState.pendingTools: Map<string, …>` is gone. Tool pairing happens locally inside each log file's tail buffer keyed by `tool_use_id`; the per-session map disappears.
### (a) What to implement — Copy from § 3.12
```
Route -->|tool_use + tool_result paired by tool_use_id| Ingest["ingestObservation({sessionDbId, tool_use_id, name, input, output})"]
```
**Code change (processor.ts)**:
1. Remove `pendingTools: Map<string, {name?, input?}>` from `SessionState` (`src/services/transcripts/processor.ts:23`).
2. Remove `pendingTools: new Map()` from `getOrCreateSession` (`:59`).
3. Rewrite `handleToolUse` (`:193-222`):
- Move the per-file pairing buffer **out of** the session and **into** `TranscriptWatcher` as a **per-file** map: `private pendingToolUses = new Map<string /* filePath */, Map<string /* tool_use_id */, { name: string; input: unknown; ts: number }>>()`. Inject it as a callback arg, or move the pairing into the processor keyed by file.
- Simpler option (preferred): keep the short-lived pairing **in the processor keyed by `${watch.name}:${sessionId}:${tool_use_id}`** — it still clears on `tool_result`, but it's keyed by ID, not by session-state entry. Bound its size with an LRU (`max=10_000`, drop-oldest) to avoid unbounded growth if a tool_use has no matching tool_result.
4. Rewrite `handleToolResult` (`:224-246`) to read from that keyed map; on hit, emit **one** `ingestObservation({sessionDbId, tool_use_id, name, input, output})` call (Phase 3 wires the helper). On miss, log debug + drop (don't synthesize).
5. Inspecting `handleToolUse` today, there is a legacy branch at `:215-221` that fires `sendObservation` from inside `handleToolUse` when `toolResponse !== undefined`. That branch is the **first half of the duplicated ingest** and must be deleted in Phase 3. Keep the `apply_patch` auto-file-edit branch (`:205-213`); file edits are a separate path not in scope here.
6. Session state retains `lastUserMessage`, `lastAssistantMessage`, `cwd`, `project` — untouched.
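The preferred keyed-map option in step 3 can be sketched with a plain insertion-ordered `Map` acting as a drop-oldest LRU. Names here are hypothetical, not live code.

```ts
// Sketch of the bounded tool-pairing buffer from step 3: a Map used as an
// LRU (JS Maps iterate in insertion order), dropping the oldest entry once
// unmatched tool_use events exceed MAX_PENDING.
interface PendingToolUse { name: string; input: unknown; ts: number }

const MAX_PENDING = 10_000;
const pendingToolUses = new Map<string, PendingToolUse>();

function recordToolUse(key: string, entry: PendingToolUse): void {
  pendingToolUses.set(key, entry);
  if (pendingToolUses.size > MAX_PENDING) {
    // First key in iteration order is the oldest insertion: drop it.
    const oldest = pendingToolUses.keys().next().value;
    if (oldest !== undefined) pendingToolUses.delete(oldest);
  }
}

function takeToolUse(key: string): PendingToolUse | undefined {
  const entry = pendingToolUses.get(key);
  if (entry) pendingToolUses.delete(key); // clears on tool_result, as today
  return entry;
}
```

`takeToolUse` returning `undefined` is the orphan-`tool_result` case from step 4: log debug and drop, never synthesize.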
### (b) Docs cited
- 05 § 3.12 line 494 ("paired by tool_use_id").
- Part 1 item #17 (`05-clean-flowcharts.md:35`) — "pendingTools map in TranscriptEventProcessor ... match by ID, no state map."
- V18 — pendingTools presence confirmed.
- Live: `src/services/transcripts/processor.ts:23` (interface field), `:59` (init), `:202` (`.set`), `:232-236` (lookup/delete), `:317` (clear on session_end).
- Contract source: Codex schema in `src/services/transcripts/config.ts:47-77` — `toolId: 'payload.call_id'` on both tool_use and tool_result.
### (c) Verification
- `grep -rn "pendingTools" src/` → **zero** matches (interface field, initializer, and three call sites all gone).
- `grep -n "SessionState" src/services/transcripts/processor.ts` — interface still exists, but with `pendingTools` field removed (assert via a small diff check in a test).
- Runtime: replay a recorded Codex JSONL (fixture). Assert the stream of `pending_messages` rows matches the pre-refactor run byte-for-byte for the same fixture (the pairing semantics are unchanged; we only moved where the map lives).
- Memory test: feed 50 sessions with 1000 tool_use each but **no** tool_result. The LRU bounds at 10k — not unbounded.
### (d) Anti-pattern guards
- **A**: the pairing map is a private field of `TranscriptEventProcessor`, not a new `ToolPairingService` class.
- **E**: only **one** observation ingest call per paired event — delete the `handleToolUse`-inline `sendObservation` branch at `:215-221` in Phase 3.
### Blast radius
`src/services/transcripts/processor.ts` only. No schema contract change (Codex already populates `call_id` on both sides).
---
## Phase 3 — Replace `observationHandler.execute()` HTTP loopback with direct `ingestObservation(payload)`
**Goal**: `sendObservation` no longer calls the CLI handler, which no longer does `workerHttpRequest`. The worker process calls its own helper in-memory.
### (a) What to implement — Copy from § 3.12 + D1
From 05 Part 2 Decision D1 (`:69-70`):
> **D1. One observation ingest path.** Hook, transcript-watcher, and manual-save all call `ingestObservation(payload)`. That function does: strip tags → validate privacy → INSERT `pending_messages`. **No HTTP loopback inside the worker process.**
From § 3.12 line 494 — `ingestObservation({sessionDbId, tool_use_id, name, input, output})`.
**Code change**:
1. In `src/services/transcripts/processor.ts`:
- Replace `sendObservation` body (`:248-260`) so it builds the `IngestObservationPayload` (matching the shape owned by `07-plans/07-session-lifecycle-management.md`) and calls `await ingestObservation(payload)` directly. No `observationHandler` import.
- Remove the import of `observationHandler` (`:3`).
- Remove the import of `workerHttpRequest` and `ensureWorkerRunning` from `../../shared/worker-utils.js` (`:6`) **from the observation path only** — `queueSummary` still hits `/api/sessions/summarize` today and `updateContext` still hits `/api/context/inject`; those two are untouched by Phase 3. Phase 4 deletes both.
2. In `src/services/transcripts/watcher.ts`: no change — the watcher already delegates to `processor.processEntry`; the processor is what imports the helper.
3. `IngestObservationPayload` shape reused from Plan 07 (definition lives in `src/services/worker/ingest/index.ts`):
```ts
{ contentSessionId, platformSource, cwd, tool_name, tool_use_id,
tool_input, tool_response, agentId?, agentType? }
```
Plan 07 additionally adds `tool_use_id` as a required field when the caller is the transcript watcher (already present in hook-path flows via the UNIQUE constraint added in Phase 9 of `06-implementation-plan.md`). Synthesize `tool_use_id = payload.call_id` from the schema's `toolId` field.
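A hedged sketch of the rewritten `sendObservation` from step 1. The real version imports `ingestObservation` from Plan 07's module; here the helper is injected so the shape of the direct in-memory call is visible without assuming that module's final export. Field names follow the payload excerpt above.

```ts
// Payload shape copied from the Plan 07 excerpt above (assumed, not final).
interface IngestObservationPayload {
  contentSessionId: string;
  platformSource: string;
  cwd: string;
  tool_name: string;
  tool_use_id: string;
  tool_input: unknown;
  tool_response: unknown;
  agentId?: string;
  agentType?: string;
}

type IngestFn = (p: IngestObservationPayload) => Promise<void>;

// Sketch of the rewritten sendObservation: build the payload and call the
// in-process helper directly. No workerHttpRequest, no observationHandler.
async function sendObservation(
  ingest: IngestFn, // injected here; the real code imports Plan 07's helper
  session: { contentSessionId: string; cwd: string },
  pair: { tool_use_id: string; name: string; input: unknown; output: unknown },
): Promise<void> {
  await ingest({
    contentSessionId: session.contentSessionId,
    platformSource: "codex",
    cwd: session.cwd,
    tool_name: pair.name,
    tool_use_id: pair.tool_use_id, // synthesized from the schema's toolId field
    tool_input: pair.input,
    tool_response: pair.output,
  });
}
```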
### (b) Docs cited
- 05 § 3.12 line 494, Part 2 D1 lines 69-70.
- Part 1 item #18 (`05-clean-flowcharts.md:36`) — "observationHandler.execute() HTTP loopback from transcript-watcher ... Extract ingestObservation helper; both call it directly."
- V18 — `observation.ts:17` HTTP loopback confirmed.
- Live: `src/cli/handlers/observation.ts:17` (`workerHttpRequest('/api/sessions/observations', …)`), `src/services/transcripts/processor.ts:252` (`observationHandler.execute` call site).
- Dependency contract: `07-plans/07-session-lifecycle-management.md` exports `ingestObservation` at `src/services/worker/ingest/index.ts` per `06-implementation-plan.md:126-132`.
### (c) Verification
- `grep -rn "observationHandler" src/services/transcripts/` → **zero** matches.
- `grep -rn "workerHttpRequest.*observations" src/services/transcripts/` → **zero** matches.
- `grep -rn "workerHttpRequest" src/services/transcripts/` → count ≤ 2 (temporarily: `queueSummary` + `updateContext`, deleted in Phase 4).
- `grep -n "workerHttpRequest" src/cli/handlers/observation.ts` → still exactly one (CLI hook path still uses HTTP when the CLI is a separate process from the worker; that's **not** a loopback, it's the hook-to-worker boundary).
- Unit test: seed a single Codex JSONL line with a tool_use + tool_result pair; assert (1) exactly one `pending_messages` INSERT, (2) zero outbound HTTP requests recorded against the worker's own `/api/sessions/observations` endpoint (use an HTTP spy).
### (d) Anti-pattern guards
- **B**: no polling — direct function call, not an event bus, not a retry loop.
- **E**: the hook path and the transcript path **both** call `ingestObservation(payload)`. Only ingress shape conversion differs; the helper is the single code path (matches `06-implementation-plan.md:146` — "One helper, both handlers call it.").
### Blast radius
`src/services/transcripts/processor.ts` only. The watcher chain inside the worker process no longer crosses the HTTP boundary. The CLI hook (`observation.ts`) remains unchanged for this phase — it runs in the hook subprocess and must HTTP the worker.
---
## Phase 4 — Route `session_init` / `session_end` directly to `SessionManager` (drop `/api/sessions/summarize` + `/api/context/inject` loopbacks)
**Goal**: `handleSessionInit` calls `SessionManager.initializeSession` directly. `handleSessionEnd` calls `SessionManager.endSession` (which internally queues the summary the same way the hook-side does). The last two in-process HTTP loopbacks disappear from the transcript path.
### (a) What to implement — Copy from § 3.12
```
Route -->|session_init| Init["sessionManager.initializeSession(sessionDbId)
(direct, no HTTP loopback)"]
Route -->|session_end| EndFlow["sessionManager.endSession(sessionDbId)
→ queueSummarize (same as hook path)"]
EndFlow --> WriteCtx["Optional: writeAgentsMd (Cursor flag)"]
```
**Code change (processor.ts)**:
1. Replace `handleSessionInit` (`:178-191`) with a direct call to `SessionManager.initializeSession(sessionDbId, userPrompt=fields.prompt, promptNumber)`. The worker-process `SessionManager` instance is injected via constructor (plan 07 already plumbs this; the watcher receives it in `TranscriptWatcher` constructor).
2. Replace `queueSummary` (`:322-344`): call the same helper that `07-plans/07-session-lifecycle-management.md` exposes as `endSession({contentSessionId, platformSource, last_assistant_message})` → internally it calls `ingestSummary(payload)` (from `06-implementation-plan.md:130`). No `workerHttpRequest('/api/sessions/summarize', …)`.
3. Replace `updateContext` (`:346-392`): keep the **path-traversal guard** (`:363-373` — real security check, not patch cruft), but replace the HTTP call at `:377` with a direct `generateContext(allProjects)` call from `ContextBuilder` (the same function `/api/context/inject` handler wraps). `writeAgentsMd` unchanged.
4. Remove import of `ensureWorkerRunning` and `workerHttpRequest` (both already freed by this point).
5. `sessionCompleteHandler.execute` at `processor.ts:311-315` — delete; `endSession` subsumes it.
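The direct routing in steps 1-2 reduces to a sketch like the following. The `SessionManagerLike` shape is an assumption here; the real `SessionManager` API is owned by Plan 07 and injected via the `TranscriptWatcher` constructor.

```ts
// Assumed slice of the Plan 07 SessionManager surface (not live code).
interface SessionManagerLike {
  initializeSession(id: string, userPrompt?: string): Promise<void>;
  endSession(id: string): Promise<void>; // internally queues the summary, same as hook path
}

// Sketch of the Phase 4 routing: lifecycle events go straight to the
// injected SessionManager instance, never through an HTTP loopback.
async function routeLifecycleEvent(
  manager: SessionManagerLike,
  event: { type: "session_init" | "session_end"; sessionDbId: string; prompt?: string },
): Promise<void> {
  if (event.type === "session_init") {
    await manager.initializeSession(event.sessionDbId, event.prompt);
  } else {
    await manager.endSession(event.sessionDbId);
  }
}
```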
### (b) Docs cited
- 05 § 3.12 lines 493, 495, 497 — direct `initializeSession` / `endSession`, `writeAgentsMd` kept.
- 05 Part 2 D1 line 70 — "no HTTP loopback inside the worker process."
- Dependency: plan 07 `06-implementation-plan.md:114-152` (Phase 2 helpers: `ingestObservation`, `ingestPrompt`, `ingestSummary`) and `:321-326` (§ 3.8 `endSession` blocks until summary).
- Live: `src/services/transcripts/processor.ts:185` (`sessionInitHandler.execute`), `:334` (`workerHttpRequest('/api/sessions/summarize', …)`), `:377` (`workerHttpRequest(contextUrl)`), `:363-373` (security guard — **preserve**).
### (c) Verification
- `grep -rn "workerHttpRequest\|ensureWorkerRunning" src/services/transcripts/` → **zero** matches.
- `grep -rn "sessionInitHandler\|sessionCompleteHandler\|observationHandler" src/services/transcripts/` → **zero** matches.
- `grep -n "writeAgentsMd\|isPathSafe" src/services/transcripts/processor.ts` → still present (security guard kept).
- Integration: drive a full Codex JSONL run through the watcher; assert the AGENTS.md file is written with the same content as the pre-refactor path.
### (d) Anti-pattern guards
- **D**: no facade — the processor talks to `SessionManager` **directly**, not via a `TranscriptSessionBridge`.
- **E**: `ingestSummary` is the one code path — transcript `session_end` and hook `Stop` both call it.
### Blast radius
`src/services/transcripts/processor.ts` — large internal rewrite. No external shape changes: the eventual `pending_messages` rows are byte-identical to today's hook-path output.
---
## Phase 5 — Remove `isProjectExcluded` re-check in the processor (moved into `ingestObservation`)
**Goal**: The transcript processor does not re-run project-exclusion. `ingestObservation` (and its siblings) run the check once, centrally (per Plan 07).
### (a) What to implement — Copy from § 3.12
From 05 § 3.12 Deleted list (`:502-506`):
> - `isProjectExcluded` re-check inside transcript processor (done once in `ingestObservation`)
**Code change**:
1. `grep -rn "isProjectExcluded" src/services/transcripts/` — if any call site exists (it is currently checked inside `observationHandler.execute`, `src/cli/handlers/observation.ts:59`, which the watcher path no longer uses after Phase 3), delete it.
2. Assert `ingestObservation` performs the exclusion check (Plan 07 requirement, per `06-implementation-plan.md:132` — "(b) runs privacy / project-exclusion validation").
### (b) Docs cited
- 05 § 3.12 deleted-list (`:506`).
- Dependency: `06-implementation-plan.md:132`.
- Live: `src/cli/handlers/observation.ts:57-62` — current exclusion check (removed from the transcript path by Phase 3's loopback kill; this phase confirms no second copy exists in the watcher).
### (c) Verification
- `grep -rn "isProjectExcluded" src/services/transcripts/` → **zero** matches.
- `grep -rn "isProjectExcluded" src/services/worker/ingest/` → **exactly one** call (inside `ingestObservation` / shared privacy-validate path).
### (d) Anti-pattern guards
- **E**: one exclusion check, one code path — `ingestObservation` is authoritative.
### Blast radius
Essentially a grep-and-delete pass; most likely zero lines to change (the check never lived in the processor, only in the CLI handler we've already unlinked).
---
## Phase 6 — Verification gate
**Goal**: Prove the four deletions and the single new mechanism by mechanical checks.
### Checks
1. **Parent-dir watch drop test** (from Phase 1's (c)): write a brand-new JSONL file into a mock watched dir; within **100 ms** observe a `Watching transcript file` log line AND a `pending_messages` INSERT after the first tool_use+tool_result pair. Without the 5-s rescan, this must succeed on a sub-second timeline.
2. **`pendingTools` gone**: `grep -rn "pendingTools" src/` → `0`.
3. **HTTP loopback gone**: `grep -rn "workerHttpRequest\|ensureWorkerRunning" src/services/transcripts/` → `0`. `grep -rn "observationHandler\|sessionInitHandler\|sessionCompleteHandler" src/services/transcripts/` → `0`.
4. **Timer gone**: `grep -rn "setInterval" src/services/transcripts/` → `0`.
5. **Single-path ingest**: `grep -rn "ingestObservation(" src/` — ≥ 2 call sites (transcript processor + hook-path route handler from Plan 07); zero in CLI handler (still uses HTTP to reach the worker).
6. **Schema-contract fuzz**: drop a crafted JSONL where `tool_use` omits `call_id`. Assert: debug log "tool_use without toolId", no crash, no paired observation emitted. Drop a `tool_result` with a `call_id` we never saw. Assert: debug log "orphan tool_result", no crash.
7. **Cursor / OpenCode / Gemini-CLI unaffected**: those paths go through `src/cli/handlers/observation.ts` (hook PostToolUse). Run the standard hook-round-trip smoke test (`npm run build-and-sync` + trigger a PostToolUse from each); assert `pending_messages` rows still appear. **This is the non-regression guard for the prompt's "preserve Cursor/OpenCode/Gemini-CLI" constraint** — they never depended on the transcript JSONL watcher, so Phases 1-5 cannot break them; this check exists to *prove* it.
8. **End-to-end**: full Codex JSONL fixture → expected SQLite state identical to pre-refactor.
### Anti-pattern guards (final sweep)
- **A**: every new identifier (`getGlobRoot`, `pendingToolUses` map, `readAvailable`) traces to a concrete live function or the plan's invented, single-use helper. No new classes.
- **B**: one `fs.watch` subscription per target, no timers, no polling, no "retry-rescan on SIGCHLD".
- **E**: transcript processor and hook route both import `ingestObservation` from the same module (`src/services/worker/ingest/index.ts`), with no privately duplicated strip / privacy / exclusion logic.
---
## Summary of line deletions
Against current live code:
| File | Lines removed | Lines added | Net |
|---|---|---|---|
| `src/services/transcripts/watcher.ts` | ~40 (per-file fsWatch + rescan interval + timer-cleanup scaffolding) | ~25 (parent-dir recursive watch + `getGlobRoot`) | -15 |
| `src/services/transcripts/processor.ts` | ~120 (`pendingTools` state, `handleToolUse` inline ingest, HTTP queueSummary, HTTP updateContext, handler imports) | ~50 (LRU tool-pairing map, direct `ingestObservation`/`endSession` calls, direct `generateContext` import) | -70 |
| `src/services/transcripts/types.ts` | 1 (`rescanIntervalMs` field) | 0 | -1 |
| `src/cli/handlers/observation.ts` | 0 (preserved; hook path still HTTPs the worker) | 0 | 0 |
| **Total** | **~161** | **~75** | **~-86** |
Plan-level estimate aligns with `05-clean-flowcharts.md:554` row "Transcript 5-s rescan + pendingTools map + HTTP loopback: -150 / +40 / -110" — consistent with our per-file count.
---
## Phase count
**6 phases** (5 implementation + 1 verification gate), matching the minimum set specified in the prompt.
---
## Gaps and open questions
1. **Node-version floor must bump.** `package.json:58` currently pins `"node": ">=18.0.0"`. `fs.watch(dir, { recursive: true })` on **Linux** became stable in **Node 20** (earlier versions throw `ERR_FEATURE_UNAVAILABLE_ON_PLATFORM`). macOS + Windows + Bun have supported it all along. **Action before merging Phase 1**: bump `engines.node` to `>=20.0.0` (coordinate with infra/CI matrix) and verify the plugin's install path (Bun-managed) satisfies it. If bumping is blocked, a Linux-only fallback (chokidar or a polling Map of child dirs) is needed — but that re-introduces anti-pattern B, so the Node-20 bump is the right move.
2. **Single schema in the live codebase; audit phrasing diverges from implementation.** The audit text (and this prompt) references "Cursor, OpenCode, Gemini-CLI transcript ingestion" as preserved. In this codebase **those three agents ingest through the PostToolUse hook chain** (`CursorHooksInstaller.ts`, `OpenCodeInstaller.ts`, `GeminiCliHooksInstaller.ts` — none of which register a JSONL schema). The only JSONL schema is **Codex** (`src/services/transcripts/config.ts:9` + `transcript-watch.example.json`). Phases 1-5 therefore only affect the Codex capture path. The preservation claim for Cursor/OpenCode/Gemini-CLI is satisfied trivially — their path doesn't touch this feature. This is worth calling out in the PR description to avoid reviewer confusion.
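The Node-floor check from gap #1 can be expressed as a startup preflight. This is a sketch; `supportsRecursiveWatch` is an illustrative helper name, not in the codebase:

```typescript
// Preflight sketch for gap #1: recursive fs.watch is only safe on Linux from Node 20.
// supportsRecursiveWatch is an illustrative name, not part of the codebase.
function supportsRecursiveWatch(platform: string, nodeMajor: number): boolean {
  // macOS, Windows, and Bun have supported { recursive: true } all along;
  // Linux throws ERR_FEATURE_UNAVAILABLE_ON_PLATFORM before Node 20.
  if (platform !== 'linux') return true;
  return nodeMajor >= 20;
}

const nodeMajor = Number(process.versions.node.split('.')[0]);
if (!supportsRecursiveWatch(process.platform, nodeMajor)) {
  // Fail fast at worker startup instead of crashing inside the watcher later.
  console.error(
    'fs.watch({ recursive: true }) needs Node >= 20 on Linux; found',
    process.versions.node,
  );
}
```

Running this once at worker boot surfaces the engine mismatch before the watcher subscribes, which is cheaper than catching `ERR_FEATURE_UNAVAILABLE_ON_PLATFORM` mid-session.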
## Sources consulted
- `PATHFINDER-2026-04-21/05-clean-flowcharts.md` — full file, § 3.12 canonical, Part 1 #17/18/19, Part 2 D1, Part 4 timer census, Part 5 deletion row.
- `PATHFINDER-2026-04-21/06-implementation-plan.md` — full file, Phase 0 V18, Phase 7 scope, Phase 2 ingest-helper contract.
- `PATHFINDER-2026-04-21/01-flowcharts/transcript-watcher-integration.md` — full before-state.
- `src/services/transcripts/watcher.ts` (lines 1-242).
- `src/services/transcripts/processor.ts` (lines 1-393).
- `src/services/transcripts/config.ts` (lines 1-138).
- `src/services/transcripts/types.ts` (lines 1-70).
- `src/services/transcripts/field-utils.ts` (lines 1-153).
- `src/cli/handlers/observation.ts` (lines 1-86).
- `src/services/worker/http/routes/SessionRoutes.ts` (lines 560-659 for `handleObservationsByClaudeId` shape).
- `src/services/worker-service.ts` (watcher lifecycle at :90, :164, :466, :614-640, :1095-1097).
- `src/services/integrations/{CursorHooksInstaller,OpenCodeInstaller,GeminiCliHooksInstaller,CodexCliInstaller}.ts` — confirming only Codex registers a JSONL schema.
- `transcript-watch.example.json` — confirming only `codex` schema in the live config template.
- `package.json:57-60` — Node engine floor.
# Phase Plan 09 — lifecycle-hooks (clean)
**Date**: 2026-04-22
**Target flowchart**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` §3.1 ("lifecycle-hooks (clean)")
**Before-state**: `PATHFINDER-2026-04-21/01-flowcharts/lifecycle-hooks.md`
**Scope**: Collapse the 10 current `SessionRoutes` endpoints + the 500-ms polling Stop hook + the 8 per-handler `ensureWorkerRunning` calls + the duplicate `/api/context/*` fetches into the clean 4-endpoint, no-polling, hook-cached design from §3.1. **Zero user-facing change. Exit codes preserved.**
---
## Header: Dependencies
**Upstream (must land first):**
- **Plan 01 — privacy-tag-filtering** (Phases 1-2 of the implementation plan — `stripMemoryTags` + `ingestObservation/ingestPrompt/ingestSummary` helpers). Required because the new `POST /api/session/observation`, `POST /api/session/prompt`, and `POST /api/session/end` endpoints call those ingest helpers rather than re-implementing tag stripping. Cite: `06-implementation-plan.md` Phase 1 + Phase 2 (plan authoring pipeline; `01-privacy-tag-filtering.md` when landed).
- **Plan 05 — context-injection-engine** — introduces `GET /api/session/start` returning `{sessionDbId, contextMarkdown, semanticMarkdown}`. Phase 1 of this plan depends on that endpoint existing on the worker side. Cite: `05-clean-flowcharts.md` §3.5 + §3.1 arrow `SS → SSR`.
- **Plan 07 — session-lifecycle-management** — introduces blocking `POST /api/session/end` (per-session `Deferred<SummaryResult>` resolved by `ResponseProcessor` when the summary row is written; 110 s hard timeout). Phase 3 of this plan switches the Stop hook to call that endpoint. Cite: `05-clean-flowcharts.md` §3.8 (`POST /api/session/end → queueSummarize → await summary_stored flag OR 110s timeout`), Part 2 decision **D6** (blocking endpoints over polling), `06-implementation-plan.md` Phase 11 step 2.
**Downstream:** none. This is a leaf cleanup in the dependency DAG — no other feature plan reads from the hook layer.
---
## Sources Consulted (what this plan is built from)
1. `PATHFINDER-2026-04-21/05-clean-flowcharts.md` — full read. Authoritative §3.1 diagram (lines 89-123); §3.9 route inventory (lines 382-418); Part 1 bullshit-inventory items **#11** (500 ms poll), **#12** (double `/api/context/inject`), **#13** (`ensureWorkerRunning` every entry), **#14** (`/api/context/inject` + `/api/context/semantic` both at UserPromptSubmit); Part 2 decision **D6** (blocking endpoints over polling, line 79); Part 4 timer census (Summary poll 500 ms × 220 iter → endpoint blocks, line 520); Part 5 deletion ledger rows `Summarize 500-ms polling hook -60/+20` and `Double /api/context/* fetches → /api/session/start -120/+60` (lines 552-553).
2. `PATHFINDER-2026-04-21/06-implementation-plan.md` — Phase 0 verified-findings **V8** (500 ms poll @ `summarize.ts:117-150`, `POLL_INTERVAL_MS=500` @ `:24`, `MAX_WAIT_FOR_SUMMARY_MS=110_000` @ `:25`), **V9** (SessionRoutes is **actually 10 endpoints, not 8**: six `/sessions/:sessionDbId/*` at `:377-:382` + five `/api/sessions/*` at `:385-:389`; `/api/sessions/status` is the polled one), **V10** (`ensureWorkerRunning` in all 8 CLI handlers: `context.ts:19`, `user-message.ts:35`, `summarize.ts:44`, `observation.ts:34`, `file-context.ts:218`, `file-edit.ts:32`, `session-init.ts:41`, `session-complete.ts:35`). Phase 2 (unified ingest helpers) and Phase 11 (endpoint consolidation) define the shared contract.
3. `PATHFINDER-2026-04-21/01-flowcharts/lifecycle-hooks.md` — "before" diagram. 10 hook→worker HTTP edges enumerated (lines 84-92 — side effects). Two-phase Stop handling (`summarize` → poll → `session-complete`) at lines 68-73.
4. Live codebase (verified `Read`/`Grep` during authoring, 2026-04-22):
- `src/cli/handlers/context.ts:19` — `await ensureWorkerRunning()` at SessionStart.
- `src/cli/handlers/user-message.ts:35` — `await ensureWorkerRunning()` at SessionStart (parallel).
- `src/cli/handlers/session-init.ts:41` — UserPromptSubmit.
- `src/cli/handlers/observation.ts:34` — PostToolUse.
- `src/cli/handlers/summarize.ts:17` (import), `:24` (`POLL_INTERVAL_MS = 500`), `:25` (`MAX_WAIT_FOR_SUMMARY_MS = 110_000`), `:44` (`ensureWorkerRunning`), `:89` (`POST /api/sessions/summarize`), `:117-150` (poll loop against `/api/sessions/status?contentSessionId=…`), `:156` (`POST /api/sessions/complete`).
- `src/cli/handlers/session-complete.ts:18` (`POST /api/sessions/complete`), `:35` (`ensureWorkerRunning`).
- `src/cli/handlers/file-context.ts:218` (`ensureWorkerRunning`), `:237` (`GET /api/observations/by-file`).
- `src/cli/handlers/file-edit.ts:15` (`POST /api/sessions/observations`), `:32` (`ensureWorkerRunning`).
- `src/services/worker/http/routes/SessionRoutes.ts:375-389` — `setupRoutes` registers **10** routes:
- Legacy `/sessions/:sessionDbId/*` × **6** (`:377` init, `:378` observations, `:379` summarize, `:380` status, `:381` delete, `:382` complete).
- `/api/sessions/*` × **5** (`:385` init, `:386` observations, `:387` summarize, `:388` complete, `:389` status).
- (Earlier sections above register `setupRoutes` itself on the Express app; the 11 `.get/.post/.delete(` tokens outside `setupRoutes` are internal maps, not routes — verified.)
- `src/shared/hook-constants.ts:21-22` — `HOOK_EXIT_CODES.SUCCESS = 0`. Every handler returns it on the graceful-degradation path (required by CLAUDE.md exit-code strategy — Windows Terminal tab preservation depends on exit 0).
5. Dependency plans: **not yet written on disk**. Plans 01, 05, 07 will be authored in parallel to this one; citations above reference their planned phase numbers per `06-implementation-plan.md` (authoritative sequencing doc).
---
## Endpoint Reality Check (numbers — V9 vs §3.9 claim)
| Source | Claimed current count | Verified current count |
|---|---|---|
| `05-clean-flowcharts.md` §3.1 "Endpoint count: 8 → 4" (line 123) | 8 | — |
| `06-implementation-plan.md` Phase 0 **V9** | — | **10** (six `:377-:382` + five `:385-:389`) |
| Live `Grep router\.` / `.post/.get/.delete` on `SessionRoutes.ts` (2026-04-22) | — | **10** (confirms V9; §3.9 "8" is an undercount) |
**This plan uses 10 → 4** as the verified target. The §3.1 "8 → 4" claim is footnoted as an undercount of the legacy `/sessions/:sessionDbId/*` subtree.
---
## Hook → Endpoint Mapping (current vs clean)
| Claude Code event | Current hook handler | Current endpoints called | Clean endpoint (§3.1) |
|---|---|---|---|
| SessionStart | `context.ts` | `GET /api/context/inject?projects=…` (`:41`) + (conditionally) `GET /api/context/inject?colors=true` (`:42`) | **`GET /api/session/start?project=…`** — returns `{sessionDbId, contextMarkdown, semanticMarkdown}` |
| SessionStart (parallel) | `user-message.ts` | `GET /api/context/inject?project=…&colors=true` (`:14`) | (same) — reads from the cached `/api/session/start` response in `context.ts`; no second HTTP call |
| UserPromptSubmit | `session-init.ts` | `POST /api/sessions/init` (`:75`), `POST /sessions/{id}/init` (`:141`), `POST /api/context/semantic` (`:23`) | **`POST /api/session/prompt`** `{sessionDbId, prompt}` → returns `{promptId}` (SDK-start implicit inside prompt handler) |
| PostToolUse | `observation.ts` | `POST /api/sessions/observations` (`:17`) | **`POST /api/session/observation`** `{sessionDbId, tool_use_id, name, input, output}` → `{observationId}` |
| PostToolUse (Cursor file-edit) | `file-edit.ts` | `POST /api/sessions/observations` (`:15`) | **`POST /api/session/observation`** (same endpoint, same payload shape) |
| PreToolUse (file-context gate) | `file-context.ts` | `GET /api/observations/by-file` (`:237`) | Unchanged — this is a read endpoint outside the Session lifecycle; belongs to Plan 08 (DataRoutes), not this one |
| Stop | `summarize.ts` | `POST /api/sessions/summarize` (`:89`) + poll `GET /api/sessions/status` 500 ms × up to 220 iter (`:117-150`) + `POST /api/sessions/complete` (`:156`) | **`POST /api/session/end`** `{sessionDbId, last_assistant_message}` — blocks until summary written or 110 s timeout; returns `{summaryId|null}` |
| Stop (phase 2) | `session-complete.ts` | `POST /api/sessions/complete` (`:18`) | **Deleted.** Folded into `POST /api/session/end` (§3.1: "Two-phase Stop handling (summarize then session-complete) — one endpoint, one response"). |
**Endpoints before**: 10 on `SessionRoutes` + 2 on `SearchRoutes` (`/api/context/inject`, `/api/context/semantic`) = 12 lifecycle-touching endpoints.
**Endpoints after**: 4 on `SessionRoutes` (`start`, `prompt`, `observation`, `end`). `/api/context/*` removed (folded into `/api/session/start`).
**Net delete**: 10 − 4 = **6 from SessionRoutes**; **2 from SearchRoutes**; **8 total**.
---
## Phase Contract (applied to every phase below)
Each phase specifies:
- **(a) What to implement** — "Copy from §X.Y / V-finding / file:line" — no invention.
- **(b) Docs** — `05-clean-flowcharts.md` section + `V8/V9/V10` + live file:line.
- **(c) Verification** — grep counts, before/after.
- **(d) Anti-pattern guards** — **A** (invent hook event types), **B** (polling — replace 500 ms loop with blocking endpoint + SSE), **D** (two context fetches collapse to one `GET /api/session/start`), **E** (duplicate `/api/context/inject` at SessionStart + user-message — single cache).
---
## Phase 1 — Collapse double `/api/context/*` fetches into single `GET /api/session/start`
### (a) What to implement
Copy from `05-clean-flowcharts.md` §3.1 lines 95, 100 (`SS --> SSR["Returns {sessionDbId, contextMarkdown, semanticMarkdown}"]`) and §3.5 line 236 (`generateContext(projects, forHuman=false)` + `generateContext(projects, forHuman=true)` on one route handler).
Switch `context.ts` + `user-message.ts` to a **single** `GET /api/session/start` call. The worker route is produced by Plan 05 Phase 1; this phase only rewires the two hook handlers.
1. **Rewrite `src/cli/handlers/context.ts:41-74`**: replace the two-URL `Promise.all([workerHttpRequest(apiPath), showTerminalOutput ? workerHttpRequest(colorApiPath).catch(()=>null) : …])` with one `workerHttpRequest('/api/session/start?project=…&colors=…&semantic=…')`. Parse response as `{sessionDbId, contextMarkdown, humanMarkdown?, semanticMarkdown}`. `contextMarkdown` → `additionalContext`; `humanMarkdown` (present when `colors=true`) → `systemMessage` block.
2. **Delete `user-message.ts:fetchAndDisplayContext` (lines 13-30) entirely.** The parallel SessionStart display becomes a second consumer of `context.ts`'s cached `/api/session/start` result — see Phase 2 for the shared cache. In the interim (before Phase 2 lands), `user-message.ts` calls `/api/session/start?colors=true&display=true` with its own request — one HTTP call, still replaces the old `/api/context/inject` double-call. Remove the `fetchAndDisplayContext` helper + its usage at `:46`.
3. **Delete hook-side calls to `/api/context/inject`** anywhere they appear. Grep: only `context.ts:41,42` + `user-message.ts:14-16` touch it. After this phase: zero hook-side references to `/api/context/inject`.
4. `session-init.ts:23` (`POST /api/context/semantic`) moves to Phase 6 (consolidated with session-prompt); leave untouched here.
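A minimal sketch of the rewired response handling, assuming the §3.1 field names (`humanMarkdown` as the `colors=true` extra is this plan's reading; the endpoint and interface are not yet landed code):

```typescript
// Planned GET /api/session/start response shape (per §3.1; not yet implemented).
interface SessionStartResponse {
  sessionDbId: number;
  contextMarkdown: string;   // forHuman=false render
  humanMarkdown?: string;    // present only when colors=true
  semanticMarkdown?: string;
}

// Pure mapping from the single response to the hook output fields the old
// two-URL Promise.all in context.ts:41-74 used to assemble from two requests.
function toHookOutput(
  res: SessionStartResponse,
): { additionalContext: string; systemMessage?: string } {
  return {
    additionalContext: res.contextMarkdown,
    ...(res.humanMarkdown !== undefined ? { systemMessage: res.humanMarkdown } : {}),
  };
}
```

Keeping the mapping pure makes the Phase 1 snapshot test trivial: feed a recorded fixture through `toHookOutput` and byte-compare `additionalContext`.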
### (b) Docs
- §3.1 lines 95, 100 — `SS → SSR` edge.
- §3.5 line 236 — `generateContext(projects, forHuman=false)` + `generateContext(projects, forHuman=true)` (dual-strategy render).
- Part 1 items **#12** ("double `/api/context/inject` at SessionStart") and **#14** ("`/api/context/inject` + `/api/context/semantic` both at UserPromptSubmit — fold into `/api/session/start`").
- **V10** — both `context.ts:19` and `user-message.ts:35` currently bootstrap the worker then each fire a GET.
- Live: `src/cli/handlers/context.ts:41-74`, `src/cli/handlers/user-message.ts:13-30,46`.
### (c) Verification
```
grep -rn "/api/context/inject" src/cli/handlers/ → 0 matches
grep -rn "/api/session/start" src/cli/handlers/ → 2 matches (context.ts + user-message.ts)
grep -c "workerHttpRequest" src/cli/handlers/context.ts → 1 (was 2 — the `apiPath` + `colorApiPath` pair collapses)
```
Snapshot test: capture `additionalContext` bytes from an existing SessionStart fixture and assert byte-equal after the rewire (strategy-driven rendering must be indistinguishable in `forHuman=false` mode).
### (d) Anti-pattern guards
- **D** — no two fetches for the same data. `/api/session/start` is one request returning both markdowns.
- **E** — the parallel SessionStart display in `user-message.ts` shares the response shape; Phase 2 collapses to one cache entry.
- **A** — no new `hookEventName` values. Still `'SessionStart'` at `context.ts:88`.
---
## Phase 2 — Cache `alive=true` in the hook process for the session lifetime
### (a) What to implement
Copy from `05-clean-flowcharts.md` §3.1 "Deleted from old flowchart" bullet 1 ("`ensureWorkerRunning` at every entry point (cache `alive` for the hook lifetime)") + Part 1 item **#13** ("Hook has no shared state. — Cache `alive=true` in the hook process for the session.").
1. **Create `src/hooks/worker-cache.ts`** (new file, ~25 lines):
```ts
// Assumption: ensureWorkerRunning lives in src/shared/worker-utils (per V10);
// SessionStartResponse ships with Plan 05 — adjust both import paths when it lands.
import { ensureWorkerRunning } from '../shared/worker-utils';
import type { SessionStartResponse } from '../shared/types';

// One variable in the hook's process; lives as long as the hook process does.
let alive: boolean | null = null;
// Cached /api/session/start response, shared between context + user-message handlers
// within the same hook process (invoked once per SessionStart fan-out).
let sessionStartResponse: SessionStartResponse | null = null;

export async function ensureWorkerAliveOnce(): Promise<boolean> {
  if (alive !== null) return alive;
  alive = await ensureWorkerRunning();
  return alive;
}

export function cacheSessionStart(response: SessionStartResponse): void { sessionStartResponse = response; }
export function getCachedSessionStart(): SessionStartResponse | null { return sessionStartResponse; }
```
"Hook process" = one Node/Bun invocation per Claude Code hook event. Lifetime ~50 ms to ~120 s. Module-scope `let` is sufficient; no cross-process state needed.
2. **Switch all 8 CLI handlers** to import `ensureWorkerAliveOnce` instead of `ensureWorkerRunning`:
- `context.ts:19`, `user-message.ts:35`, `summarize.ts:44`, `observation.ts:34`, `file-context.ts:218`, `file-edit.ts:32`, `session-init.ts:41`, `session-complete.ts:35`.
3. **First-call behaviour**: the first handler in a given hook process spawns/pings the worker (same code path as today's `ensureWorkerRunning` in `src/shared/worker-utils.ts`). Subsequent calls in the **same process** skip.
4. **Cross-handler coordination for SessionStart**: when `context.ts` receives the `/api/session/start` response it calls `cacheSessionStart(response)`. `user-message.ts` (running as a parallel handler in the same hook process when both are wired to SessionStart) calls `getCachedSessionStart()` first; falls back to its own fetch if null (separate hook-process invocations).
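The once-per-process semantics of step 3 can be exercised in isolation with an injected probe. The factory form below is illustrative (the plan's module-scope `let` is the real shape); it exists only to make the "ping fires once" invariant testable:

```typescript
// Illustrative memoizer: the first caller runs the probe, later callers reuse
// the result. Mirrors the module-scope `let alive` in worker-cache.ts.
function makeAliveOnce(probe: () => Promise<boolean>): () => Promise<boolean> {
  let alive: boolean | null = null;
  return async () => {
    if (alive !== null) return alive;
    alive = await probe();
    return alive;
  };
}

// Usage: count how often the underlying spawn/ping actually fires.
let pings = 0;
const ensureAliveOnce = makeAliveOnce(async () => { pings += 1; return true; });
```

One caveat worth checking during review: two truly concurrent first calls could both observe `null` and double-probe; that is harmless only if the underlying `ensureWorkerRunning` is idempotent, as today's implementation appears to be.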
### (b) Docs
- §3.1 "Deleted from old flowchart" bullet 1.
- Part 1 item **#13**.
- **V10** — 8 live callsites today.
- Live: `src/shared/worker-utils.ts` (current `ensureWorkerRunning` implementation is the one `ensureWorkerAliveOnce` delegates to internally).
### (c) Verification
```
grep -rn "ensureWorkerRunning" src/cli/handlers/ → 0 matches (was 8 import lines + 8 callsites)
grep -rn "ensureWorkerAliveOnce" src/cli/handlers/ → 8 import + 8 callsite matches
grep -c "ensureWorkerRunning" src/cli/handlers/*.ts → 0 in every file (was 8 total callsites)
```
Instrumentation test: start a Claude Code session, trigger SessionStart → UserPromptSubmit → 2× PostToolUse → Stop. Assert the worker's `GET /health` (or equivalent startup ping) is called **once** per hook process, not once per handler. (Today it's 5 calls in the SessionStart fan-out alone.)
### (d) Anti-pattern guards
- **E** — one cache, two readers (`context.ts` + `user-message.ts`). No duplicate cache keys.
- **A** — no `WorkerCacheService` class. Module-scope `let` is sufficient; adding a class would be invention (CLAUDE.md: YAGNI, simple-first).
### Exit-code invariant
The caller still returns `HOOK_EXIT_CODES.SUCCESS` when `ensureWorkerAliveOnce()` returns `false` (worker unavailable → empty context → exit 0). CLAUDE.md exit-code strategy preserved: Windows Terminal tabs continue to close on exit 0 even when the worker is down.
---
## Phase 3 — Replace `summarize.ts` 500 ms poll loop with single blocking `POST /api/session/end`
### (a) What to implement
Copy from `05-clean-flowcharts.md` §3.1 lines 98, 107 (`STOP --> STOPR["Returns {summaryId or null}"]`) + §3.8 lines 346-349 (`POST /api/session/end → queueSummarize → await summary_stored flag OR 110s timeout → abortController.abort → Delete`) + Part 2 decision **D6**. The worker-side blocking endpoint is implemented by Plan 07 Phase 2 (per-session `Deferred<SummaryResult>` resolved by `ResponseProcessor` when the summary row is written).
1. **Rewrite `src/cli/handlers/summarize.ts:86-167`** (the queue + poll + complete block) into:
```ts
const response = await workerHttpRequest('/api/session/end', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ contentSessionId: sessionId, last_assistant_message: lastAssistantMessage, platformSource }),
  timeoutMs: MAX_WAIT_FOR_SUMMARY_MS + 5_000, // 115 s — hook times out slightly after server
});
// Response: { summaryId: number | null, timedOut?: boolean }
```
2. **Delete constants** `POLL_INTERVAL_MS = 500` (`:24`) and `POLL_INTERVAL_MS` references. `MAX_WAIT_FOR_SUMMARY_MS` stays — migrates from poll-duration cap to HTTP-client timeout (preserves the 110 s semantic).
3. **Delete the poll loop** (`summarize.ts:117-150`).
4. **Delete the explicit session-complete call** (`summarize.ts:155-161`) — folded into the worker's `/api/session/end` handler on the other side of the wire.
5. **Preserve the subagent guard** at `:34-41` (exits early before any HTTP).
6. **Preserve the transcript-extract guard** at `:60-78` (exits 0 when no assistant message).
7. **Preserve the exit-code contract**: successful completion, timeout, and worker-unreachable all return `HOOK_EXIT_CODES.SUCCESS` (exit 0). This matches today's `summarize.ts:47,56,67,77,103,107,167` — every return path exits 0. CLAUDE.md exit-code strategy: Windows Terminal closes tabs on exit 0, so the 110 s timeout path must also exit 0, not 2.
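The hook-side timeout semantics (every path resolves, never rejects, so exit 0 is preserved on success, timeout, and worker-unreachable alike) can be sketched as a small wrapper. `withClientTimeout` is an illustrative name, not the plan's API:

```typescript
interface EndResult { summaryId: number | null; timedOut?: boolean; }

// Resolve with a timed-out result instead of rejecting, so every Stop-hook
// path (success, timeout, worker unreachable) can fall through to exit 0.
function withClientTimeout(request: Promise<EndResult>, ms: number): Promise<EndResult> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve({ summaryId: null, timedOut: true }), ms);
    request
      .then((r) => { clearTimeout(timer); resolve(r); })
      .catch(() => { clearTimeout(timer); resolve({ summaryId: null, timedOut: true }); });
  });
}
```

In the real handler the `ms` budget would be `MAX_WAIT_FOR_SUMMARY_MS + 5_000`, giving the server first claim on the 110 s window as step 1's snippet specifies.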
### (b) Docs
- §3.1 lines 98, 107 — STOP edge.
- §3.8 lines 346-349 — `End → Queue_Sum → WaitSum → Abort → Delete`.
- Part 2 **D6** (blocking endpoints over polling, line 79).
- Part 4 timer census line 520 (`Summary poll (500 ms × 220 iter)` ✓ before / ✗ after).
- **V8** — `summarize.ts:117-150` + `:24` + `:25`.
- **V9** — `/api/sessions/status` is deleted in Phase 5.
- Live: `src/cli/handlers/summarize.ts:24-25,86-167`.
### (c) Verification
```
grep -rn "POLL_INTERVAL_MS" src/ → 0 matches
grep -n "MAX_WAIT_FOR_SUMMARY_MS" src/cli/handlers/summarize.ts → 1 match (used as HTTP timeout)
grep -rn "/api/sessions/status" src/cli/ → 0 matches (worker-side route deleted in Phase 5)
grep -n "/api/session/end" src/cli/handlers/summarize.ts → 1 match
wc -l src/cli/handlers/summarize.ts → < 90 (was 169)
```
End-to-end: run a Claude Code session that produces a summary. Assert the Stop hook returns within ~(summary-processing time + 1 s), not ≥500 ms (the old minimum due to the first poll interval). Assert no `GET /api/sessions/status` requests hit the worker log.
Timeout path test: configure the SDK agent to hang past 110 s. Assert Stop hook returns exit 0 with `summaryId: null, timedOut: true`. **This is the exit-code invariant that CLAUDE.md's Windows Terminal note demands — confirm explicitly** (see "Confidence + Gaps" below).
### (d) Anti-pattern guards
- **B** — polling replaced by blocking endpoint + HTTP-client timeout. The hook-side client timeout is `MAX_WAIT_FOR_SUMMARY_MS + 5_000` to give the server side first claim on the 110 s budget.
- **A** — no new `SessionStopResult` type; reuse the existing `{summaryId, timedOut?}` shape Plan 07 Phase 2 defines.
---
## Phase 4 — Delete `/sessions/:sessionDbId/*` legacy endpoints (6)
### (a) What to implement
Copy from `06-implementation-plan.md` Phase 11 step 3 ("Delete the old 10 endpoints under `/sessions/:sessionDbId/*` and `/api/sessions/*` after all hook-side callers are switched"). Also §3.9 line 403 (SessionRoutes: "`/api/session/*` (4 endpoints — see 3.1)").
1. **Delete registrations** at `SessionRoutes.ts:377-382`:
- `app.post('/sessions/:sessionDbId/init', this.handleSessionInit.bind(this));`
- `app.post('/sessions/:sessionDbId/observations', this.handleObservations.bind(this));`
- `app.post('/sessions/:sessionDbId/summarize', this.handleSummarize.bind(this));`
- `app.get('/sessions/:sessionDbId/status', this.handleSessionStatus.bind(this));`
- `app.delete('/sessions/:sessionDbId', this.handleSessionDelete.bind(this));`
- `app.post('/sessions/:sessionDbId/complete', this.handleSessionComplete.bind(this));`
2. **Delete handler methods** `handleSessionInit`, `handleObservations`, `handleSummarize`, `handleSessionStatus`, `handleSessionDelete`, `handleSessionComplete` (the legacy six) if no other code references them.
3. Keep the `handle*ByClaudeId` variants in place *for this phase* — Phase 5 deletes `/api/sessions/status` specifically; Phase 6 replaces the remaining four `/api/sessions/*` with the unified four `/api/session/*`.
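After Phases 4-6 the registration block should reduce to exactly the four clean routes. A sketch of the target shape — the handler names are the planned ones (not yet in `SessionRoutes.ts`) and `AppLike` is a stand-in for the Express surface this plan touches:

```typescript
// Stand-in for the slice of the Express app surface this plan touches.
type Handler = (req: unknown, res: unknown) => void;
interface AppLike {
  get(path: string, handler: Handler): void;
  post(path: string, handler: Handler): void;
}

// Target shape of setupRoutes after Phases 4-6: exactly four registrations.
// Returns the registered paths so the Phase 7 endpoint-count gate is checkable.
function setupCleanRoutes(app: AppLike, h: Record<string, Handler>): string[] {
  const registered: string[] = [];
  const reg = (method: 'get' | 'post', path: string, fn: Handler) => {
    app[method](path, fn);
    registered.push(path);
  };
  reg('get', '/api/session/start', h.handleSessionStart);
  reg('post', '/api/session/prompt', h.handleSessionPrompt);
  reg('post', '/api/session/observation', h.handleSessionObservation);
  reg('post', '/api/session/end', h.handleSessionEnd);
  return registered;
}
```

Returning the path list gives the Phase 7 "exactly 4 route registrations" check a programmatic hook alongside the grep.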
### (b) Docs
- §3.1 line 123 ("Endpoint count: 8 → 4") — corrected to **10 → 4** per V9.
- §3.9 line 403 — final target `R3["SessionRoutes: /api/session/* (4 endpoints — see 3.1)"]`.
- **V9**.
- Live: `src/services/worker/http/routes/SessionRoutes.ts:377-382`.
### (c) Verification
```
grep -n "app\.\(post\|get\|delete\)\('/sessions/" src/services/worker/http/routes/SessionRoutes.ts → 0 matches
grep -n "app\.\(post\|get\|delete\)\('/api/sessions/" src/services/worker/http/routes/SessionRoutes.ts → 5 matches (Phase 5+6 reduce to 0)
wc -l src/services/worker/http/routes/SessionRoutes.ts → drops by ~250 lines (legacy handlers removed)
```
Integration test: send `POST /sessions/1/init` to a running worker. Assert `404`. Send to `/api/session/prompt` (Phase 6's replacement). Assert `200`.
### (d) Anti-pattern guards
- **D** — pure deletion; no "forwarding shim" to the new endpoints.
- **A** — no "LegacySessionRoutes" compatibility module. Delete means delete. Users who pinned an old plugin version still have the old worker binary shipped with their install.
---
## Phase 5 — Delete `/api/sessions/status` (polling endpoint is obsolete)
### (a) What to implement
Copy from §3.1 "Deleted from old flowchart" bullet 5 ("500-ms poll loop on `/api/sessions/status` (replaced by blocking `/api/session/end`)"). Phase 3 removes the only consumer; this phase deletes the supply.
1. **Delete registration** at `SessionRoutes.ts:389` (`app.get('/api/sessions/status', this.handleStatusByClaudeId.bind(this));`).
2. **Delete handler method** `handleStatusByClaudeId` + any private helpers it uses (if no other code references them).
3. Sanity-grep for any residual polling client.
### (b) Docs
- §3.1 deletion bullet 5.
- Part 2 **D6**.
- **V9** (endpoint 10 of 10).
- Live: `src/services/worker/http/routes/SessionRoutes.ts:389`.
### (c) Verification
```
grep -rn "/api/sessions/status" src/ → 0 matches (hook side removed in Phase 3)
grep -rn "handleStatusByClaudeId" src/ → 0 matches
```
### (d) Anti-pattern guards
- **B** — no polling endpoint means no one can be tempted to re-add a 500 ms loop against it later.
---
## Phase 6 — Consolidate `session-init` / `session-complete` handlers into unified session endpoints
### (a) What to implement
Copy from §3.1 diagram edges:
- `UPS["POST /api/session/prompt<br/>{sessionDbId, prompt}"] --> UPSR["Returns {promptId}"]` (lines 96, 103).
- `PTU["POST /api/session/observation<br/>{sessionDbId, tool_use_id, name, input, output}"] --> PTUR["Returns {observationId}"]` (lines 97, 105).
- "Deleted" bullet 3: "`POST /sessions/{id}/init` SDK-start endpoint (implicit inside `/api/session/prompt`)".
- "Deleted" bullet 6: "Two-phase Stop handling (summarize then session-complete) — one endpoint, one response".
1. **Rewrite `src/cli/handlers/session-init.ts:72-150`** as a single `POST /api/session/prompt` call:
- Replace `/api/sessions/init` (`:75`) + `/sessions/{sessionDbId}/init` (`:141`) + `/api/context/semantic` (`:23`) with one `workerHttpRequest('/api/session/prompt', {body: JSON.stringify({sessionId, project, prompt, platformSource})})`.
- The worker-side `/api/session/prompt` handler (implemented by Plan 07 Phase 3) does: (a) resolve/create `sessionDbId`, (b) `ingestPrompt` (Plan 01 Phase 2), (c) start the SDK agent if not already running for this session, (d) fetch semantic markdown via `SearchOrchestrator`, (e) return `{promptId, sessionDbId, semanticMarkdown?}`.
- `session-init.ts` passes `semanticMarkdown` into `additionalContext` (preserves the user-facing semantic injection feature — §3.5 + §3.1 `SS → SSR`).
2. **Rewrite `src/cli/handlers/observation.ts:17`** to call `POST /api/session/observation` with the new `{sessionDbId, tool_use_id, name, input, output}` payload. `tool_use_id` is passed through from the Claude Code hook input (already captured in `NormalizedHookInput` — verify before landing; if not, Plan 01 Phase 2 adds it because the UNIQUE constraint in Phase 9 depends on it).
3. **Rewrite `src/cli/handlers/file-edit.ts:15`** similarly — same endpoint, Cursor flow generates a synthetic `tool_use_id` (`file-edit:<path>:<mtime>`) if none exists.
4. **Delete `src/cli/handlers/session-complete.ts` entirely.** Its only role (mark session inactive) moves server-side into `/api/session/end`.
5. **Delete hook wiring** for the Stop-phase-2 `sessionCompleteHandler` in the adapter layer (`src/cli/adapters/claude-code.ts` — verify dispatcher mapping; this handler was the second callsite for the Stop event, feeding the old two-phase flow).
6. **Delete the remaining four `/api/sessions/*` legacy endpoints** at `SessionRoutes.ts:385-388` (`init`, `observations`, `summarize`, `complete`) — Phase 5 already deleted `status`. Their handlers `handleSessionInitByClaudeId`, `handleObservationsByClaudeId`, `handleSummarizeByClaudeId`, `handleCompleteByClaudeId` are deleted.
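The synthetic `tool_use_id` from step 3 (format `file-edit:<path>:<mtime>` per this plan) and the step 2 payload shape can be sketched together; both helper names are illustrative, not landed code:

```typescript
// Dedup key for observations that arrive without a native tool_use_id
// (Cursor file-edit flow). Format file-edit:<path>:<mtime> per Phase 6 step 3.
function syntheticToolUseId(filePath: string, mtimeMs: number): string {
  return `file-edit:${filePath}:${mtimeMs}`;
}

// Payload for POST /api/session/observation (field names per §3.1).
// A native tool_use_id wins; the synthetic key is the file-edit fallback.
function buildObservationPayload(
  sessionDbId: number,
  name: string,
  input: unknown,
  output: unknown,
  nativeToolUseId: string | undefined,
  filePath: string,
  mtimeMs: number,
): { sessionDbId: number; tool_use_id: string; name: string; input: unknown; output: unknown } {
  return {
    sessionDbId,
    tool_use_id: nativeToolUseId ?? syntheticToolUseId(filePath, mtimeMs),
    name,
    input,
    output,
  };
}
```

The `<path>:<mtime>` pair is what makes the Plan 01 `UNIQUE(session_id, tool_use_id)` constraint safe for Cursor: re-saving the same file at the same mtime dedups, a later save produces a fresh key.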
### (b) Docs
- §3.1 lines 96, 97, 103, 105 + deletion bullets 3, 6.
- §3.8 lines 325-332 (A `POST /api/session/prompt` → `SessionManager.initializeSession → Create → ActiveSession → spawn SDK`) — implicit SDK start.
- **V9** endpoints `:385-:388`.
- Live: `src/cli/handlers/session-init.ts:75,141,23`; `src/cli/handlers/observation.ts:17`; `src/cli/handlers/file-edit.ts:15`; `src/cli/handlers/session-complete.ts` (entire file).
### (c) Verification
```
grep -rn "/api/sessions/" src/ → 0 matches (all five legacy paths deleted)
grep -rn "/sessions/.*sessionDbId" src/ → 0 matches (legacy six deleted in Phase 4)
grep -rn "/api/session/" src/ → exactly 4 distinct paths: start, prompt, observation, end
grep -rn "/api/context/semantic" src/ → 0 matches (folded into /api/session/prompt)
grep -rn "sessionCompleteHandler" src/ → 0 matches (file deleted)
test -f src/cli/handlers/session-complete.ts → false
```
End-to-end: full SessionStart → UserPromptSubmit → PostToolUse × 3 → Stop cycle against a fresh worker. Assert exactly these HTTP calls (verified via worker access log):
1. `GET /api/session/start?project=…` (SessionStart, from `context.ts`)
2. (Maybe) `GET /api/session/start?project=…&colors=true` (SessionStart parallel, from `user-message.ts`) — **if Phase 2 cache misses because the two handlers run in separate hook processes; otherwise 0 calls.**
3. `POST /api/session/prompt` (UserPromptSubmit)
4. `POST /api/session/observation` × 3 (PostToolUse)
5. `POST /api/session/end` (Stop)
Total: 5 or 6 HTTP calls per session (was 10-14: one `ensureWorkerRunning` ping per handler + two `/api/context/inject` + `/api/sessions/init` + `/sessions/1/init` + `/api/context/semantic` + 3× `/api/sessions/observations` + `/api/sessions/summarize` + ~220× poll `/api/sessions/status` + `/api/sessions/complete` × 2).
### (d) Anti-pattern guards
- **A** — no new event type; `POST /api/session/prompt` maps 1:1 to the existing UserPromptSubmit hook. No `hookEventName` changes.
- **D** — `/api/session/prompt` is the single source of truth for "start processing this user prompt". No facade calling an internal `/api/sessions/init`.
- **E** — `session-init.ts` and `observation.ts` both land on the same backend `ingestObservation`/`ingestPrompt` helpers via their respective endpoints; no duplicate tag-strip / privacy check paths.
---
## Phase 7 — Verification (grep counts, exit codes, Windows Terminal)
### (a) What to verify
1. **Grep counts** (final "clean" state):
```
grep -rn "ensureWorkerRunning" src/cli/handlers/ → 0
grep -rn "ensureWorkerAliveOnce" src/cli/handlers/ → 8
grep -rn "POLL_INTERVAL_MS" src/ → 0
grep -n "MAX_WAIT_FOR_SUMMARY_MS" src/cli/handlers/summarize.ts → 1 (HTTP client timeout)
grep -rn "/api/sessions/" src/ → 0
grep -rn "/sessions/.*sessionDbId" src/ → 0
grep -rn "/api/context/inject" src/ → 0
grep -rn "/api/context/semantic" src/ → 0
grep -rn "/api/session/" src/ → exactly 4 paths
grep -c "app\.\(post\|get\|delete\)" src/services/worker/http/routes/SessionRoutes.ts → 4
```
2. **Exit-code census** (preserves CLAUDE.md contract):
- Every hook-handler return path uses `HOOK_EXIT_CODES.SUCCESS` (= 0) on the graceful-degradation branch. Run:
```
grep -B1 "HOOK_EXIT_CODES" src/cli/handlers/*.ts
```
Expected: exit 0 on (worker-unreachable, empty context, empty transcript, 110 s timeout, subagent, project excluded). No new exit 2 paths.
- Windows Terminal tab behaviour: exit 0 closes the tab on successful completion. The blocking `/api/session/end` 110 s path MUST also return exit 0 (not exit 2), so tabs close on timeout. Ship a Windows-Terminal integration test: trigger a synthetic 110 s timeout; confirm tab closes.
3. **Timer census**:
```
grep -rn "setInterval\|setTimeout.*recursive" src/cli/ → 0 in CLI handlers
grep -rn "setTimeout.*POLL" src/cli/ → 0
```
4. **Endpoint count** on `SessionRoutes.ts`: exactly **4** route registrations. Matches §3.1.
### (b) Docs
- Whole §3.1 diagram, Part 4 timer census, Part 5 deletion ledger rows for "Summarize 500-ms polling hook" and "Double `/api/context/*` fetches".
- **V8**, **V9**, **V10**.
- CLAUDE.md exit-code strategy section ("Exit 0: Success or graceful shutdown — Windows Terminal closes tabs").
### (c) Verification (running the phase)
The phase produces no new code; it runs the grep + integration tests above and fails the rollout if any gate trips. Land only when:
- all greps pass,
- synthetic 110 s timeout → exit 0 → tab closes (Windows),
- full session cycle reports 5-6 HTTP calls (was 10-14).
### (d) Anti-pattern guards
- **B/D/E** — verified by absence (grep). **A** — verified by "`hookEventName` value set unchanged" (`SessionStart`, `UserPromptSubmit`, `PostToolUse`, `Stop`).
---
## Copy-Ready Snippet Locations
**Hook-side session-alive cache (Phase 2)**:
Location: new file `src/hooks/worker-cache.ts` (create; this is the one file added by this plan).
Shape: one module-scope `let alive: boolean | null = null;` + one `let sessionStartResponse: SessionStartResponse | null = null;`. Lives as long as the hook process does (≤120 s). No persistence, no cross-process sharing — that's the point. Plan 07 owns the *server-side* session state; Plan 09 owns only the per-hook-process cache.
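A minimal sketch of that cache module, assuming the names above; the probe is injected here purely so the caching behaviour is testable without a live worker:

```typescript
// Hypothetical shape of src/hooks/worker-cache.ts per this plan.
type SessionStartResponse = { sessionDbId: number; context: string }; // assumed shape

let alive: boolean | null = null;
let sessionStartResponse: SessionStartResponse | null = null;

// First call probes the worker; every later call in this hook process
// returns the cached answer. No persistence, no cross-process sharing.
export async function ensureWorkerAliveOnce(
  probe: () => Promise<boolean>,
): Promise<boolean> {
  if (alive === null) alive = await probe();
  return alive;
}

export function cacheSessionStart(r: SessionStartResponse): void {
  sessionStartResponse = r;
}

export function getCachedSessionStart(): SessionStartResponse | null {
  return sessionStartResponse;
}
```

Because both variables are module-scope `let`, the cache dies with the hook process (≤120 s) by construction.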
**Poll loop deletion target (Phase 3)**:
`src/cli/handlers/summarize.ts:117-150` — the entire `while ((Date.now() - waitStart) < MAX_WAIT_FOR_SUMMARY_MS) { await sleep(POLL_INTERVAL_MS); … }` block plus `summarize.ts:24` (`POLL_INTERVAL_MS = 500`).
**Double-fetch deletion target (Phase 1)**:
`src/cli/handlers/context.ts:41-57` (the `Promise.all([workerHttpRequest(apiPath), workerHttpRequest(colorApiPath)])`) + `src/cli/handlers/user-message.ts:13-30` (`fetchAndDisplayContext`).
**`ensureWorkerRunning` 8 callsites (Phase 2 rewires all 8)**:
```
src/cli/handlers/context.ts:19
src/cli/handlers/user-message.ts:35
src/cli/handlers/session-init.ts:41
src/cli/handlers/observation.ts:34
src/cli/handlers/summarize.ts:44
src/cli/handlers/session-complete.ts:35 (file deleted in Phase 6 — callsite deleted with it)
src/cli/handlers/file-context.ts:218
src/cli/handlers/file-edit.ts:32
```
---
## Confidence + Gaps
### High confidence
- Hook → endpoint mapping (enumerated against live code).
- V8/V9/V10 verified against `Grep` output this session (2026-04-22).
- Endpoint count **10 → 4** verified at `SessionRoutes.ts:377-389` — supersedes the §3.1 "8 → 4" claim.
- `HOOK_EXIT_CODES.SUCCESS = 0` is the sole value used in every return branch of every handler today. Every phase preserves exit-0 semantics.
### Gaps (call out before executing)
1. **Stop-hook exit codes on 110 s timeout path — NEEDS CONFIRMATION.** Current `summarize.ts` returns exit 0 on all branches (poll timeout falls through to `/api/sessions/complete` → `return { exitCode: undefined }` implicitly → adapter defaults to 0). The new blocking `/api/session/end` must explicitly return exit 0 when the server responds `{timedOut: true, summaryId: null}`. §3.1 ("Exit 0") and CLAUDE.md ("Exit 0: graceful shutdown — Windows Terminal closes tabs") agree. **Phase 3 verification step must include a synthetic-timeout Windows Terminal test** — otherwise the refactor could silently introduce an exit-2 path that blocks tab closure, which CLAUDE.md explicitly warns against.
2. **`tool_use_id` availability in CLI hook payloads.** `POST /api/session/observation` requires `tool_use_id` (§3.1 `PTU` edge). Current `NormalizedHookInput` may or may not already carry it — `src/shared/NormalizedHookInput` needs a verification pass in Phase 6 (deferred to Plan 01 Phase 2 if absent). This gates the UNIQUE constraint in Plan 09 Phase 9 (SQLite); out of scope here but a coupling to flag.
3. **`user-message.ts` + `context.ts` run as separate hook processes on some Claude Code versions.** Module-scope `let` in `worker-cache.ts` won't share state across processes. If the Claude Code hook runner invokes them sequentially in one process: 1 HTTP call. If in parallel processes: 2 HTTP calls (still one each, still ≤2 total — acceptable, same as today's `/api/context/inject` double-fetch but under the new endpoint). **Not a correctness issue; a minor perf claim in Phase 1 verification needs empirical confirmation, not a blocker.**
### Out-of-scope adjacencies (flagged)
- Worker-side implementation of `GET /api/session/start`, `POST /api/session/prompt`, `POST /api/session/end` → Plans 05 + 07.
- `ingestObservation`/`ingestPrompt`/`ingestSummary` helpers → Plan 01.
- `file-context.ts` `GET /api/observations/by-file` endpoint → Plan 08 (DataRoutes), not touched here.
- `pre-compact.ts` (delegates to `summarizeHandler`) inherits the Phase 3 rewrite automatically; no extra work.
---
## Summary
- **7 phases**, executed in order (1 → 7). Phases 1, 2, 3 are independent of each other on the **hook side** (different files) but all depend on worker-side Plans 01, 05, 07 Phase-N endpoints existing; Phases 4, 5, 6 delete worker-side code after hooks stop calling it.
- **Lines deleted (hook side)**: `summarize.ts` loses ~80 lines (lines 86-167 collapse to ~10); `user-message.ts` loses ~17 lines; `context.ts` loses ~15 lines; `session-complete.ts` deleted entirely (65 lines); `session-init.ts` loses ~60 lines. **~237 lines gone** from `src/cli/handlers/`.
- **Lines deleted (worker side, SessionRoutes.ts)**: ~250 lines (6 legacy handlers + 5 ByClaudeId handlers).
- **Lines added**: `src/hooks/worker-cache.ts` ~25 lines; 8 handler rewires net ~0. **Total net**: ~-460 lines in this plan's scope (consistent with Part 5 ledger rows `-60/+20` summarize + `-120/+60` context = **-100 net**, plus the Phase 4+5+6 SessionRoutes delete not counted in §5 because §5 lumped it into "session-lifecycle-management").
- **Top gaps**: (1) 110 s timeout exit code must be 0 (Windows Terminal contract); (2) `tool_use_id` presence in `NormalizedHookInput` needs verification before Phase 6.
@@ -0,0 +1,391 @@
# Plan 10 — knowledge-corpus-builder (clean)
**Target section**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` § 3.11 (lines 450-476), Part 1 items #35 (line 53) and #36 (line 54).
**Before-state**: `PATHFINDER-2026-04-21/01-flowcharts/knowledge-corpus-builder.md` (lines 1-87).
**Implementation-plan correspondence**: `PATHFINDER-2026-04-21/06-implementation-plan.md` Phase 13 — "KnowledgeAgent simplification" (lines 567-597). **Direct V-number: NONE** — the verified-findings matrix (V1-V20, lines 22-47) does not include a corpus-specific entry. No upstream discrepancy was registered for this area; treat 05 § 3.11 + Phase 13 as the canonical pair.
## Dependencies
- **Upstream**:
- Plan 05-context-injection-engine — defines `CorpusDetailStrategy` (one of the four strategy configs in 05 § 3.5 lines 232-259 and Part 2 decision D4 line 75). This plan calls `renderObservations(obs, CorpusDetailStrategy)` from CorpusBuilder.
- Plan 06-hybrid-search-orchestration — defines the clean `SearchOrchestrator.search` signature (05 § 3.6 lines 262-292). CorpusBuilder is a *consumer* — the live call is `SearchOrchestrator.search(args)` at `src/services/worker/search/SearchOrchestrator.ts:71`.
- **Downstream**: none.
## Phase 0 — Documentation Discovery (already done)
### Sources consulted
1. `PATHFINDER-2026-04-21/05-clean-flowcharts.md` — full file (607 lines). Section 3.11 (lines 450-476) is canonical; Part 1 items #35-36 (lines 53-54) set the kill rationale; Part 5 ledger row (line 556) promises ~110 net lines deleted in this area.
2. `PATHFINDER-2026-04-21/06-implementation-plan.md` — full file (691 lines). Phase 13 (lines 567-597). **No V-number in 06's verified-findings table (V1-V20) covers the corpus.** Stated explicitly: Phase 13 cites 05 § 3.11 directly without a V-correction, because the audit's claims matched the live code.
3. `PATHFINDER-2026-04-21/01-flowcharts/knowledge-corpus-builder.md` — full file (87 lines). "Before" flowchart + the Confidence+Gaps section pinpoints the regex at `KnowledgeAgent.ts:179`.
4. Live codebase (confirmed paths, line counts, and specific anchors):
- `src/services/worker/knowledge/KnowledgeAgent.ts` (284 lines)
- `src/services/worker/knowledge/CorpusStore.ts` (127 lines)
- `src/services/worker/knowledge/CorpusBuilder.ts` (174 lines)
- `src/services/worker/knowledge/CorpusRenderer.ts` (133 lines)
- `src/services/worker/knowledge/types.ts` (56 lines)
- `src/services/worker/knowledge/index.ts` (14 lines)
- `src/services/worker/http/routes/CorpusRoutes.ts` (283 lines)
- `src/services/worker-service.ts:455-456` — constructor wiring
- `src/servers/mcp-server.ts:499,517,551` — MCP tool surface that mirrors HTTP
5. Dependency plans (cross-refs only, not re-planned here):
- 05 § 3.5 (CorpusDetailStrategy) — renderer contract at 05 lines 379-389
- 05 § 3.6 (SearchOrchestrator.search) — live signature at `src/services/worker/search/SearchOrchestrator.ts:71`.
### Allowed APIs (copy from; do not invent)
- **Claude Agent SDK** — `query({ prompt, options })`, already used at `KnowledgeAgent.ts:75` and `:190`. Per 05 § 3.11 (line 461 node "S"): call as `SDK.query(systemPrompt=corpus, userPrompt=question)` — a fresh query every call. The existing SDK usage patterns (cwd, disallowedTools, pathToClaudeCodeExecutable, env) at `KnowledgeAgent.ts:77-84` stay.
- **Prompt caching** — the SDK supplies it automatically when the same system prompt is sent within the 5-min TTL. 05 § 3.11 "Cost note" (line 476): "cached system prompt TTL is 5 min. Cost approximately equal to session-resume path without the session-expiration brittleness." The refactor does not add any caching code — it relies on the SDK's own behavior.
- **CorpusDetailStrategy** — comes from Plan 05 (renderer contract at 05 lines 379-389). This plan consumes it; it does not define it.
- **`bun:sqlite` / file I/O** — `CorpusStore` already uses `fs.writeFileSync/readFileSync`. No new storage primitives.
### Anti-patterns to prohibit (cited in every phase)
- **A — Invent SDK methods for session resume.** The SDK has no documented session-expiry ping or refresh endpoint. Don't add one.
- **B — Polling.** The regex test `/session|resume|expired|invalid.*session|not found/i` at `KnowledgeAgent.ts:179` is a polling heuristic in disguise — try, match on error text, retry. Delete.
- **C — Silent fallback.** The current "session expired → silently reprime → retry" path at `KnowledgeAgent.ts:146-160` hides a contract violation. Replacement contract: every `/query` runs a **fresh** SDK query; there is no expiration state to recover from.
- **D — Facades that pass through.** `KnowledgeAgent.reprime` at `KnowledgeAgent.ts:168-171` is a two-line call to `prime`. Both die together.
- **E — Two code paths for the same data.** After the refactor, there is exactly one path that sends a corpus to the SDK: inside the `/query` handler.
### Corpus.json schema change (from `types.ts:40-51`)
Before:
```ts
interface CorpusFile {
version: 1;
name: string;
description: string;
created_at: string;
updated_at: string;
filter: CorpusFilter;
stats: CorpusStats;
system_prompt: string;
session_id: string | null; // <-- DROP
observations: CorpusObservation[];
}
```
After (per 06 Phase 13 task 2, line 579 — with this plan's note that observations stay because `/query` still needs them to build the system prompt):
```ts
interface CorpusFile {
version: 2; // bump so older files with session_id are recognized
name: string;
description: string;
created_at: string;
updated_at: string;
filter: CorpusFilter;
stats: CorpusStats;
system_prompt: string;
observations: CorpusObservation[];
}
```
> 06 Phase 13 line 579 suggests trimming further to `{name, filters, renderedCorpus, generatedAt}`. This plan keeps the richer shape so `/query` can recompute `renderObservations(obs, CorpusDetailStrategy)` on demand without re-hitting SQLite. If the stored `system_prompt` + observations combined are too large, switch to storing `renderedCorpus` directly; decision flagged in "Gaps" below.
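Under that decision, the read path stays a one-liner. A hedged sketch of "ignore on read, never re-emit" (the helper name is hypothetical; the interface is trimmed to the fields the sketch touches):

```typescript
// Hypothetical CorpusStore read path under the version-2 shape above: a
// legacy v1 file that still carries session_id parses fine; the field is
// dropped on read and never written back.
interface CorpusFileV2 {
  version: number;
  name: string;
  system_prompt: string;
  observations: unknown[];
}

export function readCorpusFile(raw: string): CorpusFileV2 {
  // Destructure session_id away instead of migrating it (guards A/C).
  const { session_id: _ignored, ...rest } = JSON.parse(raw);
  return rest as CorpusFileV2;
}
```

Anything the worker serializes from this return value is session_id-free by construction, which is what the Phase 1 `jq 'has("session_id")' → false` check verifies on disk.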
### HTTP surface (constraint from prompt)
Keep:
- `POST /api/corpus` (build)
- `POST /api/corpus/:name/query`
- `POST /api/corpus/:name/rebuild`
- `DELETE /api/corpus/:name`
- `GET /api/corpus` (list) and `GET /api/corpus/:name` (get) — present today at `CorpusRoutes.ts:29-30`; 05 § 3.11 doesn't mention them but they are user-facing read endpoints. Keep.
Delete (per 05 § 3.11 lines 468-474):
- `POST /api/corpus/:name/prime` (handler at `CorpusRoutes.ts:33` / `:213-228`)
- `POST /api/corpus/:name/reprime` (handler at `CorpusRoutes.ts:35` / `:267-282`)
---
## Phase 1 — Remove `session_id` from the corpus schema and `CorpusStore`
### (a) What to implement — Copy from …
- Copy from **05 § 3.11 line 470**: "`session_id` persisted in corpus.json" is in the deleted list. Also **06 Phase 13 task 2** (line 579): "Simplify `CorpusStore`… No `session_id`."
### (b) Docs
- 05 § 3.11 (lines 450-474) — sets the "no session_id" rule.
- 06 Phase 13 task 2 (line 579) — task text.
- Live file:line targets:
- `src/services/worker/knowledge/types.ts:49` — `session_id: string | null;` inside `CorpusFile`. Remove.
- `src/services/worker/knowledge/types.ts:40` — bump `version: 1` → `version: 2`.
- `src/services/worker/knowledge/types.ts:53-56` — `QueryResult { answer, session_id }`. Remove `session_id` from `QueryResult` (new shape: `{ answer }`).
- `src/services/worker/knowledge/CorpusStore.ts:61, :67, :77` — `list()` return type drops `session_id`; payload builder at `:74-78` drops the field.
- `src/services/worker/knowledge/CorpusBuilder.ts:104` — literal `session_id: null` inside the built corpus. Delete the line.
### (c) Verification
- `grep -rn "session_id" src/services/worker/knowledge/` → zero lines. (Today: 18 matches across KnowledgeAgent.ts, CorpusStore.ts, CorpusBuilder.ts, types.ts.)
- Compile clean: `npx tsc --noEmit`.
- Unit test: `CorpusStore.read` on a legacy corpus file that still has `session_id` returns a valid `CorpusFile` (extra field ignored by the structural cast, or migrated — see "Blast radius" note below).
- `corpus.json` schema assertion (new integration test): build a corpus; read the file back with `JSON.parse`; assert `!("session_id" in parsed)`.
### (d) Anti-pattern guards
- **A**: Don't add a "migration helper" that re-writes old `session_id: "..."` fields into some new shape. Ignore the field on read; the worker never re-emits it.
- **C**: Don't default `session_id` to `null` "for backward compat" — drop the field outright.
---
## Phase 2 — Delete `KnowledgeAgent.prime` as a distinct operation
### (a) What to implement — Copy from …
- Copy from **05 § 3.11 deleted list, line 469**: "`KnowledgeAgent.prime` as a distinct operation — build IS prime (corpus.json is the prime artifact)."
- 06 Phase 13 task 1 (line 578).
### (b) Docs
- 05 § 3.11 (lines 450-474) — deleted-nodes rationale.
- Live file:line targets:
- `src/services/worker/knowledge/KnowledgeAgent.ts:52-117` — entire `prime()` method (66 lines). Delete.
- `src/services/worker/knowledge/KnowledgeAgent.ts:163-171` — entire `reprime()` method (9 lines). Delete (see Phase 4 for endpoint). `reprime` just calls `prime`, so it dies with it (anti-pattern **D**).
- `src/services/worker/knowledge/KnowledgeAgent.ts:12-41` — imports `OBSERVER_SESSIONS_DIR`, `ensureDir`, `buildIsolatedEnv`, `sanitizeEnv`, `KNOWLEDGE_AGENT_DISALLOWED_TOOLS`. Some still used by the rewritten `query()` in Phase 5; reassess after Phase 5 lands. The disallowedTools list at `:28-41` stays (still applied per call per 05 § 3.11 — Q&A only).
### (c) Verification
- `grep -rn "^\s*async prime\|\.prime(" src/services/worker/knowledge/` → zero.
- `grep -rn "async reprime\|\.reprime(" src/services/worker/knowledge/` → zero.
- Corpus still builds end-to-end: `curl -X POST /api/corpus -d '{"name":"t","limit":5}'` returns metadata; the resulting `~/.claude-mem/corpora/t.corpus.json` has observations + system_prompt but no SDK session was spawned during build.
- `wc -l src/services/worker/knowledge/KnowledgeAgent.ts` drops by roughly 75 lines (prime 66 + reprime 9). Tracked against the 110-line net-delete target in 05 Part 5.
### (d) Anti-pattern guards
- **A**: Don't add `buildAndPrime(corpus)` as a "unified" helper. Build *is* prime; the SDK is not touched at build time anymore.
- **D**: `reprime` is a pass-through; delete the method, don't keep a stub.
---
## Phase 3 — Delete the auto-reprime regex and the session-expiration retry path
### (a) What to implement — Copy from …
- Copy from **05 Part 1 line 53** (item #35): "KnowledgeAgent auto-reprime on session-expiration regex match … just always prime on query — or store corpus content in a file the SDK loads fresh. No session_id persistence."
- Copy from **05 § 3.11 deleted list, line 471**: "Auto-reprime on regex-matched expiration (~40 lines)."
### (b) Docs
- 05 Part 1 #35 (line 53) — kill rationale.
- 05 § 3.11 (lines 450-474) — replacement flow ("SDK.query(systemPrompt=corpus, userPrompt=question) — fresh query — no session resume").
- Live file:line targets:
- `src/services/worker/knowledge/KnowledgeAgent.ts:119-161` — `query()` method with its try/catch auto-reprime branch. Delete the entire body; Phase 5 rewrites it.
- `src/services/worker/knowledge/KnowledgeAgent.ts:173-180` — `isSessionResumeError()`. **Exact regex to delete** (captured at `:179`):
```
/session|resume|expired|invalid.*session|not found/i
```
Delete the whole method.
- `src/services/worker/knowledge/KnowledgeAgent.ts:183-230` — `executeQuery()` (the resume path). Delete; Phase 5 replaces it.
### (c) Verification
- `grep -rnE "isSessionResumeError|auto.?reprime|session.*expired" src/services/worker/knowledge/` → zero.
- `grep -rnE "session\|resume\|expired\|invalid.*session\|not found" src/services/worker/knowledge/` → zero (the raw regex string is gone; the escaped `\|` matches the literal pipes in it).
- No retry-on-error logic anywhere in `KnowledgeAgent`. A failed `/query` call propagates to the route handler as a thrown error, returned to the client as `{error: '…'}`.
### (d) Anti-pattern guards
- **B**: Do not replace the regex with a different error-string match. The whole "detect expiry → retry" pattern goes.
- **C**: If `SDK.query` throws, do **not** silently reprime and retry. Propagate. The caller decides.
- **A**: The SDK does not expose a `refreshSession` or `isSessionValid` method — confirmed by the existing usage in `SDKAgent.ts` (not imported for our code path). Don't invent one.
---
## Phase 4 — Delete `/prime` and `/reprime` endpoints
### (a) What to implement — Copy from …
- Copy from **05 § 3.11 deleted list, lines 472-474**: "`reprime` endpoint (rebuild covers it)" and (by implication) `prime` endpoint (since `prime` as an operation is gone).
- 06 Phase 13 task 1 (line 578): "Delete `KnowledgeAgent.prime` and the `reprime` endpoint."
### (b) Docs
- Constraint from the request: keep `POST /api/corpus`, `POST /api/corpus/:name/query`, `POST /api/corpus/:name/rebuild`, `DELETE /api/corpus/:name`. Drop `/prime` and `/reprime`.
- Live file:line targets:
- `src/services/worker/http/routes/CorpusRoutes.ts:33` — `app.post('/api/corpus/:name/prime', …)` registration. Delete.
- `src/services/worker/http/routes/CorpusRoutes.ts:35` — `app.post('/api/corpus/:name/reprime', …)` registration. Delete.
- `src/services/worker/http/routes/CorpusRoutes.ts:209-228` — `handlePrimeCorpus` handler (20 lines). Delete.
- `src/services/worker/http/routes/CorpusRoutes.ts:263-282` — `handleReprimeCorpus` handler (20 lines). Delete.
- `src/servers/mcp-server.ts:499` — MCP tool `prime_corpus`. Delete (tool registration + handler). The deferred-tool namespace exposes it today as `mcp__plugin_claude-mem_mcp-search__prime_corpus`.
- `src/servers/mcp-server.ts:551` — MCP tool `reprime_corpus`. Delete.
- `src/servers/mcp-server.ts:517` — `query_corpus` description mentions "The corpus must be primed first"; update to "Ask a question about the corpus; the corpus content is loaded fresh per query."
### (c) Verification
- `curl -X POST http://localhost:37777/api/corpus/foo/prime` → HTTP 404 (route no longer registered; Express default 404).
- `curl -X POST http://localhost:37777/api/corpus/foo/reprime` → HTTP 404.
- `grep -rn "prime_corpus\|reprime_corpus" src/` → zero.
- `grep -rn "handlePrimeCorpus\|handleReprimeCorpus" src/` → zero.
- MCP client listing no longer shows `prime_corpus` or `reprime_corpus` tools.
### (d) Anti-pattern guards
- **D**: Don't leave thin `/prime` and `/reprime` handlers that just return 410 Gone. Delete the routes; 404 is the correct response.
- **A**: Don't add a compatibility-shim tool `prime_corpus_deprecated`.
---
## Phase 5 — Rewrite `/query` to issue a fresh SDK query with corpus content as system prompt
### (a) What to implement — Copy from …
- Copy from **05 § 3.11 lines 460-463** (the clean flowchart):
```
Q["POST /api/corpus/:name/query {question}"] --> R["CorpusStore.read(name)"]
R --> S["SDK.query(systemPrompt=corpus, userPrompt=question) (fresh query — no session resume)"]
S --> T["Return answer"]
```
- Copy from **06 Phase 13 task 3** (line 580): "Rewrite `KnowledgeAgent.query` to always pass `systemPrompt = renderedCorpus` to the SDK. Claude prompt-caching reduces cost when the same corpus is queried repeatedly within the 5-min TTL."
### (b) Docs
- 05 § 3.11 (lines 450-476), especially the Cost note (line 476).
- Live file:line targets:
- `src/services/worker/knowledge/KnowledgeAgent.ts` — new `query(corpus, question)` body. Copy the SDK-invocation pattern from the current `executeQuery` at `:185-230`, but with:
- `prompt: question` (user prompt)
- `options.systemPrompt: renderedCorpus` (new — load the corpus as system prompt)
- **Remove** `options.resume: corpus.session_id` (line 194)
- Keep `options.model`, `options.cwd`, `options.disallowedTools`, `options.pathToClaudeCodeExecutable`, `options.env` (lines 193, 195-198).
- `src/services/worker/knowledge/KnowledgeAgent.ts:14` — `import { CorpusRenderer }` already exists. Use it. The corpus-rendering call is the combination of `corpus.system_prompt` + `renderer.renderCorpus(corpus)`. Exact shape (copy from the current `prime` prompt at `KnowledgeAgent.ts:61-69`, minus the "Acknowledge" ending):
```
const systemPrompt = [
corpus.system_prompt,
'',
'Here is your complete knowledge base:',
'',
renderer.renderCorpus(corpus),
].join('\n');
```
- **Note for Phase 6**: `renderer.renderCorpus(corpus)` is the migration target for `renderObservations(obs, CorpusDetailStrategy)`. In this phase, call the existing renderer; Phase 6 swaps the internals.
- `src/services/worker/http/routes/CorpusRoutes.ts:235-261` — `handleQueryCorpus`. Keep the handler; change the response shape from `{answer, session_id}` (line 260) to `{answer}` only.
- `src/services/worker/knowledge/types.ts:53-56` — `QueryResult` narrowed to `{ answer: string }`.
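The "fresh query, no resume" contract above can be isolated as a pure option builder, which makes it unit-testable before the SDK wiring lands. A hedged sketch — the builder name is hypothetical and `CorpusLike` is a minimal stand-in for `CorpusFile`:

```typescript
// Hypothetical Phase 5 option builder: systemPrompt from the corpus,
// deliberately no `resume` key.
interface CorpusLike { system_prompt: string }

export function buildQueryOptions(
  corpus: CorpusLike,
  renderedCorpus: string,
): { systemPrompt: string } {
  const systemPrompt = [
    corpus.system_prompt,
    "",
    "Here is your complete knowledge base:",
    "",
    renderedCorpus,
  ].join("\n");
  // Every /query is a fresh SDK query; there is no session to resume.
  return { systemPrompt };
}
```

The real `query()` would spread these options into the existing SDK-invocation pattern from `executeQuery`, keeping `model`, `cwd`, `disallowedTools`, and `env` unchanged.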
### (c) Verification
- Send three queries against the same corpus within 5 min. Inspect SDK response usage (cache fields). Expected: call 1 writes full system prompt to the cache; calls 2 and 3 report `cache_read_input_tokens > 0`.
- `grep -n "resume:" src/services/worker/knowledge/KnowledgeAgent.ts` → zero.
- `grep -n "systemPrompt" src/services/worker/knowledge/KnowledgeAgent.ts` → exactly one occurrence (inside new `query`).
- Every `/query` call produces a subprocess with no `--resume` flag. Verify via `ps` output (subprocess argv) or SDK logs.
- End-to-end: `curl -X POST /api/corpus/foo/query -d '{"question":"What did we learn about Chroma?"}'` returns `{answer: "..."}` with no `session_id` field.
### (d) Anti-pattern guards
- **A**: The SDK option is `systemPrompt`; do not invent `systemMessage`, `initialContext`, or `primePrompt`. Verify the exact SDK option name in `@anthropic-ai/claude-agent-sdk` types before shipping.
- **C**: If `SDK.query` throws, propagate the error. No silent retry. No fallback to "cached answer".
- **E**: There is exactly one SDK-call site in the knowledge module after this phase — inside `KnowledgeAgent.query`. Anyone adding a second SDK call elsewhere in the module is introducing duplication.
---
## Phase 6 — Switch `CorpusBuilder` rendering to `renderObservations(obs, CorpusDetailStrategy)`
### (a) What to implement — Copy from …
- Copy from **05 § 3.11 line 457** (the clean flowchart node E): `E["renderObservations(obs, CorpusDetailStrategy)<br/>(U2 unified renderer)"]`.
- Copy from **05 Part 2 Decision D4** (line 75): "One renderer. `renderObservations(obs[], strategy)` where `strategy` selects columns, density, and grouping. The four existing formatters become four small strategy configs."
- Copy the `RenderStrategy` contract from **05 § 3.5 / 06 Phase 8** (06 lines 379-389).
### (b) Docs
- 05 § 3.11 (lines 450-476), 05 § 3.5, 05 Part 2 D4.
- **This plan depends on Plan 05-context-injection-engine** to have defined `CorpusDetailStrategy` at `src/services/rendering/renderObservations.ts` (path per 06 Phase 8 task 1, line 379). If Plan 05 has not shipped, this phase BLOCKS on it.
- Live file:line targets:
- `src/services/worker/knowledge/CorpusBuilder.ts:44` — `this.renderer = new CorpusRenderer();` constructor line. Replace with import of `renderObservations` and `CorpusDetailStrategy`.
- `src/services/worker/knowledge/CorpusBuilder.ts:109` — `corpus.system_prompt = this.renderer.generateSystemPrompt(corpus)`. Keep (the system-prompt *preamble* is distinct from the observation rendering). Or migrate to a separate strategy if 05 specifies one; 05 does not, so keep.
- `src/services/worker/knowledge/CorpusBuilder.ts:112` — `const renderedText = this.renderer.renderCorpus(corpus)`. Replace with `const renderedText = renderObservations(corpus.observations, CorpusDetailStrategy);`.
- `src/services/worker/knowledge/CorpusBuilder.ts:113` — `corpus.stats.token_estimate = this.renderer.estimateTokens(renderedText)`. Keep (token estimator is independent); if Plan 05 moves `estimateTokens` into the unified renderer's output, update.
- `src/services/worker/knowledge/KnowledgeAgent.ts` (Phase 5 rewrite) — swap `renderer.renderCorpus(corpus)` inside the query-time systemPrompt builder for `renderObservations(corpus.observations, CorpusDetailStrategy)`.
- `src/services/worker/knowledge/CorpusRenderer.ts` — after both call-sites migrate, delete `renderCorpus()` (lines 14-34) and `renderObservation()` (lines 39-85). Keep `generateSystemPrompt()` (lines 97-132) and `estimateTokens()` (lines 90-92) unless Plan 05 absorbs them. If nothing remains, delete the file; otherwise trim.
### (c) Verification
- `grep -n "renderCorpus\|renderObservation(" src/services/worker/knowledge/CorpusBuilder.ts` → zero.
- `grep -rn "renderObservations" src/services/worker/knowledge/` → exactly two call-sites (CorpusBuilder and KnowledgeAgent).
- Snapshot test: feed the same fixture `CorpusObservation[]` to the old `CorpusRenderer.renderCorpus` and the new `renderObservations(obs, CorpusDetailStrategy)` call; assert byte-equal output (or diff in a controlled way documented in Plan 05's snapshot contract).
- `wc -l src/services/worker/knowledge/CorpusRenderer.ts` drops from 133 to roughly 40 (only `generateSystemPrompt` + `estimateTokens` remain, if they remain at all).
### (d) Anti-pattern guards
- **A**: The function name is `renderObservations` (plural), per 05 D4 and 06 Phase 8. Don't invent `renderCorpusObservations` or `renderForAgent`.
- **E**: After this phase, there is one traversal of `observations` in the knowledge module — inside `renderObservations`. Don't leave `renderObservation` (singular) as a helper in CorpusRenderer; Plan 05 owns it.
---
## Phase 7 — Verification (final)
### (a) What to implement — Copy from …
- Copy the verification pattern from **06 Phase 13 task 4 / verification block** (lines 581-588).
- Copy the cost-check from **05 § 3.11 Cost note** (line 476).
### (b) Docs
- 05 § 3.11 (lines 450-476).
- 06 Phase 13 (lines 567-597).
### (c) Verification
1. **Grep gauntlet** (exact commands):
- `grep -rn "session_id" src/services/worker/knowledge/` → **zero**.
- `grep -rn "session_id" src/services/worker/http/routes/CorpusRoutes.ts src/servers/mcp-server.ts` → zero for corpus/knowledge paths.
- `grep -rnE "isSessionResumeError|auto.?reprime|session.*expired" src/services/worker/knowledge/` → zero.
- `grep -rn "/session|resume|expired|invalid.*session|not found/" src/services/worker/knowledge/` → zero (the exact regex string must be gone).
- `grep -rn "\.prime(\|\.reprime(" src/services/worker/knowledge/ src/servers/mcp-server.ts` → zero.
- `grep -rn "prime_corpus\|reprime_corpus" src/` → zero.
- `grep -rn "handlePrimeCorpus\|handleReprimeCorpus" src/` → zero.
2. **HTTP endpoints**:
- `POST /api/corpus` → 200, returns metadata.
- `POST /api/corpus/:name/rebuild` → 200.
- `POST /api/corpus/:name/query` → 200, `{answer: "..."}` only (no `session_id`).
- `DELETE /api/corpus/:name` → 200.
- `POST /api/corpus/:name/prime` → **404**.
- `POST /api/corpus/:name/reprime` → **404**.
3. **Cost smoke test** (per 05 line 476, "cached system prompt TTL is 5 min"):
- Build a 20-observation corpus.
- Run `POST /api/corpus/test/query` three times within 90 seconds, each with a different question.
- Record SDK response usage counters for each call. Expect: call 1 `cache_read_input_tokens == 0`; calls 2 and 3 `cache_read_input_tokens > 0` (approximately equal to the rendered corpus length in tokens).
- If no cache hits on calls 2-3, escalate to "Gaps" below — cost model is broken and the refactor must be revisited.
4. **corpus.json on disk**:
- `cat ~/.claude-mem/corpora/test.corpus.json | jq 'has("session_id")'``false`.
- `jq '.version'``2`.
5. **Line-count delta** (target from 05 Part 5 line 556: net -110 LOC for this area):
- Before: KnowledgeAgent 284 + CorpusStore 127 + CorpusBuilder 174 + CorpusRenderer 133 + CorpusRoutes 283 = **1001 lines** in the five files.
- After: roughly -75 (prime+reprime) -10 (CorpusStore `session_id` fields) -40 (auto-reprime + regex + executeQuery body) -40 (prime+reprime HTTP handlers) -93 (CorpusRenderer renderCorpus+renderObservation shift to shared renderer) +30 (new slim query() using systemPrompt). Net ≈ **-228**.
- 05 Part 5 promised -110; actual deletion is larger because the audit underweighted the CorpusRenderer migration credit (it's also double-counted in Plan 08/unified-renderer).
6. **Full `npm run build-and-sync`** passes.
7. **MCP tool listing** no longer exposes `prime_corpus` or `reprime_corpus`.
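The cost smoke test in step 3 reduces to a small pure check over the per-call usage counters. A hedged sketch — the helper is hypothetical; the counter name follows the `cache_read_input_tokens` field cited above:

```typescript
// Hypothetical pass/fail rule for the Phase 7 cost smoke test: the first
// call writes the system prompt to the cache (zero reads), and every
// repeat within the TTL must report cache reads.
interface QueryUsage { cache_read_input_tokens: number }

export function cacheSmokeTestPasses(calls: QueryUsage[]): boolean {
  if (calls.length < 2) return false; // need at least one repeat to observe a hit
  const [first, ...repeats] = calls;
  return first.cache_read_input_tokens === 0 &&
    repeats.every((u) => u.cache_read_input_tokens > 0);
}
```

Logging the three usage objects and feeding them through a check like this gives the cache-hit evidence Gap 1 demands before the phase ships.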
### (d) Anti-pattern guards
- **A**: Every grep that returns a non-zero match is a failed phase. No "we'll clean it up later" waivers.
- **B**: If the cost smoke test fails (no cache hits on call 2/3), do not "fix" by reintroducing session-resume. Investigate the SDK's prompt-caching behavior and file the bug.
- **C**: Any handler that silently returns a cached answer without calling the SDK is a regression. Every `/query` must invoke the SDK.
---
## Blast radius + migration
- **corpus.json schema**: `version: 1` → `version: 2`. Old files with `session_id` still parse because TypeScript structural casting is permissive on reads; extra field is ignored, never re-emitted. No explicit migration script — corpus files are rebuilt on `/rebuild` anyway.
- **MCP surface shrinks**: downstream users of the MCP search plugin lose `prime_corpus` and `reprime_corpus` tool names. Coordinate with plugin release notes.
- **Cost profile**: depends on SDK prompt-caching TTL (5 min). See Gap 1 below.
## Confidence + Gaps
**Confidence — High**:
- All deletion targets have exact file:line references verified against live code.
- The 06 Phase 13 verification steps align 1:1 with 05 § 3.11 deletion list.
- Every HTTP and MCP endpoint has been mapped to a specific line in `CorpusRoutes.ts` or `mcp-server.ts`.
**Gap 1 (flagged per prompt — prompt-caching TTL)**: 05 line 476 asserts "cached system prompt TTL is 5 min" → cost roughly equal to session-resume. **This is an assumption**, not a measured fact. If the Claude Agent SDK's caching hits on `systemPrompt` behave differently than expected (e.g., cache key sensitive to small whitespace changes in the rendered corpus; cache disabled when `options.cwd` varies; TTL shorter than 5 min), every `/query` becomes a full prompt-ingest — per-call cost jumps ~20×. **Required**: Phase 7 step 3 (the cost smoke test) must run and the cache-hit ratio must be logged before declaring the phase shipped. If cache miss rate > 10% on repeat queries within 5 min, escalate.
**Gap 2 — corpus.json storage shape**: 06 Phase 13 task 2 (line 579) suggests `{name, filters, renderedCorpus, generatedAt}` — storing the fully-rendered string instead of observations. This plan keeps observations because `renderObservations(obs, CorpusDetailStrategy)` is recomputed per query (Phase 5). Tradeoff: storing `renderedCorpus` saves one render per query (small) but loses the ability to change strategies without a rebuild. **Decision deferred**: ship Phase 17 with observations preserved; reopen if Plan 05 lands and stores `renderedCorpus` directly.
---
## Phase Count
**7 phases**: schema cleanup → `prime` deletion → auto-reprime deletion → endpoint deletion → `/query` rewrite → renderer unification → verification.
## Anticipated LOC Impact
- 05 Part 5 row 19 (line 556): `-140 / +30 / net -110`.
- This plan's line-by-line trace (see Phase 7 step 5): actual net deletion closer to **-228** once the `CorpusRenderer` shrink lands.
- Seven files touched: `KnowledgeAgent.ts`, `CorpusStore.ts`, `CorpusBuilder.ts`, `CorpusRenderer.ts`, and `CorpusRoutes.ts` as primaries, plus `mcp-server.ts` and `types.ts` edits.
# Plan 11: http-server-routes (clean)
Implements flowchart §3.9 of `PATHFINDER-2026-04-21/05-clean-flowcharts.md`.
Introduces Zod + `validateBody(schema)` middleware, deletes the rate limiter, caches the two served static files at boot, and strips per-route hand-rolled shape-validation. Bullshit-inventory items **#37 (per-route validation boilerplate)**, **#39 (rate limit)**, **#40 (oversized-body special handling)** are eliminated. **#38 (admin endpoints)** is explicitly preserved per the inventory note.
## Header
- **Target flowchart**: `PATHFINDER-2026-04-21/05-clean-flowcharts.md` §3.9 "http-server-routes (clean)" (lines 382-420).
- **Before state**: `PATHFINDER-2026-04-21/01-flowcharts/http-server-routes.md`.
- **Upstream dependencies**: *none*. Zod adoption is orthogonal to every other plan; this plan OWNS the Zod introduction.
- **Downstream dependencies**: *none*. Other plans land unaffected; they gain `validateBody(schema)` validation by attaching a schema to their routes at landing time, not by rewriting this plan.
- **Coordination note**: Plan 09 (lifecycle-hooks) collapses `SessionRoutes` from 10 → 4 endpoints (V9 finding). This plan MUST land **after** Plan 09 so the Zod schemas here target the final 4-endpoint surface, not the legacy 10. If landing order flips, re-attach schemas to whichever route names survive.
- **Verified findings cited**: V2 (legacy `/sessions/*` vs `/api/sessions/*`, SessionRoutes.ts:378-389); V9 (SessionRoutes has 10 endpoints, not 8); V20 (rate limiter at `src/services/worker/http/middleware.ts:45-79`, 300 req/min IP map, keyed by `::ffff:127.0.0.1`-normalized IP).
## Anti-patterns prohibited in every phase
- **A**: No invented Zod methods. Every API used must be verified against the installed zod version (Phase 1). In particular, use `schema.safeParse(body)` + `result.success ? result.data : result.error.flatten()` — no `ZodUtil.assertBody`, no `schema.validateOrThrow`.
- **D**: No per-route validation blocks of 5+ if statements. Any block that currently does `if (typeof x !== 'string') ... if (!body.foo) ... if (!body.bar) ...` collapses to a single `validateBody(schema)` middleware call.
- **E**: No two validation paths. If a route gets a Zod schema, the hand-rolled checks in the handler body get deleted in the same commit. "Defense in depth" via duplicate validation is forbidden.
---
## Phase 1 — Confirm Zod availability; add if absent
**Outcome**: `zod` is a first-class dependency in `package.json`, installed in `node_modules`, with a known version so every schema in Phase 3 uses a stable API.
### (a) What to implement
- Run `npm ls zod` in the repo root.
- If present (transitive or direct): pin the resolved major version in `package.json` dependencies (move from transitive to explicit so future `npm ci` can't drop it).
- If absent (confirmed state as of 2026-04-22 — see findings below): `npm install zod@^3.23.8` (current stable 3.x line). Commit `package.json` + `package-lock.json`.
- Record the resolved version in the PR description. All subsequent phases use this version's API surface.
Copy from: nothing — this is a dependency add. Reference the `package.json` structure at `/Users/alexnewman/.superset/worktrees/claude-mem/vivacious-teeth/package.json:111-125` (current `dependencies` block).
### (b) Docs
- §3.9 "Deleted" bullet 2 ("Per-route hand-rolled validation (Zod middleware replaces)").
- `06-implementation-plan.md` line 55: "Zod — `z.object({...})`, `schema.safeParse(body)`, `result.success ? result.data : result.error.flatten()`. (Not yet a dep; Phase 12 adds `zod` via npm; already shipped transitively via `@anthropic-ai/sdk` — confirm before landing.)"
- V9 (06-implementation-plan.md:36) confirms the SessionRoutes endpoint count that Phase 3 must schema.
- Live file:line: `package.json:111-125` (dependencies block); `package.json:124` (`zod-to-json-schema` — sibling package, *not* zod itself).
### (c) Verification
- `npm ls zod` prints a single resolved path, not "(empty)".
- `node -e "require('zod')"` exits 0.
- Grep: `grep -n '"zod"' package.json` → **≥1** match in dependencies (not just `zod-to-json-schema`).
- `git diff package.json` shows `zod` added; `package-lock.json` shows resolved version.
### (d) Anti-pattern guards
- **A**: Don't pin to `@latest`; pin to the major line installed now (3.x). Record the exact minor in the plan PR.
- **E**: Don't add `zod` to both `dependencies` and `devDependencies` — runtime code imports it, so `dependencies` only.
---
## Phase 2 — Write `validateBody(schema)` middleware
**Outcome**: One Express middleware file, ~40 lines, that accepts any Zod schema and rejects non-conforming bodies with a uniform 400 shape. Zero per-route boilerplate.
### (a) What to implement
Create `src/services/worker/http/middleware/validateBody.ts`:
```ts
import { RequestHandler } from 'express';
import { ZodType } from 'zod';

export function validateBody<T>(schema: ZodType<T>): RequestHandler {
  return (req, res, next) => {
    const result = schema.safeParse(req.body);
    if (!result.success) {
      res.status(400).json({
        error: 'validation_failed',
        message: 'Request body failed schema validation',
        code: 'VALIDATION_FAILED',
        fields: result.error.flatten()
      });
      return;
    }
    req.body = result.data;
    next();
  };
}
```
Copy error-shape keys (`error`, `message`, `code`) from the existing `BaseRouteHandler.handleError` response shape at `/Users/alexnewman/.superset/worktrees/claude-mem/vivacious-teeth/src/services/worker/http/BaseRouteHandler.ts:82-99`, extended with `fields` (per 06-implementation-plan.md:546, 553, 563).
Create the directory: `src/services/worker/http/middleware/` (new; sibling to `middleware.ts`). One file, one export.
### (b) Docs
- §3.9 flowchart node D: `validateBody(schema) middleware (Zod per route)` → node E `Valid? → 400 with field errors` (05-clean-flowcharts.md:388-391).
- 06-implementation-plan.md Phase 12, lines 542-548 (middleware signature + `safeParse` + 400 with `fields`).
- Live file:line: existing error shape at `src/services/worker/http/BaseRouteHandler.ts:82-99` (fields: `error`, `code`, `details`).
### (c) Verification
- `grep -n "export function validateBody" src/services/worker/http/middleware/validateBody.ts` → 1 match.
- `grep -rn "res.status(400)" src/services/worker/http/middleware/validateBody.ts` → exactly 1 (the single 400 response).
- Unit test: schema `z.object({ foo: z.string() })` accepts `{foo:"bar"}`, rejects `{foo:42}` with 400 and `fields.fieldErrors.foo` populated.
- TypeScript: `tsc --noEmit` succeeds — the generic `<T>` signature must compile.
### (d) Anti-pattern guards
- **A**: `safeParse` only — no `.parse()` with try/catch wrapper, no `assertSafe`, no `ZodUtil` helper class. The Express middleware contract already provides error isolation.
- **D**: This file is the *only* place a Zod parse happens in the HTTP layer. If a future PR adds a second `safeParse` call inside a handler, it is a duplicate validation path — delete it.
- **E**: `next()` only on success. On failure, `res.status(400).json(...)` **and return**. Never both call `next()` and send a response.
---
## Phase 3 — Per-route Zod schemas; attach via middleware
**Outcome**: Every POST / PUT / DELETE-with-body endpoint has a Zod schema sitting next to its route registration. `validateBody(schema)` is inserted into the middleware chain for that route.
### (a) What to implement
For each route file, add a top-of-file `schemas` block (plain `const X = z.object({...})` — do NOT build a `schemas/` parallel directory; inline at top of file keeps the schema co-located with its handler). Attach via the route registration:
Before (`CorpusRoutes.ts:28`):
```ts
app.post('/api/corpus', this.handleBuildCorpus.bind(this));
```
After:
```ts
app.post('/api/corpus', validateBody(BuildCorpusSchema), this.handleBuildCorpus.bind(this));
```
**Schemas required (one per endpoint with a body). Target list assumes Plan 09 has already collapsed SessionRoutes to the 4-endpoint surface per §3.1.** If Plan 09 has not landed, also schema the legacy `/sessions/:sessionDbId/*` endpoints at `src/services/worker/http/routes/SessionRoutes.ts:377-382` — they're deleted by Plan 09 but must not be left unvalidated in the interim.
| Route file | Endpoint | Schema name | Core fields |
|---|---|---|---|
| `SessionRoutes.ts` | `POST /api/session/start` (post-Plan 09) | `SessionStartSchema` | `{ project: string, contentSessionId: string, platformSource?: string, customTitle?: string }` |
| `SessionRoutes.ts` | `POST /api/session/prompt` | `SessionPromptSchema` | `{ sessionDbId: number, prompt: string }` |
| `SessionRoutes.ts` | `POST /api/session/observation` | `SessionObservationSchema` | `{ sessionDbId: number, tool_use_id: string, name: string, input: unknown, output: unknown, cwd?: string }` |
| `SessionRoutes.ts` | `POST /api/session/end` | `SessionEndSchema` | `{ sessionDbId: number, last_assistant_message: string }` |
| `DataRoutes.ts` | `POST /api/observations/batch` | `ObservationsBatchSchema` | `{ ids: z.array(z.number().int()), orderBy?: z.enum(['date_desc','date_asc']), limit?: number, project?: string }` |
| `DataRoutes.ts` | `POST /api/sdk-sessions/batch` | `SdkSessionsBatchSchema` | `{ memorySessionIds: z.array(z.string()) }` |
| `DataRoutes.ts` | `POST /api/processing` | `SetProcessingSchema` | `{ isProcessing: z.boolean() }` (verify field name in handler) |
| `DataRoutes.ts` | `POST /api/pending-queue/process` | `ProcessQueueSchema` | (likely empty — `z.object({}).strict()`) |
| `DataRoutes.ts` | `POST /api/import` | `ImportSchema` | per handler's body shape |
| `MemoryRoutes.ts` | `POST /api/memory/save` | `MemorySaveSchema` | `{ text: z.string().min(1), title?: string, project?: string }` |
| `CorpusRoutes.ts` | `POST /api/corpus` | `BuildCorpusSchema` | `{ name: z.string().min(1), description?: string, project?: string, types?: z.array(z.string()), concepts?: z.array(z.string()), files?: z.array(z.string()), query?: string, date_start?: string, date_end?: string, limit?: z.number().int().positive() }` |
| `CorpusRoutes.ts` | `POST /api/corpus/:name/query` | `QueryCorpusSchema` | `{ question: z.string().min(1) }` |
| `CorpusRoutes.ts` | `POST /api/corpus/:name/rebuild` | `RebuildCorpusSchema` | `z.object({}).strict()` or per handler |
| `SettingsRoutes.ts` | `POST /api/settings` | `UpdateSettingsSchema` | **see note below** |
| `SettingsRoutes.ts` | `POST /api/mcp/toggle` | `ToggleMcpSchema` | `{ enabled: z.boolean() }` |
| `SettingsRoutes.ts` | `POST /api/branch/switch` | `SwitchBranchSchema` | `{ branch: z.enum(['main', 'beta/7.0', 'feature/bun-executable']) }` |
| `SettingsRoutes.ts` | `POST /api/branch/update` | `UpdateBranchSchema` | `z.object({}).strict()` |
| `LogsRoutes.ts` | `POST /api/logs/clear` | `ClearLogsSchema` | `z.object({}).strict()` or per handler |
| `ViewerRoutes.ts` | (GET-only) | — | no body schemas needed |
| `SearchRoutes.ts` | `POST /api/context/semantic` | `SemanticContextSchema` | per handler at `src/services/worker/http/routes/SearchRoutes.ts:41` |
**Special case — `POST /api/settings`**: the existing `validateSettings(settings)` function at `src/services/worker/http/routes/SettingsRoutes.ts:237-385` is ~148 lines of domain validation (valid providers, port ranges, Python version regex, URL parse). That is **domain validation, not shape validation.** Keep it. The Zod schema here validates only that each field, if present, is of the right primitive type (`z.string().optional()`, `z.number().optional()`, `z.boolean().optional()` as appropriate per the `settingKeys` array at `SettingsRoutes.ts:88-128`). The domain validation stays in the handler. This is the correct application of rule D: delete only shape checks, not domain checks.
Copy-ready pattern to replicate: `CorpusRoutes.ts:238-244` — the `QueryCorpusSchema` replaces exactly this block. Cleanest single-field existing check in the codebase.
### (b) Docs
- §3.9 flowchart node D (`validateBody(schema) middleware (Zod per route)`, 05-clean-flowcharts.md:388).
- Bullshit-inventory item #37: "Per-route validation boilerplate × 8 files" → "`validateBody(schema)` middleware; per-route Zod schema" (05-clean-flowcharts.md:55).
- 06-implementation-plan.md Phase 12 task 3 (line 547): "Per-route schemas in a parallel `schemas/` directory (or inline at top of each route file). One `z.object({…})` per endpoint." **This plan chooses inline** (co-location wins over directory partition at this scale — 8 files × ~3 schemas each = ~24 schemas; a separate directory adds import overhead with no clarity gain).
- V9 (06-implementation-plan.md:36): confirms SessionRoutes endpoint count pre/post Plan 09.
- Live file:line per row in the schema table above.
### (c) Verification
- `grep -rn "^import.*from 'zod'" src/services/worker/http/routes/` → **≥1 per route file with a POST endpoint** (7 of 8 files — ViewerRoutes is GET-only).
- `grep -rn "validateBody(" src/services/worker/http/routes/` → count matches the POST/PUT endpoint total in the table above (~18 endpoints).
- For each schema: a successful request round-trips unchanged; an invalid-shape request returns 400 with `{error:'validation_failed', fields:...}`.
### (d) Anti-pattern guards
- **A**: Every schema uses published zod 3.x methods (`z.object`, `z.string`, `z.number`, `z.array`, `z.enum`, `z.boolean`, `.optional`, `.min`, `.int`, `.positive`). Anything else — verify against the resolved zod version from Phase 1. **Do not invent** `.isPositiveInt()` or `.nonEmptyString()` helper methods; use the built-in chain.
- **E**: No schema duplicated. If two endpoints share a shape (e.g. `contentSessionId` appears in multiple SessionRoutes handlers), extract to a shared `const SessionIdField = z.string()` at the top of the file and reuse. Duplicated literal `z.object({...})` with identical fields across files = delete one.
- **D**: Inline schemas only. Do not build `schemas/SessionSchemas.ts` / `schemas/DataSchemas.ts` — that re-introduces the parallel-directory anti-pattern the plan text at 06-implementation-plan.md:547 warns about.
---
## Phase 4 — Delete hand-rolled validation blocks
**Outcome**: Every shape-validation block (type check, presence check, array check) inside a route handler is deleted. Only domain validation remains.
### (a) What to implement
Delete (exact line ranges, to be deleted alongside the Phase 3 schema attachment for each route):
| File | Line range to delete | What | Replaced by |
|---|---|---|---|
| `src/services/worker/http/routes/CorpusRoutes.ts` | `44-51` | `if (!req.body.name) { res.status(400).json({error:'Missing required field: name', fix:..., example:...}); return; }` | `BuildCorpusSchema` in Phase 3 |
| `src/services/worker/http/routes/CorpusRoutes.ts` | `55-69` | Coercion calls for `types`, `concepts`, `files`, `limit` (`coerceStringArray`, `coercePositiveInteger`) | Zod coerces via `z.coerce.number()`, `z.string().transform(s => s.split(','))` as needed |
| `src/services/worker/http/routes/CorpusRoutes.ts` | `88-125` | `coerceStringArray` + `coercePositiveInteger` helper methods | Zod schema coercion replaces both helpers entirely |
| `src/services/worker/http/routes/CorpusRoutes.ts` | `238-245` | `QueryCorpus` question presence + type check | `QueryCorpusSchema` in Phase 3 |
| `src/services/worker/http/routes/DataRoutes.ts` | `118-123` | `path` query-param check (note: query-param, not body — keep as-is OR migrate to `validateQuery(schema)` if the middleware is extended; for this plan, leave) | — |
| `src/services/worker/http/routes/DataRoutes.ts` | `144-163` | `ids` coerce + array-check + integer-check for `POST /api/observations/batch` | `ObservationsBatchSchema` |
| `src/services/worker/http/routes/DataRoutes.ts` | `196-206` | `memorySessionIds` coerce + array-check for `POST /api/sdk-sessions/batch` | `SdkSessionsBatchSchema` |
| `src/services/worker/http/routes/SessionRoutes.ts` | `570-572` | `if (!contentSessionId) return this.badRequest(...)` in `handleObservationsByClaudeId` | Pre-Plan 09: keep as-is until routes collapse; post-Plan 09: replaced by `SessionObservationSchema` |
| `src/services/worker/http/routes/SessionRoutes.ts` | `672-676` | `contentSessionId` check in `handleSummarizeByClaudeId` | Same |
| `src/services/worker/http/routes/SessionRoutes.ts` | `724-728` | `contentSessionId` query-param check in `handleStatusByClaudeId` (GET — query not body; leave) | — |
| `src/services/worker/http/routes/SessionRoutes.ts` | `767-771` | `contentSessionId` check in `handleCompleteByClaudeId` | `SessionEndSchema` post-Plan 09 |
| `src/services/worker/http/routes/SessionRoutes.ts` | `831-835` | `this.validateRequired(req, res, ['contentSessionId'])` in `handleSessionInitByClaudeId` | `SessionStartSchema` post-Plan 09 |
| `src/services/worker/http/routes/SettingsRoutes.ts` | `159-164` | `enabled` boolean type check in `handleToggleMcp` | `ToggleMcpSchema` |
| `src/services/worker/http/routes/SettingsRoutes.ts` | `184-198` | `branch` presence + allowlist check in `handleSwitchBranch` | `SwitchBranchSchema` (`z.enum([...])` handles both presence and allowlist) |
| `src/services/worker/http/routes/MemoryRoutes.ts` | `33-36` | `text` presence + type + non-empty check | `MemorySaveSchema` |
| `src/services/worker/http/routes/BaseRouteHandler.ts` | `54-62` | `validateRequired(req, res, params)` helper method | **Delete entire method.** No caller remains after this phase. Keep `parseIntParam`, `badRequest`, `notFound`, `handleError`, `wrapHandler`. |
Total hand-rolled-validation lines deleted: approximately **125 LOC** across 6 files.
**`SettingsRoutes.validateSettings` at lines 237-385 is NOT deleted** — that is domain validation (provider allowlists, port ranges, URL parse) and stays in the handler as-is. Zod handles only shape. Cite rule D: "per-route validation blocks of 5+ if statements — collapsed to validateBody(schema)" applies to shape blocks; domain blocks are orthogonal and survive.
### (b) Docs
- §3.9 "Deleted" bullet 2: "Per-route hand-rolled validation (Zod middleware replaces)" (05-clean-flowcharts.md:414).
- Bullshit-inventory #37 (05-clean-flowcharts.md:55).
- 06-implementation-plan.md Phase 12 task 4 (line 548): "Delete per-route boilerplate: manual `typeof x !== 'string'` checks, `if (!body.foo) return res.status(400)…`."
- Live line ranges per row in the table above.
### (c) Verification
- `grep -rn "validateRequired" src/services/worker/http/` → **0**.
- `grep -rn "typeof .* !== 'string'" src/services/worker/http/routes/` → **0** for body validation; any surviving matches must be for non-body purposes (e.g., narrowing a union type inside business logic).
- `grep -rn "res.status(400)" src/services/worker/http/routes/` drops significantly (from ~12 to ≤ 2 domain-specific 400s in `SettingsRoutes.validateSettings` path and corpus `404 → 400` edge).
- `grep -n "coerceStringArray\|coercePositiveInteger" src/` → **0**.
- Happy-path tests for each endpoint: response shape unchanged.
### (d) Anti-pattern guards
- **D**: If a handler still has a `typeof` check on a body field after this phase, the schema is missing a constraint. Fix the schema, not the handler.
- **E**: No fall-through: after `validateBody` accepts, the handler does NOT re-validate the same field. Example: `SwitchBranchSchema` uses `z.enum(['main','beta/7.0','feature/bun-executable'])` — the handler must not re-check `if (!allowedBranches.includes(branch))`.
- **A**: Don't replace `validateRequired` with a similarly-named Zod wrapper. Delete the method outright.
---
## Phase 5 — Delete rate-limit middleware
**Outcome**: The rate limiter at `src/services/worker/http/middleware.ts:45-79` (300 req/min IP map, keyed by `::ffff:127.0.0.1`-normalized IP) is deleted. Bullshit item #39 removed.
### (a) What to implement
Delete the following from `src/services/worker/http/middleware.ts`:
- **Lines 45-50**: comment block + `requestCounts` map + `RATE_LIMIT_WINDOW_MS` + `RATE_LIMIT_MAX_REQUESTS` constants.
- **Lines 52-77**: the `rateLimiter` RequestHandler.
- **Line 79**: `middlewares.push(rateLimiter);`.
Total: **35 LOC deleted from middleware.ts**.
No change needed in `Server.ts` — it registers middleware via `createMiddleware(summarizeRequestBody)` at `src/services/server/Server.ts:156`, which returns the array. Removing the `.push(rateLimiter)` call is sufficient; the caller loops over whatever middleware returns.
### (b) Docs
- §3.9 "Deleted" bullet 1: "In-memory rate limiter (300/min IP map) — localhost trust model everywhere else makes this theater" (05-clean-flowcharts.md:413).
- Bullshit-inventory #39 (05-clean-flowcharts.md:57).
- V20 (06-implementation-plan.md:47): "Rate limiter 300/min — Confirmed at `src/services/worker/http/middleware.ts:45-79`. Constants at `:49-50`. Keyed by IP, normalizes `::ffff:127.0.0.1`. Phase 14 deletes."
- 06-implementation-plan.md Phase 14 task 1 (line 612).
- Live file:line: `src/services/worker/http/middleware.ts:45-79`.
### (c) Verification
- `grep -n "RATE_LIMIT_WINDOW_MS\|RATE_LIMIT_MAX_REQUESTS\|requestCounts\|rateLimiter" src/` → **0 matches**.
- `grep -n "429" src/services/worker/http/` → **0** (the only 429 in the codebase is the rate limiter; survey the repo with `grep -rn "429" src/` to confirm).
- `curl -s -w "%{http_code}" -o /dev/null http://localhost:37777/api/health` repeated 1000× returns 200 every time — no 429 after request #300.
- Build green: `tsc --noEmit`.
### (d) Anti-pattern guards
- **B** (from 06-implementation-plan.md:623): "Don't re-introduce the rate limiter as a 'config flag'. Localhost trust model is explicit." No `if (settings.rateLimitEnabled)` conditional reintroduction.
- **D**: Do not leave the function in place "commented out" — delete the lines.
- **A**: Do not repurpose the `requestCounts` Map for a "request-counting telemetry" feature. Delete the Map.
---
## Phase 6 — Cache viewer.html and /api/instructions at boot
**Outcome**: The sync `readFileSync` on every `GET /` and `GET /api/instructions` request is replaced by an in-memory `Buffer` loaded once at worker boot.
> **Cache lifecycle contract (Preflight edit 2026-04-22 — reconciliation C10)**: The cached `Buffer` lives for the **lifetime of the worker process** — re-read on every worker boot, never refreshed mid-process. This is the contract plan 12's T1 regression test (SHA-256 of `GET /`) assumes when it mandates re-baselining after every worker restart. If the viewer.html content includes a per-boot bearer-token injection (observation 71147), the Buffer captures that token at constructor time and serves it consistently until the next boot. **Do not** add any hot-reload / file-watcher / TTL cache invalidation. If an operator edits `viewer.html` in place, they must restart the worker to see the change — documented tradeoff, not a regression.
### (a) What to implement
**`/` (viewer.html)** — currently at `src/services/worker/http/routes/ViewerRoutes.ts:54-72`:
Refactor `ViewerRoutes` constructor (currently `src/services/worker/http/routes/ViewerRoutes.ts:19-25`) to resolve + read `viewer.html` once and store as a module-level or instance-level `Buffer`:
```ts
private viewerHtml: Buffer;

constructor(...) {
  super();
  const packageRoot = getPackageRoot();
  const candidates = [
    path.join(packageRoot, 'ui', 'viewer.html'),
    path.join(packageRoot, 'plugin', 'ui', 'viewer.html')
  ];
  const found = candidates.find(existsSync);
  if (!found) throw new Error('Viewer UI not found at boot');
  this.viewerHtml = readFileSync(found); // Buffer
}

private handleViewerUI = this.wrapHandler((req, res) => {
  res.setHeader('Content-Type', 'text/html');
  res.send(this.viewerHtml);
});
```
Delete `readFileSync` + `existsSync` calls from inside the request handler (lines 63-71 of current file).
**`/api/instructions`** — currently at `src/services/server/Server.ts:202-234`:
The endpoint supports 4 `topic` values × N `operation` values. Option (a): pre-compute the 4 section strings at boot. Option (b): pre-read `SKILL.md` once and read `operations/*.md` lazily (these are rarer).
Recommended: Option (a). At `Server` constructor time, call `loadInstructionContent(undefined, 'all')` once, extract all 4 sections, store as `Record<string, Buffer>`. Store a `Map<string, Buffer>` for `operations/*.md` populated lazily on first hit (or eagerly if the operations directory is small — enumerate at boot).
Preserve path-traversal security: the `operationPath.startsWith(OPERATIONS_BASE_DIR + path.sep)` check at `Server.ts:218` stays. Caching does not bypass validation — the cache key is the already-validated operation name.
Preserve the `ALLOWED_TOPICS` + `ALLOWED_OPERATIONS` allowlist at `Server.ts:207-213`.
Copy-ready pattern: the current `extractInstructionSection` function at `Server.ts:350-359` already partitions content into a `sections` record — that IS the cache structure; just hoist it from per-request to boot.
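The hoist can be sketched as follows. `loadAll` and the `##` section format below are stand-ins for `loadInstructionContent` / `extractInstructionSection` (the real delimiter and loader live at the Server.ts lines cited in (b)):

```typescript
// Stand-in for reading SKILL.md once at boot.
function loadAll(): string {
  return '## workflow\nHow to work\n## search\nHow to search';
}

// Stand-in for extractInstructionSection: partition once into named Buffers.
function extractSections(content: string): Record<string, Buffer> {
  const sections: Record<string, Buffer> = {};
  for (const chunk of content.split('## ').filter(Boolean)) {
    const [name, ...body] = chunk.split('\n');
    sections[name.trim()] = Buffer.from(body.join('\n').trim());
  }
  return sections;
}

// Populated once, at constructor time; served on every request thereafter.
const instructionCache = extractSections(loadAll());

function handleInstructions(topic: string): Buffer | undefined {
  return instructionCache[topic]; // no fs call on the request path
}

console.log(handleInstructions('workflow')?.toString()); // How to work
```

The real implementation keeps the `ALLOWED_TOPICS` allowlist in front of the cache lookup, so an unknown topic never becomes a cache key.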
### (b) Docs
- §3.9 "Deleted" bullet 3: "Synchronous file read for `/` and `/api/instructions` (replace with cached `Buffer` loaded at boot)" (05-clean-flowcharts.md:415).
- §3.10 flowchart node HTML: "viewer.html (cached at boot)" (05-clean-flowcharts.md:426).
- 06-implementation-plan.md Phase 14 task 2 (line 613): "Cache `viewer.html` and `/api/instructions` content in memory at boot; serve from `Buffer` instead of `fs.readFile`."
- Live file:line: `src/services/worker/http/routes/ViewerRoutes.ts:54-72` (viewer.html); `src/services/server/Server.ts:202-234` (instructions endpoint); `src/services/server/Server.ts:337-345` (loader); `src/services/server/Server.ts:350-359` (section extractor).
### (c) Verification
- Static file reads happen once at boot: add a `logger.info('WORKER', 'viewer.html cached', { bytes: this.viewerHtml.length })` at constructor time; grep logs after 100 `GET /` requests to confirm the message fires exactly once.
- `lsof -p $(pidof node) | grep viewer.html` at steady-state: either zero (Buffer held in memory, no open FD) or exactly one (memory-mapped).
- `grep -n "readFileSync.*viewer.html\|readFileSync.*SKILL.md\|readFileSync.*operations" src/services/worker/ src/services/server/` → **0** matches inside request handlers (module-scope or constructor-scope matches are fine; per-request matches fail).
- Response body unchanged (byte-for-byte) across a request pair before and after the change.
### (d) Anti-pattern guards
- **E**: Do not keep the `readFileSync` path "as a fallback" for when the Buffer is undefined. If the file isn't found at boot, throw — fail-fast aligns with global standard #3. No silent fallback.
- **D**: The viewer-path-candidate array at `ViewerRoutes.ts:58-61` is not a duplicate validation — it's install-layout probing. Keep both candidates for boot-time resolution. After the first successful read, the candidate list is discarded.
- **A**: Do not wrap the Buffer in a `StaticFileCache` class. Hold it as a private field on the route class. One field, one assignment.
---
## Phase 7 — Delete oversized-body special handling
**Outcome**: The 5MB JSON parse limit stays (cheap; bullshit item #40 keep-clause). Any `if (body.size > …) specialHandler()` or hand-rolled 413 logic is deleted — Express's built-in 413 from the `express.json({ limit: '5mb' })` middleware is sufficient.
### (a) What to implement
Survey the route files and `middleware.ts` for body-size special handling:
- `src/services/worker/http/middleware.ts:25` (`express.json({ limit: '5mb' })`) → **KEEP**. This is the one-line limit per item #40.
- Any handler that inspects `req.body.length`, `req.headers['content-length']`, or returns a custom 413: **DELETE**.
Based on the grep survey in Phase 0, **no custom oversized-body handling currently exists in `src/services/worker/http/`**. This phase is a verification pass confirming absence. If any is discovered during implementation, delete it without replacement — the `express.json()` middleware already emits 413 with `entity.too.large` on oversized bodies.
If any handler catches the Express 413 and remaps it to a different shape, delete the catch — uniform error handling via `BaseRouteHandler.handleError` (`src/services/worker/http/BaseRouteHandler.ts:82-99`) is already in place.
### (b) Docs
- Bullshit-inventory #40 (05-clean-flowcharts.md:58): "JSON parse 5MB limit on every request — Keep (cheap), but delete any special handling for oversized — 413 is fine."
- Live file:line: `src/services/worker/http/middleware.ts:25` (the `express.json` call to preserve).
### (c) Verification
- `grep -rn "413\|'entity.too.large'\|PayloadTooLarge" src/services/worker/http/` → **0 matches in handler code** (framework-internal uses do not appear in our source).
- `grep -rn "content-length\|contentLength\|Content-Length" src/services/worker/http/routes/` → **0** matches in route handlers (header-inspection by handlers is the anti-pattern to find).
- Sending a 6MB body returns Express default 413. Sending a 4MB body round-trips.
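The oversized-body probe can be scripted as below (a sketch: assumes the worker is listening on `localhost:37777`, the port used by the Phase 8 smoke tests, and uses `/api/memory/save` as a representative JSON endpoint):

```shell
# Build a ~6MB payload; express.json({ limit: '5mb' }) rejects on size before parsing,
# so the content does not need to be valid JSON.
head -c 6000000 /dev/zero | tr '\0' 'a' > /tmp/big-body.json

# Expect HTTP 413 from Express itself once the worker is up
# (|| true so the probe is safe to run while the worker is down).
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST http://localhost:37777/api/memory/save \
  -H 'Content-Type: application/json' \
  --data-binary @/tmp/big-body.json || true
```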
### (d) Anti-pattern guards
- **D**: If a grep hit appears, delete it. Do not "improve" it.
- **A**: Don't add a `RequestSizeGuard` middleware. `express.json({ limit })` already guards.
- **E**: Don't let a handler's try/catch swallow a 413 and remap to 400. The Express error shape for 413 is Express's; uniformity below that boundary is enforced by `handleError`.
---
## Phase 8 — Verification
**Outcome**: Whole §3.9 diagram is reality. All greps clean, route smoke tests pass, deleted-line count matches estimate.
### (a) What to implement
Execute the verification checklist below. This phase does not modify production code; it runs scripts/tests and fixes regressions uncovered.
### (b) Docs
- §3.9 full diagram (05-clean-flowcharts.md:384-410).
- §3.9 "Deleted" block (lines 412-416).
- §3.9 "Kept" block (line 418): "All user-facing routes, SSE, middleware chain, admin endpoints (used by tooling)." — the admin endpoints (`/api/admin/restart`, `/api/admin/shutdown`, `/api/admin/doctor` at `src/services/server/Server.ts:237-330`) are explicitly preserved; item #38 (05-clean-flowcharts.md:56).
- 06-implementation-plan.md Phase 15 (line 631-656): timer census + grep pass + full test suite.
### (c) Verification checklist
- [ ] **Rate limiter gone**: `grep -rn "RATE_LIMIT_WINDOW_MS\|RATE_LIMIT_MAX_REQUESTS\|requestCounts\|rateLimiter" src/` → **0**.
- [ ] **Zod present**: `grep -rn "^import .* from 'zod'" src/services/worker/http/` → **≥8** matches (middleware + 7 route files with POSTs).
- [ ] **validateBody attached**: `grep -rn "validateBody(" src/services/worker/http/routes/` → **~18** matches (one per schemaed POST/PUT).
- [ ] **validateRequired deleted**: `grep -rn "validateRequired" src/` → **0**.
- [ ] **Static-file reads hoisted**: `grep -rn "readFileSync.*viewer.html" src/services/worker/` → 0 matches inside request handlers; OK in constructor/module-scope.
- [ ] **SSE preserved**: `GET /stream` returns `text/event-stream` with initial `initial_load` event (manual smoke test).
- [ ] **Admin preserved**: `POST /api/admin/doctor` from localhost returns JSON; from non-localhost returns 403 (per `requireLocalhost` at `src/services/worker/http/middleware.ts:121-143`). Used by version-bump per item #38.
- [ ] **Route smoke tests per endpoint (curl or integration suite)**:
- `GET /` → 200 HTML (from cached Buffer).
- `GET /health` → 200 JSON `{status:'ok', activeSessions:N}`.
- `GET /stream` → 200 SSE stream.
- `POST /api/memory/save` with `{text:""}` → 400 `{error:'validation_failed', fields:...}`.
- `POST /api/memory/save` with `{text:"hi"}` → 200 `{success:true, id:...}`.
- `POST /api/corpus` with `{name:"t", query:"hooks"}` → 200 metadata.
- `POST /api/corpus` with `{}` → 400 validation_failed with `fields.fieldErrors.name`.
- `POST /api/mcp/toggle` with `{enabled:"yes"}` → 400; `{enabled:true}` → 200.
- `POST /api/branch/switch` with `{branch:"nonexistent"}` → 400; `{branch:"main"}` → 200.
- `GET /api/instructions?topic=workflow` → 200 JSON content (served from cache).
- `POST /api/admin/restart` from localhost → 200 `{status:'restarting'}`.
- [ ] **Build green**: `npm run build` succeeds.
- [ ] **Worker boots**: `npm run build-and-sync` and verify `GET /health` answers within 2s.
- [ ] **Deleted-lines tally**: approximately **35 LOC** (rate limiter, Phase 5) + **~125 LOC** (hand-rolled validation + helpers, Phase 4) + **~9 LOC** (`BaseRouteHandler.validateRequired` method, Phase 4) + **~10 LOC** (per-request `readFileSync`/`existsSync` probes moved to constructor, Phase 6) ≈ **~180 LOC gross deleted**, offset by **~60 LOC added** (new `validateBody` + ~24 schemas averaging 2-3 lines each) = **~120 LOC net deletion**.
### (d) Anti-pattern guards
- **D** (whole plan): if any verification grep finds unexpected matches, do not "fix forward" — delete the offending code.
- **E**: If a route smoke test fails due to schema over-constraint (e.g., an optional field rejected), **relax the schema, do not re-add a hand-rolled fallback.**
- **A**: Do not add integration tests that fake the Zod surface. Use the installed zod.
---
## Reporting summary
**Phase count**: 8.
**Estimated deletion**: ~180 LOC gross, ~60 LOC added, **~120 LOC net**. Primary deletes: rate limiter (35), hand-rolled validation blocks (125), `validateRequired` helper (9), per-request file-read probing (10). Primary additions: `validateBody.ts` (~40), Zod schemas inline (~60 across 7 files).
**Sources consulted**:
- `PATHFINDER-2026-04-21/05-clean-flowcharts.md` (full); §3.9 (lines 382-420) canonical; Part 1 items #37-40 (lines 55-58); Part 2 decisions (lines 65-79).
- `PATHFINDER-2026-04-21/06-implementation-plan.md`: V2 (line 29), V9 (line 36), V20 (line 47); allowed-APIs block (lines 49-55); anti-patterns (line 59); Phase 12 (lines 530-565); Phase 14 (lines 600-627); Phase 15 (lines 631-656).
- `PATHFINDER-2026-04-21/01-flowcharts/http-server-routes.md` (before state).
- Live codebase (9 files): `src/services/worker/http/middleware.ts`, `src/services/worker/http/BaseRouteHandler.ts`, `src/services/worker/http/routes/{ViewerRoutes,SearchRoutes,SessionRoutes,DataRoutes,SettingsRoutes,MemoryRoutes,CorpusRoutes,LogsRoutes}.ts`, `src/services/server/Server.ts`.
- `package.json` (dependencies block lines 111-125) + `npm ls zod` + filesystem probe of `node_modules/zod`.
**Concrete findings**:
- **Zod presence check** (2026-04-22 10:18 PDT): `npm ls zod` returns `(empty)`. `node_modules/zod/package.json` does not exist. Transitively it is NOT shipped — the only zod-adjacent package is `zod-to-json-schema@^3.24.6` at `package.json:124`, which does not pull in `zod` itself. **Phase 1 MUST add `zod` via `npm install zod@^3.x`.** Verified findings block at `06-implementation-plan.md:55` should be updated: "already shipped transitively via `@anthropic-ai/sdk`" is false for this repo (the SDK is `@anthropic-ai/claude-agent-sdk`, not `@anthropic-ai/sdk`).
- **Route-file inventory with validation styles** (8 files, `src/services/worker/http/routes/`):
- `ViewerRoutes.ts` (116 LOC): GET-only, no body schemas needed.
- `SearchRoutes.ts` (421 LOC): 1 POST (`/api/context/semantic` at line 41), mostly query-param validation.
- `SessionRoutes.ts` (958 LOC): 10 POST endpoints per V9 (6 legacy `/sessions/:id/*` at lines 377-382 + 4 under `/api/sessions/*` at lines 385-389, plus `/api/sessions/status` GET). Uses `this.validateRequired` (line 833) and inline `if (!contentSessionId)` checks (lines 570, 674, 726, 769). Post-Plan 09 collapses to 4.
- `DataRoutes.ts` (562 LOC): 5 POST endpoints. Uses `this.badRequest` + inline `typeof` checks (lines 120-123, 149-163, 203-206). Contains ad-hoc coerce logic (JSON.parse-or-split-by-comma) at lines 145-147, 199-201 — Zod `z.preprocess` subsumes this.
- `SettingsRoutes.ts` (434 LOC): 5 POST endpoints. Has a 148-line **domain-validation** function `validateSettings` (lines 237-385) — **preserve**; the shape-validation is inline at lines 161-164, 185-197 — **delete**.
- `MemoryRoutes.ts` (93 LOC): 1 POST. Validation block at lines 33-36. Cleanest single-endpoint pattern in the codebase — **copy-ready template for Phase 3**.
- `CorpusRoutes.ts` (283 LOC): 5 POST endpoints. Validation at lines 44-51, 238-245 plus two coerce helpers at lines 88-125 (~38 LOC of helper boilerplate deletable).
- `LogsRoutes.ts` (165 LOC): 1 POST (`/api/logs/clear` at line 102). Minimal body.
- **Static file endpoints**:
- `GET /` serves `viewer.html` — `ViewerRoutes.ts:54-72` does per-request `readFileSync` over 2 candidate paths. Move to constructor.
- `GET /api/instructions` — `Server.ts:202-234` does per-request `fs.promises.readFile` via `loadInstructionContent` (line 337). 4 topic sections (extractable at boot) + operation files (lazy-cache OK). Allowlist at `Server.ts:207-213` (`ALLOWED_TOPICS`, `ALLOWED_OPERATIONS`) stays; path-traversal check at line 218 stays.
- Static assets (`js`, `css`, fonts) served via `express.static(uiDir)` at `middleware.ts:110-112` — **already cached by Express; no change**.
- **Copy-ready snippet locations**:
- Cleanest single-field validation example to replicate: `CorpusRoutes.ts:238-244` (the `question` check for `QueryCorpus`) — this exact shape replaces one-to-one with a `QueryCorpusSchema = z.object({ question: z.string().min(1) })`.
- Cleanest presence check to Zod-ify: `MemoryRoutes.ts:33-36` (the `text` check) — maps to `MemorySaveSchema = z.object({ text: z.string().min(1), title: z.string().optional(), project: z.string().optional() })`.
- Error-shape template to mirror in `validateBody`: `BaseRouteHandler.ts:82-99` (existing `{error, code, details}` shape) — extend with `fields`.
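As a shape reference only, the extended error contract could look like the sketch below. The `fields` layout mirrors what Zod's `.flatten()` produces; the hand-built check exists purely to show the response shape the `{error, code, details}` template grows into, and is emphatically NOT the validation pattern the plan lands (the real `validateBody` delegates to a schema's `safeParse`):

```typescript
// Sketch of the target validation-failure response shape. Field names
// (error/code/fields) follow BaseRouteHandler.ts:82-99 extended per the
// plan; the hand-rolled check is illustrative only — the landed code
// uses a Zod schema, never inline typeof checks.
interface ValidationFailure {
  error: 'validation_failed';
  code: 400;
  fields: { fieldErrors: Record<string, string[]> };
}

// Approximates what MemorySaveSchema.safeParse({text: ''}) would flatten to.
function checkMemorySaveBody(body: { text?: unknown }): ValidationFailure | null {
  const fieldErrors: Record<string, string[]> = {};
  if (typeof body.text !== 'string' || body.text.length < 1) {
    fieldErrors.text = ['String must contain at least 1 character(s)'];
  }
  return Object.keys(fieldErrors).length
    ? { error: 'validation_failed', code: 400, fields: { fieldErrors } }
    : null;
}

console.log(JSON.stringify(checkMemorySaveBody({ text: '' })));
```

This is what the `POST /api/memory/save` smoke test in Phase 8 asserts against: empty `text` yields the `fields.fieldErrors.text` entry, a non-empty `text` passes clean.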
**Confidence + gaps**:
- **High confidence**: rate-limiter deletion (V20 verified exact lines), static-file caching (exact file:line confirmed), validation-block locations (grep returned matching line numbers), BaseRouteHandler method cleanup.
- **Gap 1 — Plan 09 landing order**: This plan assumes the §3.1 4-endpoint SessionRoutes surface is the target. If Plan 09 has not landed when this plan begins Phase 3, the plan must attach schemas to the 10 legacy endpoints (`src/services/worker/http/routes/SessionRoutes.ts:377-389`) and then refactor in lockstep when Plan 09 merges. Coordination required — add `[blocked-on: plan-09]` gate on the Phase 3 PR, or land Plan 09 first.
- **Gap 2 — Zod version lock-in for the whole refactor**: Phase 1 picks the zod 3.x version; if a future phase in another plan wants a zod 4.x-only API, this plan's schemas become incompatible. Mitigation: schemas use only the stable `z.object/string/number/array/enum/boolean/optional/min/int/positive` surface, which is unchanged across zod 3.x releases and into 4.x. Still, a breaking upgrade must be coordinated here.
# Plan 12 — viewer-ui-layer (LOCKDOWN / REGRESSION-DETECTION)
**Target flowchart:** `PATHFINDER-2026-04-21/05-clean-flowcharts.md` section 3.10 ("viewer-ui-layer (clean)")
**Before-state flowchart:** `PATHFINDER-2026-04-21/01-flowcharts/viewer-ui-layer.md`
**Canonical doctrine from 05 §3.10:** *"Deleted: (Nothing — this subsystem is clean.)"* / *"Kept: Everything. User-facing."*
## Plan Type
**LOCKDOWN / REGRESSION-DETECTION.** This is NOT a refactor plan. Section 3.10 declares the viewer subsystem already aligned with the clean architecture. The deliverable is a protective harness that detects regressions introduced by the **other 11 plans** landing.
No source code in `src/ui/viewer/**` is modified by this plan. The only artifacts produced are regression tests, baselines, and a re-run schedule.
**Expected lines deleted by this plan:** 0
**Expected lines added to `src/`:** 0 (tests live under `tests/viewer-lockdown/`)
## Dependencies
- **Upstream:** none — no other plan produces code this plan consumes.
- **Downstream:** none — no other plan consumes code this plan produces.
- **Cross-reference dependencies (tests-run-after):**
- Plan 11 (`http-server-routes`, flowchart §3.9) — **CRITICAL.** Phase 14 of `06-implementation-plan.md:600-627` caches `viewer.html` at boot. The lockdown suite MUST run after plan 11 to confirm the cached Buffer serve still produces a byte-identical HTML response and that `express.static(path.join(packageRoot, 'ui'))` (`ViewerRoutes.ts:30`) still serves JS/CSS assets.
- Plan 09 (`lifecycle-hooks`) — only indirectly relevant; hooks don't talk to the viewer, but SSE broadcast events originate from write paths the hooks trigger. Re-run the `new_observation` live-update test after plan 09 lands.
- All remaining 9 plans — run the suite as a smoke check.
- **Implementation-plan cross-ref:** no V-finding targets the viewer subsystem directly in `06-implementation-plan.md`. V20 (rate-limiter deletion, Phase 14) and the "cache `viewer.html`" task in Phase 14 tasks 1-2 are the only lines that touch the viewer's serve path. **No V-number in `06-implementation-plan.md` is assigned to viewer-ui behavior. State recorded here for audit completeness.**
## Sources Consulted
- `PATHFINDER-2026-04-21/05-clean-flowcharts.md:422-447` (section 3.10, canonical)
- `PATHFINDER-2026-04-21/05-clean-flowcharts.md:564-587` (Part 5 deletion totals — viewer contributes 0)
- `PATHFINDER-2026-04-21/01-flowcharts/viewer-ui-layer.md:1-95` (before-state, identical to after-state)
- `PATHFINDER-2026-04-21/06-implementation-plan.md:600-627` (Phase 14 — static-file cache task)
- `src/ui/viewer/App.tsx:1-163`
- `src/ui/viewer/index.tsx:1-17`
- `src/ui/viewer/hooks/useSSE.ts:1-148`
- `src/ui/viewer/hooks/usePagination.ts:1-119`
- `src/ui/viewer/hooks/useSettings.ts:1-100`
- `src/ui/viewer/components/Feed.tsx:1-100`
- `src/ui/viewer/constants/api.ts:5-12`
- `src/ui/viewer/constants/timing.ts:7` (`SSE_RECONNECT_DELAY_MS: 3000`)
- `src/services/worker/http/routes/ViewerRoutes.ts:1-116`
- `src/services/worker/http/routes/DataRoutes.ts:38-45` (`/api/observations` endpoints)
- `src/services/worker/http/routes/SettingsRoutes.ts:30-31` (`/api/settings` endpoints)
## Concrete Findings (React Component + Hook Inventory)
### React Components (all in `src/ui/viewer/components/`)
- `ErrorBoundary.tsx` — root wrapper, mounted via `index.tsx:13-15`.
- `Header.tsx` — project/source filters, SSE connection light, theme toggle.
- `Feed.tsx:18` — interleaved card list; IntersectionObserver at `Feed.tsx:33-41` with `threshold: UI.LOAD_MORE_THRESHOLD`.
- `ObservationCard.tsx` / `SummaryCard.tsx` / `PromptCard.tsx` — rendered in `Feed.tsx:69-75`.
- `ContextSettingsModal.tsx` — POST `/api/settings` via `useSettings.saveSettings`.
- `LogsDrawer` (from `LogsModal.tsx`) — console capture drawer.
- `ScrollToTop.tsx` — inside `Feed.tsx:65`.
- `TerminalPreview.tsx`, `ThemeToggle.tsx`, `GitHubStarsButton.tsx` — supplemental.
### Hooks (all in `src/ui/viewer/hooks/`)
- `useSSE.ts:6` — **SSE subscription owner.** Returns `{observations, summaries, prompts, projects, sources, projectsBySource, isProcessing, queueDepth, isConnected}`. EventSource at `useSSE.ts:50`; auto-reconnect at `useSSE.ts:61-71` after `TIMING.SSE_RECONNECT_DELAY_MS`.
- `usePagination.ts:108` — exposes `{observations, summaries, prompts}`, each with `{isLoading, hasMore, loadMore}`. Resets offset on filter change (`usePagination.ts:36-46`).
- `useSettings.ts:8` — GET/POST `/api/settings`.
- `useTheme.ts`, `useStats.ts`, `useContextPreview.ts`, `useGitHubStars.ts`, `useSpinningFavicon.ts` — ancillary.
### SSE Event Types the Viewer Subscribes To
From `useSSE.ts:76-120` switch:
- `initial_load` — catalog payload `{projects, sources, projectsBySource}`.
- `new_observation` — prepends to `observations` state (newest first).
- `new_summary` — prepends to `summaries` state (newest first).
- `new_prompt` — prepends to `prompts` state (newest first).
- `processing_status` — updates `isProcessing` + `queueDepth`.
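The five event types above can be pinned down as a discriminated union for the lockdown tests. This is a sketch: the payload fields are inferred from the inventory in this document, and anything the inventory does not name (e.g. exact observation columns) is left as `unknown`:

```typescript
// Sketch: discriminated union over the 5 SSE event types from
// useSSE.ts:76-120. Payload shapes beyond the fields named in the
// inventory above are assumptions and deliberately left open.
type ViewerSSEEvent =
  | { type: 'initial_load'; projects: string[]; sources: string[]; projectsBySource: Record<string, string[]> }
  | { type: 'new_observation'; observation: unknown }
  | { type: 'new_summary'; summary: unknown }
  | { type: 'new_prompt'; prompt: unknown }
  | { type: 'processing_status'; isProcessing: boolean; queueDepth: number };

const KNOWN_EVENT_TYPES = new Set<ViewerSSEEvent['type']>([
  'initial_load', 'new_observation', 'new_summary', 'new_prompt', 'processing_status',
]);

// Guard for the regression suite: an event type outside this set means an
// upstream plan widened the SSE surface — flag it, don't silently accept it.
function isKnownEvent(type: string): type is ViewerSSEEvent['type'] {
  return KNOWN_EVENT_TYPES.has(type as ViewerSSEEvent['type']);
}
```

T3/T11 assertions can narrow on `event.type` through this guard instead of string-matching ad hoc.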
### The Dedup Invariant (05 §3.10 line 444)
Live SSE data (`useSSE().observations`) and paginated history (`App.paginatedObservations`) are merged with `(project, id)` dedup in `App.tsx:50-66` via `mergeAndDeduplicateByProject`. Section 3.10 line 444 explicitly protects this: *"which is a correct pattern for live + historical merging."* **Anti-pattern guard E:** do NOT collapse the two paginated fetches into one. The duplication is legitimate.
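A minimal sketch of the protected pattern, for reference when writing I8/T6. The real implementation is `mergeAndDeduplicateByProject` in `utils/data.ts`; the signature and the live-wins tie-breaking below are assumptions for illustration only:

```typescript
// Sketch of the protected merge: combine live SSE items with paginated
// history, deduplicating on the composite (project, id) key. NOT the real
// mergeAndDeduplicateByProject — signature and ordering are assumed.
interface Item { project: string; id: number }

function mergeByProjectId<T extends Item>(live: T[], paginated: T[]): T[] {
  const seen = new Set<string>();
  const out: T[] = [];
  for (const item of [...live, ...paginated]) { // live entries win ties
    const key = `${item.project}\u0000${item.id}`; // composite key, NUL-delimited
    if (!seen.has(key)) {
      seen.add(key);
      out.push(item);
    }
  }
  return out;
}
```

T6 exercises exactly this: the same `(project, id)` pair arriving via both SSE and pagination must render one card, and the two input arrays must stay distinct (guard E).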
## Phase Contract
Every phase below follows this structure:
- **(a) What to implement** — the regression artifact or action.
- **(b) Docs** — 05 §3.10 + live file:line anchors.
- **(c) Verification** — exact executable checks.
- **(d) Anti-pattern guards** — A (invent new UI behaviors) + E (collapse legitimate dedup).
---
## Phase 1 — Inventory viewer behaviors
**(a) What to implement**
Produce a single source-of-truth inventory document at `tests/viewer-lockdown/INVENTORY.md` enumerating:
1. All 7 component files under `src/ui/viewer/components/` with file:line anchors for their main exports.
2. All 9 hook files under `src/ui/viewer/hooks/` with exported function signatures.
3. Every SSE event type the viewer subscribes to (5 types, from `useSSE.ts:76-120`).
4. Every HTTP endpoint the viewer calls (`/stream`, `/api/observations`, `/api/summaries`, `/api/prompts`, `/api/settings`, `/api/stats`).
5. Timing constants currently in effect: `SSE_RECONNECT_DELAY_MS=3000` (`constants/timing.ts:7`), `UI.PAGINATION_PAGE_SIZE`, `UI.LOAD_MORE_THRESHOLD` (`constants/ui.ts`).
**(b) Docs**
- 05 §3.10 (mermaid diagram at `05-clean-flowcharts.md:424-441`)
- `01-flowcharts/viewer-ui-layer.md:18-27` (component tree) + `:30` (happy path)
**(c) Verification**
- `grep -c "^" tests/viewer-lockdown/INVENTORY.md` ≥ 60 lines.
- Every file:line reference in the inventory resolves under `git ls-files`.
- All 5 SSE event types from `useSSE.ts:76-120` appear verbatim in the inventory.
**(d) Anti-pattern guards**
- **A:** Do not invent behaviors. Inventory strictly what exists in HEAD.
- **E:** List the dedup call site (`App.tsx:50-66`) as a "protected pattern — do not collapse".
---
## Phase 2 — Define invariants (one per behavior from 05 §3.10)
**(a) What to implement**
Write `tests/viewer-lockdown/INVARIANTS.md` with one numbered invariant per flowchart node/edge in 05 §3.10:
- **I1 (serve):** `GET /` returns HTML whose byte-count equals the baseline within 0 bytes OR differs only by bearer-token substitution. Anchor: `ViewerRoutes.ts:54-72`.
- **I2 (mount):** `index.tsx:11-15` mounts `<ErrorBoundary><App/></ErrorBoundary>` into `#root`. No other mount paths.
- **I3 (SSE open):** `useSSE.ts:50` opens `new EventSource(API_ENDPOINTS.STREAM)` where `STREAM === '/stream'` (`constants/api.ts:12`).
- **I4 (initial_load):** On the first `initial_load` event, `catalog.projects`, `catalog.sources`, `catalog.projectsBySource` populate (`useSSE.ts:77-87`).
- **I5 (live appends):** `new_observation` / `new_summary` / `new_prompt` prepend to their arrays (`useSSE.ts:89-111`). Order: newest first.
- **I6 (processing_status):** Updates `isProcessing` + `queueDepth` (`useSSE.ts:113-119`).
- **I7 (pagination):** `Feed.tsx:33-41` IntersectionObserver fires `onLoadMoreRef.current()` → `App.handleLoadMore` (`App.tsx:79-99`) → three parallel `/api/{observations,summaries,prompts}` fetches with `offset` + `limit` query params.
- **I8 (dedup):** `App.tsx:50-66` merges live + paginated with `mergeAndDeduplicateByProject` keyed on `(project, id)`. **Two distinct arrays MUST remain.** (Anti-pattern guard E.)
- **I9 (filter reset):** Changing `currentFilter` or `currentSource` resets `paginatedObservations/Summaries/Prompts` to `[]` and re-fetches page 0 (`App.tsx:102-108`, `usePagination.ts:36-46`).
- **I10 (settings round-trip):** `ContextSettingsModal` save → `useSettings.saveSettings` → `POST /api/settings` → `{success: true}` response path sets `saveStatus='✓ Saved'` (`useSettings.ts:65-96`).
- **I11 (reconnect):** EventSource `onerror` closes and calls `connect()` after `TIMING.SSE_RECONNECT_DELAY_MS` (3000 ms) (`useSSE.ts:61-71`).
- **I12 (static assets):** `express.static(path.join(packageRoot, 'ui'))` (`ViewerRoutes.ts:30`) serves bundled JS/CSS. Must still 200 after plan 11 lands its cache change.
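The I7/I9 interplay can be sketched as two pure transitions. `PAGE_SIZE` stands in for `UI.PAGINATION_PAGE_SIZE` and the state field names are assumptions; the real logic lives in `usePagination.ts:36-46` and `App.tsx:102-108`:

```typescript
// Sketch of the pagination offset rules guarded by I7 and I9.
// PAGE_SIZE approximates UI.PAGINATION_PAGE_SIZE; field names are assumed.
const PAGE_SIZE = 20;

interface PageState { filter: string; offset: number }

// I7: each successful loadMore advances the offset by exactly one page.
function afterLoadMore(s: PageState): PageState {
  return { ...s, offset: s.offset + PAGE_SIZE };
}

// I9: a filter change resets to page 0 (the hook also clears the arrays);
// a no-op "change" to the same filter must not reset.
function afterFilterChange(s: PageState, filter: string): PageState {
  return filter === s.filter ? s : { filter, offset: 0 };
}
```

T5 and T7 assert the two sides of this: scrolling produces an `offset=20` request, switching projects produces a fresh `offset=0` request.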
**(b) Docs**
- Each invariant cites file:line as shown above.
- Cross-ref 05 §3.10 mermaid nodes one-to-one: HTTP→I1, HTML→I1/I12, React→I2, SSE→I3, Initial→I4, Feed→I7, Page→I7, Merge→I8, Cards→I5, Settings→I10, Reconnect→I11.
**(c) Verification**
- Every mermaid node in `05-clean-flowcharts.md:426-440` maps to ≥1 invariant in `INVARIANTS.md`.
- Every invariant cites at least one live `file.ts:NN` anchor that resolves at HEAD.
**(d) Anti-pattern guards**
- **A:** Each invariant must be phrased as "X currently happens", not "X should happen". This is a lockdown, not a wish list.
- **E:** I8 is the anti-collapse invariant — explicitly forbid "flattening paginated + live into a single array".
---
## Phase 3 — Write regression tests (one per invariant)
**(a) What to implement**
Create the test harness `tests/viewer-lockdown/` with these files. Prefer Playwright (headless Chromium) since EventSource + IntersectionObserver require a real browser. If Playwright is not already a dev dep, author a **manual checklist** instead — do not introduce a new test framework.
1. `tests/viewer-lockdown/regression.spec.ts` (Playwright) OR `tests/viewer-lockdown/CHECKLIST.md` (manual):
- **T1 → I1:** `curl -s http://localhost:37777/` returns 200 + `Content-Type: text/html`. Diff against `baseline/viewer.html.sha256`.
- **T2 → I2:** Page loads, `document.querySelector('#root').children.length > 0` within 2 s.
- **T3 → I3+I4:** Open `/stream` via EventSource, receive `initial_load` within 2 s; payload has `projects`, `sources`, `projectsBySource`.
- **T4 → I5:** Insert a synthetic observation via `POST /api/sessions/:id/observations`; assert a card appears in the feed within 2 s without a page refresh.
- **T5 → I7:** Scroll the feed past the IntersectionObserver sentinel; assert network panel shows `GET /api/observations?offset=20&limit=20` (or matching `UI.PAGINATION_PAGE_SIZE`).
- **T6 → I8:** Inject a duplicate `(project, id)` pair via SSE and paginated response; assert exactly one card rendered.
- **T7 → I9:** Change project filter; assert `paginatedObservations` cleared (check via `Feed` DOM length before/after) and a fresh page-0 request fires.
- **T8 → I10:** Open `ContextSettingsModal`, change `CLAUDE_MEM_CONTEXT_OBSERVATIONS`, click save; assert `POST /api/settings` → 200 → `saveStatus` text contains `✓ Saved`.
- **T9 → I11:** Kill the worker SSE connection (e.g. `curl -X POST /__test__/drop-sse-clients` if available, else restart worker); assert EventSource reconnects within 4 s (3 s delay + 1 s slack).
- **T10 → I12:** `curl -sI http://localhost:37777/viewer.js` (or whatever the bundled asset is named) returns 200.
- **T11 → I6:** Trigger worker processing; assert `queueDepth` in DOM increments.
2. `tests/viewer-lockdown/run.sh` — wrapper that spins up the worker on a test port, seeds fixtures, runs the spec, and tears down.
**(b) Docs**
- Each T-number maps to an I-number in a table at the top of `regression.spec.ts` / `CHECKLIST.md`.
**(c) Verification**
- Running the suite against a clean HEAD worker (before any of plans 1-11 land) produces 11/11 PASS. This is the baseline.
- Every test has a deterministic pass/fail criterion. No "looks right" assertions.
**(d) Anti-pattern guards**
- **A:** Do not add tests for behaviors not listed in 05 §3.10 mermaid (e.g. do not test Header theme-toggle colors — out of scope).
- **E:** T6 is the explicit anti-collapse test.
---
## Phase 4 — Baseline current outputs
**(a) What to implement**
Capture pre-refactor baselines under `tests/viewer-lockdown/baseline/`:
1. `baseline/viewer.html.sha256` — SHA-256 of `GET /` response body with bearer token stripped (token is injected per-boot per `Apr 19 2026 observation 71147`).
2. `baseline/initial_load.json` — full `initial_load` SSE event payload captured against a seeded DB.
3. `baseline/api-observations-page0.json` — response of `GET /api/observations?offset=0&limit=20` on the same seeded DB.
4. `baseline/api-settings.json` — response of `GET /api/settings`.
5. `baseline/screenshots/` — 3 Playwright screenshots: initial feed render, modal open, filter applied. These are visual-regression anchors only; do NOT gate CI on pixel diffs.
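The token-stripping hash for item 1 could be sketched as below. The `__BEARER_TOKEN__` global name is a guess — observation 71147 only says the token is injected per boot — so the strip pattern must be adjusted to the real injection site before the baseline is trusted:

```typescript
// Sketch of the baseline/viewer.html.sha256 capture: hash the served HTML
// with the per-boot bearer-token line stripped first, so the baseline
// survives worker restarts. __BEARER_TOKEN__ is a hypothetical name for
// the injected global; match the actual injection in viewer.html.
import { createHash } from 'node:crypto';

function baselineHash(html: string): string {
  const stripped = html
    .split('\n')
    .filter((line) => !line.includes('__BEARER_TOKEN__')) // drop token line
    .join('\n');
  return createHash('sha256').update(stripped).digest('hex');
}
```

With this, two boots that differ only in the injected token hash identically, which is what lets T1 compare against a stored `.sha256` at all.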
**(b) Docs**
- `baseline/README.md` records git SHA, worker version, node version, OS at capture time.
**(c) Verification**
- Running the suite twice against HEAD produces identical SHA-256s and identical JSON payloads (modulo timestamps stripped).
**(d) Anti-pattern guards**
- **A:** Baselines represent observed HEAD behavior, not design wishes.
- **E:** n/a.
---
## Phase 5 — Post-landing re-run schedule
**(a) What to implement**
A schedule table in `tests/viewer-lockdown/SCHEDULE.md` mandating suite re-run after each of the **other 11 plans** lands. Critical re-run points:
| Upstream plan | Trigger | Critical tests |
|---|---|---|
| Plan 01 (privacy-tag-filtering) | new tag stripping at ingest | T4 (observation renders with stripped tags visible in card) |
| Plan 02 (sqlite-persistence) | schema migration | T3 (`initial_load` catalog non-empty after migration) |
| Plan 03 (response-parsing-storage) | ResponseProcessor changes | T4, T11 |
| Plan 04 (vector-search-sync) | `chroma_synced` column added | T5 (pagination response shape unchanged) |
| Plan 05 (context-injection-engine) | — | smoke only |
| Plan 06 (hybrid-search-orchestration) | — | smoke only |
| Plan 07 (session-lifecycle-management) | reaper consolidation | T3, T11 |
| Plan 08 (knowledge-corpus-builder) | — | smoke only |
| Plan 09 (lifecycle-hooks) | hook cache / `ensureWorkerRunning` changes | T4 (hook-triggered observation still broadcasts via SSE) |
| **Plan 11 (http-server-routes)** | **Phase 14 static-file cache + rate-limiter delete** (`06-implementation-plan.md:600-627`) | **ALL 11 tests** — critical. |
| Plan 12 (transcript-watcher-integration) | watcher rewires to direct-call | T4 (Cursor-sourced observation still appears via SSE) |
**(b) Docs**
- Schedule references 05 §3.10 as the unchanging contract.
- Mention CI hook location: if a CI workflow runs the test suite, gate merges of plans 1-11 on the lockdown suite passing green.
**(c) Verification**
- Schedule covers every plan in `06-implementation-plan.md` Phases 1-14 that is not this one.
- Plan 11 row explicitly lists all 11 tests (T1-T11) as critical.
**(d) Anti-pattern guards**
- **A:** Do not skip the re-run for "unrelated" plans; smoke-run is still mandatory.
- **E:** n/a.
---
## Phase 6 — Escalation path
**(a) What to implement**
Write `tests/viewer-lockdown/ESCALATION.md` documenting:
1. **If the lockdown suite goes red after plan N lands:** open a new plan `07-plans/13-viewer-regression-{short-name}.md` describing:
- Which test failed (T-number).
- Which invariant was violated (I-number).
- Which upstream plan's change triggered the regression.
- A fix proposal.
2. **Do NOT** fix regressions inline inside plan N's branch. Regressions get their own branch, their own PR, and their own review. This preserves audit traceability.
3. **Special case — Plan 11 static-file cache:** if T1 SHA-256 mismatches after plan 11 lands, the likely cause is that `ViewerRoutes.handleViewerUI` (`ViewerRoutes.ts:54-72`) now serves a cached Buffer with a different bearer-token-injection strategy. Document whether (a) the baseline should be regenerated (bearer-token format changed) or (b) the cache implementation needs to match the pre-cache injection point. This is the single highest-risk interaction in the entire refactor.
**(b) Docs**
- Reference `06-implementation-plan.md:600-627` Phase 14 task 2.
- Reference `01-flowcharts/viewer-ui-layer.md:80` (reconnect timing constant) for I11 reconnect regressions.
**(c) Verification**
- Escalation doc exists.
- Template for `13-viewer-regression-*.md` is included.
**(d) Anti-pattern guards**
- **A:** Escalation doc does not prescribe fixes — only detection + routing.
- **E:** n/a.
---
## Copy-ready snippet locations
**None.** This is a lockdown plan; no code snippets are authored.
Regression-test files to be created (all under `tests/viewer-lockdown/`):
- `INVENTORY.md`
- `INVARIANTS.md`
- `regression.spec.ts` (or `CHECKLIST.md` if Playwright is unavailable)
- `run.sh`
- `baseline/viewer.html.sha256`
- `baseline/initial_load.json`
- `baseline/api-observations-page0.json`
- `baseline/api-settings.json`
- `baseline/screenshots/` (3 PNGs)
- `baseline/README.md`
- `SCHEDULE.md`
- `ESCALATION.md`
## Confidence + Gaps
**High confidence:**
- React component tree (confirmed in `App.tsx:1-163`).
- SSE event type list (confirmed in `useSSE.ts:76-120`).
- Hook inventory (confirmed via `src/ui/viewer/hooks/*` glob).
- Dedup pattern anchor (`App.tsx:50-66`; `mergeAndDeduplicateByProject` in `utils/data.ts`).
- Flowchart-to-live-code mapping for I1-I12.
**Medium / gaps:**
1. **Gap — Plan 11 cache + bearer-token interaction.** Phase 14 task 2 in `06-implementation-plan.md:613` says "Cache `viewer.html` … in memory at boot; serve from `Buffer` instead of `fs.readFile`." But observation 71147 (Apr 19 2026) says the bearer token is injected into the viewer HTML as a per-boot window global. If the cache is a static immutable Buffer captured at worker-start, the bearer token will be baked in once per worker boot — fine. If plan 11 changes that to share a Buffer across worker restarts (e.g. via a persistent cache file), the token would desync. **T1 SHA-256 baseline must be regenerated after every worker restart** — document this in `baseline/README.md`. Confirm with plan 11 author whether caching happens at process-boot or at module-import (which could be once per container lifetime).
2. **Gap — Playwright availability.** If `package.json` does not already list Playwright as a dev dependency, adding it to satisfy this lockdown plan would violate the "no code changes" constraint. Fallback: author a manual `CHECKLIST.md` instead of the spec file. Decision deferred to execution time. Check: `grep -q playwright package.json` before choosing automation-vs-manual path.
3. **Low-priority gap — catalog update strategy.** `01-flowcharts/viewer-ui-layer.md:93` lists this as Medium confidence ("additive only"). If a plan introduces project deletion, `updateCatalogForItem` (`useSSE.ts:21-42`) is additive-only and will show stale entries. Not in scope for this lockdown but worth adding I13 if any upstream plan touches catalog eviction.
## Summary
- **Phase count:** 6
- **Expected lines deleted:** 0
- **Expected lines added to `src/`:** 0 (tests go under `tests/viewer-lockdown/`, outside the protected subsystem)
- **Top gaps:**
1. Plan 11's static-file cache change may reshape how bearer tokens are injected into `viewer.html` — T1 SHA-256 baseline needs re-capture after worker boots, and the cache lifecycle (per-boot vs. persistent) must be confirmed with plan 11 before T1 is considered reliable.
2. Playwright may not be a project dev dependency; fall back to a manual `CHECKLIST.md` if adding it is out-of-scope for a lockdown plan (which it is).