perf: streamline worker startup and consolidate database connections (#2122)

* docs: pathfinder refactor corpus + Node 20 preflight

Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 01 — data integrity

Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.

- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
  started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
  UNIQUE(memory_session_id, content_hash) on observations; dedup
  duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
  worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
  and the 60-s stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
  observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
  2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
  path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
  CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
  Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
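
The Phase 4 change moves dedup from a read-before-write helper into the
UNIQUE constraint itself. A minimal in-memory sketch of the same
invariant (names hypothetical, not the project's API; the real path is
SQLite's ON CONFLICT DO NOTHING):

```typescript
import { createHash } from "node:crypto";

// Hypothetical stand-in for UNIQUE(memory_session_id, content_hash)
// plus ON CONFLICT DO NOTHING: a second identical insert is a no-op.
class ObservationTable {
  private rows = new Map<string, string>();

  insert(sessionId: string, content: string): boolean {
    const hash = createHash("sha256").update(content).digest("hex");
    const key = `${sessionId}:${hash}`;
    if (this.rows.has(key)) return false; // conflict -> DO NOTHING
    this.rows.set(key, content);
    return true; // row inserted
  }

  get size(): number {
    return this.rows.size;
  }
}
```

The DB-level constraint gives this invariant without a
read-before-write race, which is why findDuplicateObservation could be
deleted outright.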

Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/01-data-integrity.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 02 — process lifecycle

OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).

- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
  canonical registry at src/supervisor/process-registry.ts is the
  sole survivor; SDK spawn site consolidated into it via new
  createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
  ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
  ['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
  process.kill(-pgid, signal) on Unix when pgid is recorded;
  Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
  staleSessionReaperInterval setInterval (including the co-located
  WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
  WAL growth without an app-level timer), killIdleDaemonChildren,
  killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
  detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
  constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
  via generatorPromise.finally() already lives in worker-service
  startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
  SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
  for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
  via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
  lazy-spawn — consults isWorkerPortAlive (which gates
  captureProcessStartToken for PID-reuse safety via commit
  99060bac), then spawns detached with unref(), then
  waitForWorkerPort({ attempts: 3, backoffMs: 250 }) hand-rolled
  exponential backoff 250→500→1000ms. No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
  idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
  only on external SIGTERM via supervisor signal handlers.
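
The Phase 8 backoff is a plain doubling schedule. A sketch under the
commit's stated parameters (helper names hypothetical):

```typescript
// Doubling backoff: attempt 0 waits backoffMs, attempt 1 waits double
// that, and so on. With attempts=3, backoffMs=250 this reproduces the
// 250 -> 500 -> 1000 ms cadence described above.
function backoffSchedule(attempts: number, backoffMs: number): number[] {
  return Array.from({ length: attempts }, (_, i) => backoffMs * 2 ** i);
}

// Sketch of the polling loop that consumes the schedule; isAlive is a
// hypothetical stand-in for the worker-port liveness probe.
async function waitForWorkerPort(
  isAlive: () => Promise<boolean>,
  opts: { attempts: number; backoffMs: number },
): Promise<boolean> {
  for (const delay of backoffSchedule(opts.attempts, opts.backoffMs)) {
    if (await isAlive()) return true;
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
  return isAlive();
}
```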

Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.

All 10 verification greps return 0. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast

Phases 3, 5, 6 only. Plan-doc inaccuracies for phases 1/2/4/7/8/9
deferred for plan reconciliation:
  - Phase 1/2: ObservationRow type doesn't exist; the four
    "formatters" operate on three incompatible types.
  - Phase 4: RECENCY_WINDOW_MS already imported from
    SEARCH_CONSTANTS at every call site.
  - Phase 7: getExistingChromaIds is NOT @deprecated and has an
    active caller in ChromaSync.backfillMissingSyncs.
  - Phase 8: estimateTokens already consolidated.
  - Phase 9: knowledge-corpus rewrite blocked on PG-3
    prompt-caching cost smoke test.

Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.

Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicit-
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
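
A minimal sketch of the Phase 5 error shape; the AppError constructor
signature here is an assumption inferred from
AppError(503, 'CHROMA_UNAVAILABLE'):

```typescript
// Assumed base: HTTP status plus a stable machine-readable code.
class AppError extends Error {
  constructor(
    public readonly status: number,
    public readonly code: string,
    message?: string,
  ) {
    super(message ?? code);
    this.name = new.target.name;
  }
}

// Fail-fast: a runtime Chroma failure becomes an HTTP 503, never a
// silent SQLite fallback.
class ChromaUnavailableError extends AppError {
  constructor(message = "Chroma backend unavailable") {
    super(503, "CHROMA_UNAVAILABLE", message);
  }
}
```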

Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).

Tests updated (Principle 7 — delete in same PR):
  - search-orchestrator.test.ts: "fall back to SQLite" rewritten
    as "throw ChromaUnavailableError (HTTP 503)".
  - chroma/hybrid/sqlite-search-strategy tests: rewritten to
    rejects.toThrow; removed fellBack assertions.

Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.

Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 03 — ingestion path

Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.

- Phase 0: Created src/services/worker/http/shared.ts exporting
  ingestObservation/ingestPrompt/ingestSummary as direct
  in-process functions plus ingestEventBus (Node EventEmitter,
  reusing existing pattern — no third event bus introduced).
  setIngestContext wires the SessionManager dependency from
  worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
  returning { valid:true; kind: 'observation'|'summary'; data }
  | { valid:false; reason: string }. Inspects root element;
  <skip_summary reason="…"/> is a first-class summary case
  with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
  branches on the discriminated union. On invalid → markFailed
  + logger.warn(reason). On observation → ingestObservation.
  On summary → ingestSummary then emit summaryStoredEvent
  { sessionId, messageId } (consumed by Plan 05's blocking
  /api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
  (ResponseProcessor + SessionManager + worker-types) and
  MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
  guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
  replaced with fs.watch(transcriptsRoot, { recursive: true,
  persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
  Map deleted. tool_use rows insert with INSERT OR IGNORE on
  UNIQUE(session_id, tool_use_id) (added by Plan 01). New
  pairToolUsesByJoin query in PendingMessageStore for read-time
  pairing (UNIQUE INDEX provides idempotency; explicit consumer
  not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
  direct ingestObservation call. maybeParseJson silent-passthrough
  rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
  collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
  class) deleted. The active extractLastMessage at
  src/shared/transcript-parser.ts:41-144 is the sole survivor.
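
Phase 1's single-entry parser returns a discriminated union rather than
undefined. A much-simplified sketch (regex root-element inspection
stands in for the real XML handling; the exact data fields are
assumptions):

```typescript
type ParseResult =
  | { valid: true; kind: "observation"; data: { body: string } }
  | { valid: true; kind: "summary"; data: { body: string; skipped: boolean } }
  | { valid: false; reason: string };

// Inspect the root element and branch. Never returns undefined; never
// coerces one kind into the other.
function parseAgentXml(input: string): ParseResult {
  const m = /^\s*<([a-z_]+)[\s/>]/.exec(input);
  if (!m) return { valid: false, reason: "no root element" };
  switch (m[1]) {
    case "observation":
      return { valid: true, kind: "observation", data: { body: input } };
    case "summary":
      return { valid: true, kind: "summary", data: { body: input, skipped: false } };
    case "skip_summary": // first-class summary case
      return { valid: true, kind: "summary", data: { body: input, skipped: true } };
    default:
      return { valid: false, reason: `unknown root <${m[1]}>` };
  }
}
```

Callers branch on the union once; the invalid arm maps to markFailed
plus a warn, with no coercion path left to maintain.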

Tests updated (Principle 7 — same-PR delete):
  - tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
    to assert discriminated-union shape; coercion-specific
    scenarios collapse into { valid:false } assertions.
  - tests/worker/agents/response-processor.test.ts: circuit-breaker
    describe block skipped; non-XML/empty-response tests assert
    fail-fast markFailed behavior.

Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.

Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.

Plan: PATHFINDER-2026-04-22/03-ingestion-path.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 05 — hook surface

Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.

- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
  1..20; do curl -sf .../health && break; sleep 0.1; done` shell
  retry wrappers deleted. Hook commands invoke their bun entry
  point directly.
- Phase 2: src/shared/worker-utils.ts — added
  executeWithWorkerFallback<T>(url, method, body) returning
  T | { continue: true; reason?: string }. All 8 hook handlers
  (observation, session-init, context, file-context, file-edit,
  summarize, session-complete, user-message) rewritten to use
  it instead of duplicating the ensureWorkerRunning →
  workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
  using validateBody + sessionEndSchema (z.object({sessionId})).
  One-shot ingestEventBus.on('summaryStoredEvent') listener,
  30 s timer, req.aborted handler — all share one cleanup so
  the listener cannot leak. summarize.ts polling loop, plus
  MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
  memoizes SettingsDefaultsManager.loadFromFile per process.
  Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
  check entry; isProjectExcluded no longer referenced from
  src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
  (all adapters: claude-code, cursor, raw, gemini-cli,
  windsurf). New AdapterRejectedInput error in
  src/cli/adapters/errors.ts. Handler-level isValidCwd checks
  deleted from file-edit.ts and observation.ts. hook-command.ts
  catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
  initAgent is idempotent. tests/hooks/context-reinjection-guard
  test (validated the deleted conditional) deleted in same PR
  per Principle 7.
- Phase 8: fail-loud counter at
  ~/.claude-mem/state/hook-failures.json. Atomic write via
  .tmp + rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD setting
  (default 3). On consecutive worker-unreachable count ≥ N:
  process.exit(2). On success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
  wrapping ensureWorkerRunning. executeWithWorkerFallback calls
  the memoized version.
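
Phase 8's counter is a trip wire, not a retry. A sketch of the
trip/reset logic (in-memory here; the real counter persists across hook
processes via the atomic file write):

```typescript
// Counts consecutive worker-unreachable failures. At the threshold the
// caller exits with code 2 so the outage is loud, not silently
// degraded. Any success resets the streak.
class FailLoudCounter {
  private failures = 0;
  constructor(private readonly threshold = 3) {}

  recordFailure(): boolean {
    this.failures += 1;
    return this.failures >= this.threshold; // true -> caller exits 2
  }

  recordSuccess(): void {
    this.failures = 0;
  }
}
```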

Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.

Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.

Plan: PATHFINDER-2026-04-22/05-hook-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 06 — API surface

One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted.
Failure-marking consolidated into one helper.

- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
  in src/services/worker/http/middleware/validateBody.ts —
  safeParse → 400 { error: 'ValidationError', issues: [...] }
  on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
  route file. 24 POST endpoints across SessionRoutes,
  CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
  LogsRoutes, SettingsRoutes now wrap with validateBody().
  /api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
  along with every call site. Inline coercion helpers
  (coerceStringArray, coercePositiveInteger) and inline
  if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
  from src/services/worker/http/middleware.ts. Worker binds
  127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
  via fs.readFileSync; served as Buffer with text/html content
  type. SKILL.md + per-operation .md files cached in
  Server.ts as Map<string, string>; loadInstructionContent
  helper deleted. NO fs.watch, NO TTL — process restart is the
  cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
  — /api/pending-queue (GET), /api/pending-queue/process (POST),
  /api/pending-queue/failed (DELETE), /api/pending-queue/all
  (DELETE). Helper methods that ONLY served them
  (getQueueMessages, getStuckCount, getRecentlyProcessed,
  clearFailed, clearAll) deleted from PendingMessageStore.
  KEPT: /api/processing-status (observability), /health
  (used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
  GracefulShutdown now calls getSupervisor().stop() directly.
  Two functions retained with clear roles:
    - performGracefulShutdown — worker-side 6-step shutdown
    - runShutdownCascade — supervisor-side child teardown
      (process.kill(-pgid), Windows tree-kill, PID-file cleanup)
  Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
  failure-marking path on PendingMessageStore. Old methods
  markSessionMessagesFailed and markAllSessionMessagesAbandoned
  deleted along with all callers (worker-service,
  SessionCompletionHandler, tests/zombie-prevention).

Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.

Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.

Plan: PATHFINDER-2026-04-22/06-api-surface.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: land PATHFINDER Plan 07 — dead code sweep

ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.

Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
  isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
  abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
  zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
  command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments

Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
  builders, ParsedObservation, ParsedSummary, ParseResult,
  SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
  via dynamic await import('../../../context-generator.js') in
  worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
  — used via dynamic await import in npx-cli/install.ts +
  uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
  ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
  orphan-recovery caller in worker-service.ts plus
  zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
  in same file.
- All Database.ts barrel re-exports — used downstream.

Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
  is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
  the methods are not thin wrappers but ~900 LoC of bodies, and
  two methods are documented as intentional mirrors so the
  context-generator.cjs bundle stays schema-consistent without
  pulling MigrationRunner. Deserves its own plan, not a sweep.

Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.

Plan: PATHFINDER-2026-04-22/07-dead-code.md

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: remove residual ProcessRegistry comment reference

Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile review (P1 + 2× P2)

P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
  - Added optional timeoutMs to executeWithWorkerFallback,
    forwarded to workerHttpRequest.
  - summarize.ts call site now passes 35_000 (5 s above server
    hold window).

P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
  - ResponseProcessor now calls ingestSummary({ kind: 'parsed',
    sessionDbId, messageId, contentSessionId, parsed }) so the
    event-emission path is single-sourced.
  - ingestSummary's requireContext() resolution moved inside the
    'queue' branch (the only branch that needs sessionManager /
    dbManager). 'parsed' is a pure event-bus emission and
    doesn't need worker-internal context — fixes mocked
    ResponseProcessor unit tests that don't call
    setIngestContext.

P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
  - Added a Symbol.for('claude-mem/worker-fallback') brand to
    WorkerFallback. isWorkerFallback now checks the brand, not
    a duck-typed property name.
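
Branding with Symbol.for makes the sentinel unforgeable by JSON
payloads, since parsed JSON can never carry a symbol key. A sketch
(field names beyond the brand are assumptions):

```typescript
// Global-registry symbol: the same key yields the same symbol across
// modules, but it can never appear on a JSON-parsed API response.
const WORKER_FALLBACK = Symbol.for("claude-mem/worker-fallback");

type WorkerFallback = { continue: true; reason?: string };

function makeWorkerFallback(reason?: string): WorkerFallback {
  const fb: WorkerFallback = { continue: true, reason };
  // Attach the brand; non-enumerable, invisible to JSON.stringify.
  Object.defineProperty(fb, WORKER_FALLBACK, { value: true });
  return fb;
}

function isWorkerFallback(value: unknown): value is WorkerFallback {
  return (
    typeof value === "object" &&
    value !== null &&
    (value as any)[WORKER_FALLBACK] === true
  );
}
```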

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 2 (P1 + P2)

P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.

  - Gate ingestSummary call on (parsed.data.skipped ||
    session.lastSummaryStored). Skipped summaries are an explicit
    no-op bypass and still confirm; real summaries only confirm
    when storage actually wrote a row.
  - Non-skipped + summaryId === null path logs a warn and lets
    the server-side timeout (504) surface to the hook instead of
    a false ok:true.

P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.

  - Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
    log instead of the misleading ENQUEUED line. No behavior
    change — the duplicate is still correctly suppressed by the
    DB (Principle 3); only the log surface is corrected.
  - confirmProcessed is never called with the enqueue() return
    value (it operates on session.processingMessageIds[] from
    claimNextMessage), so no caller is broken; the visibility
    fix prevents future misuse.

Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 3 (P1 + 2× P2)

- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
  context after SessionRoutes is constructed. setIngestContext runs
  before routes exist, so transcript-watcher observations queued via
  ingestObservation() had no way to auto-start the SDK generator.
  Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
  /api/session/end calls register one listener each and clean up on
  completion, so the default limit of 10 produced spurious
  MaxListenersExceededWarning noise under normal load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
  ingestObservation() instead of duplicating skip-tool / meta /
  privacy / queue logic. Single helper, matching the Plan 03 goal.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)

- processor.handleToolResult: restore in-memory tool-use→tool-result
  pairing via session.pendingTools for schemas (e.g. Codex) whose
  tool_result events carry only tool_use_id + output. Without this,
  neither handler fired — all tool observations were silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
  of throwing. Previously a single malformed JSON-shaped field caused
  handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
  for purely-glob inputs so the caller skips the watch instead of
  anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
  log on the returned id; the SessionManager branches on id === 0.
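
The tolerant maybeParseJson shape the commit describes, as a sketch:

```typescript
// Parse if possible; on failure return the raw string so the caller
// keeps the rest of the transcript line instead of discarding it via
// the outer catch in handleLine.
function maybeParseJson(value: string): unknown {
  try {
    return JSON.parse(value);
  } catch {
    return value;
  }
}
```

This reverses Plan 03 Phase 7's throw-on-malformed behavior for this
one helper: a single JSON-shaped field is not worth losing the whole
line over.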

* fix: forward tool_use_id through ingestObservation (Greptile iter 5)

P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.

- shared.ingestObservation: forward payload.toolUseId to
  queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
  tool_use_id (HTTP convention) and toolUseId (JS convention) from
  req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
  validator doesn't rely on .passthrough() alone.

* fix: drop dead pairToolUsesByJoin, close session-end listener race

- PendingMessageStore: delete pairToolUsesByJoin. The method was never
  called and its self-join semantics are structurally incompatible
  with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
  collapses any second row with the same pair, so a self-join can
  only ever match a row to itself. In-memory pendingTools in
  processor.ts remains the pairing path for split-event schemas.

- IngestEventBus: retain a short-lived (60s) recentStored map keyed
  by sessionId. Populated on summaryStoredEvent emit, evicted on
  consume or TTL.

- handleSessionEnd: drain the recent-events buffer before attaching
  the listener. Closes the register-after-emit race where the summary
  can persist between the hook's summarize POST and its session/end
  POST — previously that window returned 504 after the 30s timeout.
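
The buffer half of the fix can be sketched as a TTL map with
drain-on-consume semantics (names hypothetical):

```typescript
type SummaryStored = { sessionId: string; messageId: number };

// Keeps recently emitted summaryStoredEvent payloads for a short TTL
// so a session/end request arriving after the emit still sees them.
class RecentStoredBuffer {
  private events = new Map<string, { event: SummaryStored; at: number }>();
  constructor(private readonly ttlMs = 60_000) {}

  record(event: SummaryStored, now = Date.now()): void {
    this.events.set(event.sessionId, { event, at: now });
  }

  // handleSessionEnd drains this buffer BEFORE attaching its one-shot
  // listener, closing the register-after-emit window.
  take(sessionId: string, now = Date.now()): SummaryStored | undefined {
    const entry = this.events.get(sessionId);
    if (!entry) return undefined;
    this.events.delete(sessionId); // evicted on consume
    if (now - entry.at > this.ttlMs) return undefined; // or on TTL
    return entry.event;
  }
}
```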

* chore: merge origin/main into vivacious-teeth

Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).

Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
  kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
  loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
  POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
  summaryStoredEvent supersedes main's SessionCompletionHandler DI
  refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
  reason; generator .finally() Stop-hook self-clean is a guard for a
  path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
  security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
  #2084) while preserving our Zod validateBody schema.

Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings

1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
   in wrapHandler — synchronous exceptions would hang the client rather
   than surfacing as 500s. Wrap it like every other handler.

2) processor.handleToolResult only consumed the session.pendingTools
   entry when the tool_result arrived without a toolName. In the
   split-schema path where tool_result carries both toolName and toolId,
   the entry was never deleted and the map grew for the life of the
   session. Consume the entry whenever toolId is present.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: typing cleanup and viewer tsconfig split for PR feedback

- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: address Greptile P2 findings (iter 2)

- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
  the unscoped-drain branch that would nuke every pending/processing
  row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
  cached event until TTL eviction so a retried Stop hook's second
  /api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
  already tailed (JSONL appends fire on every line; only unknown
  paths warrant a rescan).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: call finalizeSession in terminal session paths (Greptile iter 3)

terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.

Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: GC failed pending_messages rows at startup (Greptile iter 4)

Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.

Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)

1. startSessionProcessor success branch now calls completionHandler.
   finalizeSession before removeSessionImmediate. Hooks-disabled installs
   (and any Stop hook that fails before POST /api/sessions/complete) no
   longer leave sdk_sessions rows as status='active' forever. Idempotent
   — a subsequent /api/sessions/complete is a no-op.

2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
   closures that reference it (TDZ safety; safe at runtime today but
   fragile if timeout ever shrinks).

3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
   instead of constructing its own — prevents silent divergence if the
   handler ever becomes stateful.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: stop runaway crash-recovery loop on dead sessions

Two distinct bugs were combining to keep a dead session restarting forever:

Bug 1 (uncaught "The operation was aborted."):
  child_process.spawn emits 'error' asynchronously for ENOENT/EACCES/abort
  signal aborts. spawnSdkProcess() never attached an 'error' listener, so
  any async spawn failure became uncaughtException and escaped to the
  daemon-level handler. Attach an 'error' listener immediately after spawn,
  before the !child.pid early-return, so async spawn errors are logged
  (with errno code) and swallowed locally.
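A minimal sketch of the fix, with names taken from the commit message (the real spawnSdkProcess wires stdio, env isolation, and more):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Sketch only: illustrates listener placement, not the full spawn setup.
function spawnSdkProcess(cmd: string, args: string[]): ChildProcess {
  const child = spawn(cmd, args);
  // Attach 'error' immediately, before any early-return on !child.pid:
  // ENOENT/EACCES/abort failures are emitted asynchronously, and with no
  // listener they escape as uncaughtException to the daemon-level handler.
  child.on("error", (err: NodeJS.ErrnoException) => {
    console.error(`[sdk] spawn failed (${err.code ?? "unknown"}): ${err.message}`);
  });
  return child;
}
```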

Bug 2 (sliding-window limiter never trips on slow restart cadence):
  RestartGuard tripped only when restartTimestamps.length exceeded
  MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
  exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
  session stuck in an 8s fail-restart cycle would loop forever
  (consecutiveRestarts climbing past 30+ in observed logs). Add a
  consecutiveFailures counter that increments on every restart and resets
  only on recordSuccess(). Trip when consecutive failures exceed
  MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
  processing in between proves the session is dead. Both guards now run in
  parallel: tight loops still trip the windowed cap; slow loops trip the
  consecutive-failure cap.
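The dual-guard logic can be sketched as below; constants and method names come from the commit text, and the real RestartGuard carries more state:

```typescript
const RESTART_WINDOW_MS = 60_000;
const MAX_WINDOWED_RESTARTS = 10;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuard {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  recordRestart(now: number = Date.now()): void {
    this.restartTimestamps.push(now);
    this.consecutiveFailures += 1;
  }

  recordSuccess(): void {
    this.consecutiveFailures = 0; // only real processing resets the counter
  }

  shouldTrip(now: number = Date.now()): boolean {
    // Guard 1: tight loops — too many restarts inside the sliding window.
    this.restartTimestamps = this.restartTimestamps.filter(
      (t) => now - t < RESTART_WINDOW_MS,
    );
    if (this.restartTimestamps.length > MAX_WINDOWED_RESTARTS) return true;
    // Guard 2: slow loops — restarts with zero successes in between.
    return this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES;
  }
}
```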

Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* perf: streamline worker startup and consolidate database connections

1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
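Item 1's single-connection sharing can be sketched generically — the factory parameter stands in for `new Database(path)` from bun:sqlite, and the class name is illustrative:

```typescript
// Illustrative sketch: one lazily opened handle shared by every store.
class SharedConnection<T> {
  private conn: T | null = null;
  constructor(private readonly open: () => T) {}

  get(): T {
    // First caller opens the connection; all later callers reuse it,
    // so DatabaseManager, SessionStore, and SessionSearch share one fd.
    this.conn ??= this.open();
    return this.conn;
  }
}
```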

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)

* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations

Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.

- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
  before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
  when shouldTrackProject(cwd) is false, so the observer's own hooks
  cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
  boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
  on observations) inline so bundled artifacts (worker-service.cjs,
  context-generator.cjs) stay schema-consistent — without it, the
  ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
  supervisor can actually feed the observer's stdin.

Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.

* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)

Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
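The boundary walk-back can be sketched as a minimal standalone function; the real truncation lives in SessionRoutes, and the constant name is taken from the surrounding commits:

```typescript
const MAX_USER_PROMPT_BYTES = 256 * 1024;

function truncateUtf8(buf: Buffer, maxBytes: number = MAX_USER_PROMPT_BYTES): string {
  if (buf.length <= maxBytes) return buf.toString("utf8");
  let end = maxBytes;
  // If the first excluded byte is a continuation byte (0b10xxxxxx), the cut
  // landed mid-codepoint; walk back to the lead byte and exclude it too, so
  // the decoder never has to emit a U+FFFD replacement character.
  while (end > 0 && (buf[end] & 0xc0) === 0x80) end--;
  return buf.subarray(0, end).toString("utf8");
}
```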

* fix: cross-platform observer-dir containment; clarify SDK stdin pipe

claude-review feedback on PR #2124.

- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
  hard-coded a POSIX separator and missed Windows backslash paths plus any
  trailing-slash variance. Switched to a path.relative-based isWithin()
  helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
  SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
  consumes that pipe; 'ignore' would null it and the null-check below
  would tear the child down on every spawn.
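A sketch of the path.relative-based containment check — the function name comes from the commit, but the real helper may handle edge cases differently:

```typescript
import path from "node:path";

// True when child is parent itself or anywhere inside its subtree.
// path.relative handles separator and trailing-slash variance on both
// POSIX and Windows, unlike a literal startsWith(parent + '/').
function isWithin(parent: string, child: string): boolean {
  const rel = path.relative(parent, child);
  if (rel === "") return true; // same directory
  return (
    rel !== ".." &&
    !rel.startsWith(".." + path.sep) &&
    !path.isAbsolute(rel) // absolute => different drive on Windows
  );
}
```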

* fix: make Stop hook fire-and-forget; remove dead /api/session/end

The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed). Followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.

The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.
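The one-call shape can be sketched as follows; the payload fields are assumptions, and the transport is injected so the sketch stays self-contained:

```typescript
// Sketch: queue the summary and return; no long-poll, no completion await.
type PostFn = (url: string, body: unknown) => Promise<unknown>;

async function stopHook(
  post: PostFn,
  sessionId: string,
  lastAssistantMessage: string,
): Promise<void> {
  // ONE request: the worker drives summarization and completion async.
  await post("http://127.0.0.1:37777/api/sessions/summarize", {
    sessionId,
    lastAssistantMessage,
  }).catch(() => {
    // Worker down: degrade silently; a hook must never block the session.
  });
}
```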

- summarize.ts: drop the /api/session/end long-poll and the trailing
  /api/sessions/complete await; ~40 lines removed; unused
  SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
  SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
  route registration. Drop the now-unused ingestEventBus and
  SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
  comments that referenced the dead endpoint. The IngestEventBus is
  left in place dormant (no listeners) for follow-up cleanup so this
  PR stays focused on the blocker.

Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.

Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* deps: bump all dependencies to latest including majors

Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.

Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
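The pre-attach wiring can be sketched as below, with the handler type generalized from Express so the sketch stays dependency-free; the real Server.listen() differs:

```typescript
import http from "node:http";

// Attach 'error' and 'listening' BEFORE listen() so an immediate
// EADDRINUSE rejects the promise instead of firing into the void.
function listenOrReject(
  handler: http.RequestListener,
  port: number,
): Promise<http.Server> {
  return new Promise((resolve, reject) => {
    const server = http.createServer(handler);
    server.once("error", reject);
    server.once("listening", () => resolve(server));
    server.listen(port, "127.0.0.1");
  });
}
```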

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix: surface real chroma errors and add deep status probe

Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.

Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.

Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: rebuild worker-service bundle to match merged src

Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: address coderabbit feedback on PLAN-fix-mcp-search.md

- replace machine-specific /Users/alexnewman absolute paths with portable
  <repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This commit is contained in:
Alex Newman
2026-04-25 13:37:40 -07:00
committed by GitHub
parent 8ace1d9c84
commit 94d592f212
159 changed files with 18091 additions and 5843 deletions
@@ -0,0 +1,91 @@
# Flowchart: context-injection-engine
## Sources Consulted
- `src/services/worker/http/routes/SearchRoutes.ts:209-249` (handleContextInject)
- `src/services/worker/http/routes/SearchRoutes.ts:258-296` (handleSemanticContext)
- `src/services/context/ContextBuilder.ts:46-186`
- `src/services/context/ContextConfigLoader.ts:17-40`
- `src/services/context/ObservationCompiler.ts:26-189`
- `src/services/context/TokenCalculator.ts:14-78`
- `src/services/context/sections/HeaderRenderer.ts:15-61`
- `src/services/context/sections/TimelineRenderer.ts:21-100`
- `src/services/context/sections/SummaryRenderer.ts:15-65`
- `src/services/context/sections/FooterRenderer.ts:15-42`
- `src/services/context/formatters/AgentFormatter.ts:36-98`
- `src/services/context/formatters/HumanFormatter.ts:35-80`
- `src/services/domain/ModeManager.ts:15-100`
## Happy Path Description
Two-part system. **Route-driven flow** (`/api/context/inject`): GET request with project(s) and `colors=true|false`. Handler parses comma-separated projects (worktree support), imports `generateContext`. ContextBuilder loads mode-specific config (observation types + concepts) from ModeManager, opens SQLite, queries observations and summaries filtered by mode, calculates token economics, and passes raw data to section renderers (Header, Timeline, Summary, Footer). Each renderer branches on `forHuman` — AgentFormatter emits compact markdown for LLMs, HumanFormatter emits ANSI-colored terminal output.
**Semantic flow** (`/api/context/semantic`): POST with user query. Delegates to SearchManager for Chroma similarity, formats top-N as compact markdown with title + narrative. Returns JSON for per-prompt injection.
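Each renderer's `forHuman` branch follows the same shape; a hypothetical header renderer illustrating the split (not the real HeaderRenderer API):

```typescript
// Hypothetical renderer showing the audience split described above.
interface Observation {
  title: string;
}

function renderHeader(obs: Observation[], forHuman: boolean): string {
  return forHuman
    ? `\x1b[1mRecent work (${obs.length})\x1b[0m` // ANSI bold for terminals
    : `## Recent work (${obs.length})`; // compact markdown for LLMs
}
```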
## Mermaid Flowchart
```mermaid
flowchart TD
HTTPInject["GET /api/context/inject<br/>SearchRoutes.ts:209"] --> ExtractParams["Extract projects + colors<br/>SearchRoutes.ts:211-212"]
HTTPSemantic["POST /api/context/semantic<br/>SearchRoutes.ts:258"] --> ExtractParamsSem["Extract q + project + limit<br/>SearchRoutes.ts:259-261"]
ExtractParams --> ParseProjects["Split comma-separated<br/>SearchRoutes.ts:221"]
ParseProjects --> GenerateCtx["generateContext<br/>ContextBuilder.ts:130"]
ExtractParamsSem --> ValidateQuery["len(q) >= 20<br/>SearchRoutes.ts:263"]
ValidateQuery --> SearchMgr["SearchManager.search via Chroma<br/>SearchRoutes.ts:270"]
SearchMgr --> FormatSemantic["Top-N markdown<br/>SearchRoutes.ts:287-293"]
FormatSemantic --> ReturnSemJSON["Return JSON<br/>SearchRoutes.ts:295"]
GenerateCtx --> LoadConfig["loadContextConfig<br/>ContextBuilder.ts:134"]
LoadConfig --> ModeLoad["ModeManager.getActiveMode<br/>ContextConfigLoader.ts:22"]
ModeLoad --> CreateDB["initializeDatabase<br/>ContextBuilder.ts:152"]
CreateDB --> QueryObs["query observations<br/>ContextBuilder.ts:159"]
QueryObs --> ObsMulti{Multi-project worktree?}
ObsMulti -->|Yes| QueryObsMulti["queryObservationsMulti<br/>ObservationCompiler.ts:105"]
ObsMulti -->|No| QueryObsSingle["queryObservations<br/>ObservationCompiler.ts:26"]
QueryObsMulti --> QuerySumm["query summaries<br/>ContextBuilder.ts:162"]
QueryObsSingle --> QuerySumm
QuerySumm --> CheckEmpty{Empty?<br/>ContextBuilder.ts:167}
CheckEmpty -->|Yes| RenderEmptyState["renderEmptyState<br/>ContextBuilder.ts:73"]
CheckEmpty -->|No| BuildCtxOut["buildContextOutput<br/>ContextBuilder.ts:80-122"]
BuildCtxOut --> CalcEcon["calculateTokenEconomics<br/>TokenCalculator.ts:25"]
CalcEcon --> RenderHeader["renderHeader<br/>HeaderRenderer.ts:15"]
RenderHeader --> FormatMode{forHuman?}
FormatMode -->|true| HumanHeader["HumanFormatter<br/>HumanFormatter.ts:35"]
FormatMode -->|false| AgentHeader["AgentFormatter<br/>AgentFormatter.ts:36"]
HumanHeader --> RenderTimeline["renderTimeline<br/>TimelineRenderer.ts"]
AgentHeader --> RenderTimeline
RenderTimeline --> GroupDays["groupTimelineByDay<br/>TimelineRenderer.ts:21"]
GroupDays --> IterateDays[/"For each day"/]
IterateDays --> FormatDay{forHuman?}
FormatDay -->|true| RenderDayHuman["renderDayTimelineHuman<br/>TimelineRenderer.ts:97"]
FormatDay -->|false| RenderDayAgent["renderDayTimelineAgent<br/>TimelineRenderer.ts:56"]
RenderDayAgent --> CheckSummary["shouldShowSummary<br/>SummaryRenderer.ts:15"]
RenderDayHuman --> CheckSummary
CheckSummary --> RenderPrev["renderPreviouslySection<br/>FooterRenderer.ts:15"]
RenderPrev --> JoinLines["Join + trim<br/>ContextBuilder.ts:121"]
JoinLines --> HTTPReturn["Return text/plain<br/>SearchRoutes.ts:247"]
```
## Side Effects
- DB connection opened, closed in finally (ContextBuilder.ts:184).
- Mode state (ModeManager singleton) drives all filtering.
- Read-only — no writes during generation.
- Semantic path queries Chroma; inject path is SQLite-only.
## External Feature Dependencies
**Calls into:** ModeManager, SessionStore (SQLite), SearchManager (semantic path only), SettingsDefaultsManager, timeline-formatting utilities.
**Called by:** lifecycle-hooks (SessionStart context + UserPromptSubmit semantic), `/api/context/inject` clients (viewer UI), transcript-watcher post-session-end refresh.
## Confidence + Gaps
**High:** Route entry points; orchestration pipeline; mode filtering; Agent vs Human formatter split; token economics.
**Gaps:** HumanFormatter ANSI detail; ModeManager deep-merge inheritance; prior-session message extraction. No duplication observed internally — AgentFormatter/HumanFormatter are cleanly separated by audience.
@@ -0,0 +1,90 @@
# Flowchart: http-server-routes
## Sources Consulted
- `src/services/server/Server.ts:1-286`
- `src/services/server/Middleware.ts`
- `src/services/server/ErrorHandler.ts`
- `src/services/worker/http/middleware.ts`
- `src/services/worker/http/BaseRouteHandler.ts`
- All 8 route files under `src/services/worker/http/routes/`
## Route Inventory
| File | Endpoints | Method(s) | Purpose |
|---|---|---|---|
| ViewerRoutes.ts | `/`, `/health`, `/stream` | GET | UI HTML; SSE broadcaster |
| SearchRoutes.ts | `/api/search`, `/api/timeline`, `/api/decisions`, `/api/changes`, `/api/how-it-works`, `/api/search/*`, `/api/context/*` | GET/POST | Search + context injection |
| SessionRoutes.ts | `/sessions/:id/*`, `/api/sessions/*` | POST/GET/DELETE | Session init/observations/summarize/complete |
| DataRoutes.ts | `/api/observations`, `/api/summaries`, `/api/prompts`, `/api/stats`, `/api/projects`, `/api/processing-status`, `/api/pending-queue` | GET/POST/DELETE | Data retrieval + queue mgmt |
| SettingsRoutes.ts | `/api/settings`, `/api/mcp/*`, `/api/branch/*` | GET/POST | Settings + MCP toggle + branch |
| MemoryRoutes.ts | `/api/memory/save` | POST | Manual observation insert |
| CorpusRoutes.ts | `/api/corpus`, `/api/corpus/:name/*` | GET/POST/DELETE | Knowledge corpus CRUD |
| LogsRoutes.ts | `/api/logs`, `/api/logs/clear` | GET/POST | Log retrieval |
| Server.ts core | `/api/health`, `/api/readiness`, `/api/version`, `/api/instructions`, `/api/admin/*` | GET/POST | System health + admin |
## Happy Path Description
Request → middleware chain (JSON parse 5MB → CORS localhost → rate limit 300/min → request logging) → Express router → route handler extends `BaseRouteHandler` (provides `wrapHandler()` catching sync/async errors) → service call (SearchManager, DatabaseManager, etc.) → response (JSON, SSE, HTML). Global `errorHandler` catches uncaught errors. Admin endpoints require localhost.
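The `wrapHandler()` pattern can be sketched with the response surface reduced to a minimum; the real BaseRouteHandler differs:

```typescript
// Minimal res surface for the sketch; Express's Response has far more.
interface MiniRes {
  headersSent: boolean;
  status(code: number): { json(body: unknown): void };
}
type MiniHandler = (req: unknown, res: MiniRes) => unknown | Promise<unknown>;

// Catches both sync throws and async rejections from the handler.
function wrapHandler(fn: MiniHandler) {
  return async (req: unknown, res: MiniRes): Promise<void> => {
    try {
      await fn(req, res);
    } catch (err) {
      if (!res.headersSent) {
        res.status(500).json({ error: "internal_error", message: String(err) });
      }
    }
  };
}
```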
## Mermaid Flowchart
```mermaid
flowchart TD
A([Request on :37777]) --> B["Middleware chain"]
B --> B1["JSON parse 5MB"]
B1 --> B2["CORS localhost"]
B2 --> B3["Rate limit 300/min/IP"]
B3 --> B4["Request logger"]
B4 --> C["Router match"]
C --> D{Route found?}
D -->|No| D1["notFoundHandler 404"]
D -->|Yes| E["Handler"]
E --> F["BaseRouteHandler.wrapHandler"]
F --> G{Try}
G -->|success| H["Service call"]
G -->|error| J["handleError"]
H --> I{Response type?}
I -->|JSON| I1["res.status.json"]
I -->|SSE| I3["text/event-stream<br/>register SSEBroadcaster"]
I -->|HTML| I6["file read + send"]
J --> J1["logger.error"]
J1 --> J2{Headers sent?}
J2 -->|No| J3["JSON error response"]
J2 -->|Yes| J4["Skip"]
I1 --> K([Sent])
I3 --> K
I6 --> K
J3 --> K
D1 --> K
L["Global errorHandler middleware"] --> J
```
## Repeated Patterns (Phase 2 candidates)
1. **Try-catch wrapping:** All routes inherit `BaseRouteHandler.wrapHandler()` — consistent, good.
2. **Validation:** Each route validates query/body **independently** — no shared validator middleware. Duplicated shape.
3. **Service injection:** Constructors accept services — consistent DI.
4. **Response shape:**
- Success: `res.status(200).json({ ... })`
- Error: `{ error, message, code?, details? }`
- 404: `notFoundHandler`
- 500: global errorHandler
5. **SSE is structurally different:** stateful persistent connection; managed by `SSEBroadcaster`.
## Side Effects
- SSE client registration grows connection list until close.
- Rate limiter in-memory IP map.
- Logger writes (stderr, async).
- Admin endpoints: `/api/admin/restart` and `/api/admin/shutdown` call `process.exit(0)`.
- File I/O for `/`, `/api/instructions`, `/api/logs` (synchronous).
## External Feature Dependencies
SearchManager, SessionManager, DatabaseManager, SSEBroadcaster, SettingsManager, BranchManager, ModeManager, CorpusStore/Builder/KnowledgeAgent, logger, AppError, Supervisor/ProcessRegistry.
## Confidence + Gaps
**High:** Middleware order; BaseRouteHandler pattern; error shape; SSE setup.
**Gaps:** No auth/permission middleware (single-machine trust model assumed); validator duplication; blocking synchronous file I/O in `/` and `/api/instructions`; SSE race on connect-mid-broadcast.
@@ -0,0 +1,97 @@
# Flowchart: hybrid-search-orchestration
## Sources Consulted
- `src/services/worker/search/SearchOrchestrator.ts:1-290`
- `src/services/worker/search/strategies/ChromaSearchStrategy.ts:1-120`
- `src/services/worker/search/strategies/SQLiteSearchStrategy.ts:1-120`
- `src/services/worker/search/strategies/HybridSearchStrategy.ts:1-240`
- `src/services/worker/search/ResultFormatter.ts:1-200`
- `src/services/worker/search/TimelineBuilder.ts:1-220`
- `src/services/worker/SearchManager.ts:1-600`
- `src/services/worker/http/routes/SearchRoutes.ts:1-150`
## Happy Path Description
`/api/search` → `SearchRoutes` → `SearchManager.search()` (thin facade) → `SearchOrchestrator` chooses among three strategies:
**Path 1 (Filter-only):** No query text → `SQLiteSearchStrategy` does metadata-only filter via SessionSearch (date range, project, concept/type/file).
**Path 2 (Semantic):** Query text + ChromaSync available → `ChromaSearchStrategy.queryChroma` → filter by recency (90-day default or custom) → categorize by doc type → hydrate from SQLite. If Chroma fails mid-query, orchestrator falls back to filter-only SQLite (drops the query term).
**Path 3 (Hybrid):** `findByConcept|Type|File` specialty methods → `HybridSearchStrategy` two-phase: (1) SQLite metadata filter → all matching IDs; (2) Chroma semantic ranking → re-rank; (3) intersect + hydrate → return metadata-matched IDs in Chroma rank order.
`ResultFormatter` renders markdown tables grouped by date/file. `TimelineBuilder` handles chronological grouping with anchor-based depth filtering.
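Phase 3's intersection can be sketched as below; the name comes from the flowchart, but the real HybridSearchStrategy signature may differ:

```typescript
// Keep only IDs that passed the SQLite metadata filter, ordered by
// Chroma's semantic rank rather than SQLite's insertion order.
function intersectWithRanking(
  metadataIds: number[],
  chromaRankedIds: number[],
): number[] {
  const matched = new Set(metadataIds);
  return chromaRankedIds.filter((id) => matched.has(id));
}
```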
## Mermaid Flowchart
```mermaid
flowchart TD
A["GET /api/search<br/>SearchRoutes.ts:22"] --> B["SearchManager.search<br/>SearchManager.ts:161"]
B --> C["SearchOrchestrator.search<br/>SearchOrchestrator.ts:71"]
C --> D{Decision<br/>SearchOrchestrator.ts:81}
D -->|no query| E["SQLiteStrategy.search<br/>SQLiteSearchStrategy.ts:38"]
D -->|query + Chroma| F["ChromaStrategy.search<br/>ChromaSearchStrategy.ts:42"]
D -->|no Chroma| G["Return empty<br/>SearchOrchestrator.ts:115"]
E --> E1["SessionSearch.searchObservations/Sessions/Prompts"]
E1 --> E4["StrategySearchResult<br/>SearchOrchestrator.ts:98"]
F --> F1["ChromaSync.queryChroma<br/>ChromaSearchStrategy.ts:104"]
F1 --> F3["filterByRecency 90d<br/>SearchOrchestrator.ts:119"]
F3 --> F4["categorizeByDocType<br/>SearchOrchestrator.ts:120"]
F4 --> F5["hydrate from SQLite"]
F5 --> F6["StrategySearchResult usedChroma=true"]
F --> F7[/Error?/]
F7 -->|yes| F8["SQLiteStrategy fallback<br/>SearchOrchestrator.ts:102"]
F8 --> E4_Fallback["fellBack=true<br/>SearchOrchestrator.ts:107"]
E4 --> H["SearchManager formats<br/>SearchManager.ts:320-444"]
E4_Fallback --> H
F6 --> H
G --> H
H --> Hfmt{format?}
Hfmt -->|json| H1["Raw JSON"]
Hfmt -->|markdown| H2["ResultFormatter.formatSearchResults<br/>ResultFormatter.ts:25"]
H2 --> H3["combineResults<br/>ResultFormatter.ts:115"]
H3 --> H4["groupByDate<br/>ResultFormatter.ts:49"]
H4 --> H5["groupByFile<br/>ResultFormatter.ts:61"]
H5 --> H9["Markdown tables"]
J["findByConcept/Type/File<br/>SearchOrchestrator.ts:126-180"] --> K["HybridStrategy<br/>HybridSearchStrategy.ts:26"]
K --> K1["Phase 1: SessionSearch metadata filter<br/>HybridSearchStrategy.ts:74/112/152"]
K1 --> K2["Phase 2: ChromaSync.queryChroma<br/>HybridSearchStrategy.ts:180/208"]
K2 --> K3["Phase 3: intersectWithRanking<br/>HybridSearchStrategy.ts:228"]
K3 --> K4["hydrate SQLite<br/>HybridSearchStrategy.ts:188"]
K4 --> K5["StrategySearchResult usedChroma=true"]
L["TimelineBuilder.buildTimeline<br/>TimelineBuilder.ts:46"] --> L1["Unify obs/sessions/prompts"]
L1 --> L2["filterByDepth<br/>TimelineBuilder.ts:73"]
L2 --> L3["formatTimeline<br/>TimelineBuilder.ts:124"]
```
## Side Effects
- Chroma unavailability → fallback to filter-only SQLite (drops query text).
- Default 90-day recency filter unless `dateRange` is explicit.
- HybridStrategy errors → metadata-only results with `fellBack=true`.
- SearchManager normalizes comma-separated URL params → arrays.
## External Feature Dependencies
**Calls into:** ChromaSync, SessionSearch (SQLite FTS5), SessionStore (hydration), ModeManager (type icons), timeline-formatting helpers.
**Called by:** Search routes, mem-search skill, CorpusBuilder (via SearchOrchestrator).
## Important Clarification: SearchManager vs SearchOrchestrator
- **SearchOrchestrator** is the canonical strategy coordinator introduced in Jan 2026 monolith refactor.
- **SearchManager** is a **thin facade** delegating to SearchOrchestrator, plus HTTP/display wrapping.
- **NOT duplicates.** But SearchManager retains legacy private methods (`queryChroma`, `searchChromaForTimeline` marked `@deprecated`) — candidates for cleanup.
## Confidence + Gaps
**High:** Three paths + fallback chains; SearchManager is thin facade; TimelineBuilder is standalone formatter.
**Gaps:** Pagination enforcement across strategies; CorpusBuilder's exact call into SearchOrchestrator; deprecated SearchManager methods still present.
@@ -0,0 +1,87 @@
# Flowchart: knowledge-corpus-builder
## Sources Consulted
- `src/services/worker/knowledge/CorpusBuilder.ts:1-174`
- `src/services/worker/knowledge/KnowledgeAgent.ts:1-284`
- `src/services/worker/knowledge/CorpusRenderer.ts:1-133`
- `src/services/worker/knowledge/CorpusStore.ts:1-127`
- `src/services/worker/http/routes/CorpusRoutes.ts:1-284`
- `src/services/worker/search/SearchOrchestrator.ts:1-80`
- `src/services/worker/search/ResultFormatter.ts:1-100`
- `src/services/context/formatters/AgentFormatter.ts:1-100`
## Happy Path Description
`POST /api/corpus` → `handleBuildCorpus` → `CorpusBuilder.build()` maps filters to `SearchOrchestrator.search()` → extract IDs → `SessionStore.getObservationsByIds()` hydrates full records → map to `CorpusObservation` → compute stats (type breakdown, date range) → `CorpusRenderer.generateSystemPrompt()` → `CorpusRenderer.renderCorpus()` produces full-detail markdown → persist to `~/.claude-mem/corpora/{name}.corpus.json` via `CorpusStore.write`.
`POST /api/corpus/:name/prime` → `KnowledgeAgent.prime()` → render full corpus text + system prompt → pass to Claude Agent SDK `query()` → capture `session_id` → persist in corpus.json.
`POST /api/corpus/:name/query` → `KnowledgeAgent.query()` resumes SDK session by id, agent answers from corpus context, auto-reprimes on expiration.
## Mermaid Flowchart
```mermaid
flowchart TD
A["POST /api/corpus<br/>CorpusRoutes.ts:43"] --> B["handleBuildCorpus"]
B --> C["CorpusBuilder.build<br/>CorpusBuilder.ts:50"]
C --> D["SearchOrchestrator.search<br/>CorpusBuilder.ts:64"]
D --> E["SessionStore.getObservationsByIds<br/>CorpusBuilder.ts:82"]
E --> F["mapObservationToCorpus<br/>CorpusBuilder.ts:126"]
F --> G["calculateStats<br/>CorpusBuilder.ts:146"]
G --> H["CorpusRenderer.generateSystemPrompt<br/>CorpusBuilder.ts:109"]
H --> I["CorpusRenderer.renderCorpus (estimate tokens)<br/>CorpusBuilder.ts:112"]
I --> J["CorpusStore.write<br/>CorpusBuilder.ts:116"]
J --> K[(~/.claude-mem/corpora/{name}.corpus.json<br/>CorpusStore.ts:14)]
L1["GET /api/corpus/:name"] --> L3["CorpusStore.read<br/>CorpusStore.ts:39"]
L3 --> K
M["POST /api/corpus/:name/prime<br/>CorpusRoutes.ts:213"] --> N["KnowledgeAgent.prime<br/>KnowledgeAgent.ts:58"]
N --> P["CorpusRenderer.renderCorpus<br/>CorpusRenderer.ts:14"]
P --> Q["Claude Agent SDK query<br/>KnowledgeAgent.ts:75"]
Q --> R["session_id captured<br/>KnowledgeAgent.ts:89"]
R --> S["CorpusStore.write update session_id<br/>KnowledgeAgent.ts:114"]
T["POST /api/corpus/:name/query<br/>CorpusRoutes.ts:235"] --> V["KnowledgeAgent.query<br/>KnowledgeAgent.ts:125"]
V --> W["Agent SDK resume session_id<br/>KnowledgeAgent.ts:190-200"]
W --> X{Session expired?}
X -->|Yes| Y["auto-reprime<br/>KnowledgeAgent.ts:148"]
X -->|No| Z["Return answer"]
AA["POST /api/corpus/:name/rebuild"] --> C
AB["POST /api/corpus/:name/reprime"] --> N
AC["DELETE /api/corpus/:name"] --> AD["CorpusStore.delete<br/>CorpusStore.ts:94"]
```
## Side Effects
- Writes `{name}.corpus.json` in `~/.claude-mem/corpora/`.
- Spawns Claude Agent SDK subprocess for prime/query.
- Creates `OBSERVER_SESSIONS_DIR` if absent.
- Environment isolation via `buildIsolatedEnv`.
## External Feature Dependencies
**Calls into:** SearchOrchestrator (strategy routing), SessionStore (hydration), Anthropic Claude Agent SDK, SettingsDefaultsManager, ChromaSync (indirect through hybrid).
**Called by:** CorpusRoutes HTTP endpoints; knowledge-agent skill (external).
## Potential Duplication Noted
**CorpusRenderer vs ResultFormatter vs AgentFormatter** — all three produce markdown from observations:
| Renderer | Audience | Density | Grouping |
|---|---|---|---|
| ResultFormatter | CLI search results | Compact table rows | Date/file |
| AgentFormatter | Session context injection | Compact per-line | Day timeline |
| CorpusRenderer | Agent priming corpus | FULL DETAIL narrative-first | List or chronological |
**No direct code reuse** but all three independently iterate observations and format markdown. Consolidating on a shared rendering interface (base class or strategy) could reduce surface area if output configurations overlap.
**Search logic NOT duplicated** — CorpusBuilder correctly delegates to SearchOrchestrator.
## Confidence + Gaps
**High:** Build → prime → query flow; 8 HTTP endpoints; session reprime on expiration.
**Gaps:** Exact "session expired" detection (regex match at KnowledgeAgent.ts:179); token heuristic (chars/4 at CorpusRenderer.ts:91); no quota enforcement for corpus count/size.
@@ -0,0 +1,128 @@
# Flowchart: lifecycle-hooks
## Sources Consulted
- `src/cli/hook-command.ts:1-122`
- `src/cli/handlers/index.ts:1-72`
- `src/cli/handlers/context.ts:1-95` (SessionStart)
- `src/cli/handlers/session-init.ts:1-192` (UserPromptSubmit)
- `src/cli/handlers/observation.ts:1-86` (PostToolUse)
- `src/cli/handlers/summarize.ts:1-170` (Stop / Summary phase)
- `src/cli/handlers/session-complete.ts:1-66` (Stop / Completion phase)
- `src/cli/handlers/user-message.ts:1-54` (SessionStart parallel)
- `src/cli/adapters/claude-code.ts:1-45`
- `src/hooks/hook-response.ts:1-12`
- `src/shared/hook-constants.ts:1-35`
- `src/services/worker-service.ts:1-100`
- `src/supervisor/index.ts:1-100`
- `src/services/worker/http/routes/SessionRoutes.ts:1-330`
- `src/services/worker/http/routes/SearchRoutes.ts:1-150`
- `src/services/infrastructure/GracefulShutdown.ts:1-100`
- `src/supervisor/process-registry.ts:1-80`
- `src/services/worker-spawner.ts:1-150`
## Happy Path Description
Claude-Mem's lifecycle-hooks system intercepts Claude Code's session lifecycle events and routes them through specialized handlers that coordinate session tracking, tool observation capture, semantic context injection, and session summarization.
**SessionStart** fires immediately when a session begins. The **context handler** ensures the worker daemon is running, queries the Chroma vector database for relevant past observations, and returns them as `additionalContext` for injection into Claude's prompt. In parallel, **user-message** displays formatted context information to the user's terminal and broadcasts the worker's live dashboard URL. Both handlers gracefully degrade if the worker is unavailable.
**UserPromptSubmit** fires when the user submits their first prompt. The **session-init handler** calls `/api/sessions/init` to create a session record in the database, captures the prompt, checks privacy settings, and optionally starts the Claude SDK agent. If semantic injection is enabled, it fetches relevant observations via `/api/context/semantic` and injects them as additional context alongside the user's prompt.
**PostToolUse** fires after Claude executes each tool. The **observation handler** sends the tool usage (name, input, response) to `/api/sessions/observations` where the worker validates privacy rules, enriches the observation with cwd/platform metadata, stores it in SQLite, and queues an async Chroma embedding for semantic search.
**Stop** hook fires when a session ends. This is split into two phases with different timing guarantees: **summarize handler** queues the session's final assistant message to `/api/sessions/summarize` and then polls `/api/sessions/status` to wait (up to 110s) for the SDK agent to finish processing the summary, then calls `/api/sessions/complete`. The **session-complete handler** (phase 2) marks the session inactive in the sessions map.
## Mermaid Flowchart
```mermaid
flowchart TD
Start([Claude Code Session<br/>Lifecycle Event]) --> Dispatch{Event Type?<br/>hook-command.ts:88}
Dispatch -->|SessionStart| CtxSetup["ensureWorkerRunning<br/>worker-spawner.ts:100"]
Dispatch -->|UserPromptSubmit| InitSetup["ensureWorkerRunning<br/>worker-spawner.ts:100"]
Dispatch -->|PostToolUse| ObsSetup["ensureWorkerRunning<br/>worker-spawner.ts:100"]
Dispatch -->|Stop| SumSetup["Check if subagent<br/>summarize.ts:34"]
CtxSetup -->|Worker unavailable| CtxEmpty["Return empty context<br/>context.ts:44-46"]
CtxSetup -->|Worker ready| CtxFetch["Fetch /api/context/inject<br/>context.ts:54-56"]
CtxFetch --> CtxInject["Return additionalContext<br/>context.ts:88-93"]
CtxInject --> UMsgStart["userMessageHandler parallel<br/>user-message.ts:32"]
UMsgStart --> UMsgFetch["GET /api/context/inject (colors)<br/>user-message.ts:13-29"]
UMsgFetch --> UMsgDisplay["Write formatted ctx to stderr<br/>user-message.ts:24-28"]
InitSetup --> InitGuard["Validate session + cwd + project<br/>session-init.ts:51-61"]
InitGuard --> InitCall["POST /api/sessions/init<br/>session-init.ts:75-84"]
InitCall --> InitProcess["Receive sessionDbId + promptNumber<br/>session-init.ts:97-106"]
InitProcess --> InitSDK["POST /sessions/{id}/init start SDK<br/>session-init.ts:141-150"]
InitSDK --> InitSemantic["Semantic injection enabled?<br/>session-init.ts:158-159"]
InitSemantic -->|Yes| SemanticFetch["POST /api/context/semantic<br/>session-init.ts:164-165"]
SemanticFetch --> SemanticInject["Return additionalContext<br/>session-init.ts:179-188"]
ObsSetup --> ObsGuard["Validate toolName + cwd + not excluded<br/>observation.ts:40-62"]
ObsGuard --> ObsSend["POST /api/sessions/observations<br/>observation.ts:65-77"]
ObsSend --> ObsDB["Worker stores + queues Chroma embed<br/>SessionRoutes.ts:30"]
SumSetup -->|Not subagent| SumEnsure["ensureWorkerRunning<br/>summarize.ts:44"]
SumEnsure --> SumValidate["Extract last assistant msg<br/>summarize.ts:50-78"]
SumValidate --> SumQueue["POST /api/sessions/summarize<br/>summarize.ts:86-104"]
SumQueue --> SumPoll["Poll /api/sessions/status 500ms up to 110s<br/>summarize.ts:117-150"]
SumPoll --> SumComplete["POST /api/sessions/complete<br/>summarize.ts:156-161"]
SumComplete --> SessionComplete["sessionCompleteHandler phase 2<br/>session-complete.ts:32"]
SessionComplete --> SCSend["POST /api/sessions/complete<br/>remove from active map<br/>session-complete.ts:54"]
CtxEmpty --> Done([Exit code 0<br/>hook-command.ts:106])
UMsgDisplay --> Done
SemanticInject --> Done
ObsDB --> Done
SCSend --> Done
```
## Side Effects
**HTTP Calls to Worker (port 37777):**
- `GET /api/context/inject` — returns markdown context for injection
- `POST /api/sessions/init` — creates session record, returns sessionDbId
- `POST /api/context/semantic` — semantic search on Chroma
- `POST /sessions/{sessionDbId}/init` — starts SDK agent
- `POST /api/sessions/observations` — stores tool usage observation
- `POST /api/sessions/summarize` — queues summary generation
- `GET /api/sessions/status` — polls queue length
- `POST /api/sessions/complete` — marks session inactive
**Database (SQLite via worker):**
- Inserts into `sdk_sessions`, `user_prompts`, `observations`
- Updates `sdk_sessions.summary` with `summary_stored` flag
**Process Management:**
- `ensureWorkerRunning` spawns the worker daemon via `spawnDaemon` if it is not alive
- SDK agent subprocess spawned per session
- Summarize handler waits up to 110s for SDK agent to finish
**File I/O:**
- Worker PID file at `~/.claude-mem/worker.pid`
- Hook logs at `~/.claude-mem/logs/hook.log`
## External Feature Dependencies
**Calls into:**
- **context-injection-engine** (via `/api/context/inject`, `/api/context/semantic`)
- **sqlite-persistence** (all writes via worker HTTP)
- **vector-search-sync** (async Chroma embeds)
- **session-lifecycle-management** (session state, SDK subprocess)
- **privacy-tag-filtering** (observation content filtered before storage)
- **http-server-routes** (all HTTP communication)
**Called by:**
- Claude Code CLI plugin harness (registered hooks)
- Cursor IDE (routed through observation handler)
- Gemini CLI / OpenRouter adapters
## Confidence + Gaps
**High Confidence:** Hook lifecycle → handler mapping; HTTP endpoints + payloads; graceful degradation on worker unavailability; exit code 0 strategy.
**Medium Confidence:** Exact SDK agent lifecycle and crash recovery; Cursor hook integration paths.
**Gaps:** Hook installer (how hooks register in Claude Code settings); TypeScript build → CLI entry process.
# Flowchart: privacy-tag-filtering
## Sources Consulted
- `src/utils/tag-stripping.ts:1-92`
- `src/services/worker/http/routes/SessionRoutes.ts:1-900`
- `src/services/worker/SessionManager.ts:270-360`
- `src/services/sqlite/PendingMessageStore.ts:1-100`
- `src/cli/handlers/summarize.ts:1-150`
- `src/shared/transcript-parser.ts:1-130`
## Happy Path Description
User submits a prompt containing `<private>` tags via hook → Worker HTTP endpoint `/api/sessions/init` receives request → `SessionRoutes.handleSessionInitByClaudeId` (line 814) validates and extracts the prompt. At line 862, `stripMemoryTagsFromPrompt()` is called, which invokes `stripTagsInternal()` to remove six tag types: `<claude-mem-context>`, `<private>`, `<system_instruction>`, `<system-instruction>`, `<persisted-output>`, and `<system-reminder>`. The cleaned prompt is saved to `user_prompts`. Concurrently, tool observations flow through `handleObservationsByClaudeId` (line 565), where `tool_input` and `tool_response` are stringified and stripped via `stripMemoryTagsFromJson()` (lines 629, 633), then queued to `PendingMessageStore` as already-cleaned data.
Stripping occurs BEFORE persistence, ensuring the database never receives unfiltered content. However, the **assistant-message summarize path** only strips `<system-reminder>` at extraction time (summarize.ts:66), not the full suite — a known gap.
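The stripping pass can be sketched as below. The six tag names, the per-tag regex passes, the `MAX_TAG_COUNT` guard, and the trailing trim all come from this document; the function body itself is an illustrative re-implementation, not the real `tag-stripping.ts` code.

```typescript
// Illustrative re-implementation of the stripping pass described above.
// Tag names and MAX_TAG_COUNT come from the docs; the real code lives
// in src/utils/tag-stripping.ts.
const STRIPPED_TAGS = [
  "claude-mem-context",
  "private",
  "system_instruction",
  "system-instruction",
  "persisted-output",
  "system-reminder",
];
const MAX_TAG_COUNT = 100; // ReDoS guard threshold

function stripTagsInternal(text: string): string {
  // Cheap pre-count (opening brackets as a proxy) before running regexes.
  const openBrackets = (text.match(/</g) ?? []).length;
  if (openBrackets > MAX_TAG_COUNT) {
    console.warn(`tag count ${openBrackets} exceeds ${MAX_TAG_COUNT}`);
  }
  let out = text;
  for (const tag of STRIPPED_TAGS) {
    // One non-greedy pass per tag type, mirroring the real structure.
    out = out.replace(new RegExp(`<${tag}>[\\s\\S]*?</${tag}>`, "g"), "");
  }
  return out.trim(); // whitespace trim after all replacements
}
```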
## Mermaid Flowchart
```mermaid
flowchart TD
Start([User prompt with tags<br/>SessionRoutes.ts:814]) --> Init["handleSessionInitByClaudeId<br/>SessionRoutes.ts:814"]
Start2([Tool invocation completes<br/>SessionRoutes.ts:565]) --> ObsRoute["handleObservationsByClaudeId<br/>SessionRoutes.ts:565"]
Start3([Session stops, summarize<br/>summarize.ts:66]) --> Extract["extractLastMessage stripSystemReminders=true<br/>summarize.ts:66"]
Init --> StripPrompt["stripMemoryTagsFromPrompt<br/>SessionRoutes.ts:862"]
StripPrompt --> StripInternal1["stripTagsInternal (all 6 tags)<br/>tag-stripping.ts:51"]
StripInternal1 --> RemoveTags1["Remove private, claude-mem-context,<br/>system_instruction, system-reminder,<br/>persisted-output, system-instruction<br/>tag-stripping.ts:53-59"]
RemoveTags1 --> CheckEmpty{Empty?<br/>SessionRoutes.ts:865}
CheckEmpty -->|Yes| SkipPrivate["Return skipped=true<br/>SessionRoutes.ts:872"]
CheckEmpty -->|No| SavePrompt["saveUserPrompt<br/>SessionRoutes.ts:882"]
SavePrompt --> DBPrompt["INSERT user_prompts<br/>SessionStore.ts"]
ObsRoute --> ExtractObs["Extract tool_input, tool_response<br/>SessionRoutes.ts:587"]
ExtractObs --> StripInput["stripMemoryTagsFromJson input<br/>SessionRoutes.ts:629"]
StripInput --> StripInternal2["stripTagsInternal<br/>tag-stripping.ts:51"]
StripInternal2 --> StripResponse["stripMemoryTagsFromJson response<br/>SessionRoutes.ts:633"]
StripResponse --> StripInternal3["stripTagsInternal<br/>tag-stripping.ts:51"]
StripInternal3 --> QueueObs["queueObservation<br/>SessionRoutes.ts:637"]
QueueObs --> EnqueueDB["PendingMessageStore.enqueue<br/>PendingMessageStore.ts:63"]
EnqueueDB --> DBObs["pending_messages cleaned"]
Extract --> PartialStrip["SYSTEM_REMINDER_REGEX only<br/>shared/transcript-parser.ts:84"]
PartialStrip --> SummarizeRoute["handleSummarizeByClaudeId<br/>SessionRoutes.ts:669"]
SummarizeRoute --> QueueSum["queueSummarize last_assistant_message<br/>SessionRoutes.ts:705"]
QueueSum --> PendingSum["pending_messages with INCOMPLETE strip"]
style PartialStrip fill:#fff9c4
style PendingSum fill:#fff9c4
style StripPrompt fill:#c8e6c9
style StripInput fill:#c8e6c9
style StripResponse fill:#c8e6c9
```
## Call Sites Inventory
| Location | Function | Data Protected | Tag Types | Entry |
|---|---|---|---|---|
| `SessionRoutes.ts:862` | `stripMemoryTagsFromPrompt()` | User prompts | All 6 | handleSessionInitByClaudeId |
| `SessionRoutes.ts:629` | `stripMemoryTagsFromJson()` | Tool inputs | All 6 | handleObservationsByClaudeId |
| `SessionRoutes.ts:633` | `stripMemoryTagsFromJson()` | Tool responses | All 6 | handleObservationsByClaudeId |
| `transcript-parser.ts:84` | `SYSTEM_REMINDER_REGEX` | None (read-time) | system-reminder only | Context extraction |
| `transcript-parser.ts:128` | `SYSTEM_REMINDER_REGEX` | None (read-time) | system-reminder only | Context extraction |
| `summarize.ts:66` | `extractLastMessage(..., true)` | Assistant msgs (summary path) | system-reminder only | Hook summarize handler |
| `SessionRoutes.ts:378` (LEGACY) | `handleObservations()` | Tool observations | **NONE** | Unused endpoint |
## Side Effects
- **ReDoS protection**: counts tags before regex, warns if > MAX_TAG_COUNT=100 (tag-stripping.ts:56-60).
- **Whitespace trim** after all replacements (tag-stripping.ts:65).
- **Multiple regex passes** — one per tag type. Could be unified.
## External Feature Dependencies
- **PrivacyCheckValidator** (SessionRoutes.ts:614) — after stripping, validates empty-result handling.
- **PendingMessageStore** — receives pre-cleaned data; no re-strip.
- **ResponseProcessor** — consumes pending messages; no re-strip.
- **ChromaSync** — operates on already-sanitized text from DB.
## Confidence + Gaps
**High confidence:** User prompts + tool observations fully stripped before DB write; ReDoS protection active.
**Known gaps:**
1. Assistant messages in summary path only strip `<system-reminder>`, not full suite (summarize.ts:66, SessionRoutes.ts:669).
2. Legacy endpoint `SessionRoutes.ts:378` has no stripping — stale route.
3. `stripTagsInternal` is called from two public wrappers (`stripMemoryTagsFromPrompt`, `stripMemoryTagsFromJson`) that differ only by caller context — minor DRY violation.
# Flowchart: response-parsing-storage
## Sources Consulted
- `src/services/worker/agents/ResponseProcessor.ts:49` (processAgentResponse)
- `src/sdk/parser.ts:1` (parseObservations, parseSummary, helpers)
- `src/services/worker/agents/ObservationBroadcaster.ts`
- `src/services/worker/agents/SessionCleanupHelper.ts`
- `src/services/sqlite/SessionStore.ts:1916` (storeObservations atomic)
- `src/services/worker/SDKAgent.ts`, `OpenRouterAgent.ts`, `GeminiAgent.ts` (callers)
- `src/services/sqlite/PendingMessageStore.ts`
## Happy Path Description
Agent returns final assistant text → `parseObservations` extracts `<observation>` blocks via regex, validates types, filters empty observations → `parseSummary` extracts `<summary>` (fallback coercion from observations if summary missing and `summaryExpected=true`) → ResponseProcessor detects non-XML responses (auth errors, garbage) and fails early → atomic transaction wraps both observation and summary storage with content-hash dedup → `confirmProcessed` deletes pending message (only AFTER commit) → SSE broadcasts observations + summaries → Chroma sync fire-and-forget → SessionCleanupHelper resets timestamp and broadcasts status → RestartGuard records success.
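The regex-extraction core of this pipeline can be sketched as follows. This is a simplified stand-in: the real `parseObservations` in `src/sdk/parser.ts` also validates types against ModeManager and handles more fields, and the `ParsedObservation` shape here is trimmed down for illustration.

```typescript
// Simplified sketch of <observation> block extraction. The real parser
// (src/sdk/parser.ts) validates types vs ModeManager and extracts more
// fields; this shows only the regex core plus ghost-observation skipping.
interface ParsedObservation {
  type: string;
  title: string;
}

function parseObservations(text: string): ParsedObservation[] {
  const results: ParsedObservation[] = [];
  const blockRe = /<observation>([\s\S]*?)<\/observation>/g;
  for (const [, body] of text.matchAll(blockRe)) {
    // extractField-style non-greedy single-field match.
    const field = (name: string) =>
      body.match(new RegExp(`<${name}>([\\s\\S]*?)</${name}>`))?.[1].trim() ?? "";
    const obs = { type: field("type"), title: field("title") };
    if (obs.type || obs.title) results.push(obs); // skip empty ("ghost") blocks
  }
  return results;
}
```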
## Mermaid Flowchart
```mermaid
flowchart TD
A([Agent Returns Text<br/>SDKAgent.ts:266 / OpenRouterAgent.ts / GeminiAgent.ts]) --> B["processAgentResponse<br/>ResponseProcessor.ts:49"]
B --> C["Track lastGeneratorActivity"]
C --> D["Add to conversationHistory"]
D --> E["parseObservations<br/>parser.ts:33"]
E --> E1["Regex &lt;observation&gt; blocks"]
E1 --> E2["extractField / extractArrayElements"]
E2 --> E3["Validate type vs ModeManager"]
E3 --> E4["Skip ghost observations"]
E4 --> E6["ParsedObservation[]"]
D --> F["parseSummary<br/>parser.ts:122"]
F --> F1["Check &lt;skip_summary/&gt;"]
F1 --> F2["Regex &lt;summary&gt; block"]
F2 --> F5["coerceObservationToSummary fallback<br/>parser.ts:222"]
F5 --> F7["ParsedSummary or null"]
E6 --> G{Non-XML response?<br/>no tags + no obs}
F7 --> G
G -->|Yes| G2["Mark processingMessageIds FAILED"]
G2 --> G3([Return early])
G -->|No| H["Normalize null → empty string"]
H --> K["ATOMIC TX<br/>sessionStore.storeObservations<br/>SessionStore.ts:1916"]
K --> K1["computeContentHash"]
K1 --> K2["findDuplicateObservation 30s window"]
K2 --> K3["INSERT observations (or reuse id)"]
K3 --> K5["INSERT session_summaries if present"]
K5 --> K6["Return ids + epoch"]
K6 --> N["Circuit breaker: consecutiveSummaryFailures"]
N --> O["CLAIM-CONFIRM<br/>pendingStore.confirmProcessed each id"]
O --> O3["session.restartGuard.recordSuccess"]
O3 --> Q["syncAndBroadcastObservations<br/>ResponseProcessor.ts:270"]
Q --> Q1["getChromaSync().syncObservation FnF"]
Q1 --> Q2["worker.broadcastObservation SSE"]
Q2 --> Q3["Update folder CLAUDE.md if enabled"]
O3 --> R["syncAndBroadcastSummary<br/>ResponseProcessor.ts:363"]
R --> R1["syncSummary FnF"]
R1 --> R2["broadcastSummary SSE"]
Q3 --> S["cleanupProcessedMessages<br/>SessionCleanupHelper.ts:26"]
R2 --> S
S --> S1["Reset earliestPendingTimestamp"]
S1 --> S2["broadcastProcessingStatus"]
S2 --> T([End])
```
## Parsing Inventory
| Parser | Location | Tags | Notes |
|---|---|---|---|
| `parseObservations` | parser.ts:33 | `<observation>`, `<type>`, `<title>`, `<subtitle>`, `<narrative>`, `<facts>`, `<concept>`, `<files_read>`, `<files_modified>` | Validates types vs ModeManager; filters empty |
| `parseSummary` | parser.ts:122 | `<summary>`, `<skip_summary/>`, `<request>`, `<investigated>`, `<learned>`, `<completed>`, `<next_steps>`, `<notes>` | Skip-marker first; false-positive detection |
| `coerceObservationToSummary` | parser.ts:222 | obs → summary mapping | Fallback when summary missing + expected (#1633) |
| `extractField` | parser.ts:267 | Generic `<X>...</X>` | Non-greedy regex handles nested tags |
| `extractArrayElements` | parser.ts:282 | Generic `<Arr><Elem>...</Elem></Arr>` | Non-greedy, trims empties |
**Single parser architecture.** All XML parsing goes through `src/sdk/parser.ts`; there are no duplicate parsing layers.
## Side Effects
- Message queue cleanup via `confirmProcessed` (DELETE after commit).
- Chroma sync async fire-and-forget.
- SSE broadcasting to web UI.
- CLAUDE.md folder sync (feature-flagged).
- Session state tracking: `lastGeneratorActivity`, `lastSummaryStored`, `consecutiveSummaryFailures`, `restartGuard` metrics.
## External Feature Dependencies
**Calls into:** ModeManager (type validation), SettingsDefaultsManager, ChromaSync, SSEBroadcaster, PendingMessageStore, SessionStore.
**Called by:** SDKAgent, OpenRouterAgent, GeminiAgent (all agent providers).
## Confidence + Gaps
**High:** Single parser; atomic transaction; claim-confirm ordering; non-XML early-fail; coercion fallback.
**Gaps:** Chroma sync error propagation specifics; CLAUDE.md update error paths; content-hash window boundary conditions.
# Flowchart: session-lifecycle-management
## Sources Consulted
- `src/services/worker/SessionManager.ts:1-678`
- `src/services/worker/ProcessRegistry.ts:1-528`
- `src/services/queue/SessionQueueProcessor.ts:1-149`
- `src/services/sqlite/PendingMessageStore.ts:1-150`
- `src/supervisor/process-registry.ts:175-409`
- `src/services/worker-service.ts:173-174, 508-560, 1100-1111`
## Happy Path Description
1. HTTP request (SessionRoutes) triggers `SessionManager.initializeSession(sessionDbId)` (SessionManager.ts:118).
2. ActiveSession created in-memory with AbortController; stale memorySessionId cleared from DB (205-235).
3. SDK subprocess spawned via `createPidCapturingSpawn` → registered in supervisor ProcessRegistry (393, 57, supervisor/process-registry.ts:223).
4. Observations persisted to `PendingMessageStore` (claim-confirm) before processing (SessionManager.ts:276, PendingMessageStore.ts:63).
5. `SessionQueueProcessor.createIterator` yields messages via EventEmitter; resets stale-processing >60s on claim (SessionQueueProcessor.ts:32, PendingMessageStore.ts:99).
6. SDKAgent consumes iterator, updates `lastGeneratorActivity` per yield (SessionManager.ts:666).
7. Messages confirmed only after successful DB commit (prevents loss on crash).
8. Idle timeout (3 min) → `onIdleTimeout` → `session.abortController.abort()` → generator exits → session deleted (SessionManager.ts:651-655, 381).
9. Stuck-generator detection (5 min inactive) → `reapStaleSessions` SIGKILLs subprocess (516-568, 535).
10. Orphan reaper (30s) cleans dead sessions + system orphans + idle daemon children (ProcessRegistry.ts:349).
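The claim-confirm discipline in steps 4–7 can be sketched with an in-memory stand-in. The real `PendingMessageStore` backs this with a SQLite `pending_messages` table; the statuses, the 60s stale reset, and confirm-after-commit ordering mirror this document, while the class itself is illustrative.

```typescript
// In-memory sketch of the claim-confirm queue (steps 4-7). The real
// PendingMessageStore uses SQLite; statuses and the 60s stale reset
// mirror the docs.
type Status = "pending" | "processing";
interface Pending { id: number; payload: string; status: Status; claimedAt?: number }

const STALE_MS = 60_000;

class PendingQueue {
  private rows: Pending[] = [];
  private nextId = 1;

  enqueue(payload: string): number {
    const id = this.nextId++;
    this.rows.push({ id, payload, status: "pending" });
    return id;
  }

  // Self-heal: anything stuck in "processing" longer than STALE_MS is
  // reset to "pending" before the next claim.
  claimNext(now = Date.now()): Pending | undefined {
    for (const r of this.rows) {
      if (r.status === "processing" && now - (r.claimedAt ?? now) > STALE_MS) {
        r.status = "pending";
      }
    }
    const next = this.rows.find((r) => r.status === "pending");
    if (next) { next.status = "processing"; next.claimedAt = now; }
    return next;
  }

  // Called only AFTER the downstream DB commit succeeds.
  confirmProcessed(id: number): void {
    this.rows = this.rows.filter((r) => r.id !== id);
  }

  get size(): number { return this.rows.length; }
}
```

A crash between claim and confirm leaves the row in `processing`; the stale reset then returns it to `pending`, which is what prevents message loss.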
## Mermaid Flowchart
```mermaid
flowchart TD
A["SessionRoutes triggers init"] --> B["SessionManager.initializeSession<br/>SessionManager.ts:118"]
B --> C{In memory?}
C -->|Yes| D["Return cached"]
C -->|No| E["Create ActiveSession<br/>SessionManager.ts:205-235"]
E --> F["Clear stale memorySessionId<br/>SessionManager.ts:206-214"]
D --> G["SDKAgent.generateResponse<br/>SessionManager.ts:631-670"]
F --> G
G --> H["createPidCapturingSpawn<br/>ProcessRegistry.ts:393"]
H --> I["registerProcess<br/>ProcessRegistry.ts:57"]
I --> J["supervisor.registerProcess<br/>supervisor/process-registry.ts:223"]
K["queueObservation<br/>SessionManager.ts:276"] --> L["PendingMessageStore.enqueue<br/>PendingMessageStore.ts:63"]
L --> M["INSERT pending_messages status=pending"]
M --> N["emit 'message'"]
G --> O["getMessageIterator<br/>SessionManager.ts:631"]
O --> P["SessionQueueProcessor.createIterator<br/>SessionQueueProcessor.ts:32"]
P --> Q["claimNextMessage<br/>PendingMessageStore.ts:99"]
Q --> R["Reset processing>60s → pending<br/>PendingMessageStore.ts:107-116"]
R --> S["UPDATE status=processing"]
S --> T["Yield message<br/>SessionManager.ts:648"]
T --> U["lastGeneratorActivity=now<br/>SessionManager.ts:666"]
U --> V["SDK agent stores → confirmProcessed DELETE"]
V --> Q
Q -->|empty| Y["waitForMessage signal<br/>SessionQueueProcessor.ts:116"]
Y --> Z{idle >= 3min?}
Z -->|Yes| AA["onIdleTimeout<br/>SessionManager.ts:651"]
AA --> AB["abortController.abort"]
AB --> AC["Generator exits"]
AC --> AD["Auto-unregister on exit<br/>ProcessRegistry.ts:479"]
AC --> AF["SessionManager.deleteSession<br/>SessionManager.ts:381"]
AF --> AG["await generatorPromise 30s<br/>SessionManager.ts:392-403"]
AF --> AH["ensureProcessExit 5s<br/>ProcessRegistry.ts:185"]
AH -->|still alive| AI["SIGKILL escalation"]
AF --> AJ["supervisor reapSession SIGTERM→5s→SIGKILL<br/>supervisor/process-registry.ts:292"]
AF --> AL["sessions.delete + queues.delete<br/>SessionManager.ts:433-434"]
AL --> AM["onSessionDeletedCallback"]
AN["staleSessionReaperInterval 2min<br/>worker-service.ts:547"] --> AO["iterate active sessions<br/>SessionManager.ts:516-568"]
AO --> AP{idle > 5min?}
AP -->|Yes| AQ["detectStaleGenerator<br/>SessionManager.ts:59"]
AQ --> AR["SIGKILL<br/>SessionManager.ts:535"]
AR --> AS["abortController.abort"]
AO --> AU{idle > 15min?<br/>no generator + no pending}
AU -->|Yes| AF
AW["startOrphanReaper 30s<br/>ProcessRegistry.ts:508"] --> AX["reapOrphanedProcesses<br/>ProcessRegistry.ts:349"]
AX --> AY["getActiveSessionIds"]
AY --> AZ["Kill orphan PIDs"]
AX --> BB["killSystemOrphans ppid=1<br/>ProcessRegistry.ts:315"]
AX --> BC["killIdleDaemonChildren<br/>ProcessRegistry.ts:244"]
```
## Timer Inventory
| Timer | Purpose | Lifetime | Cleared On | Location |
|---|---|---|---|---|
| `waitForMessage()` setTimeout | Wait for next message or idle | Per message | clearTimeout or abort | SessionQueueProcessor.ts:145 |
| Idle timeout | Trigger onIdleTimeout at 3min | Per iterator session | resolves or signal aborts | SessionQueueProcessor.ts:130 |
| `staleSessionReaperInterval` | Reap stuck gens (5min) + old sessions (15min) | Worker lifetime | clearInterval on shutdown | worker-service.ts:547, 1108 |
| Orphan reaper (`startOrphanReaper`) | Kill dead-session procs, orphans, idle daemons | Worker lifetime | clearInterval returned | ProcessRegistry.ts:508 |
| Stale-processing self-heal | Atomic UPDATE reset >60s | Per claim (inline SQL) | n/a | PendingMessageStore.ts:106 |
| Generator-exit wait | 30s timeout on deleteSession | Per delete | AbortSignal.timeout + Promise.race | SessionManager.ts:397 |
| `ensureProcessExit` | 5s before SIGKILL | Per delete | setTimeout for escalation | ProcessRegistry.ts:200 |
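The `ensureProcessExit` escalation in the last row can be sketched as below. The SIGTERM-then-SIGKILL policy and the 5s grace period come from this document; injecting `isAlive`/`kill` is an illustrative choice here (the real implementation signals the child PID directly).

```typescript
// Sketch of the ensureProcessExit escalation: SIGTERM first, SIGKILL if
// the process is still alive after the grace period. isAlive/kill are
// injected for testability; the real code calls process.kill on the PID.
function ensureProcessExit(
  isAlive: () => boolean,
  kill: (signal: "SIGTERM" | "SIGKILL") => void,
  graceMs = 5_000,
): Promise<void> {
  return new Promise((resolve) => {
    if (!isAlive()) return resolve();
    kill("SIGTERM");
    setTimeout(() => {
      if (isAlive()) kill("SIGKILL"); // escalation after the grace period
      resolve();
    }, graceMs);
  });
}
```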
## Side Effects
- Process registration persisted to supervisor.json.
- PendingMessage lifecycle persisted to SQLite (INSERT → UPDATE → DELETE).
- AbortController cascades through iterator.
- Pool-slot notification on process exit.
- Broadcast callbacks on session delete.
## External Feature Dependencies
**Calls into:** SQLite (pending_messages + sessions), supervisor ProcessRegistry, SDKAgent, RestartGuard, SSEBroadcaster.
**Called by:** SessionRoutes, DataRoutes, worker-service lifecycle (reapers, shutdown).
## Confidence + Gaps
**High:** Happy path; stale detection thresholds (5min generator, 15min session); 3-min idle timeout; 30s orphan reaper; claim-confirm; supervisor-delegated registry model.
**KNOWN GAPS (critical for duplication analysis):**
1. **ProcessRegistry duplication:** YES — two files exist:
- `src/services/worker/ProcessRegistry.ts` — worker-level facade
- `src/supervisor/process-registry.ts` — supervisor-level persistent registry
- NOT fully independent; worker-level delegates via `getSupervisor().getRegistry()`. But there is real surface-area duplication.
2. **staleSessionReaperInterval vs startUnifiedReaper:**
- `staleSessionReaperInterval` is ACTIVE at worker-service.ts:547.
- `startUnifiedReaper` NOT present in codebase search — observation notes suggest T31/T32 refactor planned to unify the two reapers but NOT yet implemented.
- Currently TWO independent reapers: `startOrphanReaper` (30s) + stale-session reaper (2min). Unification pending.
3. **MAX_SESSION_IDLE_MS (15 min)** is used only by reapStaleSessions — it may be deprecated, but the code is still in place.
# Flowchart: sqlite-persistence
## Sources Consulted
- `src/services/sqlite/Database.ts:1-349`
- `src/services/sqlite/migrations/runner.ts:1-1019`
- `src/services/sqlite/observations/store.ts:1-108`
- `src/services/sqlite/SessionStore.ts:1-500`
- `src/services/sqlite/PendingMessageStore.ts:1-150`
- `src/services/sqlite/index.ts:1-33`
## Happy Path Description
On startup, `ClaudeMemDatabase` opens a bun:sqlite connection to `DB_PATH`, optionally heals malformed schemas via Python sqlite3 wrapper, then applies PRAGMAs for WAL journaling and performance tuning (memory mapping, foreign keys, cache settings). The `MigrationRunner` runs 27 migrations in sequence, creating or altering core tables (`sdk_sessions`, `observations`, `session_summaries`, `user_prompts`, `pending_messages`) and their FTS5 virtual indexes. Each migration checks actual schema state via `PRAGMA table_info` to ensure idempotence across fresh installs, partial migrations, and cross-machine syncs.
A write cycle (e.g., `storeObservation`) computes a content hash for deduplication, checks for recent duplicates within a 30-second window, and if unique, INSERTs into `observations` with all structured fields. Reads use prepared statements with optional filtering, leveraging indexes on `created_at_epoch DESC`. Transaction boundaries are explicit via `db.transaction(fn)` wrappers. `PendingMessageStore.claimNextMessage()` self-heals stale processing messages (>60s) back to pending in a single transaction.
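The content-hash dedup in the write cycle can be sketched as follows. The 30-second window comes from this document; the exact fields fed into the hash are an assumption for illustration, and the real duplicate check is a SQL query against `observations.content_hash` rather than an in-memory scan.

```typescript
import { createHash } from "node:crypto";

// Sketch of the content-hash dedup described above. The hashed fields
// are assumed for illustration; the 30s window comes from the docs, and
// the real check queries observations.content_hash in SQLite.
function computeContentHash(sessionId: string, title: string, narrative: string): string {
  return createHash("sha256")
    .update(`${sessionId}\u0000${title}\u0000${narrative}`)
    .digest("hex");
}

const DEDUP_WINDOW_MS = 30_000;

function isDuplicate(
  existing: { contentHash: string; createdAtEpoch: number }[],
  hash: string,
  nowEpoch: number,
): boolean {
  return existing.some(
    (o) => o.contentHash === hash && nowEpoch - o.createdAtEpoch <= DEDUP_WINDOW_MS,
  );
}
```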
## Mermaid Flowchart
```mermaid
flowchart TD
Boot([Boot / SDK Call<br/>index.ts:1]) --> InitDB["ClaudeMemDatabase.ctor<br/>Database.ts:148"]
InitDB --> EnsureDir["ensureDir DATA_DIR<br/>Database.ts:151"]
EnsureDir --> OpenConn["new bun:sqlite Database<br/>Database.ts:155"]
OpenConn --> RepairSchema["repairMalformedSchema<br/>Database.ts:160"]
RepairSchema --> SetPRAGMAs["PRAGMA WAL/NORMAL/FK/mmap<br/>Database.ts:163-168"]
SetPRAGMAs --> MigRunner["new MigrationRunner<br/>Database.ts:171"]
MigRunner --> RunMigrations["runAllMigrations (27)<br/>Database.ts:172"]
RunMigrations --> Mig4["initializeSchema m4<br/>runner.ts:52-123"]
Mig4 --> Mig8["addObservationHierarchicalFields m8<br/>runner.ts:265-296"]
Mig8 --> Mig10["createUserPromptsTable m10<br/>runner.ts:383-433"]
Mig10 --> Mig16["createPendingMessagesTable m16<br/>runner.ts:506-548"]
Mig16 --> Mig22["addObservationContentHashColumn m22<br/>runner.ts:844-864"]
Mig22 --> Mig27["addObservationSubagentColumns m27<br/>runner.ts:982-1016"]
Mig27 --> Ready["DB Ready<br/>schema_versions sync'd"]
Ready --> UserWrite["storeObservation<br/>observations/store.ts:53"]
UserWrite --> ComputeHash["computeObservationContentHash<br/>observations/store.ts:21-29"]
ComputeHash --> CheckDup["findDuplicateObservation 30s window<br/>observations/store.ts:36-45"]
CheckDup -->|Dup| ReturnExisting["Return existing id+epoch"]
CheckDup -->|New| PrepareStmt["prepare INSERT observations<br/>observations/store.ts:77-82"]
PrepareStmt --> ExecInsert["stmt.run 17 params<br/>observations/store.ts:84-101"]
ExecInsert --> ReturnNew["Return id+epoch"]
Ready --> PendingMsg["PendingMessageStore.enqueue<br/>PendingMessageStore.ts:63"]
PendingMsg --> EnqueueStmt["INSERT pending_messages<br/>PendingMessageStore.ts:65-88"]
EnqueueStmt --> ClaimMsg["claimNextMessage TX<br/>PendingMessageStore.ts:99-144"]
ClaimMsg --> ResetStale["UPDATE stale → pending 60s<br/>PendingMessageStore.ts:107-115"]
ResetStale --> SelectNext["SELECT pending ORDER BY id LIMIT 1<br/>PendingMessageStore.ts:118-124"]
SelectNext --> MarkProcess["UPDATE status=processing<br/>PendingMessageStore.ts:129-134"]
Ready --> SessionWrite["SessionStore CRUD<br/>SessionStore.ts:34"]
SessionWrite --> SessionStmt["INSERT sdk_sessions<br/>SessionStore.ts:93-143"]
Ready --> UserRead["get observations<br/>observations/get.ts:14"]
UserRead --> PrepareQuery["prepare SELECT filters<br/>observations/get.ts:15-19"]
PrepareQuery --> ExecRead["stmt.get/all<br/>observations/get.ts:27-80"]
```
## Tables Owned
| Table | Owner | Purpose |
|---|---|---|
| `schema_versions` | MigrationRunner | Migration tracking |
| `sdk_sessions` | SessionStore | User + worker sessions |
| `observations` | Observations module | Work items (findings, actions) |
| `session_summaries` | Summaries module | Session conclusions |
| `user_prompts` | Prompts module | User input history |
| `pending_messages` | PendingMessageStore | Work queue (claim-confirm) |
| `observation_feedback` | SessionStore | Usage signals |
| `observations_fts` (virtual) | SessionSearch | FTS5 index |
| `session_summaries_fts` (virtual) | SessionSearch | FTS5 index |
| `user_prompts_fts` (virtual) | SessionStore | FTS5 index |
## Side Effects
**File I/O**: DB file, WAL (`db.sqlite-wal`), shared-memory (`db.sqlite-shm`).
**PRAGMAs**: `journal_mode=WAL`, `synchronous=NORMAL`, `foreign_keys=ON`, `temp_store=MEMORY`, `mmap_size=256MB`, `cache_size=10_000`.
**Transactions**: Single-connection architecture; explicit `db.transaction(fn)` for multi-step writes; `claimNextMessage` self-heals via transactional UPDATE.
**Schema Repair**: Python `sqlite3` subprocess invoked via `execFileSync('python3', ...)` for malformed-file recovery.
## External Feature Dependencies
**Called by:** SDK agents (observations/summaries), Response Processor, Search routes, Data import/export, Worker lifecycle.
**Calls into:** `bun:sqlite` driver, Python sqlite3 (repair only), logger, paths utility.
## Confidence + Gaps
**High:** init flow, migrations 4/16/22/27, dedup via content_hash + 30s window, claim-confirm with 60s stale reset.
**Medium:** FTS5 trigger mechanics, transaction isolation semantics under WAL.
**Gaps:** No explicit connection pool (single-writer via WAL); backup/restore not in scope.
# Flowchart: transcript-watcher-integration
## Sources Consulted
- `src/services/transcripts/watcher.ts:1-242`
- `src/services/transcripts/processor.ts:33-393`
- `src/services/transcripts/config.ts:1-100`
- `src/services/transcripts/types.ts:1-71`
- `src/services/worker-service.ts:91, 164, 466, 614-658`
- `src/services/integrations/CursorHooksInstaller.ts:1-100`
- `src/cli/handlers/observation.ts:1-87`
- `src/services/worker/http/routes/SessionRoutes.ts:378-660`
## Happy Path Description
Worker startup loads transcript-watch config and instantiates `TranscriptWatcher`. `FileTailer` uses `fs.watch()` on each JSONL transcript; on growth, reads new bytes and splits by newline. Each line is `JSON.parse`d and routed to `TranscriptEventProcessor.processEntry()`, which matches schema rules to classify the event (`session_init`, `tool_use`, `tool_result`, `session_end`). Per-session `SessionState` holds `pendingTools` map: `tool_use` stores name+input; `tool_result` retrieves pending, pairs with response, and calls `observationHandler.execute()` — which POSTs to `/api/sessions/observations` (the same endpoint used by lifecycle-hooks). On `session_end`, processor queues summary via `/api/sessions/summarize` and refreshes Cursor context via `/api/context/inject`.
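The line-splitting core of `FileTailer` can be sketched as below: consume only newly appended bytes and buffer any trailing partial line until the next write. This is a minimal stand-in; the real tailer drives this from `fs.watch` events and reads at a persisted byte offset.

```typescript
// Sketch of FileTailer's line handling: split newly appended data on
// newlines, keep the incomplete tail for the next chunk. The real
// tailer feeds this from fs.watch + reads at a persisted byte offset.
class LineTailer {
  private partial = "";

  // Feed a newly appended chunk; returns the complete JSONL lines.
  push(chunk: string): string[] {
    const data = this.partial + chunk;
    const lines = data.split("\n");
    this.partial = lines.pop() ?? ""; // buffer the incomplete tail
    return lines.filter((l) => l.length > 0);
  }
}
```

Buffering the tail matters because `fs.watch` fires mid-write: a JSONL record can arrive split across two change events, and `JSON.parse` must only ever see whole lines.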
## Mermaid Flowchart
```mermaid
flowchart TD
Start["Worker Start<br/>worker-service.ts:614"] --> Config["loadTranscriptWatchConfig<br/>config.ts:1"]
Config --> Watcher["new TranscriptWatcher<br/>watcher.ts:83-91"]
Watcher --> StartW["watcher.start<br/>watcher.ts:93"]
StartW --> SetupWatch["setupWatch per target<br/>watcher.ts:110-134"]
SetupWatch --> AddTailer["addTailer<br/>watcher.ts:169-210"]
AddTailer --> CreateTailer["new FileTailer<br/>watcher.ts:15-26"]
CreateTailer --> TailerStart["fs.watch filePath<br/>watcher.ts:28"]
TailerStart --> FileChange([File change event])
FileChange --> ReadNewData["readNewData<br/>watcher.ts:40-80"]
ReadNewData --> ParseLine["JSON.parse each line<br/>watcher.ts:220"]
ParseLine --> HandleLine["handleLine<br/>watcher.ts:212-236"]
HandleLine --> ProcessEntry["processor.processEntry<br/>processor.ts:36-46"]
ProcessEntry --> MatchRule["matchesRule<br/>processor.ts:42"]
MatchRule --> HandleEvent["handleEvent<br/>processor.ts:113-169"]
HandleEvent -->|session_init| SI["handleSessionInit<br/>processor.ts:138-142"]
HandleEvent -->|tool_use| TU["handleToolUse<br/>processor.ts:193-221"]
HandleEvent -->|tool_result| TR["handleToolResult<br/>processor.ts:224-246"]
HandleEvent -->|session_end| SE["handleSessionEnd<br/>processor.ts:309-320"]
SI --> SIhttp["POST /api/sessions/init"]
TU --> TUmap["session.pendingTools.set<br/>processor.ts:202"]
TR --> TRlookup["Lookup pending tool<br/>processor.ts:232-236"]
TRlookup --> SendObs["sendObservation<br/>processor.ts:240-244"]
SendObs --> ObsHandler["observationHandler.execute<br/>observation.ts:31-86"]
ObsHandler --> WorkerHttp["POST /api/sessions/observations<br/>observation.ts:77"]
WorkerHttp --> Routes["SessionRoutes.handleObservationsByClaudeId<br/>SessionRoutes.ts:565"]
Routes --> Strip["stripMemoryTagsFromJson<br/>SessionRoutes.ts:627-634"]
Strip --> Queue["sessionManager.queueObservation<br/>SessionRoutes.ts:637"]
Queue --> Gen["ensureGeneratorRunning<br/>SessionRoutes.ts:654"]
SE --> QS["queueSummary<br/>processor.ts:322-344"]
QS --> SumHttp["POST /api/sessions/summarize"]
SE --> UpdateCtx["updateContext<br/>processor.ts:346-392"]
UpdateCtx --> CtxHttp["GET /api/context/inject<br/>processor.ts:377"]
CtxHttp --> WriteAgentsMd["writeAgentsMd<br/>processor.ts:390"]
SE --> ClearState["sessions.delete<br/>processor.ts:319"]
```
## Side Effects
- Byte-offset state persisted to `transcript-watch-state.json`.
- Rescan timer every 5s for new transcript files (watcher.ts:124).
- PendingTools map state cleared after each paired observation.
- `AGENTS.md` context file written by Cursor session_end.
- SSE broadcast via existing pipeline when observations queued.
## External Feature Dependencies
**Calls into:** observationHandler (bridge), `/api/sessions/observations` endpoint (shared with lifecycle-hooks), `/api/sessions/summarize`, `/api/context/inject`. SessionManager processes identically regardless of source.
**Called by:** Worker-service initialization only; not user-invoked.
## Duplication with lifecycle-hooks?
**YES — significant re-implementation.** Both paths ingest observations, but via different capture mechanisms:
| Aspect | lifecycle-hooks | transcript-watcher |
|---|---|---|
| Source | Cursor/Claude Code PostToolUse hook | JSONL file via fs.watch + FileTailer |
| Tool pairing | Hook receives tool_name + response atomically | pendingTools map pairs tool_use + tool_result |
| Session init | observationHandler → sessionInitHandler | processor directly calls sessionInitHandler |
| HTTP transport | observationHandler → `/api/sessions/observations` | observationHandler → `/api/sessions/observations` (same) |
| Exclusion check | observationHandler checks `isProjectExcluded` | processor may skip this check; SessionRoutes enforces privacy |
| Storage convergence | SessionRoutes queue → SessionManager → SDK agent | SessionRoutes queue → SessionManager → SDK agent (same) |
**Conclusion:** transcript-watcher is a **parallel capture path** that re-implements session-init + observation dispatch logic but converges at the same HTTP endpoint. The pendingTools state machine is unique to transcripts. This is the clearest cross-feature duplication in the codebase and a prime target for Phase 3 unification.
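The pendingTools state machine that distinguishes the two capture paths can be sketched as follows. Field names here are illustrative; the real map lives on `SessionState` in `processor.ts`.

```typescript
// Sketch of the pendingTools pairing: tool_use stores name+input,
// tool_result retrieves and pairs it. Names are illustrative; the real
// map lives on SessionState in processor.ts.
interface PendingTool { name: string; input: unknown }
interface PairedObservation { name: string; input: unknown; response: unknown }

class ToolPairing {
  private pendingTools = new Map<string, PendingTool>();

  onToolUse(toolUseId: string, name: string, input: unknown): void {
    this.pendingTools.set(toolUseId, { name, input });
  }

  // Returns a paired observation, or undefined for an orphan result.
  onToolResult(toolUseId: string, response: unknown): PairedObservation | undefined {
    const pending = this.pendingTools.get(toolUseId);
    if (!pending) return undefined;
    this.pendingTools.delete(toolUseId); // state cleared after pairing
    return { ...pending, response };
  }
}
```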
## Confidence + Gaps
**High:** TranscriptWatcher → FileTailer → processor → observationHandler → shared HTTP endpoint.
**Medium:** Privacy filter coverage when bypassing observationHandler's exclusion check.
**Gaps:** FileTailer retry strategy on I/O errors; schema FieldSpec coalesce/default evaluation details; updateContext timing relative to sessionCompleteHandler.
# Flowchart: vector-search-sync
## Sources Consulted
- `src/services/sync/ChromaSync.ts:1-969`
- `src/services/sync/ChromaMcpManager.ts:1-509`
- `src/services/worker/agents/ResponseProcessor.ts:1-423`
- `src/services/worker/DatabaseManager.ts:1-100`
- `src/services/worker-service.ts:1-550`
- `src/services/infrastructure/WorktreeAdoption.ts:1-348`
- `src/services/infrastructure/GracefulShutdown.ts:1-110`
- `src/services/worker/SearchManager.ts:1-100`
## Happy Path Description
When a new observation is stored to SQLite, ResponseProcessor commits the row transactionally, then kicks off two fire-and-forget async paths in parallel: observation sync and summary sync, each notifying ChromaSync (via `syncObservation()` / `syncSummary()`) to send formatted documents to Chroma over MCP. If Chroma is disabled (`CLAUDE_MEM_CHROMA_ENABLED=false`), sync is skipped. ChromaMcpManager maintains a persistent singleton stdio connection to the chroma-mcp Python subprocess with lazy initialization, auto-reconnect with backoff, and graceful shutdown.
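The fire-and-forget contract above can be made concrete with a minimal sketch, assuming hypothetical `writeToSqlite`/`syncToChroma` signatures (not ChromaSync's actual API): the SQLite write is awaited, but the sync promise is deliberately not, so a Chroma failure logs and never blocks the response path.

```typescript
// Minimal fire-and-forget sketch: SQLite (source of truth) commits first;
// the Chroma sync promise is detached and its errors are swallowed into
// the log rather than propagated.
type Observation = { id: string; project: string; text: string };

async function storeObservation(
  writeToSqlite: (o: Observation) => Promise<void>,
  syncToChroma: (o: Observation) => Promise<void>,
  obs: Observation,
): Promise<void> {
  await writeToSqlite(obs); // transactional write commits before any sync

  // Fire-and-forget: `void` marks the promise as intentionally unawaited,
  // and .catch ensures a sync failure can never become an unhandled rejection.
  void syncToChroma(obs).catch((err) =>
    console.error(`chroma sync failed for ${obs.id}:`, err),
  );
}
```

The `.catch` is load-bearing: without it, a rejected sync promise would surface as an unhandled rejection even though no caller awaits it.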
On worker startup, `ChromaSync.backfillAllProjects()` runs fire-and-forget to detect missing observations by comparing Chroma's metadata index with SQLite. It batches in 100-document chunks, formats each observation into multiple granular documents (one per field), and syncs to per-project collections named `cm__<sanitized_project>`.
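The backfill reconciliation above reduces to a diff-and-chunk step. The sketch below is illustrative, assuming a hypothetical `planBackfill` helper rather than ChromaSync's real method names: IDs present in SQLite but missing from Chroma's metadata index are chunked into 100-document batches.

```typescript
// Illustrative backfill planner: diff SQLite IDs against the IDs already
// in Chroma, then chunk the missing ones into fixed-size batches.
function planBackfill(
  sqliteIds: string[],
  chromaIds: Set<string>,
  batchSize = 100, // matches the 100-document chunks described above
): string[][] {
  const missing = sqliteIds.filter((id) => !chromaIds.has(id));
  const batches: string[][] = [];
  for (let i = 0; i < missing.length; i += batchSize) {
    batches.push(missing.slice(i, i + batchSize));
  }
  return batches;
}
```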
## Mermaid Flowchart
```mermaid
flowchart TD
Start([Agent Response Returned<br/>ResponseProcessor.ts:49]) --> Parse["Parse Observations + Summary<br/>ResponseProcessor.ts:70-81"]
Parse --> StoreDB["Store to SQLite<br/>ResponseProcessor.ts:151"]
StoreDB --> ConfirmMsg["pendingStore.confirmProcessed<br/>ResponseProcessor.ts:206"]
ConfirmMsg --> SyncObsDef["syncAndBroadcastObservations<br/>ResponseProcessor.ts:270"]
ConfirmMsg --> SyncSumDef["syncAndBroadcastSummary<br/>ResponseProcessor.ts:363"]
SyncObsDef --> LoopObs["For each Observation<br/>ResponseProcessor.ts:280"]
LoopObs --> CheckChromaObs{Chroma Enabled?<br/>DatabaseManager.ts:34-39}
CheckChromaObs -->|Yes| CallSyncObs["getChromaSync().syncObservation<br/>ResponseProcessor.ts:286"]
CheckChromaObs -->|No| SkipObs["No-op skip"]
CallSyncObs --> SyncObsEntry["ChromaSync.syncObservation<br/>ChromaSync.ts:339"]
SyncObsEntry --> FormatObs["formatObservationDocs per field<br/>ChromaSync.ts:125"]
FormatObs --> EnsureCollObs["ensureCollectionExists<br/>ChromaSync.ts:96"]
EnsureCollObs --> AddDocObs["addDocuments batch<br/>ChromaSync.ts:262"]
AddDocObs --> SanitizeMeta["Filter null/empty metadata<br/>ChromaSync.ts:277-280"]
SanitizeMeta --> CallAddDocs["chromaMcp.callTool chroma_add_documents<br/>ChromaSync.ts:284"]
CallAddDocs --> CheckDupObs{ID Conflict?}
CheckDupObs -->|Yes| DelThenAdd["Delete then Re-add<br/>ChromaSync.ts:297-306"]
CheckDupObs -->|No| LogSuccess["Log success<br/>ChromaSync.ts:329"]
DelThenAdd --> LogSuccess
LogSuccess --> BroadcastObs["SSE broadcast<br/>ResponseProcessor.ts:312"]
SyncSumDef --> SyncSumEntry["ChromaSync.syncSummary<br/>ChromaSync.ts:384"]
SyncSumEntry --> FormatSum["formatSummaryDocs per field<br/>ChromaSync.ts:193"]
FormatSum --> CallAddSum["chroma_add_documents<br/>ChromaSync.ts:284"]
CallAddSum --> BroadcastSum["SSE broadcast<br/>ResponseProcessor.ts:403"]
InitWorker([Worker Initializes<br/>worker-service.ts:406-420]) --> InitDBMgr["dbManager.initialize<br/>DatabaseManager.ts:27"]
InitDBMgr --> CreateChromaSync["new ChromaSync<br/>DatabaseManager.ts:36"]
CreateChromaSync --> LazyMCP["ChromaMcpManager.getInstance<br/>ChromaMcpManager.ts:47"]
LazyMCP --> Backfill["backfillAllProjects FnF<br/>worker-service.ts:470"]
Backfill --> FetchProjects["SELECT DISTINCT project<br/>ChromaSync.ts:868"]
FetchProjects --> LoopProjects["For each project<br/>ChromaSync.ts:874"]
LoopProjects --> EnsureBackfilled["ensureBackfilled<br/>ChromaSync.ts:554"]
EnsureBackfilled --> GetChromaIds["getExistingChromaIds<br/>ChromaSync.ts:479"]
GetChromaIds --> RunPipeline["runBackfillPipeline<br/>ChromaSync.ts:575"]
RunPipeline --> BackfillObs["backfillObservations<br/>ChromaSync.ts:603"]
BackfillObs --> BackfillSum["backfillSummaries<br/>ChromaSync.ts:652"]
BackfillSum --> BackfillPrompts["backfillPrompts<br/>ChromaSync.ts:701"]
SearchFlow([User Search Query<br/>SearchManager.ts:56]) --> QueryChroma["chromaSync.queryChroma<br/>SearchManager.ts:59"]
QueryChroma --> CallQuery["chroma_query_documents<br/>ChromaSync.ts:768"]
CallQuery --> Dedupe["deduplicateQueryResults<br/>ChromaSync.ts:808"]
Shutdown([Worker Shutdown<br/>GracefulShutdown.ts:56]) --> StopChromaMcp["chromaMcpManager.stop<br/>GracefulShutdown.ts:73"]
StopChromaMcp --> KillSubproc["transport.close<br/>ChromaMcpManager.ts:357"]
```
## Side Effects
- **MCP Connection**: Singleton stdio connection to chroma-mcp, lazy-init, reconnect with backoff, graceful shutdown.
- **Per-project collections**: `cm__<sanitized_project>` naming.
- **Granular vectorization**: Observations split into multiple docs per field (3-5× vector count).
- **Batch reconciliation**: Duplicate IDs handled via delete-then-add within batch.
- **Fire-and-forget**: All sync is non-blocking; failures log but don't block.
- **Worktree metadata patching**: `merged_into_project` stamp applied idempotently.
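The delete-then-add reconciliation noted above is effectively a hand-rolled upsert. A minimal sketch, assuming a hypothetical `DocStore` interface (the real calls go through `chromaMcp.callTool`):

```typescript
// Sketch of delete-then-add as an upsert bridge: try the add, and on an
// ID conflict delete the stale documents and re-add the whole batch.
interface DocStore {
  addDocuments(ids: string[], docs: string[]): Promise<void>; // rejects on duplicate ID
  deleteDocuments(ids: string[]): Promise<void>;
}

async function addWithConflictFallback(
  store: DocStore,
  ids: string[],
  docs: string[],
): Promise<void> {
  try {
    await store.addDocuments(ids, docs);
  } catch {
    // Conflict path: clear the colliding IDs, then retry the add once.
    await store.deleteDocuments(ids);
    await store.addDocuments(ids, docs);
  }
}
```

Note the non-atomic window between delete and re-add: a concurrent query can briefly miss the documents, which is why native upsert is the preferable long-term shape.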
## External Feature Dependencies
**Calls into:**
- `chroma-mcp` Python subprocess (via stdio MCP protocol)
- ChromaMcpManager (singleton lifecycle)
- SQLite (source of truth for backfill)
**Called by:**
- ResponseProcessor (observation/summary sync after DB write)
- SearchManager (read-side Chroma queries)
- WorktreeAdoption (post-merge metadata updates)
- Worker lifecycle (startup backfill, shutdown)
## Confidence + Gaps
**High Confidence**: Single sync implementation; fire-and-forget pattern; per-project metadata-scoped collections; lazy MCP init.
**Medium Confidence**: Exact chroma-mcp tool names — confirmed only via grep, not exercised at runtime.
**Gaps**: Embedding model config is inside chroma-mcp package (not this codebase); HNSW/ANN parameters not visible.
# Flowchart: viewer-ui-layer
## Sources Consulted
- `src/ui/viewer/App.tsx:1-162`
- `src/ui/viewer/index.tsx:1-16`
- `src/ui/viewer/hooks/useSSE.ts:1-147`
- `src/ui/viewer/hooks/useSettings.ts:1-80`
- `src/ui/viewer/hooks/usePagination.ts:1-80`
- `src/ui/viewer/types.ts:1-80`
- `src/ui/viewer/components/Header.tsx:1-60`
- `src/ui/viewer/components/Feed.tsx:1-60`
- `src/ui/viewer/components/ObservationCard.tsx:1-60`
- `src/ui/viewer/components/ErrorBoundary.tsx:1-63`
- `src/ui/viewer/components/ContextSettingsModal.tsx:1-60`
- `src/services/worker/SSEBroadcaster.ts:1-77`
- `src/services/worker/http/routes/ViewerRoutes.ts`
## Component Tree
- ErrorBoundary (root)
  - App (orchestrator)
    - Header — project/source filters, SSE status, theme toggle
    - Feed — interleaved cards, infinite scroll via IntersectionObserver
      - ObservationCard / SummaryCard / PromptCard
    - ContextSettingsModal
    - LogsDrawer
## Happy Path Description
User loads `http://localhost:37777` → static viewer.html served → React mounts `<ErrorBoundary><App/></ErrorBoundary>` via `index.tsx` → App initializes hooks (`useSSE`, `useSettings`, `useTheme`, `usePagination`, `useStats`) → `useSSE` opens `EventSource('/stream')` → backend emits `initial_load` with catalog → Header + Feed render → IntersectionObserver triggers `handleLoadMore` on scroll → `pagination.*.loadMore()` fetches `/api/observations?offset=X&limit=20` → merged with live SSE data in `useMemo` (deduped by `(project, id)`) → re-render. Real-time events (`new_observation`, `new_summary`, `new_prompt`) update state → re-render. Settings modal saves via `POST /api/settings`.
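The offset/hasMore bookkeeping in that loop can be sketched as a pure function. This mirrors the described behavior of `usePagination` but is not its actual code; the fetcher signature and names are illustrative.

```typescript
// Sketch of one loadMore step: fetch the next offset window and flip
// hasMore to false once a short page comes back (feed exhausted).
type Page<T> = { items: T[]; hasMore: boolean; nextOffset: number };

async function loadMore<T>(
  fetchPage: (offset: number, limit: number) => Promise<T[]>,
  offset: number,
  limit = 20, // matches the ?limit=20 query above
): Promise<Page<T>> {
  const items = await fetchPage(offset, limit);
  return {
    items,
    hasMore: items.length === limit, // a full page implies more may exist
    nextOffset: offset + items.length,
  };
}
```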
## Mermaid Flowchart
```mermaid
flowchart TD
HTTP["GET /<br/>ViewerRoutes.ts"] --> EB["ErrorBoundary<br/>index.tsx:4"]
EB --> APP["App<br/>App.tsx:14"]
APP --> SSE["useSSE<br/>useSSE.ts:6"]
APP --> SETTINGS["useSettings<br/>useSettings.ts:8"]
APP --> PAGINATION["usePagination<br/>usePagination.ts:18"]
APP --> THEME["useTheme"]
APP --> STATS["useStats"]
SSE -->|EventSource| STREAM["/stream<br/>ViewerRoutes.handleSSEStream"]
STREAM --> BROADCASTER["SSEBroadcaster<br/>SSEBroadcaster.ts:15"]
BROADCASTER --> SSE
APP --> HEADER["Header<br/>Header.tsx:34"]
APP --> FEED["Feed<br/>Feed.tsx:18"]
APP --> MODAL["ContextSettingsModal"]
APP --> LOGS["LogsDrawer"]
HEADER --> FilterState[(currentFilter<br/>currentSource)]
FEED -->|IntersectionObserver| LoadMore["handleLoadMore"]
LoadMore --> PAGINATION
PAGINATION -->|GET /api/observations?offset=X| API_OBS["DataRoutes"]
FEED --> OBS["ObservationCard<br/>ObservationCard.tsx:33"]
FEED --> SUM["SummaryCard"]
FEED --> PRO["PromptCard"]
MODAL -->|POST /api/settings| API_SET["SettingsRoutes"]
```
## State Management
Hooks + local state; no Redux/Zustand/Context store.
- `useSSE`: observations, summaries, prompts, catalog, isConnected, isProcessing, queueDepth; state updated by incoming EventSource events.
- `useSettings`: settings object, isSaving, saveStatus.
- `usePagination`: per-datatype isLoading, hasMore, offsetRef, lastSelectionRef. Resets offset on filter change.
- `useTheme`: preference, applies to DOM.
- `useStats`: stats fetched once.
- App local: `currentFilter`, `currentSource`, `contextPreviewOpen`, `logsModalOpen`, `paginatedObservations/Summaries/Prompts`.
**Duplication note:** Observations live in both `useSSE().observations` (live) and App's `paginatedObservations` (older chunks). Merged in `useMemo` with `(project, id)` dedup.
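The merge noted above can be sketched as a pure dedup pass, with live SSE items taking precedence over older paginated chunks that share the same composite key. A minimal stand-in for the `useMemo` body, not the viewer's actual code:

```typescript
// Dedup merge keyed by (project, id): iterate live items first so the
// fresher copy wins when a paginated chunk contains the same card.
type Card = { project: string; id: number; text: string };

function mergeDeduped(live: Card[], paginated: Card[]): Card[] {
  const seen = new Set<string>();
  const out: Card[] = [];
  for (const card of [...live, ...paginated]) {
    const key = `${card.project}:${card.id}`;
    if (seen.has(key)) continue; // live copy already emitted
    seen.add(key);
    out.push(card);
  }
  return out;
}
```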
## Side Effects
- EventSource auto-reconnect on error after `TIMING.SSE_RECONNECT_DELAY_MS`.
- IntersectionObserver setup/cleanup per Feed mount.
- Fetch settings + stats on mount.
- DOM theme attribute mutation.
## External Feature Dependencies
**Consumes:** SSEBroadcaster (backend SSE), DataRoutes (pagination), SettingsRoutes (config), SessionStore (catalog on init).
## Confidence + Gaps
**High:** SSE flow; hook composition; pagination; state merging.
**Medium:** Exact paginated response shape; catalog-update strategy (additive only).
**Gaps:** CSS layer; `TerminalPreview`, `ThemeToggle`, `GitHubStarsButton`; full LogsModal console capture; saveSettings error branch.