94d592f212
* docs: pathfinder refactor corpus + Node 20 preflight
Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 01 — data integrity
Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.
- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
UNIQUE(memory_session_id, content_hash) on observations; dedup
duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
and the 60-s stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
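The Phase 3 claim condition can be sketched in memory (a hedged illustration, not the actual SQL — `PendingMessage` and `livePids` are simplified stand-ins for the table and the live-worker subquery):

```typescript
interface PendingMessage {
  id: number;
  status: "pending" | "processing";
  workerPid: number | null;
}

// A message is claimable when it is pending, or when it is marked
// processing but its claiming worker's PID is no longer alive — no
// stale-timestamp threshold involved anywhere.
function claimNext(
  messages: PendingMessage[],
  livePids: Set<number>,
): PendingMessage | undefined {
  return messages.find(
    (m) =>
      m.status === "pending" ||
      (m.status === "processing" &&
        (m.workerPid === null || !livePids.has(m.workerPid))),
  );
}
```

The self-healing property falls out of the predicate: a crashed worker's rows become claimable the moment its PID leaves the live set.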
Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/01-data-integrity.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 02 — process lifecycle
OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).
- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
canonical registry at src/supervisor/process-registry.ts is the
sole survivor; SDK spawn site consolidated into it via new
createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
process.kill(-pgid, signal) on Unix when pgid is recorded;
Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
staleSessionReaperInterval setInterval (including the co-located
WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
WAL growth without an app-level timer), killIdleDaemonChildren,
killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
via generatorPromise.finally() already lives in worker-service
startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
lazy-spawn — consults isWorkerPortAlive (which gates
captureProcessStartToken for PID-reuse safety via commit
99060bac), then spawns detached with unref(), then waits via
waitForWorkerPort({ attempts: 3, backoffMs: 250 }) — hand-rolled
exponential backoff, 250 → 500 → 1000 ms. No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
only on external SIGTERM via supervisor signal handlers.
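The Phase 8 backoff can be sketched as follows (a hedged stand-in: `isAlive` models the real port probe, and the loop shape is an assumption from the 250 → 500 → 1000 ms description, not the actual worker-utils code):

```typescript
// Probe up to `attempts` times, sleeping backoffMs * 2^i after each
// failed attempt (250 → 500 → 1000 ms with the defaults).
async function waitForWorkerPort(
  isAlive: () => Promise<boolean>,
  { attempts = 3, backoffMs = 250 } = {},
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    if (await isAlive()) return true;
    await new Promise((r) => setTimeout(r, backoffMs * 2 ** i));
  }
  return false;
}
```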
Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.
All 10 verification greps return 0. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast
Phases 3, 5, 6 only. Phases 1/2/4/7/8/9 deferred pending plan
reconciliation — the plan doc is inaccurate for each:
- Phase 1/2: ObservationRow type doesn't exist; the four
"formatters" operate on three incompatible types.
- Phase 4: RECENCY_WINDOW_MS already imported from
SEARCH_CONSTANTS at every call site.
- Phase 7: getExistingChromaIds is NOT @deprecated and has an
active caller in ChromaSync.backfillMissingSyncs.
- Phase 8: estimateTokens already consolidated.
- Phase 9: knowledge-corpus rewrite blocked on PG-3
prompt-caching cost smoke test.
Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.
Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicitly
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
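The Phase 5 error shape, sketched under an assumed AppError base (statusCode + code carried on an Error subclass — the real base class lives elsewhere in src/ and may differ):

```typescript
// Assumed AppError shape: HTTP status plus a stable machine-readable code.
class AppError extends Error {
  constructor(
    public readonly statusCode: number,
    public readonly code: string,
    message?: string,
  ) {
    super(message ?? code);
    this.name = this.constructor.name;
  }
}

// Thrown by the search path whenever Chroma fails at runtime; with the
// SQLite-fallback branch deleted, the 503 propagates to the route handler.
class ChromaUnavailableError extends AppError {
  constructor(message = "Chroma backend unavailable") {
    super(503, "CHROMA_UNAVAILABLE", message);
  }
}
```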
Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).
Tests updated (Principle 7 — delete in same PR):
- search-orchestrator.test.ts: "fall back to SQLite" rewritten
as "throw ChromaUnavailableError (HTTP 503)".
- chroma/hybrid/sqlite-search-strategy tests: rewritten to
rejects.toThrow; removed fellBack assertions.
Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 03 — ingestion path
Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.
- Phase 0: Created src/services/worker/http/shared.ts exporting
ingestObservation/ingestPrompt/ingestSummary as direct
in-process functions plus ingestEventBus (Node EventEmitter,
reusing existing pattern — no third event bus introduced).
setIngestContext wires the SessionManager dependency from
worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
returning { valid:true; kind: 'observation'|'summary'; data }
| { valid:false; reason: string }. Inspects root element;
<skip_summary reason="…"/> is a first-class summary case
with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
branches on the discriminated union. On invalid → markFailed
+ logger.warn(reason). On observation → ingestObservation.
On summary → ingestSummary then emit summaryStoredEvent
{ sessionId, messageId } (consumed by Plan 05's blocking
/api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
(ResponseProcessor + SessionManager + worker-types) and
MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
replaced with fs.watch(transcriptsRoot, { recursive: true,
persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
Map deleted. tool_use rows insert with INSERT OR IGNORE on
UNIQUE(session_id, tool_use_id) (added by Plan 01). New
pairToolUsesByJoin query in PendingMessageStore for read-time
pairing (UNIQUE INDEX provides idempotency; explicit consumer
not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
direct ingestObservation call. maybeParseJson silent-passthrough
rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
class) deleted. The active extractLastMessage at
src/shared/transcript-parser.ts:41-144 is the sole survivor.
Tests updated (Principle 7 — same-PR delete):
- tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
to assert discriminated-union shape; coercion-specific
scenarios collapse into { valid:false } assertions.
- tests/worker/agents/response-processor.test.ts: circuit-breaker
describe block skipped; non-XML/empty-response tests assert
fail-fast markFailed behavior.
Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.
Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.
Plan: PATHFINDER-2026-04-22/03-ingestion-path.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 05 — hook surface
Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.
- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
{1..20}; do curl -sf .../health && break; sleep 0.1; done` shell
retry wrappers deleted. Hook commands invoke their bun entry
point directly.
- Phase 2: src/shared/worker-utils.ts — added
executeWithWorkerFallback<T>(url, method, body) returning
T | { continue: true; reason?: string }. All 8 hook handlers
(observation, session-init, context, file-context, file-edit,
summarize, session-complete, user-message) rewritten to use
it instead of duplicating the ensureWorkerRunning →
workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
using validateBody + sessionEndSchema (z.object({sessionId})).
One-shot ingestEventBus.on('summaryStoredEvent') listener,
30 s timer, req.aborted handler — all share one cleanup so
the listener cannot leak. summarize.ts polling loop, plus
MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
memoizes SettingsDefaultsManager.loadFromFile per process.
Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
check entry; isProjectExcluded no longer referenced from
src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
(all 6 adapters: claude-code, cursor, raw, gemini-cli,
windsurf). New AdapterRejectedInput error in
src/cli/adapters/errors.ts. Handler-level isValidCwd checks
deleted from file-edit.ts and observation.ts. hook-command.ts
catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
initAgent is idempotent. tests/hooks/context-reinjection-guard
test (validated the deleted conditional) deleted in same PR
per Principle 7.
- Phase 8: fail-loud counter at ~/.claude-mem/state/hook-failures.json.
Atomic write via .tmp + rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD
setting (default 3). On consecutive worker-unreachable ≥ N:
process.exit(2). On success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
wrapping ensureWorkerRunning. executeWithWorkerFallback calls
the memoized version.
Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.
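The Phase 8 counter's write discipline, sketched under an assumed { count } file shape (function name hypothetical; path and threshold semantics from the phase description above):

```typescript
import { readFileSync, renameSync, writeFileSync } from "node:fs";

// Returns the updated consecutive-failure count; the caller compares it
// to the threshold and calls process.exit(2) when count >= threshold.
function recordHookOutcome(file: string, succeeded: boolean): number {
  let count = 0;
  try {
    count = JSON.parse(readFileSync(file, "utf8")).count ?? 0;
  } catch {
    // missing or corrupt file: treat as zero failures
  }
  count = succeeded ? 0 : count + 1;
  // Atomic on POSIX: write a sibling .tmp, then rename over the target,
  // so a concurrent reader never observes a torn file.
  writeFileSync(file + ".tmp", JSON.stringify({ count }));
  renameSync(file + ".tmp", file);
  return count;
}
```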
Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.
Plan: PATHFINDER-2026-04-22/05-hook-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 06 — API surface
One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted. Failure-marking
consolidated to one helper.
- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
in src/services/worker/http/middleware/validateBody.ts —
safeParse → 400 { error: 'ValidationError', issues: [...] }
on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
route file. 24 POST endpoints across SessionRoutes,
CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
LogsRoutes, SettingsRoutes now wrap with validateBody().
/api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
along with every call site. Inline coercion helpers
(coerceStringArray, coercePositiveInteger) and inline
if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
from src/services/worker/http/middleware.ts. Worker binds
127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
via fs.readFileSync; served as Buffer with text/html content
type. SKILL.md + per-operation .md files cached in
Server.ts as Map<string, string>; loadInstructionContent
helper deleted. NO fs.watch, NO TTL — process restart is the
cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
— /api/pending-queue (GET), /api/pending-queue/process (POST),
/api/pending-queue/failed (DELETE), /api/pending-queue/all
(DELETE). Helper methods that ONLY served them
(getQueueMessages, getStuckCount, getRecentlyProcessed,
clearFailed, clearAll) deleted from PendingMessageStore.
KEPT: /api/processing-status (observability), /health
(used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
GracefulShutdown now calls getSupervisor().stop() directly.
Two functions retained with clear roles:
- performGracefulShutdown — worker-side 6-step shutdown
- runShutdownCascade — supervisor-side child teardown
(process.kill(-pgid), Windows tree-kill, PID-file cleanup)
Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
failure-marking path on PendingMessageStore. Old methods
markSessionMessagesFailed and markAllSessionMessagesAbandoned
deleted along with all callers (worker-service,
SessionCompletionHandler, tests/zombie-prevention).
Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.
Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.
Plan: PATHFINDER-2026-04-22/06-api-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 07 — dead code sweep
ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.
Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments
Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
builders, ParsedObservation, ParsedSummary, ParseResult,
SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
via dynamic await import('../../../context-generator.js') in
worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
— used via dynamic await import in npx-cli/install.ts +
uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
orphan-recovery caller in worker-service.ts plus
zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
in same file.
- All Database.ts barrel re-exports — used downstream.
Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
the methods are not thin wrappers but ~900 LoC of bodies, and
two methods are documented as intentional mirrors so the
context-generator.cjs bundle stays schema-consistent without
pulling MigrationRunner. Deserves its own plan, not a sweep.
Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.
Plan: PATHFINDER-2026-04-22/07-dead-code.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove residual ProcessRegistry comment reference
Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review (P1 + 2× P2)
P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
- Added optional timeoutMs to executeWithWorkerFallback,
forwarded to workerHttpRequest.
- summarize.ts call site now passes 35_000 (5 s above server
hold window).
P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
- ResponseProcessor now calls ingestSummary({ kind: 'parsed',
sessionDbId, messageId, contentSessionId, parsed }) so the
event-emission path is single-sourced.
- ingestSummary's requireContext() resolution moved inside the
'queue' branch (the only branch that needs sessionManager /
dbManager). 'parsed' is a pure event-bus emission and
doesn't need worker-internal context — fixes mocked
ResponseProcessor unit tests that don't call
setIngestContext.
P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
- Added a Symbol.for('claude-mem/worker-fallback') brand to
WorkerFallback. isWorkerFallback now checks the brand, not
a duck-typed property name.
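The brand check can be sketched as (helper names hypothetical; the Symbol.for key is the one named above):

```typescript
// A Symbol.for key cannot appear in a JSON-deserialized API response,
// unlike the old duck-typed { continue: true } property check.
const WORKER_FALLBACK = Symbol.for("claude-mem/worker-fallback");

interface WorkerFallback {
  continue: true;
  reason?: string;
}

function makeFallback(reason?: string): WorkerFallback {
  return { [WORKER_FALLBACK]: true, continue: true, reason } as WorkerFallback;
}

function isWorkerFallback(v: unknown): v is WorkerFallback {
  return (
    typeof v === "object" &&
    v !== null &&
    (v as Record<symbol, unknown>)[WORKER_FALLBACK] === true
  );
}
```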
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 2 (P1 + P2)
P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.
- Gate ingestSummary call on (parsed.data.skipped ||
session.lastSummaryStored). Skipped summaries are an explicit
no-op bypass and still confirm; real summaries only confirm
when storage actually wrote a row.
- Non-skipped + summaryId === null path logs a warn and lets
the server-side timeout (504) surface to the hook instead of
a false ok:true.
P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 2). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.
- Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
log instead of the misleading ENQUEUED line. No behavior
change — the duplicate is still correctly suppressed by the
DB (Principle 3); only the log surface is corrected.
- confirmProcessed is never called with the enqueue() return
value (it operates on session.processingMessageIds[] from
claimNextMessage), so no caller is broken; the visibility
fix prevents future misuse.
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 3 (P1 + 2× P2)
- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
context after SessionRoutes is constructed. setIngestContext runs
before routes exist, so transcript-watcher observations queued via
ingestObservation() had no way to auto-start the SDK generator.
Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
/api/session/end calls register one listener each and clean up on
completion, so the default-10 listener warning would otherwise fire
spuriously under normal load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
ingestObservation() instead of duplicating skip-tool / meta /
privacy / queue logic. Single helper, matching the Plan 03 goal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)
- processor.handleToolResult: restore in-memory tool-use→tool-result
pairing via session.pendingTools for schemas (e.g. Codex) whose
tool_result events carry only tool_use_id + output. Without this,
neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
of throwing. Previously a single malformed JSON-shaped field caused
handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
for purely-glob inputs so the caller skips the watch instead of
anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
log on the returned id; the SessionManager branches on id === 0.
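The ancestor computation can be sketched as (a hypothetical reconstruction — the glob-token regex is an assumption; the real watcher may recognize more patterns):

```typescript
// Walk path segments (split on / or \) and keep everything before the
// first glob-bearing segment. Empty string ⇒ the caller skips the watch
// entirely rather than anchoring fs.watch at the filesystem root.
function deepestNonGlobAncestor(pattern: string): string {
  const kept: string[] = [];
  for (const seg of pattern.split(/[/\\]/)) {
    if (/[*?[\]{}]/.test(seg)) break;
    kept.push(seg);
  }
  return kept.join("/");
}
```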
* fix: forward tool_use_id through ingestObservation (Greptile iter 5)
P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.
- shared.ingestObservation: forward payload.toolUseId to
queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
tool_use_id (HTTP convention) and toolUseId (JS convention) from
req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
validator doesn't rely on .passthrough() alone.
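The NULL-distinct behavior is just SQLite's UNIQUE semantics; an in-memory illustration (not the real store) of why a dropped toolUseId disables dedup entirely:

```typescript
// Mirror of INSERT OR IGNORE under UNIQUE(content_session_id, tool_use_id):
// NULL never equals NULL for UNIQUE purposes, so rows without a toolUseId
// always insert; only rows carrying the id can collapse as duplicates.
function insertOrIgnore(
  seen: Set<string>,
  sessionId: string,
  toolUseId: string | null,
): boolean {
  if (toolUseId === null) return true; // NULL is distinct: always inserts
  const key = `${sessionId}\u0000${toolUseId}`;
  if (seen.has(key)) return false; // duplicate suppressed
  seen.add(key);
  return true;
}
```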
* fix: drop dead pairToolUsesByJoin, close session-end listener race
- PendingMessageStore: delete pairToolUsesByJoin. The method was never
called and its self-join semantics are structurally incompatible
with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
collapses any second row with the same pair, so a self-join can
only ever match a row to itself. In-memory pendingTools in
processor.ts remains the pairing path for split-event schemas.
- IngestEventBus: retain a short-lived (60s) recentStored map keyed
by sessionId. Populated on summaryStoredEvent emit, evicted on
consume or TTL.
- handleSessionEnd: drain the recent-events buffer before attaching
the listener. Closes the register-after-emit race where the summary
can persist between the hook's summarize POST and its session/end
POST — previously that window returned 504 after the 30s timeout.
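A sketch of the buffer-then-listen ordering (method names hypothetical, mirroring the description above):

```typescript
import { EventEmitter } from "node:events";

class IngestEventBus extends EventEmitter {
  // sessionId → emit timestamp; entries live for at most ttlMs.
  private recentStored = new Map<string, number>();

  constructor(private ttlMs = 60_000) {
    super();
  }

  emitSummaryStored(sessionId: string): void {
    this.recentStored.set(sessionId, Date.now());
    this.emit("summaryStoredEvent", { sessionId });
  }

  // Idempotent read: the entry stays until TTL eviction, so a retried
  // session/end also returns immediately instead of hanging. Callers
  // consult this BEFORE attaching a listener, closing the
  // register-after-emit race between the two hook POSTs.
  hasRecentSummaryStored(sessionId: string, now = Date.now()): boolean {
    const at = this.recentStored.get(sessionId);
    if (at === undefined) return false;
    if (now - at > this.ttlMs) {
      this.recentStored.delete(sessionId);
      return false;
    }
    return true;
  }
}
```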
* chore: merge origin/main into vivacious-teeth
Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).
Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
summaryStoredEvent supersedes main's SessionCompletionHandler DI
refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
reason; generator .finally() Stop-hook self-clean is a guard for a
path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
#2084) while preserving our Zod validateBody schema.
Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings
1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
in wrapHandler — synchronous exceptions would hang the client rather
than surfacing as 500s. Wrap it like every other handler.
2) processor.handleToolResult only consumed the session.pendingTools
entry when the tool_result arrived without a toolName. In the
split-schema path where tool_result carries both toolName and toolId,
the entry was never deleted and the map grew for the life of the
session. Consume the entry whenever toolId is present.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: typing cleanup and viewer tsconfig split for PR feedback
- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings (iter 2)
- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
the unscoped-drain branch that would nuke every pending/processing
row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
cached event until TTL eviction so a retried Stop hook's second
/api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
already tailed (JSONL appends fire on every line; only unknown
paths warrant a rescan).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: call finalizeSession in terminal session paths (Greptile iter 3)
terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.
Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: GC failed pending_messages rows at startup (Greptile iter 4)
Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.
Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)
1. startSessionProcessor success branch now calls completionHandler.
finalizeSession before removeSessionImmediate. Hooks-disabled installs
(and any Stop hook that fails before POST /api/sessions/complete) no
longer leave sdk_sessions rows as status='active' forever. Idempotent
— a subsequent /api/sessions/complete is a no-op.
2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
closures that reference it (TDZ safety; safe at runtime today but
fragile if timeout ever shrinks).
3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
instead of constructing its own — prevents silent divergence if the
handler ever becomes stateful.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: stop runaway crash-recovery loop on dead sessions
Two distinct bugs were combining to keep a dead session restarting forever:
Bug 1 (uncaught "The operation was aborted."):
child_process.spawn emits 'error' asynchronously for ENOENT, EACCES,
and AbortSignal aborts. spawnSdkProcess() never attached an 'error'
listener, so
any async spawn failure became uncaughtException and escaped to the
daemon-level handler. Attach an 'error' listener immediately after spawn,
before the !child.pid early-return, so async spawn errors are logged
(with errno code) and swallowed locally.
Bug 2 (sliding-window limiter never trips on slow restart cadence):
RestartGuard tripped only when restartTimestamps.length exceeded
MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
session that fail-restart-fail-restart on 8s cycles would loop forever
(consecutiveRestarts climbing past 30+ in observed logs). Add a
consecutiveFailures counter that increments on every restart and resets
only on recordSuccess(). Trip when consecutive failures exceed
MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
processing in between proves the session is dead. Both guards now run in
parallel: tight loops still trip the windowed cap; slow loops trip the
consecutive-failure cap.
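The combined guard logic is small enough to sketch. A minimal model, assuming the constants named above; the real RestartGuard class shape may differ:

```typescript
// Two guards in parallel: a sliding window for tight loops, a consecutive
// counter for slow loops. Values mirror the commit text (assumptions).
const RESTART_WINDOW_MS = 60_000;
const MAX_WINDOWED_RESTARTS = 10;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuard {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  /** Record a restart; returns true when either guard trips. */
  recordRestart(now: number): boolean {
    // Drop timestamps that have aged out of the window.
    this.restartTimestamps = this.restartTimestamps.filter(
      (t) => now - t < RESTART_WINDOW_MS,
    );
    this.restartTimestamps.push(now);
    this.consecutiveFailures += 1;
    const windowTripped = this.restartTimestamps.length > MAX_WINDOWED_RESTARTS;
    const consecutiveTripped =
      this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES;
    return windowTripped || consecutiveTripped;
  }

  /** Successful processing resets only the consecutive counter. */
  recordSuccess(): void {
    this.consecutiveFailures = 0;
  }
}
```

On an 8s restart cadence the window never fills, but the consecutive counter still trips after the sixth failure with no success in between.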
Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* perf: streamline worker startup and consolidate database connections
1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and untangled the initialization logic in WorkerService.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)
* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations
Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.
- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
when shouldTrackProject(cwd) is false, so the observer's own hooks
cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
on observations) inline so bundled artifacts (worker-service.cjs,
context-generator.cjs) stay schema-consistent — without it, the
ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
supervisor can actually feed the observer's stdin.
Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.
* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)
Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
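The walk-back is a few lines of byte arithmetic. A sketch using TextEncoder/TextDecoder; the real code operates on a Node Buffer, and the MAX_USER_PROMPT_BYTES constant (256 KiB) lives in SessionRoutes:

```typescript
// Truncate a UTF-8 byte buffer without splitting a codepoint.
const MAX_USER_PROMPT_BYTES = 256 * 1024;

function truncateAtUtf8Boundary(
  bytes: Uint8Array,
  maxBytes = MAX_USER_PROMPT_BYTES,
): string {
  const decoder = new TextDecoder("utf-8");
  if (bytes.length <= maxBytes) return decoder.decode(bytes);
  let end = maxBytes;
  // Walk back over continuation bytes (0b10xxxxxx) so the slice ends on a
  // sequence boundary instead of decoding to U+FFFD.
  while (end > 0 && (bytes[end] & 0b1100_0000) === 0b1000_0000) end--;
  return decoder.decode(bytes.subarray(0, end));
}
```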
* fix: cross-platform observer-dir containment; clarify SDK stdin pipe
claude-review feedback on PR #2124.
- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
hard-coded a POSIX separator and missed Windows backslash paths plus any
trailing-slash variance. Switched to a path.relative-based isWithin()
helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
consumes that pipe; 'ignore' would null it and the null-check below
would tear the child down on every spawn.
* fix: make Stop hook fire-and-forget; remove dead /api/session/end
The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed). Followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.
The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.
- summarize.ts: drop the /api/session/end long-poll and the trailing
/api/sessions/complete await; ~40 lines removed; unused
SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
route registration. Drop the now-unused ingestEventBus and
SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
comments that referenced the dead endpoint. The IngestEventBus is
left in place dormant (no listeners) for follow-up cleanup so this
PR stays focused on the blocker.
Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.
Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* deps: bump all dependencies to latest including majors
Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.
Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: surface real chroma errors and add deep status probe
Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.
Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.
Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: rebuild worker-service bundle to match merged src
Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: address coderabbit feedback on PLAN-fix-mcp-search.md
- replace machine-specific /Users/alexnewman absolute paths with portable
<repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# Pathfinder Phase 5: Brutal Audit + Clean Flowcharts

**Date**: 2026-04-21

**Scope**: Strip every timer, fallback, wrapper, and coercion that exists to patch a failed abstraction. Preserve every user-facing feature. Replace patch-piles with single clear paths.

**Rules of engagement:**

- User-facing features (context injection, semantic search, Chroma sync, transcript watch, summary, viewer UI, corpus, CLAUDE.md folder sync, per-prompt semantic) — **KEEP**.
- Crash-recovery that solves a real OS-level problem (subprocess hang watchdog, dead-parent detection, FS watcher missing events on some platforms) — **KEEP but consolidate**.
- Cosmetic duplication, polling where events exist, fallbacks that hide contract violations, facades that pass through — **KILL**.

---

## Part 1: Bullshit Inventory

Every item here is a patch applied in place of a root-cause fix. They all go.

| # | Bullshit | Why it exists | Root cause to fix instead |
|---|---|---|---|
| 1 | `stripMemoryTagsFromPrompt` + `stripMemoryTagsFromJson` wrappers | Cosmetic naming; both call `stripTagsInternal` identically. | One public `stripMemoryTags(text)`. |
| 2 | Summary path only strips `<system-reminder>` | Different code path missed the fix. **SECURITY BUG**. | Funnel every ingest through the same strip call. |
| 3 | 6 sequential `.replace()` calls for 6 tags | One pass per tag. | One regex with alternation. |
| 4 | Worker-level `ProcessRegistry.ts` (528 lines) | Wraps supervisor registry with spawn helpers. | Supervisor registry is the source of truth; spawn helpers are free functions. |
| 5 | `staleSessionReaperInterval` (2 min) | Second reaper added later to catch what the first missed. | One reaper, three checks. |
| 6 | `startOrphanReaper` (30 s) | First reaper. | Same one reaper. |
| 7 | `detectStaleGenerator` helper + 5-min threshold | Watchdog for hung SDK subprocess. | Keep watchdog — it's real — but run it on the one reaper tick. |
| 8 | 15-min `MAX_SESSION_IDLE_MS` abandoned-session check | Crash recovery. | Keep — real — but same reaper. |
| 9 | 30-s `ensureProcessExit` + SIGKILL escalation ladder | Subprocesses ignore SIGTERM. | Keep SIGTERM → SIGKILL, delete the ladder framework — inline it. |
| 10 | `conversationHistory` in-memory accumulator | Multi-turn agent memory. | Keep — this is the agent's working memory, not a patch. |
| 11 | 500 ms polling `/api/sessions/status` up to 110 s in summarize hook | Hook needs to wait for SDK agent; no push mechanism. | `/api/sessions/summarize` blocks until done OR closes an SSE to the hook. Hook waits on one call. |
| 12 | `/api/context/inject` called TWICE at SessionStart (context + user-message) | Two handlers needed same data, ran in parallel. | One handler, one fetch, caller passes data to the formatter. |
| 13 | `ensureWorkerRunning` called at every hook entry | Hook has no shared state. | Cache `alive=true` in the hook process for the session. |
| 14 | `/api/context/inject` + `/api/context/semantic` both called at UserPromptSubmit | Two endpoints, two roundtrips, same session boot. | `/api/session/start` returns `{sessionDbId, contextMarkdown, semanticMarkdown}`. |
| 15 | 30-second dedup window in `storeObservation` | PostToolUse hook can fire twice on retry. | UNIQUE constraint on `(session_id, tool_use_id)`; DB rejects dup. |
| 16 | `claim-confirm` 60-s stale-reset in `PendingMessageStore.claimNextMessage` | Crash recovery mid-processing. | Keep — real — but move the reset into worker startup, not every claim call. |
| 17 | `pendingTools` map in `TranscriptEventProcessor` | Pairs `tool_use` and `tool_result` as they arrive. | JSONL lines carry `tool_use_id`; match by ID, no state map. |
| 18 | `observationHandler.execute()` HTTP loopback from transcript-watcher | Reuse of CLI handler inside worker process. | Extract `ingestObservation(payload)` helper; both call it directly. |
| 19 | 5-s rescan timer for new transcript files | `fs.watch` misses new files on some platforms. | Watch the parent directory too; add new files when created. Remove the interval. |
| 20 | `coerceObservationToSummary` fallback | Agent returns observations but no `<summary>`. | Agent contract says `<summary>` or `<skip_summary/>`. Enforce; fail the session. |
| 21 | Non-XML response detection + early-fail branch | Agent returns auth error or garbage instead of XML. | Same contract enforcement; one failure path. |
| 22 | Consecutive summary failures circuit breaker | Repeated parse failures. | Contract enforcement + RestartGuard covers this already; delete the separate counter. |
| 23 | `coerceObservationToSummary` regex chains | Summary-missing fallback only. | Delete with item 20. |
| 24 | `ChromaSync.backfillAllProjects` on every worker start | Writes sometimes fail silently, miss Chroma. | Write-path is atomic: SQLite row + Chroma doc in one `Promise.all` with hard failure. If Chroma is enabled but down at write time, mark `chroma_synced=false` on the row; backfill only rows where flag is false. No full-project scan. |
| 25 | Chroma "delete-then-add" on ID conflict | Chroma add() fails on duplicate. | Stable ID = `obs:<sqlite_rowid>`; use upsert. No conflict. |
| 26 | 3-5 granular docs per observation in Chroma | Each field separately vectorized. | One doc per observation: title + narrative + facts concatenated. Recall stays high; index is 1/4 the size. |
| 27 | Python `sqlite3` subprocess for schema repair | Historical migrations created malformed state. | Migrations are idempotent and tested; malformed state can't happen. Delete the repair path. Users on malformed DBs from v<X run a one-shot `claude-mem repair` command manually. |
| 28 | 27 migrations with copy-pasted `CREATE TABLE IF NOT EXISTS` / ALTER boilerplate | Each author wrote their own. | On fresh DB: one `schema.sql` defines current state. Migration runner only touches DBs with `schema_versions` rows < current. |
| 29 | `stripMemoryTagsFromJson` stringifies → strips → parses | Only JSON-shaped payloads. | Strip on the raw string fields (`tool_input.content`, `tool_response.output`) before serialization. One strip call per user-facing text field. |
| 30 | SearchManager `@deprecated` methods (`queryChroma`, `searchChromaForTimeline`) | Pre-Orchestrator code. | Delete. |
| 31 | SearchManager thin facade at HTTP boundary | HTTP wants markdown; Orchestrator returns structured. | Keep the display-wrap (it's real work), but delete every method that just forwards to Orchestrator. |
| 32 | `SearchOrchestrator` Chroma-fails-silently-drops-query-text fallback | Hide Chroma subprocess crashes. | Return `{error: "chroma_unavailable"}` to caller; caller decides whether to retry without query. No silent coercion. |
| 33 | 90-day default recency filter baked into `filterByRecency` | Older results are usually noise. | Orchestrator accepts `dateRange` or nothing; caller is explicit. No implicit filter. |
| 34 | `AgentFormatter` / `HumanFormatter` / `ResultFormatter` / `CorpusRenderer` — 4 independent observation walkers | Each audience implemented separately. | One `renderObservations(obs[], strategy)`; strategy = which columns/density/grouping. |
| 35 | KnowledgeAgent auto-reprime on session-expiration regex match | SDK session IDs expire silently. | Prime is cheap when corpus is loaded; just always prime on query — or store corpus content in a file the SDK loads fresh. No session_id persistence. |
| 36 | `corpus.json` stores `session_id` | Enables SDK resume. | Kill with item 35. |
| 37 | Per-route validation boilerplate × 8 files | No shared schema. | `validateBody(schema)` middleware; per-route Zod schema. |
| 38 | `/api/admin/restart` and `/api/admin/shutdown` with `process.exit(0)` | Manual worker control. | Keep (internal tooling used by version-bump). Not bullshit. |
| 39 | Rate limit 300/min in-memory IP map | Abuse limiter on localhost-only server. | Delete. Localhost trust model assumed everywhere else; this limiter doesn't add safety. |
| 40 | JSON parse 5MB limit on every request | Uploading observations that large would be pathological. | Keep (cheap), but delete any special handling for oversized — 413 is fine. |

**Total bullshit items**: 40.
**Lines expected to delete**: ~1400 (up from the 900 estimate in 03-unified-proposal.md once you audit bullshit, not just "duplication").

---

## Part 2: Clean Architecture — Root-Cause Fixes

Six decisions, applied everywhere:

**D1. One observation ingest path.** Hook, transcript-watcher, and manual-save all call `ingestObservation(payload)`. That function does: strip tags → validate privacy → INSERT `pending_messages`. No HTTP loopback inside the worker process.

**D2. One tag-strip function.** `stripMemoryTags(text)`. One regex with alternation. Called at every text-ingress point.

**D3. Zero repeating background timers** (revised 2026-04-22). Every recurring check is replaced by one of three mechanisms: (a) a subprocess-`exit`/`close` event handler for in-process subprocess death, (b) a per-session/per-operation `setTimeout` for time-bounded waits (resets on activity, fires and clears once), or (c) a boot-once reconciliation pass at worker startup for cleanup of state that can only have been orphaned by a previous worker instance. Worker-level `ProcessRegistry` facade deleted; supervisor registry is authoritative. No `setInterval` remains in `src/services/worker/` or `worker-service.ts`.

**D4. One renderer.** `renderObservations(obs[], strategy)` where `strategy` selects columns, density, and grouping. The four existing formatters become four small strategy configs.

**D5. Contract enforcement, not coercion.** Agent must return `<summary>` or `<skip_summary/>`. If it returns neither: `session.fail()`. No coerce, no circuit breaker, no non-XML fallback — RestartGuard already exists for repeated failures.

**D6. Blocking endpoints over polling.** `/api/sessions/summarize` doesn't return until the SDK has written the summary row (with a hard timeout). Hook does one request. No 500-ms loop.

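D2 can be sketched directly from the tag list shown in the 3.2 flowchart. A minimal sketch; the MAX=100 ReDoS warning from the flowchart is omitted here, and the real function signature may differ:

```typescript
// One alternation, non-greedy body, backreference close tag: a mismatched
// pair like <private>...</system-reminder> is left alone, not half-stripped.
const MEMORY_TAGS =
  /<(private|claude-mem-context|system_instruction|system-instruction|persisted-output|system-reminder)>[\s\S]*?<\/\1>/g;

function stripMemoryTags(text: string): { cleaned: string; skipped: boolean } {
  const cleaned = text.replace(MEMORY_TAGS, "").trim();
  // Empty-after-strip means the caller should skip the ingest entirely.
  return { cleaned, skipped: cleaned.length === 0 };
}
```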
---

## Part 3: New Flowcharts

Each diagram below replaces the same-named file in `01-flowcharts/`. Deleted nodes are listed under the diagram. All boxes cite target file:line for the clean implementation.

---

### 3.1 lifecycle-hooks (clean)

```mermaid
flowchart TD
Start([Claude Code lifecycle event]) --> Dispatch{Event?}

Dispatch -->|SessionStart| SS["GET /api/session/start?project=...<br/>(one call returns ctx + semantic)"]
Dispatch -->|UserPromptSubmit| UPS["POST /api/session/prompt<br/>{sessionDbId, prompt}"]
Dispatch -->|PostToolUse| PTU["POST /api/session/observation<br/>{sessionDbId, tool_use_id, name, input, output}"]
Dispatch -->|Stop| STOP["POST /api/session/end<br/>{sessionDbId, last_assistant_message}<br/>BLOCKS until summary written or 110s timeout"]

SS --> SSR["Returns {sessionDbId, contextMarkdown, semanticMarkdown}"]
SSR --> Print["Write ctx to stdout for Claude<br/>Write human-formatted copy to stderr"]

UPS --> UPSR["Returns {promptId}"]
PTU --> PTUR["Returns {observationId}"]
STOP --> STOPR["Returns {summaryId or null}"]

Print --> Done([Exit 0])
UPSR --> Done
PTUR --> Done
STOPR --> Done
```

**Deleted from old flowchart:**

- `ensureWorkerRunning` at every entry point (cache `alive` for the hook lifetime)
- `POST /api/context/semantic` separate call (folded into `/api/session/start`)
- `POST /sessions/{id}/init` SDK-start endpoint (implicit inside `/api/session/prompt`)
- `userMessageHandler` duplicate `/api/context/inject` fetch (single fetch returned from `/api/session/start` covers both)
- 500-ms poll loop on `/api/sessions/status` (replaced by blocking `/api/session/end`)
- Two-phase Stop handling (summarize then session-complete) — one endpoint, one response

**Endpoint count**: 8 → 4.

---

### 3.2 privacy-tag-filtering (clean)

```mermaid
flowchart TD
In["Any text ingress<br/>(prompt / tool_input / tool_output / assistant_message)"] --> Strip["stripMemoryTags(text)<br/>src/utils/tag-stripping.ts"]
Strip --> OneRegex["Single regex alternation:<br/>/<(private|claude-mem-context|system_instruction|system-instruction|persisted-output|system-reminder)>[\\s\\S]*?<\\/\\1>/g"]
OneRegex --> Count{Tag count > MAX=100?}
Count -->|Yes| Warn["logger.warn ReDoS suspicion"]
Count -->|No| Replace["Replace → empty string"]
Warn --> Replace
Replace --> Trim["String.trim()"]
Trim --> Empty{Empty after strip?}
Empty -->|Yes| Skip["Caller returns skipped=true"]
Empty -->|No| Pass["Return cleaned text"]

subgraph CallSites["Call sites (every text ingress uses the same function)"]
C1["ingestObservation: tool_input.content, tool_response.output"]
C2["ingestPrompt: user prompt text"]
C3["ingestSummary: last_assistant_message (CLOSES SECURITY GAP)"]
end
```

**Deleted:**

- `stripMemoryTagsFromPrompt` wrapper (20 lines)
- `stripMemoryTagsFromJson` wrapper + its stringify/parse dance (30 lines)
- Six sequential `.replace()` calls (one alternating regex instead)
- Summary-path partial strip at `summarize.ts:66` and `SessionRoutes.ts:669`

**Closes:** P1 security gap (private content reaching `session_summaries`).

---

### 3.3 sqlite-persistence (clean)

```mermaid
flowchart TD
Boot["Worker boot<br/>src/services/sqlite/Database.ts"] --> Open["new bun:sqlite"]
Open --> Pragmas["PRAGMA WAL/NORMAL/FK/mmap (one block)"]
Pragmas --> Check["SELECT version FROM schema_versions"]
Check --> Fresh{Empty?}
Fresh -->|Yes| Schema["Execute schema.sql (current state)<br/>INSERT schema_versions=N"]
Fresh -->|No| Migrate["Run migrations where id > current"]
Schema --> Ready["DB ready"]
Migrate --> Ready

Ready --> Write["INSERT observations<br/>UNIQUE(session_id, tool_use_id)"]
Write --> Conflict{UNIQUE violation?}
Conflict -->|Yes| SkipWrite["Return existing id (idempotent)"]
Conflict -->|No| Inserted["Return new id + epoch"]

Ready --> Queue["INSERT pending_messages status=pending"]
Queue --> Claim["claimNextMessage TX<br/>SELECT pending ORDER BY id LIMIT 1<br/>UPDATE status=processing"]
Claim --> Worker["Worker processes, confirms (DELETE)"]

Ready --> Read["Prepared SELECTs (indexes on created_at_epoch DESC)"]

BootOnce["Worker startup ONCE<br/>(not on every claim)"] --> Recover["UPDATE pending_messages<br/>SET status=pending<br/>WHERE status=processing<br/>(crash recovery)"]
```

**Deleted:**

- Python `sqlite3` subprocess schema-repair path (~120 lines; if someone's DB is malformed from v<6.5, they run `claude-mem repair` explicitly)
- 30-second content-hash dedup window in `storeObservation` (replaced by DB UNIQUE constraint on `(session_id, tool_use_id)`)
- `findDuplicateObservation` function (~30 lines)
- 60-s stale-reset inside `claimNextMessage` (moved to one-time boot recovery; normal claims are a pure SELECT+UPDATE)
- 24+ migrations of `CREATE TABLE IF NOT EXISTS` boilerplate collapsed into one `schema.sql` for fresh DBs; the migration runner only runs actual upgrade steps

**Tables unchanged.** FTS5 triggers unchanged. WAL mode unchanged.

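The queue semantics above can be modeled in a few lines. An in-memory sketch, not the real bun:sqlite-backed PendingMessageStore: claims are a plain pending → processing transition, and crash recovery is one pass at boot rather than a staleness check inside every claim.

```typescript
type Status = "pending" | "processing";
interface PendingMessage { id: number; status: Status; }

/** Boot-once: any row still 'processing' was orphaned by a crashed worker. */
function recoverInFlight(queue: PendingMessage[]): number {
  let recovered = 0;
  for (const m of queue) {
    if (m.status === "processing") {
      m.status = "pending";
      recovered++;
    }
  }
  return recovered;
}

/** Normal claim: lowest-id pending message, no staleness logic. */
function claimNextMessage(queue: PendingMessage[]): PendingMessage | undefined {
  const next = queue
    .filter((m) => m.status === "pending")
    .sort((a, b) => a.id - b.id)[0];
  if (next) next.status = "processing";
  return next;
}
```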
---

### 3.4 vector-search-sync (clean)

```mermaid
flowchart TD
Write["Observation written to SQLite<br/>id=42, session_id=abc"] --> FlagCheck{Chroma enabled?}
FlagCheck -->|No| End([Skip])
FlagCheck -->|Yes| Format["formatDoc<br/>text = title + narrative + facts<br/>id = 'obs:42'"]
Format --> Upsert["chroma_mcp.upsert(id, text, metadata)<br/>(stable ID = stable upsert)"]
Upsert --> OK{Success?}
OK -->|Yes| Mark["UPDATE observations SET chroma_synced=1 WHERE id=42"]
OK -->|No| LogFail["Leave chroma_synced=0<br/>logger.warn"]
Mark --> End
LogFail --> End

BootOnce["Worker startup ONCE"] --> CheckUnsync["SELECT id FROM observations<br/>WHERE chroma_synced=0<br/>LIMIT 1000"]
CheckUnsync --> LoopBackfill["For each: formatDoc → upsert → mark"]

Query["User search query"] --> QueryChroma["chroma_mcp.query(project, text, n)"]
QueryChroma --> Hydrate["SELECT * FROM observations WHERE id IN (...)"]
Hydrate --> Return["Return results"]
```

**Deleted:**

- `ensureBackfilled` + `runBackfillPipeline` full-project scan on every startup (~200 lines)
- `getExistingChromaIds` metadata index scan (~80 lines)
- Delete-then-add for ID conflicts (replaced by `upsert`)
- Granular per-field doc formatter (3-5 docs per observation → 1 doc per observation)
- `backfillAllProjects` fire-and-forget on worker boot (replaced by targeted `WHERE chroma_synced=0`)

**Adds:** `chroma_synced` boolean column on `observations`. Schema migration.

**Effect:** Chroma index size drops ~70%. Backfill cost drops from "every startup, every project, full scan" to "boot once, only unsynced rows."

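The write path plus flag can be sketched as below. The `upsert` callback is a stand-in for the chroma_mcp call, and the row shape is illustrative, not the real observations schema:

```typescript
interface ObservationRow {
  id: number;
  title: string;
  narrative: string;
  facts: string[];
  chroma_synced: boolean;
}

// Stable id derived from the SQLite rowid means upsert never conflicts;
// a failed upsert just leaves chroma_synced=false for the boot-once backfill.
function syncObservation(
  row: ObservationRow,
  upsert: (id: string, text: string) => boolean,
): void {
  const docId = `obs:${row.id}`;
  const text = [row.title, row.narrative, ...row.facts].join("\n");
  row.chroma_synced = upsert(docId, text);
}

/** What the boot-once backfill would select instead of a full-project scan. */
function unsyncedIds(rows: ObservationRow[]): number[] {
  return rows.filter((r) => !r.chroma_synced).map((r) => r.id);
}
```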
---

### 3.5 context-injection-engine (clean)

```mermaid
flowchart TD
Route["GET /api/session/start?project=X"] --> Gen["generateContext(projects, forHuman=false)<br/>ContextBuilder.ts"]
Route --> GenH["generateContext(projects, forHuman=true)"]
Gen --> Mode["ModeManager.getActiveMode()"]
GenH --> Mode
Mode --> Fetch["SELECT observations + summaries<br/>filtered by mode types"]
Fetch --> Budget["calculateTokenEconomics"]
Budget --> Render["renderObservations(obs, strategy)<br/>(U2 unified renderer)"]
Render --> Strategy{strategy?}
Strategy -->|AgentContextStrategy| AgentOut["Compact markdown for LLM"]
Strategy -->|HumanContextStrategy| HumanOut["ANSI-colored terminal"]
AgentOut --> Return["Return contextMarkdown"]
HumanOut --> Return

Semantic["POST /api/session/start (also includes semantic)"] --> SearchO["SearchOrchestrator.search(query, limit=5)"]
SearchO --> Strategy
```

**Deleted:**

- Separate `renderEmptyState`, `renderHeader`, `renderTimeline`, `renderPreviouslySection`, `renderFooter` branches — one strategy definition carries the shape
- `formatDay` branching (forHuman split pushed to strategy)
- Independent `AgentFormatter` vs `HumanFormatter` traversals — one renderer, two strategies

**Kept user-facing:** Agent format (LLM), Human format (terminal ANSI), token budgets, mode filtering, semantic injection.

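The one-renderer-many-strategies shape (D4) reduces to strategies as data. A minimal sketch; the strategy fields here are illustrative, while the real configs would carry columns, density, and grouping:

```typescript
interface Obs { title: string; project: string; }

interface RenderStrategy {
  header: (project: string) => string;
  line: (o: Obs) => string;
}

// One traversal; the strategy decides the shape of each piece.
function renderObservations(obs: Obs[], s: RenderStrategy): string {
  if (obs.length === 0) return "";
  return [s.header(obs[0].project), ...obs.map(s.line)].join("\n");
}

const AgentContextStrategy: RenderStrategy = {
  header: (p) => `## ${p}`,     // compact markdown for the LLM
  line: (o) => `- ${o.title}`,
};

const HumanContextStrategy: RenderStrategy = {
  header: (p) => `\x1b[1m${p}\x1b[0m`, // ANSI bold for the terminal
  line: (o) => `  • ${o.title}`,
};
```

The four existing formatters collapse into four such configs over the same walk.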
---

### 3.6 hybrid-search-orchestration (clean)

```mermaid
flowchart TD
A["GET /api/search?q=...&project=...&concept=..."] --> B["SearchRoutes.handleSearch"]
B --> C["SearchOrchestrator.search(params)"]
C --> D{Decision}

D -->|q + Chroma enabled| Semantic["ChromaSearchStrategy.search"]
D -->|q + Chroma disabled| Err["Return 503<br/>error=chroma_unavailable<br/>(NO silent fallback)"]
D -->|no q| FilterOnly["SQLiteSearchStrategy.search"]
D -->|concept/type/file| Hybrid["HybridSearchStrategy.search<br/>(SQLite filter + Chroma rank)"]

Semantic --> Hydrate["Hydrate from SQLite"]
FilterOnly --> Hydrate
Hybrid --> Hydrate

Hydrate --> Fmt{format?}
Fmt -->|json| J["Raw JSON"]
Fmt -->|markdown| M["renderObservations(results, SearchResultStrategy)"]
```

**Deleted:**

- `SearchManager` thin facade (~300 lines; route handler talks to Orchestrator directly)
- `SearchManager.queryChroma`, `SearchManager.searchChromaForTimeline` (`@deprecated`)
- Silent Chroma-fails-drops-query fallback (returns 503 now)
- 90-day default recency filter (callers pass `dateRange` explicitly or get all)
- `filterByRecency` helper

**Kept user-facing:** All three search paths, markdown + json formats, per-concept/type/file filters, timeline builder.

---

### 3.7 response-parsing-storage (clean)

```mermaid
flowchart TD
A["SDK agent returns text"] --> B["processAgentResponse"]
B --> C["parseAgentXml(text, { requireSummary })<br/>src/sdk/parser.ts"]
C --> D{Valid?}
D -->|No| Fail["session.recordFailure()<br/>Mark pending_messages FAILED<br/>RestartGuard handles repeats"]
D -->|Yes| Store["sessionStore.storeObservations(parsed)<br/>atomic TX"]
Store --> Confirm["pendingStore.confirmProcessed(ids)<br/>DELETE after commit"]
Confirm --> Sync["getChromaSync().syncObservation / syncSummary<br/>fire-and-forget"]
Confirm --> SSE["SSEBroadcaster.broadcast"]
Confirm --> Folder["Optional: writeAgentsMd (flagged)"]
```

**Deleted:**

- `coerceObservationToSummary` fallback (~40 lines) — agent must return `<summary>` or `<skip_summary/>`
- `parseObservations` and `parseSummary` as two separate functions → one `parseAgentXml(text, opts)` driven by a tag registry
- Non-XML early-fail special case (collapsed into single `parseAgentXml` → `{valid: false, reason}` response)
- `consecutiveSummaryFailures` counter + circuit-breaker logic (RestartGuard covers this already)
- Null-normalization hacks between parser and store (parser returns structured, never null)

**Kept:** Atomic transaction for obs + summary, content-hash dedup *within the parse output* (not window-based), SSE broadcast, Chroma sync trigger, CLAUDE.md folder sync (feature flagged).

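Contract enforcement instead of coercion (D5) is just a structured result with no fallback branch. A minimal sketch, not the real tag-registry-driven parser:

```typescript
// The agent must return <summary> or <skip_summary/>; anything else is a
// structured failure, never a coerced summary.
type ParseResult =
  | { valid: true; summary: string | null }
  | { valid: false; reason: string };

function parseAgentXml(text: string): ParseResult {
  const m = text.match(/<summary>([\s\S]*?)<\/summary>/);
  if (m) return { valid: true, summary: m[1].trim() };
  if (/<skip_summary\s*\/>/.test(text)) return { valid: true, summary: null };
  // Auth errors, garbage, and missing tags all land on this one path;
  // the caller records the failure and lets RestartGuard handle repeats.
  return { valid: false, reason: "no <summary> or <skip_summary/> in response" };
}
```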
---

### 3.8 session-lifecycle-management (clean) — **BIGGEST CULL**

```mermaid
flowchart TD
A["POST /api/session/prompt"] --> B["SessionManager.initializeSession(sessionDbId)"]
B --> C{In memory?}
C -->|Yes| Use["Use cached"]
C -->|No| Create["Create ActiveSession<br/>spawn SDK subprocess<br/>register in supervisor.ProcessRegistry"]
Use --> Gen["SDKAgent.generateResponse iterator"]
Create --> Gen

Q["POST /api/session/observation"] --> Enqueue["ingestObservation(payload)<br/>strip → validate → INSERT pending_messages<br/>emit 'message' event"]
Enqueue --> Wake["iterator.wakeUp()"]

Gen --> Claim["claimNextMessage TX"]
Claim --> YieldMsg["yield message"]
YieldMsg --> Update["lastActivity = now"]
Update --> SDKProcess["SDK processes → ResponseProcessor confirms"]
SDKProcess --> Claim

Claim -->|queue empty + idle≥3min| Idle["signal abort"]
Idle --> Exit["iterator exits"]
Exit --> Unreg["Auto-unregister (process 'exit' event)"]
Unreg --> Delete["SessionManager.delete"]

End["POST /api/session/end"] --> Queue_Sum["queueSummarize as normal pending_message"]
Queue_Sum --> WaitSum["await summary_stored flag OR 110s timeout"]
WaitSum --> Abort["abortController.abort → iterator exits"]
Abort --> Delete

subgraph EventDriven["Event-driven cleanup — no repeating timers"]
EH1["child.on('exit') on SDK spawn<br/>ProcessRegistry.ts:479"] --> Unreg2["unregisterProcess(pid)"]
EH2["mcpProcess.once('exit')<br/>worker-service.ts:530"] --> Unreg3["supervisor.unregisterProcess('mcp-server')"]
IdleT["Per-iterator 3-min setTimeout<br/>SessionQueueProcessor.ts:6<br/>(resets on every chunk at :51-52, :62-63)"] --> IdleFire["onIdleTimeout → abortController.abort<br/>→ child.on('exit') fires → Unreg"]
AbandT["Per-session setTimeout(deleteSession, 15min)<br/>scheduled on last-generator-completion<br/>cleared on new activity"] --> Delete
end

EH1 -.-> Delete
EH2 -.-> Delete
IdleFire -.-> Delete

subgraph BootOnceBlock["Worker startup — boot-once reconciliation"]
BootOnce["Worker startup"] --> Recover["UPDATE pending_messages status processing → pending<br/>(crash recovery)"]
Recover --> BootOrphans["killSystemOrphans(): kill ppid=1 Claude processes<br/>from previous crashed worker instance<br/>(ProcessRegistry.ts:315-344, called ONCE)"]
BootOrphans --> BootPrune["supervisor.pruneDeadEntries():<br/>drop registry entries for PIDs no longer in OS"]
BootPrune --> BootSQL["clearFailedOlderThan(1h)<br/>(one-shot cleanup of stale failed rows)"]
end
```
|
||
|
||
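The claim step in the loop above can be sketched in miniature. This is an in-memory model of the self-healing semantics only — the real `claimNextMessage` is a single SQL transaction expressing the same rule as `worker_pid NOT IN (live_worker_pids)`; the `PendingMessage` shape and the function signature here are illustrative.

```typescript
type PendingMessage = {
  id: number;
  status: "pending" | "processing";
  workerPid: number | null;
};

// A message is claimable when it is still pending, OR when it was claimed by
// a worker whose PID is no longer alive — which is exactly how a crashed
// worker's in-flight message heals without a 60-s stale-reset sweep.
function claimNextMessage(
  queue: PendingMessage[],
  livePids: Set<number>,
  myPid: number,
): PendingMessage | undefined {
  const claimable = queue.find(
    (m) =>
      m.status === "pending" ||
      (m.status === "processing" &&
        m.workerPid !== null &&
        !livePids.has(m.workerPid)),
  );
  if (!claimable) return undefined;
  claimable.status = "processing"; // the SQL version does this atomically
  claimable.workerPid = myPid;
  return claimable;
}
```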
**Deleted:**

- `src/services/worker/ProcessRegistry.ts` (facade, 528 lines) — supervisor registry is source of truth
- `staleSessionReaperInterval` (separate 2-min timer)
- `startOrphanReaper` (separate 30-s timer)
- `reapStaleSessions` / `reapHungGenerators` / `reapAbandonedSessions` as **background-scanner** sweeps — replaced by per-session `setTimeout`s that fire at the session itself, not from a global scanner
- `reapOrphanedProcesses` as a separate function — folded into boot-once `pruneDeadEntries` + per-spawn `exit` handlers
- `killIdleDaemonChildren` as a runtime sweep — its job is covered by subprocess `exit` handlers during runtime and by boot-once `killSystemOrphans` for ppid=1 leftovers from a prior worker crash
- `killSystemOrphans` as a **repeating** call — function kept, but called exactly once at boot (it can only catch state that predates this worker's existence)
- `ensureProcessExit` 5-s escalation scaffolding — inline the SIGTERM→wait 5s→SIGKILL in one function (remains per-operation, not repeating)
- 60-s self-healing `UPDATE stale → pending` inside `claimNextMessage` — runs once at boot instead
- `MAX_SESSION_IDLE_MS` global (just a constant — consolidated into per-session-timer config)
- Explicit `PRAGMA wal_checkpoint(PASSIVE)` call — SQLite's default `wal_autocheckpoint=1000` pages is the contract (`Database.ts:162-168` sets no override, so the default is live)
- Periodic `clearFailedOlderThan(1h)` — moved to boot-once in plan 02

**Repeating background timers**: 2 → 0.

**Process-registry files**: 2 → 1.

**Process-lifecycle lines**: ~900 → ~400.

**Kept user-facing:** Session init/observe/end, async SDK processing, subprocess crash recovery (via `exit` handlers), hung-generator cleanup (via the per-session idle timeout that already exists at `SessionQueueProcessor.ts:6`), abandoned-session cleanup (via per-session `setTimeout`), cross-restart orphan cleanup (via boot-once `killSystemOrphans`). Zero functional loss.

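The per-session timers that replace the reaper scans can be sketched as a small bookkeeping class. The 15-minute constant and the schedule-on-last-generator-completion / clear-on-new-activity behavior come from the plan; the class name and the `deleteSession` callback shape are illustrative. A real Node implementation would also `.unref()` each timer so a pending cleanup never keeps the worker process alive.

```typescript
// Per-session abandoned-session cleanup: no global scanner, no setInterval.
const ABANDONED_AFTER_MS = 15 * 60 * 1000;

class SessionJanitor {
  private timers = new Map<string, ReturnType<typeof setTimeout>>();

  constructor(private deleteSession: (id: string) => void) {}

  // Called when a session's last generator completes: schedule deletion.
  onGeneratorComplete(sessionId: string): void {
    this.clearTimer(sessionId);
    this.timers.set(
      sessionId,
      setTimeout(() => {
        this.timers.delete(sessionId);
        this.deleteSession(sessionId);
      }, ABANDONED_AFTER_MS),
    );
  }

  // Called on any new activity: the session is not abandoned after all.
  onActivity(sessionId: string): void {
    this.clearTimer(sessionId);
  }

  private clearTimer(sessionId: string): void {
    const t = this.timers.get(sessionId);
    if (t !== undefined) {
      clearTimeout(t);
      this.timers.delete(sessionId);
    }
  }

  pendingCount(): number {
    return this.timers.size;
  }
}
```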
---

### 3.9 http-server-routes (clean)

```mermaid
flowchart TD
A([Request on :37777]) --> B["JSON parse 5MB<br/>CORS localhost<br/>request logger"]
B --> C{Route match}
C -->|Yes| D["validateBody(schema) middleware<br/>(Zod per route)"]
C -->|No| NF[404]
D --> E{Valid?}
E -->|No| BR["400 with field errors"]
E -->|Yes| F["BaseRouteHandler.wrapHandler"]
F --> G["Service call"]
G --> H{Response}
H -->|JSON| J1["res.json"]
H -->|SSE| J2["text/event-stream<br/>SSEBroadcaster register"]
H -->|HTML/file| J3["res.sendFile"]
G -->|error| Err["Global errorHandler → { error, message, code }"]

subgraph Routes["Route inventory (user-facing, unchanged)"]
R1["ViewerRoutes: /, /health, /stream"]
R2["SearchRoutes: /api/search, /api/timeline, /api/context/*"]
R3["SessionRoutes: /api/session/* (4 endpoints — see 3.1)"]
R4["DataRoutes: /api/observations, /api/summaries, /api/prompts, /api/stats, /api/projects"]
R5["SettingsRoutes: /api/settings, /api/mcp/*, /api/branch/*"]
R6["MemoryRoutes: /api/memory/save"]
R7["CorpusRoutes: /api/corpus/*"]
R8["LogsRoutes: /api/logs"]
end
```

**Deleted:**

- In-memory rate limiter (300/min IP map) — the localhost trust model used everywhere else makes this theater
- Per-route hand-rolled validation (the Zod middleware replaces it)
- Synchronous file read for `/` and `/api/instructions` (replaced with a cached `Buffer` loaded at boot)
- Legacy `SessionRoutes.handleObservations` (no privacy strip) endpoint at `SessionRoutes.ts:378`

**Kept:** All user-facing routes, SSE, middleware chain, admin endpoints (used by tooling).

---

### 3.10 viewer-ui-layer (clean)

```mermaid
flowchart TD
HTTP["GET /"] --> HTML["viewer.html (cached at boot)"]
HTML --> React["React mount"]
React --> SSE["useSSE → EventSource('/stream')"]
SSE --> Initial["Receive initial_load catalog"]
Initial --> Feed["Feed renders<br/>IntersectionObserver → loadMore"]
Feed --> Page["GET /api/observations?offset&limit"]
Page --> Merge["useMemo dedup (project, id)<br/>live SSE + paginated"]
Merge --> Cards["ObservationCard / SummaryCard / PromptCard"]

SSE -->|new_observation / new_summary / new_prompt| Cards

Settings["ContextSettingsModal save"] -->|POST /api/settings| API

SSE -->|disconnect| Reconnect["EventSource auto-reconnect"]
Reconnect --> SSE
```

**Deleted:**

- (Nothing — this subsystem is clean. The only internal cosmetic issue is the `useSSE().observations` + `paginatedObservations` dedup, and that is a correct pattern for merging live and historical data.)

**Kept:** Everything. User-facing.

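The `useMemo` dedup step in the diagram is essentially this pure merge, keyed by `(project, id)` with live SSE items winning a collision. The `Observation` shape is trimmed to what the merge needs.

```typescript
interface Observation {
  project: string;
  id: number;
  title?: string;
}

// Live SSE items and paginated items can overlap (a freshly-broadcast row is
// also returned by the next page fetch), so rows are keyed by (project, id).
function mergeFeed(live: Observation[], paginated: Observation[]): Observation[] {
  const seen = new Map<string, Observation>();
  // Live items first, so a re-fetched page never duplicates or overwrites them.
  for (const obs of [...live, ...paginated]) {
    const key = `${obs.project}\u0000${obs.id}`;
    if (!seen.has(key)) seen.set(key, obs);
  }
  return [...seen.values()];
}
```

In the viewer this would run inside `useMemo` over `useSSE().observations` and `paginatedObservations`, recomputing only when either array changes.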
---

### 3.11 knowledge-corpus-builder (clean)

```mermaid
flowchart TD
A["POST /api/corpus<br/>{name, filters}"] --> B["CorpusBuilder.build"]
B --> C["SearchOrchestrator.search(filters)"]
C --> D["SessionStore.getObservationsByIds"]
D --> E["renderObservations(obs, CorpusDetailStrategy)<br/>(U2 unified renderer)"]
E --> F["CorpusStore.write(~/.claude-mem/corpora/{name}.corpus.json)"]

Q["POST /api/corpus/:name/query {question}"] --> R["CorpusStore.read(name)"]
R --> S["SDK.query(systemPrompt=corpus, userPrompt=question)<br/>(fresh query — no session resume)"]
S --> T["Return answer"]

Re["POST /api/corpus/:name/rebuild"] --> B
Del["DELETE /api/corpus/:name"] --> DelFile["CorpusStore.delete"]
```

**Deleted:**

- `KnowledgeAgent.prime` as a distinct operation — build IS prime (corpus.json is the prime artifact)
- `session_id` persisted in corpus.json
- Auto-reprime on regex-matched expiration (~40 lines)
- `reprime` endpoint (rebuild covers it)

**Kept user-facing:** Build, query, rebuild, delete. Same HTTP surface minus `/prime` and `/reprime`.

**Cost note:** Every query re-loads the corpus as a system prompt. The Claude Agent SDK with prompt caching makes this cheap (the cached system prompt TTL is 5 min). Cost is approximately equal to the session-resume path, without the session-expiration brittleness.

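Since `:name` ends up as a filename under `~/.claude-mem/corpora/`, the store needs to reject names that could escape that directory. A hypothetical guard — the allow-list pattern, length cap, and error handling here are assumptions for illustration, not the project's actual code:

```typescript
// Corpus names become filenames, so constrain them to a safe allow-list.
const CORPUS_NAME = /^[a-zA-Z0-9][a-zA-Z0-9._-]{0,63}$/;

function corpusPath(baseDir: string, name: string): string {
  // The regex permits dots, so ".." must be rejected explicitly.
  if (!CORPUS_NAME.test(name) || name.includes("..")) {
    throw new Error(`invalid corpus name: ${JSON.stringify(name)}`);
  }
  return `${baseDir}/${name}.corpus.json`;
}
```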
---

### 3.12 transcript-watcher-integration (clean)

```mermaid
flowchart TD
Boot["Worker startup"] --> LoadCfg["loadTranscriptWatchConfig"]
LoadCfg --> ParentWatch["fs.watch(parent_dir, {recursive})<br/>watches existing files AND new files"]
ParentWatch --> OnChange([File event])
OnChange --> ReadDelta["FileTailer.readNewBytes"]
ReadDelta --> SplitLines["Split by \\n"]
SplitLines --> Parse["JSON.parse line"]
Parse --> Match["processor.matchesRule(schema)"]
Match --> Route{event type}

Route -->|session_init| Init["sessionManager.initializeSession(sessionDbId)<br/>(direct, no HTTP loopback)"]
Route -->|tool_use + tool_result paired by tool_use_id| Ingest["ingestObservation({sessionDbId, tool_use_id, name, input, output})"]
Route -->|session_end| EndFlow["sessionManager.endSession(sessionDbId)<br/>→ queueSummarize (same as hook path)"]

EndFlow --> WriteCtx["Optional: writeAgentsMd (Cursor flag)"]

Ingest --> Queue["Same pending_messages queue"]
```

**Deleted:**

- 5-second rescan timer for new files (parent-directory recursive watch catches new files natively)
- `pendingTools` state map (lines match by `tool_use_id`; no per-session pairing map needed)
- `observationHandler.execute()` HTTP loopback (direct `ingestObservation` call)
- `isProjectExcluded` re-check inside the transcript processor (done once in `ingestObservation`)

**Kept user-facing:** Cursor, OpenCode, Gemini-CLI transcript ingestion. Summary generation at session end. AGENTS.md write.

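The `readNewBytes` → split-by-`\n` step has one subtlety worth showing: a file write can land mid-line, so the tailer must carry a partial trailing line until a later event completes it. A sketch of just that carry logic — the per-file byte-offset bookkeeping and `fs.watch` plumbing are assumed, and the class name is illustrative:

```typescript
// Buffers appended transcript bytes and emits only completed JSONL lines.
class LineBuffer {
  private remainder = "";

  // Feed a newly-read chunk; return the complete lines it finishes.
  push(chunk: string): string[] {
    const combined = this.remainder + chunk;
    const parts = combined.split("\n");
    // The last part has no trailing \n yet: hold it until the next read.
    this.remainder = parts.pop() ?? "";
    return parts.filter((line) => line.length > 0);
  }
}
```

Each completed line then goes through `JSON.parse` and `matchesRule(schema)` exactly as the diagram shows; a truncated line is simply never emitted, so the parser never sees half a JSON object.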
---

## Part 4: Timer Census — Before vs After (revised 2026-04-22)

| Timer | Before | After |
|---|---|---|
| `staleSessionReaperInterval` (2 min) | ✓ | ✗ deleted (replaced by per-session `setTimeout` for abandoned sessions) |
| `startOrphanReaper` (30 s) | ✓ | ✗ deleted (replaced by `child.on('exit')` handlers + boot-once reconciliation) |
| Transcript rescan (5 s) | ✓ | ✗ parent watch (event-driven `fs.watch` recursive) |
| Summary poll (500 ms × 220 iter) | ✓ | ✗ endpoint blocks |
| Periodic `clearFailedOlderThan(1h)` (2 min) | ✓ | ✗ deleted (moved to boot-once in plan 02) |
| Explicit `PRAGMA wal_checkpoint(PASSIVE)` (2 min) | ✓ | ✗ deleted outright (SQLite `wal_autocheckpoint=1000` default is the contract) |
| Chroma MCP backoff reconnect | ✓ | ✓ (event-driven on disconnect — not a repeating sweeper) |
| Claim-confirm 60-s stale reset | ✓ per claim | ✗ replaced by boot-once `recoverStuckProcessing()` |
| `killSystemOrphans` ppid=1 sweep | ✓ (inside 30-s interval) | ✗ repeating form deleted; function kept and called ONCE at boot (catches leftovers from a prior worker crash) |
| Boot-once `supervisor.pruneDeadEntries` | — | ✓ NEW (catches any registry entry whose PID died before we saw the `exit` event, e.g., across worker restart) |
| Per-iterator idle 3-min `setTimeout` | ✓ | ✓ (per-session, resets on every chunk — now the only defense against hung SDK generators) |
| Per-session abandoned `setTimeout(deleteSession, 15min)` | — | ✓ NEW (per-session; scheduled on last-generator-completion; cleared on new activity) |
| `child.on('exit')` on SDK / MCP spawn | ✓ | ✓ (already wired; now the sole runtime subprocess-death signal) |
| Generator-exit 30-s wait | ✓ | ✓ (per-delete `Promise.race`, not repeating) |
| `ensureProcessExit` 5-s escalate | ✓ | ✓ (inline SIGTERM→SIGKILL, per-operation) |
| EventSource auto-reconnect (UI) | ✓ | ✓ (browser-owned) |

**Repeating background timers:** 3 → **0**.

**Polling loops:** 1 → 0.

**Per-operation timeouts:** unchanged (they're correct).

**Boot-once reconciliation steps:** 3 (recoverStuckProcessing; killSystemOrphans + pruneDeadEntries; clearFailedOlderThan).

**Why zero is achievable** (investigation 2026-04-22, see `08-reconciliation.md` Part 4 cross-check):

1. In-process subprocess death is covered by `child.on('exit')` handlers at `ProcessRegistry.ts:479` (SDK) and `worker-service.ts:530` (MCP). No scanner needed.
2. Hung SDK generators are caught by the per-iterator 3-min `setTimeout` at `SessionQueueProcessor.ts:6` (resets on every chunk at `:51-52, :62-63`). The background `reapHungGenerators` sweep was redundant with it.
3. Cross-restart orphans (ppid=1 Claude processes from a prior crashed worker) are the only case event handlers cannot catch — but they can only exist *before* this worker started, so a single boot-time `killSystemOrphans()` call covers them exhaustively.
4. Abandoned sessions (no activity for 15 min with no pending work) are now detected at the session itself via a per-session `setTimeout(deleteSession, 15min)` set on last-generator-completion and cleared on new activity — no global scanner.
5. SQLite housekeeping: `clearFailedOlderThan(1h)` becomes boot-once (`pending_messages` has no constraint needing periodic purge); explicit `wal_checkpoint(PASSIVE)` is deleted because SQLite's default `wal_autocheckpoint=1000` pages is active (`Database.ts:162-168` sets no override).

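The inlined `ensureProcessExit` escalation from the census can be sketched with injected process operations. The 5 s grace period matches the plan; the real version would use `process.kill(pid, signal)` and treat a non-throwing `process.kill(pid, 0)` as "alive" — the `ProcessOps` interface, poll interval, and return values here are illustrative.

```typescript
interface ProcessOps {
  kill(pid: number, signal: "SIGTERM" | "SIGKILL"): void;
  isAlive(pid: number): boolean;
}

const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Per-operation escalation: SIGTERM, wait up to the grace period, then
// SIGKILL. Runs once per shutdown/delete — never on a repeating timer.
async function ensureProcessExit(
  pid: number,
  ops: ProcessOps,
  graceMs = 5000,
  pollMs = 100,
): Promise<"exited" | "killed"> {
  ops.kill(pid, "SIGTERM");
  const deadline = Date.now() + graceMs;
  while (Date.now() < deadline) {
    if (!ops.isAlive(pid)) return "exited"; // cooperative shutdown worked
    await sleep(pollMs);
  }
  ops.kill(pid, "SIGKILL"); // grace period elapsed: force it
  return "killed";
}
```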
---

## Part 5: Deletion Totals

| Area | Lines deleted | Lines added | Net |
|---|---|---|---|
| `ProcessRegistry.ts` facade | -528 | — | -528 |
| `process-spawning.ts` extracted helpers | — | +150 | +150 |
| `staleSessionReaperInterval` + `startOrphanReaper` + `reapStaleSessions` body | -380 | +280 (UnifiedReaper) | -100 |
| `stripMemoryTagsFromPrompt` / `FromJson` wrappers + 6 regex passes | -60 | +15 | -45 |
| Summary-path privacy gap fix | — | +3 | +3 |
| `AgentFormatter` / `HumanFormatter` / `ResultFormatter` / `CorpusRenderer` traversals | -600 | +320 (renderer + 4 strategies) | -280 |
| `parseObservations` + `parseSummary` + `coerceObservationToSummary` | -280 | +150 (unified `parseAgentXml`) | -130 |
| Non-XML fallback + circuit breaker | -80 | — | -80 |
| SearchManager thin facade + `@deprecated` methods | -300 | +40 (display-wrap only) | -260 |
| Chroma silent-fallback + 90-day filter + granular docs + delete-then-add | -220 | +60 | -160 |
| Chroma backfill full-project scan | -200 | +40 (`chroma_synced` flag backfill) | -160 |
| 30-s content-hash dedup window + `findDuplicateObservation` | -80 | +10 (UNIQUE constraint + migration) | -70 |
| Python sqlite3 schema repair | -120 | — | -120 |
| 24+ migration boilerplate collapsed into schema.sql + upgrade-only migrations | -700 | +400 | -300 |
| Summarize 500-ms polling hook | -60 | +20 (blocking endpoint) | -40 |
| Double `/api/context/*` fetches → `/api/session/start` | -120 | +60 | -60 |
| Transcript 5-s rescan + `pendingTools` map + HTTP loopback | -150 | +40 | -110 |
| Rate-limit middleware | -40 | — | -40 |
| `KnowledgeAgent.prime` + `session_id` persistence + auto-reprime | -140 | +30 | -110 |
| Per-route validation boilerplate | -320 | +200 (Zod middleware + schemas) | -120 |
| **TOTAL** | **-4378** | **+1818** | **-2560** |

Estimate: ~4,400 lines removed, ~1,800 lines added, net ~2,500 lines deleted. Actual numbers depend on how aggressively the schema.sql consolidation goes; a conservative net is ~1,800.

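The dedup-window row above trades ~80 lines of time-window logic for a constraint. Its semantics, modeled in memory with a `Set` keyed the same way — in SQLite the `UNIQUE(memory_session_id, content_hash)` constraint plus `INSERT … ON CONFLICT DO NOTHING` does this atomically, so the function below is only an illustration of the behavior:

```typescript
// Returns true when a row was inserted, false when the constraint made the
// insert a no-op (i.e., ON CONFLICT DO NOTHING fired).
function insertObservation(
  seen: Set<string>,
  memorySessionId: string,
  contentHash: string,
): boolean {
  const key = `${memorySessionId}:${contentHash}`;
  if (seen.has(key)) return false;
  seen.add(key);
  return true;
}
```

Unlike the deleted 30-second window, the constraint dedups identical content regardless of how far apart the two inserts land.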
---

## Part 6: Execution Order

Clean-architecture migrations must land in dependency order:

1. **U6 — `stripMemoryTags`** (trivial; unblocks U1) [<1 hr]
2. **U1 — Summary privacy gap** (3 lines; security) [<1 hr]
3. **Ingest helper** (`ingestObservation`, `ingestPrompt`, `ingestSummary`) — consolidates privacy + queue. Foundation for everything else. [1 day]
4. **U5 + response-parser unification** — delete `coerceObservationToSummary`, unify `parseAgentXml`. [1 day]
5. **U7 + SearchOrchestrator direct routing** — delete the SearchManager facade. [1 day]
6. **U4 — delete worker ProcessRegistry facade** — do before U3 because U3 depends on a single registry. [2 days]
7. **U3 — Zero-timer session lifecycle** (revised 2026-04-22) — delete `staleSessionReaperInterval` + `startOrphanReaper`; replace with (a) per-session `setTimeout(deleteSession, 15min)` for abandoned sessions, (b) boot-once `killSystemOrphans()` + `supervisor.pruneDeadEntries()` for cross-restart orphans, (c) trust in existing `child.on('exit')` handlers + the per-iterator 3-min idle `setTimeout` for in-process cleanup. No `ReaperTick`, no `setInterval` in `src/services/worker/`. [1 day]
8. **Transcript cleanup** — direct `ingestObservation`, parent watch, drop the `pendingTools` map. [1 day]
9. **U2 — unified `renderObservations`** — largest refactor, lowest risk (pure code reorg, no behavior change). [3 days]
10. **SQLite consolidation** — UNIQUE constraint + schema.sql + delete Python repair + one-shot boot recovery. [2 days]
11. **Chroma rewrite** — stable IDs, `chroma_synced` flag, delete backfill scan. [2 days]
12. **Endpoint consolidation** — `/api/session/start`, blocking `/api/session/end`. [2 days]
13. **Zod validator middleware** — replaces per-route validation. [2 days]
14. **KnowledgeAgent simplification** — drop the prime endpoint, drop `session_id`. [1 day]
15. **HTTP cleanup** — delete the rate limiter, cache static files. [<1 day]

Total estimated work: ~18 engineer-days for the full clean-through. The first three items (`stripMemoryTags`, the privacy gap, the ingest helper) can land in one day and close the security bug.

---

## Part 7: What This Does NOT Cull

For the record, the following are **not** bullshit and stay as-is:

- **Pending-messages queue** (async pipeline between hook ack and SDK processing)
- **Fire-and-forget Chroma sync from the write path** (writes must not block on the vector index)
- **SSE broadcasting** (live UI updates)
- **WAL mode + FTS5 triggers** (correct SQLite design)
- **Graceful shutdown with SIGTERM→SIGKILL escalation** (correct process lifecycle)
- **RestartGuard** (crash-loop prevention)
- **Mode-based filtering** (user-facing feature)
- **Per-project Chroma collections** (multi-tenant semantics)
- **Content-hash on observations** (useful for cross-machine dedup, just not the 30-s window)
- **EventSource auto-reconnect** (correct networking)
- **Agent provider abstraction** (SDKAgent / OpenRouterAgent / GeminiAgent)
- **Transcript schema-driven classification** (Cursor, OpenCode, etc.)
- **Human vs Agent context formats** (user-facing output shapes)
- **Admin restart/shutdown endpoints** (used by version-bump)

Everything above is real work. Everything deleted above it is accumulated patch cruft.