perf: streamline worker startup and consolidate database connections (#2122)
* docs: pathfinder refactor corpus + Node 20 preflight
Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 01 — data integrity
Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.
- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
UNIQUE(memory_session_id, content_hash) on observations; dedup
duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
and the 60-s stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/01-data-integrity.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 02 — process lifecycle
OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).
- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
canonical registry at src/supervisor/process-registry.ts is the
sole survivor; SDK spawn site consolidated into it via new
createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
process.kill(-pgid, signal) on Unix when pgid is recorded;
Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
staleSessionReaperInterval setInterval (including the co-located
WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
WAL growth without an app-level timer), killIdleDaemonChildren,
killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
via generatorPromise.finally() already lives in worker-service
startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
lazy-spawn — consults isWorkerPortAlive (which gates
captureProcessStartToken for PID-reuse safety via commit
99060bac), then spawns detached with unref(), then
waitForWorkerPort({ attempts: 3, backoffMs: 250 }) hand-rolled
exponential backoff 250→500→1000ms (sketched after this phase
list). No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
only on external SIGTERM via supervisor signal handlers.
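A minimal sketch of the Phase 8 probe loop, assuming the option shape
named above; isWorkerPortAlive stands in for the real port probe:

```ts
declare function isWorkerPortAlive(): Promise<boolean>;

// Hand-rolled exponential backoff: probe, then wait 250 → 500 → 1000 ms.
async function waitForWorkerPort(opts: { attempts: number; backoffMs: number }): Promise<boolean> {
  for (let attempt = 0; attempt < opts.attempts; attempt++) {
    if (await isWorkerPortAlive()) return true;
    await new Promise((resolve) => setTimeout(resolve, opts.backoffMs * 2 ** attempt));
  }
  return false;
}
```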
Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.
All 10 verification greps return 0. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast
Phases 3, 5, 6 only. Plan-doc inaccuracies for phases 1/2/4/7/8/9
deferred for plan reconciliation:
- Phase 1/2: ObservationRow type doesn't exist; the four
"formatters" operate on three incompatible types.
- Phase 4: RECENCY_WINDOW_MS already imported from
SEARCH_CONSTANTS at every call site.
- Phase 7: getExistingChromaIds is NOT @deprecated and has an
active caller in ChromaSync.backfillMissingSyncs.
- Phase 8: estimateTokens already consolidated.
- Phase 9: knowledge-corpus rewrite blocked on PG-3
prompt-caching cost smoke test.
Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.
Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicit-
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).
Tests updated (Principle 7 — delete in same PR):
- search-orchestrator.test.ts: "fall back to SQLite" rewritten
as "throw ChromaUnavailableError (HTTP 503)".
- chroma/hybrid/sqlite-search-strategy tests: rewritten to
rejects.toThrow; removed fellBack assertions.
Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 03 — ingestion path
Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.
- Phase 0: Created src/services/worker/http/shared.ts exporting
ingestObservation/ingestPrompt/ingestSummary as direct
in-process functions plus ingestEventBus (Node EventEmitter,
reusing existing pattern — no third event bus introduced).
setIngestContext wires the SessionManager dependency from
worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
returning { valid:true; kind: 'observation'|'summary'; data }
| { valid:false; reason: string }. Inspects root element;
<skip_summary reason="…"/> is a first-class summary case
with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
branches on the discriminated union. On invalid → markFailed
+ logger.warn(reason). On observation → ingestObservation.
On summary → ingestSummary then emit summaryStoredEvent
{ sessionId, messageId } (consumed by Plan 05's blocking
/api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
(ResponseProcessor + SessionManager + worker-types) and
MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
replaced with fs.watch(transcriptsRoot, { recursive: true,
persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
Map deleted. tool_use rows insert with INSERT OR IGNORE on
UNIQUE(session_id, tool_use_id) (added by Plan 01). New
pairToolUsesByJoin query in PendingMessageStore for read-time
pairing (UNIQUE INDEX provides idempotency; explicit consumer
not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
direct ingestObservation call. maybeParseJson silent-passthrough
rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
class) deleted. The active extractLastMessage at
src/shared/transcript-parser.ts:41-144 is the sole survivor.
Tests updated (Principle 7 — same-PR delete):
- tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
to assert discriminated-union shape; coercion-specific
scenarios collapse into { valid:false } assertions.
- tests/worker/agents/response-processor.test.ts: circuit-breaker
describe block skipped; non-XML/empty-response tests assert
fail-fast markFailed behavior.
Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.
Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.
Plan: PATHFINDER-2026-04-22/03-ingestion-path.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 05 — hook surface
Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.
- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
1..20; do curl -sf .../health && break; sleep 0.1; done` shell
retry wrappers deleted. Hook commands invoke their bun entry
point directly.
- Phase 2: src/shared/worker-utils.ts — added
executeWithWorkerFallback<T>(url, method, body) returning
T | { continue: true; reason?: string }. All 8 hook handlers
(observation, session-init, context, file-context, file-edit,
summarize, session-complete, user-message) rewritten to use
it instead of duplicating the ensureWorkerRunning →
workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
using validateBody + sessionEndSchema (z.object({sessionId})).
One-shot ingestEventBus.on('summaryStoredEvent') listener,
30 s timer, req.aborted handler — all share one cleanup so
the listener cannot leak. summarize.ts polling loop, plus
MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
memoizes SettingsDefaultsManager.loadFromFile per process.
Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
check entry; isProjectExcluded no longer referenced from
src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
(all 6 adapters: claude-code, cursor, raw, gemini-cli,
windsurf). New AdapterRejectedInput error in
src/cli/adapters/errors.ts. Handler-level isValidCwd checks
deleted from file-edit.ts and observation.ts. hook-command.ts
catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
initAgent is idempotent. tests/hooks/context-reinjection-guard
test (validated the deleted conditional) deleted in same PR
per Principle 7.
- Phase 8: fail-loud counter at
~/.claude-mem/state/hook-failures.json. Atomic write via .tmp +
rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD setting (default 3).
On consecutive worker-unreachable ≥ N: process.exit(2). On
success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
wrapping ensureWorkerRunning. executeWithWorkerFallback calls
the memoized version.
Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.
Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.
Plan: PATHFINDER-2026-04-22/05-hook-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 06 — API surface
One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted. Failure-
marking consolidated to one helper.
- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
in src/services/worker/http/middleware/validateBody.ts —
safeParse → 400 { error: 'ValidationError', issues: [...] }
on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
route file. 24 POST endpoints across SessionRoutes,
CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
LogsRoutes, SettingsRoutes now wrap with validateBody().
/api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
along with every call site. Inline coercion helpers
(coerceStringArray, coercePositiveInteger) and inline
if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
from src/services/worker/http/middleware.ts. Worker binds
127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
via fs.readFileSync; served as Buffer with text/html content
type. SKILL.md + per-operation .md files cached in
Server.ts as Map<string, string>; loadInstructionContent
helper deleted. NO fs.watch, NO TTL — process restart is the
cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
— /api/pending-queue (GET), /api/pending-queue/process (POST),
/api/pending-queue/failed (DELETE), /api/pending-queue/all
(DELETE). Helper methods that ONLY served them
(getQueueMessages, getStuckCount, getRecentlyProcessed,
clearFailed, clearAll) deleted from PendingMessageStore.
KEPT: /api/processing-status (observability), /health
(used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
GracefulShutdown now calls getSupervisor().stop() directly.
Two functions retained with clear roles:
- performGracefulShutdown — worker-side 6-step shutdown
- runShutdownCascade — supervisor-side child teardown
(process.kill(-pgid), Windows tree-kill, PID-file cleanup)
Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
failure-marking path on PendingMessageStore. Old methods
markSessionMessagesFailed and markAllSessionMessagesAbandoned
deleted along with all callers (worker-service,
SessionCompletionHandler, tests/zombie-prevention).
Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.
Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.
Plan: PATHFINDER-2026-04-22/06-api-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 07 — dead code sweep
ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.
Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments
Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
builders, ParsedObservation, ParsedSummary, ParseResult,
SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
via dynamic await import('../../../context-generator.js') in
worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
— used via dynamic await import in npx-cli/install.ts +
uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
orphan-recovery caller in worker-service.ts plus
zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
in same file.
- All Database.ts barrel re-exports — used downstream.
Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
the methods are not thin wrappers but ~900 LoC of bodies, and
two methods are documented as intentional mirrors so the
context-generator.cjs bundle stays schema-consistent without
pulling MigrationRunner. Deserves its own plan, not a sweep.
Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.
Plan: PATHFINDER-2026-04-22/07-dead-code.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove residual ProcessRegistry comment reference
Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review (P1 + 2× P2)
P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
- Added optional timeoutMs to executeWithWorkerFallback,
forwarded to workerHttpRequest.
- summarize.ts call site now passes 35_000 (5 s above server
hold window).
P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
- ResponseProcessor now calls ingestSummary({ kind: 'parsed',
sessionDbId, messageId, contentSessionId, parsed }) so the
event-emission path is single-sourced.
- ingestSummary's requireContext() resolution moved inside the
'queue' branch (the only branch that needs sessionManager /
dbManager). 'parsed' is a pure event-bus emission and
doesn't need worker-internal context — fixes mocked
ResponseProcessor unit tests that don't call
setIngestContext.
P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
- Added a Symbol.for('claude-mem/worker-fallback') brand to
WorkerFallback. isWorkerFallback now checks the brand, not
a duck-typed property name.
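A sketch of the brand pattern described above; the fallback's fields
beyond the brand are assumptions:

```ts
const WORKER_FALLBACK_BRAND: unique symbol = Symbol.for('claude-mem/worker-fallback');

interface WorkerFallback {
  [WORKER_FALLBACK_BRAND]: true;
  continue: true;
  reason?: string;
}

function makeWorkerFallback(reason?: string): WorkerFallback {
  return { [WORKER_FALLBACK_BRAND]: true, continue: true, reason };
}

// Brand check: an API response shaped { continue: true, ... } cannot
// carry the registry symbol, so it no longer false-positives.
function isWorkerFallback(value: unknown): value is WorkerFallback {
  return typeof value === 'object' && value !== null &&
    (value as Record<symbol, unknown>)[WORKER_FALLBACK_BRAND] === true;
}
```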
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 2 (P1 + P2)
P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.
- Gate ingestSummary call on (parsed.data.skipped ||
session.lastSummaryStored). Skipped summaries are an explicit
no-op bypass and still confirm; real summaries only confirm
when storage actually wrote a row.
- Non-skipped + summaryId === null path logs a warn and lets
the server-side timeout (504) surface to the hook instead of
a false ok:true.
P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 1). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.
- Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
log instead of the misleading ENQUEUED line. No behavior
change — the duplicate is still correctly suppressed by the
DB (Principle 3); only the log surface is corrected.
- confirmProcessed is never called with the enqueue() return
value (it operates on session.processingMessageIds[] from
claimNextMessage), so no caller is broken; the visibility
fix prevents future misuse.
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 3 (P1 + 2× P2)
- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
context after SessionRoutes is constructed. setIngestContext runs
before routes exist, so transcript-watcher observations queued via
ingestObservation() had no way to auto-start the SDK generator.
Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
/api/session/end calls register one listener each and clean up on
completion, so the default limit of 10 fired spurious warnings under
normal load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
ingestObservation() instead of duplicating skip-tool / meta /
privacy / queue logic. Single helper, matching the Plan 03 goal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)
- processor.handleToolResult: restore in-memory tool-use→tool-result
pairing via session.pendingTools for schemas (e.g. Codex) whose
tool_result events carry only tool_use_id + output. Without this,
neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
of throwing. Previously a single malformed JSON-shaped field caused
handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
for purely-glob inputs so the caller skips the watch instead of
anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
log on the returned id; the SessionManager branches on id === 0.
* fix: forward tool_use_id through ingestObservation (Greptile iter 5)
P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.
- shared.ingestObservation: forward payload.toolUseId to
queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
tool_use_id (HTTP convention) and toolUseId (JS convention) from
req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
validator doesn't rely on .passthrough() alone.
* fix: drop dead pairToolUsesByJoin, close session-end listener race
- PendingMessageStore: delete pairToolUsesByJoin. The method was never
called and its self-join semantics are structurally incompatible
with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
collapses any second row with the same pair, so a self-join can
only ever match a row to itself. In-memory pendingTools in
processor.ts remains the pairing path for split-event schemas.
- IngestEventBus: retain a short-lived (60s) recentStored map keyed
by sessionId. Populated on summaryStoredEvent emit, evicted on
consume or TTL.
- handleSessionEnd: drain the recent-events buffer before attaching
the listener. Closes the register-after-emit race where the summary
can persist between the hook's summarize POST and its session/end
POST — previously that window returned 504 after the 30s timeout.
* chore: merge origin/main into vivacious-teeth
Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).
Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
summaryStoredEvent supersedes main's SessionCompletionHandler DI
refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
reason; generator .finally() Stop-hook self-clean is a guard for a
path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
#2084) while preserving our Zod validateBody schema.
Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings
1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
in wrapHandler — synchronous exceptions would hang the client rather
than surfacing as 500s. Wrap it like every other handler.
2) processor.handleToolResult only consumed the session.pendingTools
entry when the tool_result arrived without a toolName. In the
split-schema path where tool_result carries both toolName and toolId,
the entry was never deleted and the map grew for the life of the
session. Consume the entry whenever toolId is present.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: typing cleanup and viewer tsconfig split for PR feedback
- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings (iter 2)
- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
the unscoped-drain branch that would nuke every pending/processing
row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
cached event until TTL eviction so a retried Stop hook's second
/api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
already tailed (JSONL appends fire on every line; only unknown
paths warrant a rescan).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: call finalizeSession in terminal session paths (Greptile iter 3)
terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.
Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: GC failed pending_messages rows at startup (Greptile iter 4)
Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.
Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)
1. startSessionProcessor success branch now calls completionHandler.
finalizeSession before removeSessionImmediate. Hooks-disabled installs
(and any Stop hook that fails before POST /api/sessions/complete) no
longer leave sdk_sessions rows as status='active' forever. Idempotent
— a subsequent /api/sessions/complete is a no-op.
2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
closures that reference it (TDZ safety; safe at runtime today but
fragile if timeout ever shrinks).
3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
instead of constructing its own — prevents silent divergence if the
handler ever becomes stateful.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: stop runaway crash-recovery loop on dead sessions
Two distinct bugs were combining to keep a dead session restarting forever:
Bug 1 (uncaught "The operation was aborted."):
child_process.spawn emits 'error' asynchronously for ENOENT/EACCES/abort
signal aborts. spawnSdkProcess() never attached an 'error' listener, so
any async spawn failure became uncaughtException and escaped to the
daemon-level handler. Attach an 'error' listener immediately after spawn,
before the !child.pid early-return, so async spawn errors are logged
(with errno code) and swallowed locally.
Bug 2 (sliding-window limiter never trips on slow restart cadence):
RestartGuard tripped only when restartTimestamps.length exceeded
MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
session that failed and restarted on an 8s cycle would loop forever
(consecutiveRestarts climbing past 30+ in observed logs). Add a
consecutiveFailures counter that increments on every restart and resets
only on recordSuccess(). Trip when consecutive failures exceed
MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
processing in between proves the session is dead. Both guards now run in
parallel: tight loops still trip the windowed cap; slow loops trip the
consecutive-failure cap.
Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* perf: streamline worker startup and consolidate database connections
1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)
* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations
Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.
- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
when shouldTrackProject(cwd) is false, so the observer's own hooks
cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
on observations) inline so bundled artifacts (worker-service.cjs,
context-generator.cjs) stay schema-consistent — without it, the
ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: switch stdio[stdin] from 'ignore' to 'pipe' so the
supervisor can actually feed the observer's stdin.
Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.
* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)
Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
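A sketch of the walk-back described above (the helper name is
illustrative):

```ts
// Trim to maxBytes, then step back over UTF-8 continuation bytes
// (0b10xxxxxx) so the cut lands on a codepoint boundary instead of
// decoding to a trailing U+FFFD.
function truncateUtf8(buf: Buffer, maxBytes: number): string {
  if (buf.length <= maxBytes) return buf.toString('utf8');
  let end = maxBytes;
  while (end > 0 && (buf[end] & 0b11000000) === 0b10000000) end--;
  return buf.subarray(0, end).toString('utf8');
}
```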
* fix: cross-platform observer-dir containment; clarify SDK stdin pipe
claude-review feedback on PR #2124.
- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
hard-coded a POSIX separator and missed Windows backslash paths plus any
trailing-slash variance. Switched to a path.relative-based isWithin()
helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
consumes that pipe; 'ignore' would null it and the null-check below
would tear the child down on every spawn.
* fix: make Stop hook fire-and-forget; remove dead /api/session/end
The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed), followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.
The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.
- summarize.ts: drop the /api/session/end long-poll and the trailing
/api/sessions/complete await; ~40 lines removed; unused
SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
route registration. Drop the now-unused ingestEventBus and
SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
comments that referenced the dead endpoint. The IngestEventBus is
left in place dormant (no listeners) for follow-up cleanup so this
PR stays focused on the blocker.
Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.
Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* deps: bump all dependencies to latest including majors
Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.
Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: surface real chroma errors and add deep status probe
Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.
Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.
Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: rebuild worker-service bundle to match merged src
Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: address coderabbit feedback on PLAN-fix-mcp-search.md
- replace machine-specific /Users/alexnewman absolute paths with portable
<repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -0,0 +1,53 @@
# 00 — Principles

**Purpose**: This document is the anchor that every other plan in the `PATHFINDER-2026-04-22` corpus cites. It names the seven principles, the six anti-pattern guards, the unifying diagnosis that ties the refactor together, and the five concrete cures mapped to subsystems. Every subsequent plan (`01-data-integrity.md` through `99-verification.md`) measures its changes against this file.

---

## The Seven Principles

1. **No recovery code for fixable failures.** If the primary path is correct, recovery never runs. If it's broken, recovery hides the bug.
2. **Fail-fast over grace-degrade.** Local code does not circuit-break, coerce, or silently fall back. It throws and lets the caller decide.
3. **UNIQUE constraint over dedup window.** DB schema prevents duplicates; don't time-gate them.
4. **Event-driven over polling.** `fs.watch` over `setInterval` rescan. Server-side wait over client-side poll. `child.on('exit')` over periodic scan.
5. **OS-supervised process groups over hand-rolled reapers.** `detached: true` + `kill(-pgid)` replaces orphan sweeps.
6. **One helper, N callers.** Not N copies of a helper. Not a strategy class for each config.
7. **Delete code in the same PR it becomes unused.** No `@deprecated` fence, no "remove next release."
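As a concrete instance of Principle 4, a minimal sketch assuming Node 20+ (the path is illustrative):

```ts
import { watch } from 'node:fs';

// One recursive watcher replaces a periodic rescan of the tree.
watch('/tmp/transcripts', { recursive: true, persistent: true }, (event, filename) => {
  if (filename?.endsWith('.jsonl')) {
    // react to the append here — no setInterval needed
  }
});
```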
---

## The Six Anti-pattern Guards

- No new `setInterval` in `src/services/worker/` or the plan text (plan 99 greps for this)
- No new `coerce*`, `recover*`, `heal*`, `repair*`, `reap*`, `kill*Orphans*` function names
- No new try/catch that swallows errors and returns a fallback value
- No new schema column whose only purpose is to feed a recovery query
- No new strategy class when a config object would do
- No new HTTP endpoint for diagnostic / manual-repair purposes

---

## The Unifying Diagnosis

claude-mem's accumulated complexity is not five unrelated bugs; it is one pattern repeated across five subsystems. When the primary path is not proven correct, defensive code accretes around it — dedup windows, stale-row resets, orphan reapers, coercion helpers, fallback agents. That defensive code then hides the bugs in the primary path, because every failure is silently absorbed before it becomes visible. The hidden bugs spawn more defensive code, because each new symptom looks novel. The cure is not more defense: it is to make the primary path correct, let it fail loudly when it cannot, and delete the defense in the same PR. Same disease, five organs.
---

## Five Cures

| Subsystem | Symptom | Cure | Principle # |
|---|---|---|---|
| lifecycle | Orphan reapers, idle-evictors, fallback agents | OS process groups via `detached: true` + `kill(-pgid)`; lazy-spawn from hooks; no reapers | 5, 1 |
| data | 60-s stale-reset, 30-s dedup window, `repairMalformedSchema` | `UNIQUE` constraints + `ON CONFLICT DO NOTHING`; self-healing claim via `worker_pid NOT IN live_pids` | 3, 1 |
| search | Four formatter classes, `findByConcept`/`findByFile`/`findByType`, seven recency copies | One `renderObservations(obs, strategy)`; route all queries through `SearchOrchestrator`; one `RECENCY_WINDOW_MS` | 6 |
| ingestion | `coerceObservationToSummary`, circuit breaker, `setInterval` rescan, in-memory `pendingTools` Map | Fail-fast `parseAgentXml` discriminated union; recursive `fs.watch`; DB-backed pairing via `UNIQUE(session_id, tool_use_id)` | 2, 4, 3 |
| hooks | Client-side polling, silent error swallow, `@deprecated` dead classes | Blocking endpoint for summary wait; hooks throw on worker-unreachable; delete `TranscriptParser`, migration 19, `repairMalformedSchema` | 4, 2, 7 |
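A minimal sketch of the lifecycle cure's core move, assuming a POSIX platform (command and file names are illustrative):

```ts
import { spawn } from 'node:child_process';

// detached:true makes the child a process-group leader, so its pid is
// also its pgid; signalling -pgid tears down the whole subtree.
const child = spawn('bun', ['worker.ts'], {
  detached: true,
  stdio: ['ignore', 'pipe', 'pipe'],
});
const pgid = child.pid;

// Later, during shutdown — one call, no orphan sweep:
if (pgid !== undefined) process.kill(-pgid, 'SIGTERM');
```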
---

## Glossary

- **second-system effect** — The tendency to over-engineer a rewrite with features the first system proved unnecessary; in claude-mem, the canonical example is the worker-side `src/services/worker/ProcessRegistry.ts` duplicating the already-working `src/supervisor/process-registry.ts`.
- **lease pattern** — A claim held by a live owner and invalidated by liveness of that owner, not by wall-clock timeout; in claude-mem, the canonical example is the self-healing claim query using `worker_pid NOT IN live_worker_pids` instead of `started_processing_at_epoch < now - 60s`.
- **self-healing claim** — A single `UPDATE … WHERE status='pending' OR (status='processing' AND worker_pid NOT IN live_pids)` that is correct even after a crash, because liveness is checked at claim time rather than reset by a background timer; canonical example is the replacement for `STALE_PROCESSING_THRESHOLD_MS` at `src/services/sqlite/PendingMessageStore.ts:99-145`.
- **fail-fast contract** — A function signature that returns a discriminated union `{ valid: true, data } | { valid: false, reason }` (or throws) instead of coercing, defaulting, or returning `undefined`; canonical example is `parseAgentXml` replacing `parseObservations` + `parseSummary` + `coerceObservationToSummary` at `src/sdk/parser.ts:222-259`.
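A sketch of that contract's shape in TypeScript — the payload types are stand-ins, not the real parser's:

```ts
type ParsedObservation = Record<string, unknown>; // stand-in payload types
type ParsedSummary = Record<string, unknown>;

type ParseResult =
  | { valid: true; kind: 'observation'; data: ParsedObservation }
  | { valid: true; kind: 'summary'; data: ParsedSummary }
  | { valid: false; reason: string };

declare function parseAgentXml(xml: string): ParseResult;

// Callers branch on the union; nothing is coerced or defaulted.
function ingest(xml: string): void {
  const result = parseAgentXml(xml);
  if (!result.valid) throw new Error(`unparseable agent output: ${result.reason}`);
  // result is narrowed to the valid arms here
}
```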
@@ -0,0 +1,282 @@
# 01 — Data Integrity

**Purpose**: Cure the data layer's second-system accretion by letting the database enforce uniqueness, making the claim query self-heal against live-worker liveness, and deleting every recovery surface that existed only to paper over the absent primary-path correctness. The cure is four moves: add `UNIQUE` constraints to `pending_messages` and `observations`; rewrite `claimNextMessage` to be idempotent against crashes via `worker_pid NOT IN live_worker_pids`; replace the 30-s dedup window with `INSERT … ON CONFLICT DO NOTHING`; and delete `STALE_PROCESSING_THRESHOLD_MS`, `started_processing_at_epoch`, `DEDUP_WINDOW_MS`, `findDuplicateObservation`, `clearFailedOlderThan` (interval), `repairMalformedSchema`, and migration 19 — in the same PR that they stop being referenced.

---

## Principles invoked

This plan is measured against `00-principles.md`:

1. **Principle 1 — No recovery code for fixable failures.** `recoverStuckProcessing`, `clearFailedOlderThan` interval, and `repairMalformedSchema` all hide primary-path bugs. They are deleted, not relocated.
2. **Principle 2 — Fail-fast over grace-degrade.** Chroma conflict errors surface through a narrow, flagged fallback; the rest of the data layer throws. No silent `.catch(() => undefined)`.
3. **Principle 3 — UNIQUE constraint over dedup window.** The database prevents duplicates; no timer gates them. `DEDUP_WINDOW_MS` and `findDuplicateObservation` are replaced by `UNIQUE(memory_session_id, content_hash)` + `ON CONFLICT DO NOTHING`.

Principles 4, 6, 7 are invoked implicitly: the self-healing claim is event-driven against worker liveness rather than timer-scanned (4); the claim query is one helper for N workers (6); every deleted identifier goes in the same PR as its deletion (7).
---

## Phase 1 — Fresh `schema.sql`

**Purpose**: Regenerate `schema.sql` from the current migration tip so fresh databases boot directly into the post-refactor shape without replaying migrations. Drops `started_processing_at_epoch`, adds `worker_pid INTEGER`, and adds both `UNIQUE` constraints inline.

**Files**:
- `src/services/sqlite/schema.sql` (regenerate)
- `src/services/sqlite/migrations/runner.ts:658-837` — cited as the authoritative shape of `observations` + `session_summaries` after migration 21 (FK cascade fix), per `_reference.md` Part 1 §Data layer.

**Schema changes**:

```sql
-- pending_messages: self-healing claim columns
CREATE TABLE pending_messages (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  session_id TEXT NOT NULL,
  tool_use_id TEXT NOT NULL,
  payload TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending',
  worker_pid INTEGER,                      -- ADDED (self-healing claim)
  retry_count INTEGER NOT NULL DEFAULT 0,
  created_at_epoch INTEGER NOT NULL,
  failed_at_epoch INTEGER,
  -- started_processing_at_epoch INTEGER   -- DELETED (Phase 3)
  UNIQUE(session_id, tool_use_id)          -- ADDED (Phase 4 + ingestion pairing)
);

-- observations: UNIQUE over content_hash
CREATE TABLE observations (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  memory_session_id TEXT NOT NULL,
  content_hash TEXT NOT NULL,
  -- … other columns elided …
  UNIQUE(memory_session_id, content_hash)  -- ADDED (replaces DEDUP_WINDOW_MS)
);
```
**Citation**: `_reference.md` Part 1 §Data layer — `runner.ts:658-837` (migration 21) is the precedent for the `observations` table's current column set; `PendingMessageStore.ts:99-145` names the columns this schema replaces.

---

## Phase 2 — Migrate existing databases

**Purpose**: Get every already-installed database onto the new shape via `ALTER TABLE` + backfill + `CREATE UNIQUE INDEX`. Existing rows with duplicate `(session_id, tool_use_id)` or `(memory_session_id, content_hash)` must be deduplicated before the index is created.

**Files**:
- `src/services/sqlite/migrations/runner.ts` — add migration 23 (and 24 if split).

**Precedent**: Migration 21 at `src/services/sqlite/migrations/runner.ts:658-837` is the canonical pattern for non-trivial schema changes — it recreates tables wholesale to add `ON UPDATE CASCADE`. New migrations follow the same shape: recreate or `ALTER`, backfill, then add the unique index.

**Migration sketch**:

```sql
-- Migration 23: pending_messages self-healing claim shape
ALTER TABLE pending_messages ADD COLUMN worker_pid INTEGER;
-- backfill: nothing to do; new column is NULL on existing rows
-- drop old stale column in the table rebuild:
CREATE TABLE pending_messages_new (… without started_processing_at_epoch …);
INSERT INTO pending_messages_new SELECT … (excluding started_processing_at_epoch) … FROM pending_messages;
DROP TABLE pending_messages;
ALTER TABLE pending_messages_new RENAME TO pending_messages;
-- dedup any existing duplicate (session_id, tool_use_id) rows before the index
DELETE FROM pending_messages WHERE id NOT IN (
  SELECT MIN(id) FROM pending_messages GROUP BY session_id, tool_use_id
);
CREATE UNIQUE INDEX ux_pending_session_tool ON pending_messages(session_id, tool_use_id);

-- Migration 24: observations UNIQUE(memory_session_id, content_hash)
DELETE FROM observations WHERE id NOT IN (
  SELECT MIN(id) FROM observations GROUP BY memory_session_id, content_hash
);
CREATE UNIQUE INDEX ux_observations_session_hash ON observations(memory_session_id, content_hash);
```
**Citation**: `_reference.md` Part 1 §Data layer — migration 21 at `runner.ts:658-837` as the table-recreate precedent. Part 2 row "SQLite UNIQUE on added column" confirms the ALTER + backfill + unique-index sequence is the verified pattern.

---

## Phase 3 — Self-healing claim query

**Purpose**: Replace the 60-s stale-reset pattern with a single `UPDATE` whose predicate checks worker liveness at claim time. After this phase, `STALE_PROCESSING_THRESHOLD_MS` and `started_processing_at_epoch` are both gone; `claimNextMessage` has no "recover" branch because no recovery is needed.

**Files**:
- `src/services/sqlite/PendingMessageStore.ts:99-145` — replace the transactional body of `claimNextMessage`.
- `src/services/sqlite/PendingMessageStore.ts` — remove the `STALE_PROCESSING_THRESHOLD_MS` constant.

**Before** (current, at `PendingMessageStore.ts:99-145`): transactional claim that first runs `UPDATE … SET status='pending' WHERE status='processing' AND started_processing_at_epoch < now - STALE_PROCESSING_THRESHOLD_MS` (self-heal block, lines 107-115), then claims one `pending` row.

**After** (single statement, no self-heal block):

```sql
UPDATE pending_messages
SET worker_pid = ?,
    status = 'processing'
WHERE id = (
  SELECT id FROM pending_messages
  WHERE status = 'pending'
     OR (status = 'processing' AND worker_pid NOT IN (SELECT pid FROM live_worker_pids))
  ORDER BY created_at_epoch
  LIMIT 1
)
RETURNING *;
```

`live_worker_pids` is populated by the supervisor at claim time (in-process table or a parameterized IN-list of PIDs constructed from `supervisor/process-registry.ts`). The query is correct even after a crash: if a row's `worker_pid` is not a current live worker PID, the row is immediately reclaimable.
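A sketch of the parameterized IN-list variant, assuming bun:sqlite; the function shape is illustrative:

```ts
import { Database } from 'bun:sqlite';

function claimNextMessage(db: Database, workerPid: number, livePids: number[]) {
  // Guard the empty case: NOT IN () is a syntax error in SQLite.
  const placeholders = livePids.length ? livePids.map(() => '?').join(',') : '-1';
  return db.query(
    `UPDATE pending_messages
        SET worker_pid = ?, status = 'processing'
      WHERE id = (
        SELECT id FROM pending_messages
        WHERE status = 'pending'
           OR (status = 'processing' AND worker_pid NOT IN (${placeholders}))
        ORDER BY created_at_epoch
        LIMIT 1
      )
      RETURNING *`
  ).get(workerPid, ...livePids) ?? null; // null when nothing is claimable
}
```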
**Delete in the same PR**:
- `STALE_PROCESSING_THRESHOLD_MS` constant
- `started_processing_at_epoch` column (via Phase 2 migration)
- The self-heal `UPDATE` block at `PendingMessageStore.ts:107-115`

**Citation**: `_reference.md` Part 1 §Data layer — `PendingMessageStore.ts:99-145` (current `claimNextMessage` transaction, self-heal block at 107-115 is the target).

---

## Phase 4 — Delete dedup window

**Purpose**: Remove the 30-s content-hash dedup window entirely. The `UNIQUE(memory_session_id, content_hash)` constraint added in Phase 1/2 makes duplicates a database error that `ON CONFLICT DO NOTHING` silently absorbs.

**Files**:
- `src/services/sqlite/observations/store.ts:13-46` — delete `DEDUP_WINDOW_MS` constant and `findDuplicateObservation` function.
- `src/services/sqlite/observations/store.ts` — change the insert path to `ON CONFLICT DO NOTHING`.

**Before**: `insert()` first calls `findDuplicateObservation(memory_session_id, content_hash, DEDUP_WINDOW_MS)` and short-circuits if a row exists within the window.

**After**:

```sql
INSERT INTO observations (memory_session_id, content_hash, …)
VALUES (?, ?, …)
ON CONFLICT(memory_session_id, content_hash) DO NOTHING;
```
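At the call site, a suppressed duplicate stays observable without any window bookkeeping — a sketch assuming bun:sqlite on Bun ≥ 1.1 (where `run()` reports `changes`); the column list is illustrative:

```ts
import { Database } from 'bun:sqlite';

function insertObservation(db: Database, sessionId: string, hash: string, body: string): boolean {
  const result = db.run(
    `INSERT INTO observations (memory_session_id, content_hash, body)
     VALUES (?, ?, ?)
     ON CONFLICT(memory_session_id, content_hash) DO NOTHING`,
    [sessionId, hash, body],
  );
  return result.changes === 1; // false: the UNIQUE constraint absorbed a duplicate
}
```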
**Delete in the same PR**:
- `DEDUP_WINDOW_MS` constant
- `findDuplicateObservation` function + all callers
- Any test fixture that depended on the window's timing

**Citation**: `_reference.md` Part 1 §Data layer — `src/services/sqlite/observations/store.ts:13-46` (`DEDUP_WINDOW_MS` + `findDuplicateObservation`). Part 2 row "SQLite `INSERT OR IGNORE` / `ON CONFLICT DO NOTHING`" verifies the idempotent-insert primitive.

---

## Phase 5 — Delete `clearFailedOlderThan` interval

**Purpose**: A 2-minute background interval purging `status='failed'` rows is a retention policy pretending to be a correctness concern. Retention moves to a query-time filter; no timer runs.

**Files**:
- `src/services/worker/worker-service.ts:567` — delete the `setInterval(() => …clearFailedOlderThan(…), …)` registration.
- `src/services/sqlite/PendingMessageStore.ts:486-495` — the `clearFailedOlderThan` method itself stays only if an explicit user-invoked purge path needs it; otherwise delete in the same PR.

**After**: Every query that must exclude old failures applies the filter at read time:

```sql
-- at any read site that doesn't want ancient failures
SELECT … FROM pending_messages
WHERE status != 'failed'
   OR failed_at_epoch > (strftime('%s','now') - 3600) * 1000;
```
|
||||
|
||||
If no reader ever needs to suppress old failed rows, then no filter is needed — failed rows simply accumulate until an explicit user purge, and the `clearFailedOlderThan` method is deleted outright.
|
||||
|
||||
**Delete in the same PR**:
|
||||
- The `setInterval` registration at `worker-service.ts:567`
|
||||
- (Probable) `PendingMessageStore.clearFailedOlderThan` method at `:486-495`
|
||||
|
||||
**Citation**: `_reference.md` Part 1 §Data layer — `PendingMessageStore.ts:486-495` (`clearFailedOlderThan`); §Worker/lifecycle — `worker-service.ts:567` (interval call site).
|
||||
|
||||
---
|
||||
|
||||
## Phase 6 — Delete `repairMalformedSchema` Python subprocess
|
||||
|
||||
**Purpose**: The Python fallback that rewrites a corrupt SQLite schema via `execFileSync` is cross-machine WAL corruption that should be root-caused, not repaired. Shipping repair code incentivizes accepting corruption as normal. Delete it; if WAL corruption recurs, investigate and fix the cause (likely an interrupted writer, a misconfigured `PRAGMA`, or a stale `.db-wal` at daemon startup).

**Files**:
- `src/services/sqlite/Database.ts:37-130` — delete `repairMalformedSchema` function, its tempfile-write helper, and its `execFileSync` call site.
- All callers of `repairMalformedSchema` — delete the call; let the original SQLite error propagate.

**Delete in the same PR**:
- `repairMalformedSchema`
- Any `// if malformed, repair` comment or try/catch around its invocation
- The `python3` presence check that gates its availability

**Citation**: `_reference.md` Part 1 §Data layer — `src/services/sqlite/Database.ts:37-130`.

---

## Phase 7 — Chroma sync — upsert semantics

**Purpose**: Chroma MCP has no native upsert. The current `ChromaSync` catches `already exist` on add, deletes the conflicting IDs, then re-adds. This is a brittle error-text match. Document the pattern, gate it behind `CHROMA_SYNC_FALLBACK_ON_CONFLICT=true`, and commit to removing the fallback once Chroma MCP ships upsert natively. The flag is not permanent; it is a bridge.

**Files**:
- `src/services/sync/ChromaSync.ts:290-318` — wrap the delete-then-add reconciliation in the env-flag check.

**Flag contract**:

```ts
// src/services/sync/ChromaSync.ts
const CHROMA_SYNC_FALLBACK_ON_CONFLICT =
  process.env.CHROMA_SYNC_FALLBACK_ON_CONFLICT === 'true';

try {
  await chroma.add(ids, embeddings, metadatas, documents);
} catch (err) {
  if (CHROMA_SYNC_FALLBACK_ON_CONFLICT && isAlreadyExistsError(err)) {
    await chroma.delete(ids);
    await chroma.add(ids, embeddings, metadatas, documents);
    return;
  }
  throw err;
}
```
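
The snippet assumes an `isAlreadyExistsError` predicate. A minimal sketch — and exactly the brittleness Known gaps #1 flags, since the error shape depends on the Chroma MCP client in use:

```ts
// Sketch only — matches Chroma's current error text; a version bump can break this,
// which is why the fallback is flag-gated and slated for deletion.
function isAlreadyExistsError(err: unknown): boolean {
  return err instanceof Error && /already exist/i.test(err.message);
}
```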

**Bridge-out plan**: When Chroma MCP exposes `upsert(ids, …)`, replace the `try/add` with `await chroma.upsert(…)` and delete the flag, the error-text predicate, and this phase's code entirely — in the same PR.

**Citation**: `_reference.md` Part 1 §Data layer — `src/services/sync/ChromaSync.ts:290-318`. Part 4 (Known gaps) row 1 flags the error-text brittleness.

---

## Phase 8 — Delete migration 19 no-op

**Purpose**: Migration 19 became a no-op after migration 17 made renames idempotent. It records itself as applied and does nothing. Absorb it into the fresh `schema.sql` (Phase 1) and delete its runner block.

**Files**:
- `src/services/sqlite/migrations/runner.ts:621-628` — delete the migration 19 block.

**After**: No code references `version === 19` except the migration-history table, which is append-only; past-applied rows remain harmless.

**Delete in the same PR**:
- The migration 19 case block at `runner.ts:621-628`
- Any fixture or test that invoked it

**Citation**: `_reference.md` Part 1 §Data layer — `src/services/sqlite/migrations/runner.ts:621-628` (migration 19 no-op).

---

## Verification grep targets

Each command below must return the indicated count after this plan lands.

```
grep -rn "STALE_PROCESSING_THRESHOLD_MS" src/ → 0
grep -rn "started_processing_at_epoch" src/ → 0
grep -rn "DEDUP_WINDOW_MS" src/ → 0
grep -rn "findDuplicateObservation" src/ → 0
grep -rn "repairMalformedSchema" src/ → 0
grep -rn "clearFailedOlderThan" src/services/worker/worker-service.ts → 0
```

**Integration test**: Kill the worker process with `kill -9 <worker_pid>` mid-claim — immediately after a row transitions to `status='processing'` with the old worker's `worker_pid` stamped, but before the message finishes processing. Start a new worker. Assert the new worker's `claimNextMessage` call succeeds and returns the same row with the new worker's `worker_pid` stamped, and that the row is subsequently processed to completion. This is the acceptance test for the self-healing claim — no background timer is permitted to intervene.
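
For orientation, the self-healing claim the test exercises looks roughly like this — a sketch only, assuming live worker PIDs are materialized as a queryable list, with illustrative column names beyond `status` / `worker_pid`:

```sql
-- Sketch of the self-healing claim (SQLite ≥ 3.35 for UPDATE … RETURNING).
-- A row stuck in 'processing' under a dead PID is claimable by any live worker.
UPDATE pending_messages
SET status = 'processing', worker_pid = :my_pid
WHERE id = (
  SELECT id FROM pending_messages
  WHERE status = 'pending'
     OR (status = 'processing' AND worker_pid NOT IN (SELECT pid FROM live_worker_pids))
  ORDER BY id
  LIMIT 1
)
RETURNING *;
```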

---

## Anti-pattern guards

Directly enforced for this plan (reproduced verbatim from the rewrite plan):

- **Do NOT keep `recoverStuckProcessing()` as a boot-once function.** Self-healing claim replaces it entirely. Any identifier matching `recover*`, `heal*`, or `repair*` that survives this plan must be in a DELETE context.
- **Do NOT add a new timer for Chroma backfill.** Backfill runs at boot-once OR on-demand when a downstream reader requests. No `setInterval`, no `setTimeout` loop.
- **Do NOT add "repair" CLI commands.** If schema corruption recurs after `repairMalformedSchema` is deleted, root-cause it. Do not add a `claude-mem repair-db` command.

---

## Known gaps / deferrals

1. **Chroma upsert fallback brittleness.** The `CHROMA_SYNC_FALLBACK_ON_CONFLICT` flag in Phase 7 matches on error-text ("already exist"). That match is brittle — a Chroma MCP version bump could change the phrase and silently break reconciliation. The flag exists as a bridge, not a permanent surface. When Chroma MCP ships native `upsert`, Phase 7's code and flag both delete in the same PR. This is carried forward from `_rewrite-plan.md` §Known gaps #1 and `_reference.md` Part 4 row 1.
# 02 — Process Lifecycle

## Purpose

Delete the worker-side parallel registry at `src/services/worker/ProcessRegistry.ts`, consolidate to the canonical `src/supervisor/process-registry.ts`, lazy-spawn the worker from hooks, spawn Claude SDK children into their own process groups with `detached: true`, and tear those groups down via `process.kill(-pgid, signal)`. No reapers. No idle-shutdown. No fallback agent chain. The worker runs until killed; orphans are PREVENTED by process groups, not swept. This plan replaces a hand-rolled supervisor (orphan scanners, idle-evictors, stale-session reapers, `ppid==1` sweeps) with OS mechanisms that already do the job correctly.

---

## Principles invoked

- **Principle 1 — No recovery code for fixable failures.** Orphan sweeps, idle-evictors, and stale-session reapers are recovery code papering over a spawn bug. Fix the spawn (process groups), delete the recovery.
- **Principle 2 — Fail-fast over grace-degrade.** SessionManager's Gemini → OpenRouter fallback chain hides SDK failures. Delete it; surface failures to the hook via exit code 2.
- **Principle 4 — Event-driven over polling.** `child.on('exit')` is authoritative. Delete the 30-second orphan-reaper interval, the stale-session reaper interval, the `clearFailedOlderThan` interval, and the per-session `abandonedTimer` `setTimeout`.
- **Principle 5 — OS-supervised process groups over hand-rolled reapers.** `spawn(cmd, args, { detached: true })` + `process.kill(-pgid, signal)` replaces `killSystemOrphans`, `killIdleDaemonChildren`, `reapOrphanedProcesses`, `reapStaleSessions`.

---

## Phase list

### Phase 1 — Delete `src/services/worker/ProcessRegistry.ts`

**Purpose**: Eliminate the worker-side parallel registry. The canonical registry at `src/supervisor/process-registry.ts` is the only one that survives.

**Anchors** (`_reference.md` Part 1 §Worker/lifecycle):
- `src/services/worker/ProcessRegistry.ts:244-309` — `killIdleDaemonChildren`
- `src/services/worker/ProcessRegistry.ts:315-344` — `killSystemOrphans`
- `src/services/worker/ProcessRegistry.ts:349-382` — `reapOrphanedProcesses`
- `src/services/worker/ProcessRegistry.ts:452-465` — SDK spawn site (MOVE to supervisor, then delete the file)
- `src/supervisor/process-registry.ts:85-173` — `captureProcessStartToken` (KEEP — primary-path PID-reuse detection)

**Before** (conceptual):
```ts
// src/services/worker/ProcessRegistry.ts (the shadow registry — DELETE)
export class ProcessRegistry {
  killIdleDaemonChildren(daemonPid: number) { /* ps -eo, ppid filter, kill */ }
  killSystemOrphans() { /* ppid==1 sweep, regex match */ }
  reapOrphanedProcesses() { /* three-layer sweep */ }
  spawnSdkChild(cmd, args) { return spawn(cmd, args, { stdio: 'pipe' }); }
}
```

**After**:
```ts
// The only registry that exists is src/supervisor/process-registry.ts.
// SDK spawn moves into a single helper there (see Phase 2).
// There is no ppid sweep, no orphan reaper, no "shadow" registry.
```

**Reference**: `_reference.md` Part 1 §Worker/lifecycle; `_mapping.md` Old Plan 07 rows labeled DELETE for Mechanism C (boot-once reconciliation block).

---

### Phase 2 — Spawn SDK children into their own process groups

**Purpose**: Every Claude SDK child gets its own process group at spawn time, so the parent can signal the whole subtree with one call. This is the OS primitive that makes orphan reaping unnecessary.

**Anchors**:
- `src/services/worker/ProcessRegistry.ts:452-465` — current spawn site (lifts to supervisor during Phase 1 consolidation)
- `_reference.md` Part 2 row 1 — Node `child_process.spawn({ detached: true })` signature
- `_reference.md` Part 2 row 3 — Bun.spawn does NOT support `detached`; we use Node's API

**Before**:
```ts
// src/services/worker/ProcessRegistry.ts:452-465 (current)
const proc = spawn(command, args, {
  stdio: 'pipe',
  // no detached, no process group
});
```

**After**:
```ts
// consolidated into src/supervisor/process-registry.ts
const proc = spawn(command, args, {
  detached: true, // Unix: setpgid, child becomes group leader
  stdio: ['ignore', 'pipe', 'pipe'],
});
const pgid = proc.pid; // group leader's PID == pgid on Unix
record.pgid = pgid;    // track for teardown in Phase 3
```

**Reference**: `_reference.md` Part 2 row 1 (`spawn(cmd, args, { detached: true, stdio: ['ignore','pipe','pipe'] })` — creates new process group on Unix via `setpgid`); `_reference.md` Part 1 §Worker/lifecycle `src/services/worker/ProcessRegistry.ts:452-465`.

---

### Phase 3 — Shutdown cascade kills process groups, not single PIDs

**Purpose**: Teardown signals the group, not the leader. All descendants receive the signal; we never need to walk `ps` to find stragglers.

**Anchors**:
- `src/supervisor/shutdown.ts:22-99` — `runShutdownCascade` (5-phase)
- `src/supervisor/shutdown.ts:116` — current `process.kill(pid, 'SIGTERM')` call
- `src/supervisor/shutdown.ts:163` — current `process.kill(pid, 'SIGKILL')` call
- `_reference.md` Part 2 row 2 — `process.kill(-pgid, signal)` semantics

**Before**:
```ts
// src/supervisor/shutdown.ts:116, 163 (current — single PID only)
process.kill(record.pid, 'SIGTERM');
// wait 5s
process.kill(record.pid, 'SIGKILL');
```

**After**:
```ts
// src/supervisor/shutdown.ts:116, 163
// Negative PID signals the WHOLE process group on Unix (POSIX kill(2)).
// This tears down the SDK child and every descendant it spawned in one call.
process.kill(-record.pgid, 'SIGTERM');
// wait 5s for graceful exit (child.on('exit') resolves the cascade early)
process.kill(-record.pgid, 'SIGKILL');
```

**Reference**: `_reference.md` Part 2 row 2 (`process.kill(-pgid, signal)` — negative PID signals whole group on Unix; works in Bun via libuv); `_reference.md` Part 1 §Worker/lifecycle `src/supervisor/shutdown.ts:22-99, 116, 163`.

---

### Phase 4 — Delete all reaper intervals

**Purpose**: Zero repeating background timers in the worker. Orphans are prevented by Phase 2; stale sessions are an artifact of broken exit handling (fixed by Phase 5); failed rows are a retention policy question (handled at query time by `01-data-integrity.md`, not swept here).

**Anchors**:
- `src/services/worker-service.ts:537` — `startOrphanReaper` call (DELETE)
- `src/services/worker-service.ts:547` — `staleSessionReaperInterval = setInterval(...)` (DELETE)
- `src/services/worker-service.ts:567` — `clearFailedOlderThan` interval setup (DELETE)
- `src/services/worker/ProcessRegistry.ts:244-309` — `killIdleDaemonChildren` body (DELETE)
- `src/services/worker/ProcessRegistry.ts:315-344` — `killSystemOrphans` body (DELETE)
- `src/services/worker/ProcessRegistry.ts:349-382` — `reapOrphanedProcesses` body (DELETE)
- `src/services/worker/SessionManager.ts:516-568` — `reapStaleSessions` body (DELETE)

**Before**:
```ts
// src/services/worker-service.ts:537, 547, 567 (current)
this.startOrphanReaper(); // 30s interval
this.staleSessionReaperInterval = setInterval(
  () => this.sessionManager.reapStaleSessions(), 60_000
);
this.clearFailedInterval = setInterval(
  () => this.pendingStore.clearFailedOlderThan(ms), 120_000
);
```

**After**:
```ts
// src/services/worker-service.ts
// (nothing — no intervals, no reapers)
// child.on('exit') drives session teardown; Phase 2 process groups prevent orphans;
// 01-data-integrity handles failed-row retention via query-time filters.
```

**Reference**: `_reference.md` Part 1 §Worker/lifecycle `src/services/worker-service.ts:537, 547, 567`; Part 1 §Worker/lifecycle `ProcessRegistry.ts:244-309, 315-344, 349-382`; `SessionManager.ts:516-568`.

---

### Phase 5 — Delete the per-session `abandonedTimer` setTimeout

**Purpose**: `abandonedTimer` is a polling loop wearing a `setTimeout` disguise — it exists because the primary-path cleanup in `generatorPromise.finally` was unreliable. Fix the primary path, delete the defense.

**Anchors**:
- `src/services/worker/SessionManager.ts:631-670` — `getMessageIterator` + idle-timer callback
- `_mapping.md` Old Plan 07 Mechanism B row — DELETE verdict

**Before**:
```ts
// src/services/worker/SessionManager.ts (current, conceptual)
session.abandonedTimer = setTimeout(() => {
  this.cleanupSession(session.id); // polling via timer
}, ABANDONED_MS);
```

**After**:
```ts
// cleanup runs when the generator settles — one path, no timer
generatorPromise.finally(() => {
  this.cleanupSession(session.id);
});
```

**Reference**: `_reference.md` Part 1 §Worker/lifecycle `SessionManager.ts:631-670`; `_mapping.md` Old Plan 07 row "Mechanism B: Per-session `abandonedTimer` setTimeout" — DELETE.

---

### Phase 6 — Delete idle-eviction from SessionManager

**Purpose**: Evicting the "idlest" session to make room for a new one is load-shedding implemented at the wrong layer. Backpressure belongs on the queue, not on the pool.

**Anchors**:
- `src/services/worker/SessionManager.ts:477-506` — `evictIdlestSession`

**Before**:
```ts
// src/services/worker/SessionManager.ts:477-506 (current)
evictIdlestSession() {
  // scan pool, find oldest lastActiveAt, kill it to free a slot
}
```

**After**:
```ts
// deleted. Pool admission is gated by queue depth at SessionQueueProcessor;
// a full pool applies backpressure upstream instead of kicking live sessions.
```

**Reference**: `_reference.md` Part 1 §Worker/lifecycle `SessionManager.ts:477-506`.

---

### Phase 7 — Delete fallback agent chain (Gemini → OpenRouter)

**Purpose**: A fallback-agent chain hides SDK failures behind "it kind of worked with a different model." Principle 2 (fail-fast): surface the failure to the hook via exit code 2, let the caller decide.

**Anchors**:
- `src/services/worker/SessionManager.ts` — `fallbackAgent` / Gemini / OpenRouter references
- `_reference.md` Part 2 row 7 — Claude Code hook exit codes (0/1/2)

**Before**:
```ts
// src/services/worker/SessionManager.ts (current, conceptual)
try {
  return await runClaudeSdk(payload);
} catch (err) {
  logger.warn('SDK failed, falling back to Gemini');
  return await runGemini(payload); // silent degrade
}
```

**After**:
```ts
// SDK failure surfaces. Worker returns non-200; hook exits 2 so Claude Code sees it.
return await runClaudeSdk(payload);
```

**Reference**: `_reference.md` Part 2 row 7 (exit-code contract); principle 2 (no silent fallbacks).

---

### Phase 8 — Lazy-spawn wrapper in every hook

**Purpose**: Hooks start the worker when needed, detached from the hook process's lifetime. The wrapper is a few lines with no daemon-mode, no supervisor-in-a-box. It inherits PID-reuse safety from the supervisor start-guard (see Phase 9 and the PID-reuse section).

**Anchors**:
- `src/shared/worker-utils.ts:221-239` — current `ensureWorkerRunning` (port health check)
- `src/services/infrastructure/ProcessManager.ts:1013-1032` — daemon spawn pattern reference (`setsid` on Unix, `detached: true` fallback)

**Before**:
```ts
// src/shared/worker-utils.ts:221-239 (current)
export async function ensureWorkerRunning(): Promise<boolean> {
  // ping port; return true/false — caller degrades on false
}
```

**After**:
```ts
// src/shared/worker-utils.ts — lazy-spawn wrapper skeleton (~10 lines)
export async function ensureWorkerRunning(): Promise<boolean> {
  if (await isWorkerPortAlive()) return true; // inherits PID-reuse check (99060bac)
  const proc = spawn(bunPath, [workerPath], {
    detached: true,
    stdio: ['ignore', 'ignore', 'ignore'],
  });
  proc.unref(); // hook exit doesn't kill worker
  return await waitForWorkerPort({ attempts: 3, backoffMs: 250 });
}
```

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/shared/worker-utils.ts:221-239`; Part 1 §Worker/lifecycle `ProcessManager.ts:1013-1032`; Part 2 row 1 (spawn signature) and row 3 (Bun.spawn lacks `detached` — we use Node's API).

**Decision point — `respawn` dep vs hand-rolled retry**: see the dedicated subsection below. Chosen path: **(b) hand-rolled 3-attempt retry with exponential backoff.**

---

### Phase 9 — Delete worker self-shutdown

**Purpose**: The worker has no business deciding to exit on idle. If no work arrives, the worker sits idle; `proc.unref()` already ensures it does not keep the launching hook alive. The worker runs until killed (SIGTERM from installer, SIGKILL from crash, or OS reboot).

**Anchors**:
- `src/services/worker-service.ts:1094-1120` — shutdown sequence (KEEP the sequence for explicit SIGTERM; DELETE any idle-triggered self-shutdown path)

**Before**:
```ts
// conceptual — any idleCheck / idleTimeout that calls performGracefulShutdown on its own
if (Date.now() - lastActivity > IDLE_MAX_MS) this.shutdown();
```

**After**:
```ts
// no idle timer. Worker exits only on external signal or crash.
// performGracefulShutdown (GracefulShutdown.ts:52-86) remains for external SIGTERM.
```

**Reference**: `_reference.md` Part 1 §Worker/lifecycle `src/services/worker-service.ts:1094-1120`; Part 1 §Worker/lifecycle `GracefulShutdown.ts:52-86`.

---

## Required code snippets

### Process-group spawn (Unix)

```ts
// Node child_process.spawn — detached: true creates a new process group (setpgid).
// The child survives parent death; parent signals the whole subtree via negative PID.
const proc = spawn(command, args, {
  detached: true,
  stdio: ['ignore', 'pipe', 'pipe'],
});
const pgid = proc.pid; // on Unix, group leader's PID is the pgid
```

### Kill the whole process group

```ts
// Negative PID signals the whole process group on Unix (POSIX kill(2)).
// Tears down the SDK child AND every descendant it spawned in one syscall.
// UNIX ONLY — on Windows, process.kill(-pgid, …) is not supported; see Platform caveat.
process.kill(-pgid, 'SIGTERM');
// wait up to 5s for graceful exit; child.on('exit') may short-circuit the wait
process.kill(-pgid, 'SIGKILL');
```

### Lazy-spawn wrapper (hook-side)

```ts
// src/shared/worker-utils.ts
export async function ensureWorkerRunning(): Promise<boolean> {
  if (await isWorkerPortAlive()) return true; // port check inherits PID-reuse guard
  const proc = spawn(bunPath, [workerPath], {
    detached: true,
    stdio: ['ignore', 'ignore', 'ignore'],
  });
  proc.unref(); // hook exit doesn't keep worker linked
  return await waitForWorkerPort({ attempts: 3, backoffMs: 250 });
}
```

---

## Verification grep targets

```
grep -rn "setInterval" src/services/worker/ → 0
grep -rn "startOrphanReaper" src/ → 0
grep -rn "staleSessionReaperInterval" src/ → 0
grep -rn "killSystemOrphans" src/ → 0
grep -rn "killIdleDaemonChildren" src/ → 0
grep -rn "reapStaleSessions" src/ → 0
grep -rn "reapOrphanedProcesses" src/ → 0
grep -rn "evictIdlestSession" src/ → 0
grep -rn "abandonedTimer" src/ → 0
grep -rn "fallbackAgent\|Gemini\|OpenRouter" src/services/worker/SessionManager.ts → 0
test ! -e src/services/worker/ProcessRegistry.ts → file does NOT exist
test -d src/supervisor/ → directory DOES exist
Integration test: kill -9 <worker-pid> → next hook respawns worker; no orphan children
Integration test: graceful SIGTERM → all SDK children exit within 6s
```

---

## Anti-pattern guards

- Do NOT keep `killSystemOrphans` as a boot-once function — orphans are PREVENTED by process groups, not swept.
- Do NOT add idle-timer self-shutdown to the worker.
- Do NOT introduce a third process registry during the migration.

---

## Platform caveat — Windows

`process.kill(-pgid, signal)` is **Unix-only**. On Windows, negative PIDs are not a valid signal target; the Node API surface differs (no POSIX process groups, no `setpgid`). The Windows equivalent is a **Job Object**: a child is assigned to a Job, and `TerminateJobObject` tears down the whole Job. Node does not expose Job Objects directly; a native addon (`node-windows-killtree`, `taskkill /T /F /PID`, or Windows-specific `child_process` flags) is required.

This is a documented gap-to-fix, carried forward from `_rewrite-plan.md` Known gaps #3. **This plan does not commit to a Windows implementation.** Current claude-mem users on Windows are served via WSL (which exposes Unix process-group semantics). A native Windows port is future work and belongs in its own plan.
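
In the meantime, the unchanged Windows teardown path relies on `taskkill`. A minimal sketch of that tree-kill call (`/T` walks the child tree, `/F` forces termination):

```ts
// Sketch only — the existing Windows path (tree-kill/taskkill), not a Job Object port.
import { execFile } from 'node:child_process';

function killTreeWindows(pid: number): void {
  execFile('taskkill', ['/PID', String(pid), '/T', '/F'], (err) => {
    // taskkill exits non-zero if the process is already gone; that is not an error here.
    if (err) return;
  });
}
```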

---

## `respawn` dep decision

**Options**:
- **(a)** Adopt the [`respawn` npm package](https://github.com/mafintosh/respawn) (~200 LOC pure JS; by `mafintosh`; NOT currently a dependency per `_reference.md` Part 2 row "`respawn` npm package").
- **(b)** Hand-roll a 3-attempt retry with exponential backoff inside the lazy-spawn wrapper.

**Chosen: (b) — hand-roll 3-attempt retry with exponential backoff.**

**Rationale**:
1. **Fewer deps.** `respawn` would be a new top-level runtime dependency for behaviour that fits in ~10 lines (`waitForWorkerPort({ attempts: 3, backoffMs: 250 })`). Principle 6 (one helper, N callers) prefers the narrow local helper over a general-purpose supervisor library.
2. **The retry is trivial.** Three attempts, 250ms → 500ms → 1000ms backoff. No supervision semantics beyond "start one child and wait for its port to open."
3. **Supervision is already handled by the OS.** `respawn` shines when you want auto-restart-on-crash while the parent keeps running. We explicitly do NOT want that: the hook is short-lived and detaches via `proc.unref()`; long-running supervision is the OS's job (launchd / systemd user unit — documented in `_reference.md` Part 2 rows 8-9 as future installer work, NOT adopted here).
4. **We control the failure mode.** If all three attempts fail, the hook reports via exit code 2 (Phase 7 contract), which surfaces to Claude. A library would add an opinion layer we don't need.

If a future phase demands auto-restart-while-parent-lives semantics (e.g., a persistent hook that wants to keep the worker alive inside its own process tree), revisit (a). Not this plan.
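
The chosen helper is small enough to sketch in full. Assumption: `isWorkerPortAlive` is the existing port-ping helper; names and delays match the wrapper's call site above:

```ts
// Hand-rolled 3-attempt retry with exponential backoff (option b) — sketch only.
declare function isWorkerPortAlive(): Promise<boolean>; // existing helper in worker-utils.ts

async function waitForWorkerPort(
  { attempts, backoffMs }: { attempts: number; backoffMs: number },
): Promise<boolean> {
  for (let i = 0; i < attempts; i++) {
    // 250ms → 500ms → 1000ms for attempts = 3, backoffMs = 250
    await new Promise((resolve) => setTimeout(resolve, backoffMs * 2 ** i));
    if (await isWorkerPortAlive()) return true;
  }
  return false; // caller surfaces failure via hook exit code 2
}
```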

---

## PID-reuse safety

The lazy-spawn wrapper's port-check fast path (`if (await isWorkerPortAlive()) return true`) must NOT be fooled by a stale PID-file pointing at a recycled PID. This is the exact failure mode fixed by commit **`99060bac`** ("fix: detect PID reuse in worker start-guard (container restarts)"), which introduced `captureProcessStartToken` at `src/supervisor/process-registry.ts:85-173` (reads `/proc/<pid>/stat` field 22 on Linux, `ps -o lstart=` on macOS; returns `null` on Windows).

**Requirement for Phase 8**: the `isWorkerPortAlive()` helper — or the layer above it — must compare the current process start-token against the recorded token before treating "port open at recorded PID" as "our worker is alive." If the tokens differ, treat the port as dead (a different process is squatting on it) and fall through to the spawn path. This inherits the primary-path correctness of commit `99060bac` rather than reimplementing it. No new PID-reuse logic lives in `worker-utils.ts`; it calls the supervisor's start-token check.
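
A minimal sketch of that check. Assumptions: `captureProcessStartToken` is called synchronously here, and `pingWorkerPort`, `recordedPid`, and `recordedToken` are hypothetical stand-ins for the surrounding module's port ping and the worker's start record:

```ts
// Sketch only — wires the supervisor's start-token check into the port fast path.
import { captureProcessStartToken } from '../supervisor/process-registry';

declare function pingWorkerPort(): Promise<boolean>; // hypothetical port ping
declare const recordedPid: number;                   // from the worker's start record
declare const recordedToken: string;                 // captured at spawn time

async function isWorkerPortAlive(): Promise<boolean> {
  if (!(await pingWorkerPort())) return false;           // nothing is listening
  const current = captureProcessStartToken(recordedPid); // null on Windows
  if (current !== null && current !== recordedToken) {
    return false; // PID recycled — a different process is squatting on the port
  }
  return true;
}
```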

**Reference**: `_reference.md` Part 1 §Worker/lifecycle `src/supervisor/process-registry.ts:85-173` — `captureProcessStartToken` (KEEP, legitimate primary-path correctness); commit `99060bac`.

# 03 — Ingestion Path

## Purpose

Cure the ingestion layer's second-system accretion by making the parser fail-fast, collapsing the worker-internal HTTP loopback into direct function calls, replacing the 5-second rescan `setInterval` with a recursive `fs.watch`, and delegating tool-use / tool-result pairing to the database via `UNIQUE(session_id, tool_use_id)` instead of a per-process in-memory Map. The cure is ten moves, each landing in the same PR in which the code it replaces stops being referenced:

1. Expose `ingestObservation` / `ingestPrompt` / `ingestSummary` as direct worker functions (prerequisite for plans `05-hook-surface.md` + `06-api-surface.md`).
2. Replace `parseObservations` + `parseSummary` + `coerceObservationToSummary` with a single `parseAgentXml` returning a discriminated union.
3. Migrate `ResponseProcessor` to the new parser and emit `summaryStoredEvent` for the blocking endpoint.
4. Delete the circuit breaker.
5. Delete `coerceObservationToSummary`.
6. Swap the rescan for recursive `fs.watch`.
7. Delete the `pendingTools` Map and pair via DB JOIN.
8. Call the ingest helper directly (no HTTP loopback).
9. Consolidate tag stripping to one regex.
10. Delete the dead `TranscriptParser` class.

---

## Principles invoked

This plan is measured against `00-principles.md`:

1. **Principle 1 — No recovery code for fixable failures.** `coerceObservationToSummary` exists only to recover from LLM contract violations on the summary path. Fix the contract (fail-fast to `markFailed`), delete the coercion helper.
2. **Principle 2 — Fail-fast over grace-degrade.** `parseAgentXml` returns `{ valid: false, reason }` on malformed input; callers mark the message failed and surface the reason. No circuit breaker, no coercion, no silent passthrough.
3. **Principle 3 — UNIQUE constraint over dedup window.** Tool-use / tool-result pairing is enforced by `UNIQUE(session_id, tool_use_id)` on `pending_messages` (defined in `01-data-integrity.md` Phase 1), not by a per-process `pendingTools` Map that disappears on worker restart.
4. **Principle 4 — Event-driven over polling.** `fs.watch(dir, { recursive: true })` replaces the 5-second rescan `setInterval` at `src/services/transcripts/watcher.ts:124-132`.
6. **Principle 6 — One helper, N callers.** One `parseAgentXml` for observation + summary XML. One `ingestObservation` for every worker-internal caller (no HTTP loopback). One tag-stripping regex with alternation.
7. **Principle 7 — Delete code in the same PR it becomes unused.** Circuit-breaker fields, `coerceObservationToSummary`, the `pendingTools` Map, and the `TranscriptParser` class all delete in the same PR that their last caller is rewritten.

**Cross-references**:

- `01-data-integrity.md` Phase 1 defines the `UNIQUE(session_id, tool_use_id)` constraint on `pending_messages` that Phase 6 of this plan depends on. Phase 6 is blocked until `01-data-integrity.md` Phase 1 + Phase 2 (fresh schema + ALTER migration) land.
- `05-hook-surface.md` consumes the `summaryStoredEvent` emitted by Phase 2 of this plan as the signal that unblocks the blocking `/api/session/end` endpoint. Phase 2's event name and payload shape are the contract; `05-hook-surface.md` Phase 3 references it.
- `02-process-lifecycle.md` is orthogonal to ingestion — the helpers defined in Phase 0 run inside the worker process regardless of how it was spawned — but Phase 0's prohibition on HTTP loopback is a pre-condition for `02-process-lifecycle.md`'s process-group teardown to leave no in-flight loopback requests stranded.

---

## Phase 0 — Ingest helpers (prerequisite for plans 05, 06, 07)

**Purpose**: Expose `ingestObservation(payload)`, `ingestPrompt(payload)`, and `ingestSummary(payload)` as direct functions on the worker. Every worker-internal caller (the transcript processor, the ResponseProcessor, any future in-process producer) invokes the function directly. No `http://localhost:37777` loopback for worker→worker calls. Hooks (cross-process) still use HTTP; this phase exists to kill the loopback inside the single worker process.

**Files**:
- New: `src/services/worker/http/shared.ts` — exports `ingestObservation`, `ingestPrompt`, `ingestSummary` (plus the HTTP route handlers that delegate to the same three functions, so plans `05-hook-surface.md` and `06-api-surface.md` can share them).
- `_reference.md` Part 3 row "HTTP loopback replacement" documents this file as the canonical landing spot.

**Contract**:

```ts
// src/services/worker/http/shared.ts
export function ingestObservation(payload: ObservationPayload): IngestResult;
export function ingestPrompt(payload: PromptPayload): IngestResult;
export function ingestSummary(payload: SummaryPayload): IngestResult;

// IngestResult is either the inserted row's id, or a discriminated-union error the caller surfaces.
```
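
The plan fixes only that `IngestResult` is a discriminated union. One possible shape — an assumption, chosen to match the `result.ok` / `result.reason` checks at the Phase 7 call site below:

```ts
// Assumed shape, not mandated by the plan.
type IngestResult =
  | { ok: true; id: number }        // inserted row's id
  | { ok: false; reason: string };  // caller surfaces this (markFailed, thrown error, …)
```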

**Callers after this plan lands**:
- `src/services/transcripts/processor.ts:252` — calls `ingestObservation(payload)` directly (Phase 7).
- `src/services/worker/agents/ResponseProcessor.ts` — calls `ingestSummary(payload)` and emits `summaryStoredEvent` (Phase 2).
- Hook handlers (`src/cli/handlers/observation.ts`, `src/cli/handlers/session-init.ts`, …) call via HTTP; the HTTP route handler in `06-api-surface.md` delegates to the same three functions.

**By principle 6 (one helper, N callers)**: a single implementation backs both the in-process caller and the cross-process HTTP route. No duplicated insert logic.

**Citation**: `_reference.md` Part 1 §Ingestion `src/services/transcripts/processor.ts:252` (current HTTP loopback call site); `_reference.md` Part 3 row "HTTP loopback replacement" (target file location).

**Plans that depend on Phase 0**:
- `05-hook-surface.md` Phase 3 consumes `summaryStoredEvent` emitted by `ingestSummary` callers.
- `06-api-surface.md` Phase 2's `validateBody` Zod middleware delegates to these helpers after validation passes.

---

## Phase 1 — `parseAgentXml` discriminated union

**Purpose**: Replace `parseObservations`, `parseSummary`, and `coerceObservationToSummary` with a single entry point that inspects the root element and returns a discriminated union. By principle 2 (fail-fast), the function never coerces and never returns `undefined`; it either parses a valid payload or names the reason it failed. The caller is responsible for deciding whether a malformed payload is a retry or a `markFailed`.

**Files**:
- `src/sdk/parser.ts:33-111` — `parseObservations` (inlined into `parseAgentXml`)
- `src/sdk/parser.ts:122-259` — `parseSummary` + `coerceObservationToSummary` (former inlined, latter deleted entirely in Phase 4)

**Signature**:

```ts
type ParseResult =
  | { valid: true; kind: 'observation' | 'summary'; data: ParsedObservation | ParsedSummary }
  | { valid: false; reason: string };
function parseAgentXml(raw: string): ParseResult;
```

**Semantics** (illustrated by the sketch after this list):
- Inspect the root element: `<observation>` → parse observation, return `{ valid: true, kind: 'observation', data }`. `<summary>` → parse summary, return `{ valid: true, kind: 'summary', data }`. Anything else, or well-formed XML with missing required children → `{ valid: false, reason: '<root element>: <missing field or malformed child>' }`.
- `reason` is a short human-readable string suitable for inclusion in `pending_messages.failed_reason` (column exists; surfaces in the viewer).
- The `<skip_summary reason="…"/>` bypass (documented in `_reference.md` Part 3) is parsed as a valid summary with a `skipped: true` flag on `ParsedSummary` — it is not a coercion, it is a first-class case in the schema.
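
A concrete illustration of those semantics (element bodies abbreviated; the exact `reason` strings are illustrative, not contractual):

```ts
parseAgentXml('<observation>…</observation>');
// → { valid: true, kind: 'observation', data: { … } }

parseAgentXml('<skip_summary reason="trivial session"/>');
// → { valid: true, kind: 'summary', data: { skipped: true, … } }

parseAgentXml('<note>hello</note>');
// → { valid: false, reason: 'note: unknown root element' }
```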

**Citation**: `_reference.md` Part 1 §Ingestion `src/sdk/parser.ts:33-111` (current `parseObservations`) and `src/sdk/parser.ts:122-259` (current `parseSummary` + `coerceObservationToSummary` target). `_reference.md` Part 3 rows "Summary XML" and "Observation XML" fix the element shapes.

---

## Phase 2 — ResponseProcessor migration + `summaryStoredEvent`

**Purpose**: Rewrite the SDK response handler so it calls `parseAgentXml` exactly once, branches on the discriminated union, and on valid summaries emits `summaryStoredEvent` for the blocking endpoint in `05-hook-surface.md` to await. On invalid, it calls `markFailed(messageId, reason)` — no coercion retry, no circuit breaker, no silent passthrough.

**Files**:
- `src/services/worker/agents/ResponseProcessor.ts:96-200` — replace the body of the parse-and-dispatch section.
- `src/services/sqlite/PendingMessageStore.ts:349-374` — `markFailed` is unchanged; its retry ladder (`retry_count < maxRetries`) is the legitimate primary-path surface for transient failures.

**Before** (conceptual):
```ts
// src/services/worker/agents/ResponseProcessor.ts:96-200 (current)
const obs = parseObservations(raw);
if (obs) return storeObservations(obs);
const summary = parseSummary(raw) ?? coerceObservationToSummary(obs); // silent coerce
if (this.consecutiveSummaryFailures > MAX_CONSECUTIVE_SUMMARY_FAILURES) { … } // circuit breaker
```

**After**:
```ts
// src/services/worker/agents/ResponseProcessor.ts:96-200 (after this phase)
const result = parseAgentXml(raw);
if (!result.valid) {
  await pendingStore.markFailed(messageId, result.reason);
  return;
}
if (result.kind === 'observation') {
  ingestObservation(result.data); // Phase 0 helper
  return;
}
// kind === 'summary'
ingestSummary(result.data); // Phase 0 helper
eventBus.emit('summaryStoredEvent', { sessionId, messageId }); // consumed by 05-hook-surface.md Phase 3
```

**Event contract** (stable surface for `05-hook-surface.md`):

```ts
type SummaryStoredEvent = { sessionId: string; messageId: number };
// emitted once per successful ingestSummary call; blocking /api/session/end awaits this
```
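
The consumer side (specified in `05-hook-surface.md` Phase 3) can be sketched against a Node `EventEmitter`-style bus — an assumption, since Known gaps #3 deliberately leaves the bus implementation open:

```ts
// Sketch only — how the blocking /api/session/end endpoint might await the event.
import { EventEmitter } from 'node:events';

declare const eventBus: EventEmitter; // implementation left open by Known gaps #3

function waitForSummary(sessionId: string, timeoutMs: number): Promise<SummaryStoredEvent> {
  return new Promise((resolve, reject) => {
    const onStored = (e: SummaryStoredEvent) => {
      if (e.sessionId !== sessionId) return; // not ours
      eventBus.off('summaryStoredEvent', onStored);
      clearTimeout(timer);
      resolve(e);
    };
    const timer = setTimeout(() => {
      eventBus.off('summaryStoredEvent', onStored);
      reject(new Error(`summary for ${sessionId} not stored within ${timeoutMs}ms`));
    }, timeoutMs);
    eventBus.on('summaryStoredEvent', onStored);
  });
}
```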

**By principle 1 (no recovery code for fixable failures)**: the coercion-then-circuit-breaker pattern existed to recover from a broken primary path (the LLM occasionally returned `<observation>` when asked for `<summary>`). The cure is to mark the message failed, surface the reason, and let the retry ladder in `markFailed` do its job. No coerce.

**Citation**: `_reference.md` Part 1 §Ingestion `src/services/worker/agents/ResponseProcessor.ts:96-200` (current parse-and-dispatch block); `01-data-integrity.md` for `markFailed` retry ladder context.

---

## Phase 3 — Delete circuit breaker

**Purpose**: `consecutiveSummaryFailures` + `MAX_CONSECUTIVE_SUMMARY_FAILURES` is a second-system effect — a counter that trips after N bad parses and stops attempting to parse. By principle 2 (fail-fast), each malformed payload is independently marked failed; a storm of bad parses is a signal to surface (via the retry ladder hitting `maxRetries`), not a signal to silently stop trying.

**Files**:
- `src/services/worker/agents/ResponseProcessor.ts:96-200` — delete the `consecutiveSummaryFailures` field, the `MAX_CONSECUTIVE_SUMMARY_FAILURES` constant, and every `if (this.consecutiveSummaryFailures > …)` guard.
- `src/services/worker/SessionManager.ts` — delete any SessionManager-side guards that read the same counter.

**Delete in the same PR**:
- Field: `consecutiveSummaryFailures`
- Constant: `MAX_CONSECUTIVE_SUMMARY_FAILURES`
- Every guard that reads them
- Any log line of the form "circuit breaker tripped"

**Citation**: `_reference.md` Part 1 §Ingestion `src/services/worker/agents/ResponseProcessor.ts:96-200` (the circuit breaker lives in this block).

---

## Phase 4 — Delete `coerceObservationToSummary`

**Purpose**: Remove the coercion helper that maps `<observation>` fields into a `<summary>` shape when the LLM violates the summary contract. By principle 1 (no recovery code for fixable failures), the contract violation is surfaced to the caller via `parseAgentXml`'s `{ valid: false, reason }` branch; there is no coercion path.

**Files**:
- `src/sdk/parser.ts:222-259` — delete the `coerceObservationToSummary` function entirely.
- Every caller — after the Phase 2 migration, the only caller was `ResponseProcessor.ts`; its rewrite removes the call.

**Delete in the same PR**:
- The function body at `src/sdk/parser.ts:222-259`
- Any import of `coerceObservationToSummary` across the codebase
- Any unit test that asserted coercion behavior (these now assert the `{ valid: false, reason }` branch instead)

**By principle 7 (delete code in the same PR)**: no `@deprecated` fence, no "remove next release." The function deletes in the PR that rewrites `ResponseProcessor`.

**Citation**: `_reference.md` Part 1 §Ingestion `src/sdk/parser.ts:222-259` (the target function).

---

## Phase 5 — Recursive `fs.watch`

**Purpose**: Replace the 5-second `setInterval` rescan in `src/services/transcripts/watcher.ts:124-132` with a single `fs.watch(transcriptsRoot, { recursive: true })`. By principle 4 (event-driven over polling), the OS notifies us when a transcript file is created or modified; we do not walk the directory every 5 seconds.

**Files**:
- `src/services/transcripts/watcher.ts:124-132` — replace the rescan `setInterval` with `fs.watch`.
- `package.json` — bump `engines.node` to `>=20.0.0`. This is the preflight gate; the phase does not land until the engines bump ships.

**Preflight**: `engines.node >= 20.0.0`. Recursive mode on Linux was experimental before Node 20; it became stable in Node 20 across all major platforms (Linux, macOS, Windows). See `_reference.md` Part 2 row "`fs.watch(dir, { recursive: true })` on Linux" citing the Node 20 release notes.

**Signature + gotcha callout**:

```ts
import { watch } from 'node:fs';
const w = watch(transcriptsRoot, { recursive: true, persistent: true }, (event, name) => { … });
```

**Gotcha**: Recursive mode on Linux was experimental before Node 20 and unsupported before Node 18; shipping this phase on a Node 18 install would silently fall back to non-recursive mode on Linux and miss every subdirectory. The `engines.node >= 20.0.0` bump in `package.json` is the load-bearing gate — the plan does not ship without it. Cite: `_reference.md` Part 2 row `fs.watch` (Node 20 release-notes anchor) and Part 4 row 3 ("Node 20+ requirement").

**Before**:

```ts
// src/services/transcripts/watcher.ts:124-132 (current)
this.rescanInterval = setInterval(() => this.rescanTranscripts(), 5_000);
```

**After**:

```ts
// src/services/transcripts/watcher.ts:124-132 (after this phase)
import { watch } from 'node:fs';
import { resolve } from 'node:path';

this.watcher = watch(transcriptsRoot, { recursive: true, persistent: true }, (event, name) => {
  if (!name) return; // some events omit the filename
  void this.onTranscriptEvent(event, resolve(transcriptsRoot, name));
});
```

**Delete in the same PR**:
- `rescanInterval` field
- Every `setInterval` in `src/services/transcripts/watcher.ts`
- The 5-second `rescanTranscripts` method body if no other caller remains

**Citation**: `_reference.md` Part 1 §Ingestion `src/services/transcripts/watcher.ts:124-132` (rescan target); Part 2 row `fs.watch` recursive (Node 20+); Part 4 row 3 (engines.node bump preflight).

---

## Phase 6 — DB-backed tool pairing

**Purpose**: Delete the per-process `pendingTools` Map at `src/services/transcripts/processor.ts:23`. Insert both `tool_use` and `tool_result` rows into `pending_messages` with the `UNIQUE(session_id, tool_use_id)` constraint (defined in `01-data-integrity.md` Phase 1 on the `pending_messages` table and enforced by the UNIQUE INDEX added in `01-data-integrity.md` Phase 2). Pair `tool_use` with its `tool_result` by JOIN at read time — the database is the authority on what is paired, not an in-memory Map that empties on worker restart.

**Files**:
- `src/services/transcripts/processor.ts:23` — delete `pendingTools: Map<string, ToolInput>`.
- `src/services/transcripts/processor.ts:202, :232-236` — delete the dispatcher's Map-based pairing; both `tool_use` and `tool_result` go through the `pending_messages` insert.
- `src/services/sqlite/PendingMessageStore.ts` — the insert path uses `INSERT … ON CONFLICT(session_id, tool_use_id) DO NOTHING` to make ingestion idempotent against replayed transcript lines.

**Pairing query** (read-time JOIN):

```sql
-- pair tool_use with its tool_result by session_id + tool_use_id
SELECT u.payload AS tool_use_payload,
       r.payload AS tool_result_payload
FROM pending_messages u
JOIN pending_messages r USING (session_id, tool_use_id)
WHERE u.kind = 'tool_use'
  AND r.kind = 'tool_result'
  AND u.session_id = ?;
```

**By principle 3 (UNIQUE constraint over dedup window)**: the database prevents duplicate pairings. There is no timer gate, no Map survival question, no "what if the worker restarted mid-pair" failure mode.

**Cross-reference**: `01-data-integrity.md` Phase 1 defines the `UNIQUE(session_id, tool_use_id)` constraint inline in the fresh `schema.sql`. `01-data-integrity.md` Phase 2 adds the equivalent UNIQUE INDEX via ALTER migration for already-installed databases, with a pre-index dedup pass. Phase 6 of this plan is blocked until both land.

**Delete in the same PR**:
- `pendingTools` Map field at `processor.ts:23`
- Every `pendingTools.set` / `pendingTools.get` / `pendingTools.delete` call
- The dispatcher pairing block at `processor.ts:202` and `:232-236`

**Citation**: `_reference.md` Part 1 §Ingestion `src/services/transcripts/processor.ts:23, 202, 232-236`; `01-data-integrity.md` Phase 1 (schema) + Phase 2 (migration) for the UNIQUE constraint.

---

## Phase 7 — Direct `ingestObservation` call (no HTTP loopback)

**Purpose**: Replace the HTTP loopback at `src/services/transcripts/processor.ts:252` with a direct call to `ingestObservation(payload)` (the helper from Phase 0). The transcript processor runs inside the worker; calling the worker's own HTTP endpoint from inside the worker is second-system round-tripping. One function call, no network stack, no JSON round-trip.

**Files**:
- `src/services/transcripts/processor.ts:252` — replace the `observationHandler.execute()` + `workerHttpRequest` round-trip with `ingestObservation(payload)`.
- `src/services/transcripts/processor.ts:275-285` — the `maybeParseJson` silent passthrough is rewritten to fail-fast (by principle 2): if the JSON is malformed, throw; do not ingest the raw string.

**Before** (conceptual):

```ts
// src/services/transcripts/processor.ts:252 (current)
await observationHandler.execute(payload);
// … which internally does workerHttpRequest(POST, 'http://localhost:37777/api/observations', payload)
```

**After**:

```ts
// src/services/transcripts/processor.ts:252 (after this phase)
const result = ingestObservation(payload); // Phase 0 helper, same process
if (!result.ok) throw new Error(`ingest failed: ${result.reason}`);
```

**Delete in the same PR**:
- Every `observationHandler.execute()` call site inside `src/services/transcripts/`
- Any import of `workerHttpRequest` in `src/services/transcripts/`
- The `maybeParseJson` silent-passthrough branch at `processor.ts:275-285` (replace with a fail-fast parse)

**By principle 6 (one helper, N callers)**: the single `ingestObservation` helper from Phase 0 is called by the processor (in-process) AND by the HTTP route handler in `06-api-surface.md` (cross-process). The route handler is a thin adapter; the business logic is in the helper.

**Citation**: `_reference.md` Part 1 §Ingestion `src/services/transcripts/processor.ts:252` (current HTTP loopback call); `:275-285` (silent `maybeParseJson` passthrough). `_reference.md` Part 3 row "HTTP loopback replacement" (target pattern).

---

## Phase 8 — Single-regex tag strip

**Purpose**: Consolidate `src/utils/tag-stripping.ts` `countTags` + `stripTagsInternal` into one regex with alternation. The current implementation makes six `.replace()` / `.match()` calls for six tag types; by principle 6 (one helper, N callers), this is six copies of the same concern.

**Files**:
- `src/utils/tag-stripping.ts:37-44` — `countTags` (six separate `.match()` calls)
- `src/utils/tag-stripping.ts:63-69` — `stripTagsInternal` (six separate `.replace()` calls)

**After**: A single regex with alternation across all six tag names, single-pass over the input.

```ts
// src/utils/tag-stripping.ts (after this phase)
const STRIP_REGEX = /<(private|claude-mem-context|system-reminder|…)\b[^>]*>[\s\S]*?<\/\1>/g;

export function stripTags(input: string): { stripped: string; counts: Record<TagName, number> } {
  const counts: Record<TagName, number> = Object.fromEntries(TAG_NAMES.map(n => [n, 0]));
  const stripped = input.replace(STRIP_REGEX, (_, name) => { counts[name]++; return ''; });
  return { stripped, counts };
}
```

**Delete in the same PR**:
- `countTags` as a separate exported function
- `stripTagsInternal` as a separate exported function
- All six per-tag `.replace()` / `.match()` call sites

**Citation**: `_reference.md` Part 1 §Ingestion `src/utils/tag-stripping.ts:37-44, 63-69` (the two functions being consolidated). Part 3 row "Privacy tags" (the six tag names this regex must cover).

---

## Phase 9 — Delete dead `TranscriptParser` class

**Purpose**: The `TranscriptParser` class at `src/utils/transcript-parser.ts:28-90` has no active importers. The active parser is `extractLastMessage` at `src/shared/transcript-parser.ts:41-144`. By principle 7 (delete code in the same PR it becomes unused), the dead class deletes now — not fenced with `@deprecated`, not "removed next release."

**Files**:
- `src/utils/transcript-parser.ts` — delete the file in its entirety (the `TranscriptParser` class at `:28-90` is the file's only export).

**Pre-deletion check**: `grep -rn "from.*utils/transcript-parser" src/` must return 0 matches before deletion. If any import exists, it was missed during prior cleanup and must be rewritten to `src/shared/transcript-parser.ts` in the same PR.

**Delete in the same PR**:
- `src/utils/transcript-parser.ts` (entire file)
- Any test file whose sole purpose was exercising `TranscriptParser` (its assertions are covered by tests against `extractLastMessage`)

**Citation**: `_reference.md` Part 1 §Ingestion `src/utils/transcript-parser.ts:28-90` (dead class) and `src/shared/transcript-parser.ts:41-144` (active replacement function).

---

## Parser signature (verbatim contract)

Phase 1 establishes the single entry point for agent-XML parsing. Every caller branches on the discriminated union; nothing else parses agent XML after this plan lands.

```ts
type ParseResult =
  | { valid: true; kind: 'observation' | 'summary'; data: ParsedObservation | ParsedSummary }
  | { valid: false; reason: string };
function parseAgentXml(raw: string): ParseResult;
```

---

## `fs.watch` signature + gotcha callout (verbatim contract)

Phase 5 establishes the single directory-watch surface. The rescan `setInterval` is deleted in the same PR.

```ts
import { watch } from 'node:fs';
const w = watch(transcriptsRoot, { recursive: true, persistent: true }, (event, name) => { … });
```

**Gotcha**: recursive mode on Linux was experimental before Node 20. The plan's preflight is `engines.node >= 20.0.0` in `package.json`; shipping Phase 5 on Node 18 would silently fall back to non-recursive mode on Linux and miss every subdirectory. Cite: `_reference.md` Part 2 row `fs.watch(dir, { recursive: true })` (Node 20 release-notes anchor); Part 4 row 3 (engines.node bump preflight).

---

## Verification grep targets

Each command below must return the indicated count (or the indicated condition) after this plan lands.

```
grep -rn coerceObservationToSummary src/ → 0
grep -rn consecutiveSummaryFailures src/ → 0
grep -rn "pendingTools" src/services/transcripts/ → 0
grep -rn "setInterval" src/services/transcripts/watcher.ts → 0
grep -rn "observationHandler.execute" src/services/transcripts/ → 0
test ! -e src/utils/transcript-parser.ts → exit 0 (file deleted)
jq '.engines.node' package.json → ">=20.0.0" (or stricter)
```

**Fuzz test 1** (orphan `tool_use`): Drop a JSONL file containing a `tool_use` line with no matching `tool_result`. The `tool_use` row is inserted into `pending_messages`, the pairing JOIN (Phase 6 query) returns zero pairs, no observation is emitted, and no error is logged beyond a debug-level "unpaired tool_use" note. The worker does not crash.

**Fuzz test 2** (phantom `tool_result`): Drop a JSONL file containing a `tool_result` line referencing a `tool_use_id` that does not exist in the same session. The `tool_result` row is inserted into `pending_messages` (the UNIQUE constraint deduplicates replayed rows; it does not require a matching `tool_use` to exist), the pairing JOIN returns zero pairs, a debug-level "phantom tool_result" log line is emitted, no observation is produced, and the worker does not crash.

**Nine verification targets total**: the seven commands above + two fuzz tests.
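
Fuzz test 1 might look like this as a `bun:test` sketch — the `dropTranscriptLine` and `queryPairs` helpers are illustrative stand-ins for the test harness, not existing APIs:

```ts
import { test, expect } from 'bun:test';

// Hypothetical harness helpers — stand-ins, not existing APIs.
declare function dropTranscriptLine(line: Record<string, string>): Promise<void>;
declare function queryPairs(sessionId: string): Promise<unknown[]>;

test('orphan tool_use inserts a row, pairs nothing, crashes nothing', async () => {
  await dropTranscriptLine({ kind: 'tool_use', session_id: 's1', tool_use_id: 't-orphan' });
  const pairs = await queryPairs('s1'); // the Phase 6 read-time JOIN
  expect(pairs).toHaveLength(0);        // no tool_result → no pair, no observation
});
```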

---

## Anti-pattern guards

Reproduced verbatim from the rewrite plan:

- Do NOT keep coercion as a "lenient mode" flag.
- Do NOT ship a polling fallback for `fs.watch` — Node 20+ handles recursive Linux natively.
- Do NOT preserve the in-memory Map behind a feature flag.

Additional hard rules enforced by this plan:

- No new `coerce*`, `heal*`, `recover*`, `repair*` function name appears in `src/` after this plan lands, except inside a DELETE directive.
- No new `setInterval` is introduced in `src/services/transcripts/`.
- No new HTTP round-trip from the worker to its own `localhost:37777` endpoint is introduced; worker-internal producers use Phase 0 helpers directly.

---

## Known gaps / deferrals

1. **Preflight sequencing.** Phase 5 (`fs.watch` recursive) cannot land before the `engines.node >= 20.0.0` bump ships in `package.json`. Plan `98-execution-order.md` will sequence this as a preflight gate. Until then, Phase 5 is blocked.
2. **Schema dependency.** Phase 6 (DB-backed pairing) cannot land before `01-data-integrity.md` Phase 1 (fresh `schema.sql` with `UNIQUE(session_id, tool_use_id)`) and Phase 2 (ALTER migration + pre-index dedup) ship. Plan `98-execution-order.md` will sequence this as a DAG edge from `01` Phase 2 → this plan Phase 6.
3. **Event-bus choice.** Phase 2 emits `summaryStoredEvent`; the event-bus implementation (Node `EventEmitter` vs a dedicated `src/services/infrastructure/eventBus.ts`) is left to the implementer. `05-hook-surface.md` Phase 3 specifies the consumer contract but not the emitter mechanism.
# 04 — Read Path
|
||||
|
||||
**Purpose**: Collapse the read path — rendering, search, and knowledge-corpus query — to a single shape per concern. One `renderObservations(obs, strategy: RenderStrategy)` function replaces `AgentFormatter`, `HumanFormatter`, `ResultFormatter`, and `CorpusRenderer`, driven by a config object (not a class hierarchy). One search path routes every caller through `SearchOrchestrator`; `SearchManager.findByConcept` / `findByFile` / `findByType` and seven hand-rolled copies of the recency filter are deleted. Chroma failure throws `503` instead of silently re-querying SQLite, and `HybridSearchStrategy`'s silent fallbacks to metadata-only are deleted in the same PR. The `@deprecated getExistingChromaIds` fence is removed, the duplicate `estimateTokens` implementations are collapsed to one utility, and the knowledge-corpus layer is simplified by deleting `session_id` persistence, the `prime` / `reprime` operations, and the auto-reprime regex in `KnowledgeAgent`; `/query` becomes a fresh SDK call with `systemPrompt` that relies on SDK prompt caching. Chroma sync behavior (delete-then-add, `chroma_synced` column, boot-once backfill) is defined by `01-data-integrity.md` §Phase 7 and consumed unchanged here.
|
||||
|
||||
---
|
||||
|
||||
## Principles invoked
|
||||
|
||||
This plan is measured against `00-principles.md`:
|
||||
|
||||
1. **Principle 2 — Fail-fast over grace-degrade.** `SearchOrchestrator` throws `503` on Chroma error. `ChromaSearchStrategy` returns `usedChroma: false` only when Chroma is explicitly uninitialized; every real error propagates. `HybridSearchStrategy`'s three try/catch fallbacks that returned metadata-only results are deleted. No silent coerce, no silent degrade.
|
||||
2. **Principle 6 — One helper, N callers.** One `renderObservations(obs, strategy)` replaces four formatter classes; one `SearchOrchestrator` path replaces `SearchManager.findBy*`; one `RECENCY_WINDOW_MS` import replaces seven copies; one `estimateTokens` utility replaces two per-formatter duplicates.
|
||||
3. **Principle 7 — Delete code in the same PR it becomes unused.** The four formatter classes, the `SearchManager.findBy*` methods, the seven recency copies, the `fellBack: true` flag path, the `@deprecated getExistingChromaIds` fence, and the knowledge-corpus `prime` / `reprime` / `session_id` persistence all delete in the PR that lands this plan — no `@deprecated` window, no "remove next release."
|
||||
|
||||
---
|
||||
|
||||
## Phase 1 — `renderObservations(obs, strategy: RenderStrategy)`

**Purpose**: Replace `AgentFormatter`, `HumanFormatter`, `ResultFormatter`, and `CorpusRenderer` with a single function parameterized by a `RenderStrategy` config object. The four existing formatters share a common walk over `ObservationRow[]` with four knobs that differ: which header to emit, whether to group rows, how dense each row is, and whether ANSI colors are used. Those knobs become fields on `RenderStrategy`.

`RenderStrategy` is a **config type**, not a class hierarchy — per principle 6, a config object is the correct shape when the only per-variant state is data:

```ts
type RenderStrategy = {
  header: 'agent' | 'human' | 'terse' | 'corpus';
  grouping: 'none' | 'byDate' | 'byType';
  rowDensity: 'compact' | 'verbose';
  colors: boolean;
  columns: Array<keyof ObservationRow>;
};
```

NO abstract class. NO factory. NO `RenderStrategyBase` with subclasses. Just a config type passed by value. This follows the mapping-doc verdict on old Plan 05: "Four RenderStrategy classes — DELETE — Strategies collapse to ONE config object with four literals — violates 'no speculative abstraction' principle" (`_mapping.md` old Plan 05 row).

**Files**:
- `src/services/context/renderObservations.ts` (new) — single function, takes `(obs: ObservationRow[], strategy: RenderStrategy)`, returns `string`.
- Four call-site configs exported as named constants: `agentConfig`, `humanConfig`, `terseConfig`, `corpusConfig`.

**Citation**: `_reference.md` Part 1 §Search / read path — `src/services/context/formatters/` contains four formatters sharing a common walk with four strategy knobs (header, grouping, row density, colors); the four formatters are the direct inputs to this consolidation.

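To make the consolidation concrete, here is a minimal sketch of what the single function could look like. It assumes the `RenderStrategy` type from the block above; the `ObservationRow` fields, header strings, and grouping keys below are illustrative assumptions, not the contract.

```ts
// Sketch only. Field names on ObservationRow and the header/label strings are
// assumptions for illustration; the four exported configs supply real values.
type ObservationRow = { type: string; created_at: string; [k: string]: unknown };

export function renderObservations(obs: ObservationRow[], s: RenderStrategy): string {
  const dim = (t: string) => (s.colors ? `\x1b[2m${t}\x1b[0m` : t);
  // grouping key per row; '' means "no group label"
  const keyOf = (r: ObservationRow) =>
    s.grouping === 'byDate' ? r.created_at.slice(0, 10)
    : s.grouping === 'byType' ? r.type
    : '';
  const groups = new Map<string, ObservationRow[]>();
  for (const r of obs) {
    const k = keyOf(r);
    if (!groups.has(k)) groups.set(k, []);
    groups.get(k)!.push(r);
  }
  const renderRow = (r: ObservationRow) => {
    const cells = s.columns.map((c) => String(r[c] ?? ''));
    return s.rowDensity === 'compact' ? cells.join(' | ') : cells.join('\n  ');
  };
  const out: string[] = [`[${s.header}]`]; // header knob, one emission point
  for (const [label, rows] of groups) {
    if (label) out.push(dim(label));
    for (const r of rows) out.push(renderRow(r));
  }
  return out.join('\n');
}
```
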
---

## Phase 2 — Delete four formatter classes

**Purpose**: Remove the four class files in `src/services/context/formatters/` in the same PR as Phase 1. Each class is replaced by one of the four exported configs passed to `renderObservations`. The directory is deleted entirely if it is empty after the sweep.

**Files**:
- `src/services/context/formatters/AgentFormatter.ts` — DELETE; replaced by `agentConfig`.
- `src/services/context/formatters/HumanFormatter.ts` — DELETE; replaced by `humanConfig`.
- `src/services/context/formatters/ResultFormatter.ts` — DELETE; replaced by `terseConfig` (or `resultConfig`, name chosen at write time).
- `src/services/context/formatters/CorpusRenderer.ts` — DELETE; replaced by `corpusConfig`.
- Every importer of those four classes is rewritten to call `renderObservations(obs, <config>)`.

**Citation**: `_reference.md` Part 1 §Search / read path — "four formatters (AgentFormatter, HumanFormatter, ResultFormatter, CorpusRenderer) share a common walk with four strategy knobs." `_mapping.md` old Plan 05 rows confirm all four delete in the same PR.

---

## Phase 3 — Delete `SearchManager.findBy*` duplicated methods

**Purpose**: `SearchManager` carries its own `findByConcept`, `findByFile`, and `findByType` implementations that duplicate the routing already performed by `SearchOrchestrator`. Delete the three methods; every caller routes through `SearchOrchestrator` instead. This removes two copies of the same query-decision logic (SearchManager vs. HybridSearchStrategy) from the codebase.

**Files**:
- `src/services/worker/SearchManager.ts:1209-1310` — delete `findByConcept`.
- `src/services/worker/SearchManager.ts:1277` — delete `findByFile`.
- `src/services/worker/SearchManager.ts:1399` — delete `findByType`.
- Every caller of `SearchManager.findByConcept` / `findByFile` / `findByType` — rewrite to call the corresponding `SearchOrchestrator` entry point.

**Citation**: `_reference.md` Part 1 §Search / read path — `src/services/worker/SearchManager.ts:230, 247-259, 488, 978-985, 1064-1071, 1150-1157, 1209-1310, 1277, 1399, 1840-1847` ("findByConcept/File/Type implementations that duplicate HybridSearchStrategy").

---

## Phase 4 — Consolidate recency filter

**Purpose**: `SearchManager` hand-rolls the `created_at_epoch > now - RECENCY_WINDOW_MS` predicate in seven separate call sites. The constant `RECENCY_WINDOW_MS` already exists at `src/services/worker/types.ts:16`. Import it everywhere; delete the seven hand-rolled copies.

**Files**:
- `src/services/worker/types.ts:16` — canonical `RECENCY_WINDOW_MS` (per principle 6, the one helper).
- `src/services/worker/SearchManager.ts:230, 247-259, 488, 978-985, 1064-1071, 1150-1157, 1840-1847` — delete the seven hand-rolled filter copies; import from `types.ts:16` wherever the filter is still needed. After Phase 3 deletes `findBy*`, some of those sites vanish; whichever remain import the constant.

**Citation**: `_reference.md` Part 1 §Search / read path — "Seven duplicated recency-filter call sites." `_mapping.md` Cross-plan coupling — "`RECENCY_WINDOW_MS` constant | `types.ts:16` (already exists; consolidation in `04-read-path.md` §Phase 3)."

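As a shape reference, a minimal sketch of one consolidated call site, assuming the constant is a number of milliseconds and the predicate becomes a bound SQL parameter. The database path and table/column names follow this plan's own examples and are illustrative:

```ts
// Sketch: the one imported constant replaces the seven inline predicates.
import { Database } from 'bun:sqlite';
import { RECENCY_WINDOW_MS } from '../types'; // canonical constant, types.ts:16

const db = new Database('claude-mem.sqlite'); // illustrative path

const cutoff = Date.now() - RECENCY_WINDOW_MS;
const recent = db
  .prepare('SELECT * FROM observations WHERE created_at_epoch > ?')
  .all(cutoff);
```
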
---

## Phase 5 — Fail-fast Chroma

**Purpose**: Replace the silent fallback in `SearchOrchestrator` (per principle 2, fail-fast over grace-degrade). When Chroma is configured and reachable but returns an error, the orchestrator throws a `503 Service Unavailable` rather than stripping the query and re-querying SQLite. `ChromaSearchStrategy` returns `usedChroma: false` only when Chroma is **explicitly uninitialized** (e.g., the user has not set it up yet); every other error propagates to the orchestrator and then to the HTTP layer.

**Files**:
- `src/services/worker/search/SearchOrchestrator.ts:85-110` — delete the branch that strips the query on `usedChroma=false` and re-queries SQLite. On Chroma error, throw `503`.
- `src/services/worker/search/strategies/ChromaSearchStrategy.ts:76-86` — narrow the `try/catch { return usedChroma: false }` to catch only the explicit-uninitialized sentinel; rethrow all other errors.

The `chroma_synced` column, boot-once backfill, and delete-then-add reconciliation are owned by `01-data-integrity.md` §Phase 7 — this plan consumes that Chroma sync behavior without re-specifying it. Fail-fast applies at the **read** path; write-path reconciliation stays where it lives.

**Citation**: `_reference.md` Part 1 §Search / read path — `SearchOrchestrator.ts:85-110` (silent fallback with three paths; stripping branch is the target of deletion) and `ChromaSearchStrategy.ts:76-86` (catch-all that swallows real errors).

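A sketch of the narrowed catch, assuming the uninitialized state is signaled by a dedicated sentinel error. The `ChromaUninitializedError` name and the client interface are illustrative:

```ts
// Sketch: absorb only the explicit-uninitialized sentinel; rethrow everything else.
class ChromaUninitializedError extends Error {}

interface ChromaLike {
  query(text: string): Promise<string[]>;
}

async function runChromaQuery(
  client: ChromaLike,
  text: string,
): Promise<{ usedChroma: boolean; ids: string[] }> {
  try {
    return { usedChroma: true, ids: await client.query(text) };
  } catch (err) {
    if (err instanceof ChromaUninitializedError) {
      // the one legitimate "not available" state: user never set Chroma up
      return { usedChroma: false, ids: [] };
    }
    throw err; // real errors propagate to SearchOrchestrator, surfacing as 503
  }
}
```
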
---

## Phase 6 — Delete hybrid silent fallbacks

**Purpose**: `HybridSearchStrategy` has three near-identical methods, each wrapping its Chroma call in a `try/catch` that returns a metadata-only result with `fellBack: true` on any error. This is the same silent-degrade pattern at the strategy layer; delete all three. Errors propagate to `SearchOrchestrator` (Phase 5), which propagates to the HTTP layer as `503`.

**Files**:
- `src/services/worker/search/strategies/HybridSearchStrategy.ts:82-95` — delete the first try/catch fallback path (findByConcept variant).
- `src/services/worker/search/strategies/HybridSearchStrategy.ts:120-134` — delete the second (findByType variant).
- `src/services/worker/search/strategies/HybridSearchStrategy.ts:161-173` — delete the third (findByFile variant).
- Every producer of a `fellBack: true` return — delete.

**Citation**: `_reference.md` Part 1 §Search / read path — `HybridSearchStrategy.ts:64-185` ("three near-identical methods … each with its own try/catch fallback to metadata-only. … propagate errors, don't silently degrade to metadata-only"). `_mapping.md` old Plan 06 row — "Silent-fallback to filter-only — DELETE — Violates 'fail-fast'."

---

## Phase 7 — Delete `@deprecated getExistingChromaIds`

**Purpose**: Per principle 7, no `@deprecated` fence survives the PR that makes it unused. The `getExistingChromaIds` function is flagged `@deprecated` in the current code and has no active callers after Phases 5-6 land. Delete the function, the JSDoc fence, and any imports in the same PR.

**Files**:
- Wherever `getExistingChromaIds` is defined (Chroma sync / search module; see `_reference.md` Part 1 §Search / read path) — DELETE the function and the `@deprecated` block above it.
- Every import of `getExistingChromaIds` — DELETE.

**Citation**: `_mapping.md` old Plan 04 row — "`getExistingChromaIds` `@deprecated` fence — DELETE — Violates 'no dead code' principle. Gone in same PR."

---

## Phase 8 — Single `estimateTokens` utility

**Purpose**: Two different token estimates exist — one in `ResultFormatter.ts:264`, one in `CorpusRenderer.ts:90`. After Phase 2 deletes both classes, their `estimateTokens` helpers would be orphaned. Consolidate to one shared utility at `src/shared/estimate-tokens.ts`, imported by `renderObservations` and any other caller that needs it.

**Files**:
- `src/shared/estimate-tokens.ts` (new) — single `estimateTokens(obs: ObservationRow): number` export.
- `src/services/worker/search/ResultFormatter.ts:264` — DELETE the inline estimate (the whole file is deleted in Phase 2; this line is explicitly called out to confirm no salvage-copy is left).
- `src/services/worker/knowledge/CorpusRenderer.ts:90` — DELETE the inline estimate (same note).

**Citation**: `_reference.md` Part 1 §Search / read path — "Two different token estimates. Plan `04-read-path` §Utilities: one shared `estimateTokens(obs)` in `src/shared/`."

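A plausible shape for the shared utility. The chars-per-token ratio below is an assumption for illustration; the real heuristic should come from whichever of the two deleted implementations was measured as more accurate:

```ts
// src/shared/estimate-tokens.ts: sketch. The ~4-chars-per-token ratio is an
// assumption; replace with the surviving heuristic at write time.
type ObservationRow = { content: string };

export function estimateTokens(obs: ObservationRow): number {
  return Math.ceil(obs.content.length / 4);
}
```
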
---

## Phase 9 — Knowledge-corpus simplification

**Purpose**: The knowledge-corpus layer carries three kinds of debt: `session_id` persistence on corpus rows (a feature never actually used by queries), `prime` / `reprime` operations (which warm an agent's context with a corpus, then re-warm on drift), and an auto-reprime regex in `KnowledgeAgent` that re-runs `prime` when the agent's response matches a freshness-failure pattern. All three go. `/query` becomes one fresh SDK call per request, constructed with the corpus's compiled `systemPrompt`; repeated calls benefit from the Anthropic SDK's native prompt-caching behavior rather than an in-process warm-context table.

**Files**:
- Knowledge-corpus persistence layer — delete `session_id` column and every write that populates it, every read that consumes it.
- Knowledge-corpus command surface — delete `prime` and `reprime` endpoints / handlers.
- `KnowledgeAgent` (whichever file defines it in `src/services/worker/knowledge/`) — delete the auto-reprime regex and the branch that calls `reprime`.
- `/query` handler — rewrite to construct an SDK call on the fly: compile the corpus into a `systemPrompt`, issue one `messages.create` call, return the response. The SDK's automatic prompt caching is the caching layer (per `_reference.md` Part 2 on SDK behavior and Part 4 Known gap #2 — "Prompt-caching TTL assumption — Plan 04 depends on SDK cache TTL ≈ 5 min. Run a cost smoke test before Plan 10 lands.").

**Reliance on SDK prompt caching**: The Anthropic SDK's prompt-cache behavior (ephemeral, ~5-minute TTL on `cache_control` blocks) provides the same cost benefit the old `prime` / `reprime` path was trying to hand-roll in-process, without the session persistence, without the regex, and without the auto-reprime loop. Because the benefit is SDK-side, no corpus-side state survives between `/query` calls. This is verified in `_reference.md` Part 2 (Anthropic SDK / prompt-caching row) and called out as a gap in Part 4 row 2.

**Citation**: `_reference.md` Part 1 §Search / read path and Part 2 (SDK / prompt-caching row); `_mapping.md` old Plan 10 row — "All content | KEEP | `04-read-path.md` §Phases 13-18 (delete session_id, delete prime/reprime auto-reprime regex, rewrite /query with systemPrompt)."

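A sketch of the rewritten `/query` core under those assumptions. `compileCorpus` is hypothetical, the model id is a placeholder, and the `cache_control` block on the system prompt is the documented Anthropic prompt-caching mechanism this phase relies on:

```ts
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic();

// compileCorpus is hypothetical: whatever turns the stored corpus into one string.
async function queryCorpus(systemPrompt: string, question: string) {
  return client.messages.create({
    model: 'claude-sonnet-4-5', // placeholder model id
    max_tokens: 1024,
    // Marking the corpus block ephemeral lets repeated /query calls within the
    // cache TTL hit cache_read_input_tokens instead of re-ingesting the corpus.
    system: [
      { type: 'text', text: systemPrompt, cache_control: { type: 'ephemeral' } },
    ],
    messages: [{ role: 'user', content: question }],
  });
}
```
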
---

## Snapshot-test requirement (MANDATORY before Phase 2 deletion)

**Status: MANDATORY. Blocking gate on Phase 2.** The four formatters must NOT be deleted until this snapshot test is in place and passing against the new `renderObservations` path. Without byte-identical verification, formatter regressions are undetectable — this is explicitly flagged as Known gap #5 in `_rewrite-plan.md` and reproduced here.

**The gate**:

1. Before touching the four formatter classes, construct a fixed fixture set — a hand-picked `ObservationRow[]` covering each header type, each grouping mode, each row density, and color on/off.
2. Run the current `AgentFormatter`, `HumanFormatter`, `ResultFormatter`, and `CorpusRenderer` on the fixture set. Capture their output **byte-for-byte** into four `__snapshots__` files.
3. Land Phase 1 (`renderObservations`) as additive — do NOT delete the four formatters yet.
4. Write the snapshot test: `renderObservations(obs, agentConfig)` against the same fixture set must match the captured `AgentFormatter` snapshot **byte-for-byte**; same for `humanConfig` vs. `HumanFormatter`; same for `terseConfig` vs. `ResultFormatter`; same for `corpusConfig` vs. `CorpusRenderer`.
5. Only when all four snapshot comparisons pass byte-identical, execute Phase 2 (delete the four classes).

Without this gate, Phase 2 is a blind deletion: the new renderer could differ from the old in whitespace, ordering, ANSI escape sequences, or token-estimate math, and no test in the corpus would catch it. The byte-identical snapshot is the acceptance boundary between "consolidation" and "silent regression."

**Citation**: `_rewrite-plan.md` §Phase 3 3B Verification — "Snapshot test: `renderObservations` with agent config produces byte-identical output to the old `AgentFormatter` on the same input." `_mapping.md` old Plan 05 row — "Phase 6: Verification | KEEP | `04-read-path.md` §Verification (byte-equality snapshot)."

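A sketch of one of the four comparisons as a `bun:test` case (the repo's test runner elsewhere in this corpus); the fixture and snapshot paths are illustrative:

```ts
// tests/render/snapshot.test.ts: sketch; paths are illustrative.
import { test, expect } from 'bun:test';
import { readFileSync } from 'node:fs';
import { renderObservations, agentConfig } from '../../src/services/context/renderObservations';
import { fixtureObs } from './fixtures';

test('renderObservations(agentConfig) matches the AgentFormatter snapshot byte-for-byte', () => {
  const expected = readFileSync('tests/render/__snapshots__/AgentFormatter.out', 'utf8');
  // byte-for-byte: includes whitespace, ordering, and ANSI escape sequences
  expect(renderObservations(fixtureObs, agentConfig)).toBe(expected);
});
```
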
---

## Verification grep targets

Each command below must return the indicated count or state after this plan lands.

```
grep -rn "SearchManager\.findBy" src/ → 0                              # methods and call sites deleted
grep -rn "RECENCY_WINDOW_MS" src/services/worker/SearchManager.ts → 0  # seven hand-rolled copies deleted
grep -rn "fellBack: true" src/ → 0                                     # silent-fallback flag deleted
grep -rn "getExistingChromaIds" src/ → 0                               # @deprecated fence gone
ls src/services/context/formatters/ → empty (or directory deleted)
```

**Integration test — fail-fast Chroma**: Shut Chroma down (kill the MCP subprocess, or point the client at an unreachable host). Issue a search request that requires Chroma. Assert the HTTP response is `503` with a non-empty error body — NOT an empty result set, NOT a metadata-only fallback, NOT a `fellBack: true` payload.

**Snapshot test — `renderObservations` byte-identity**: With the `AgentFormatter` snapshot captured against the fixed fixture set (see "Snapshot-test requirement"), assert `renderObservations(fixtureObs, agentConfig)` produces byte-identical output. Same assertion for `humanConfig`, `terseConfig`, `corpusConfig`. Failure of any single comparison blocks Phase 2.

---

## Anti-pattern guards

Reproduced verbatim from `_rewrite-plan.md` §3B:

- **Do NOT create a `RenderStrategy` class hierarchy. Config object only.** No `abstract class RenderStrategy`, no subclass-per-formatter, no factory, no registry. The `type RenderStrategy = { … }` definition in Phase 1 is the whole surface. If a change to this plan later reaches for a class, revisit principle 6 — the knob set is known and finite.
- **Do NOT add a feature flag to "disable fail-fast Chroma" — callers either handle 503 or they don't.** Per principle 2, fail-fast is a contract, not an opt-in. A flag that restores the silent-fallback path would be a fresh violation of the same principle Phase 5 exists to enforce.

Implicit guards (from `00-principles.md` §Six Anti-pattern Guards):

- No new `coerce*`, `recover*`, `heal*`, `repair*` function names in the search or render path.
- No new try/catch that swallows errors and returns a fallback value.
- No new strategy class when a config object would do.

---

## Known gaps / deferrals

**Prompt-caching cost smoke test (MANDATORY preflight for Phase 9).** Before the knowledge-corpus simplification phases ship, a cost smoke test must assert `cache_read_input_tokens > 0` on the **2nd and 3rd** call to `/api/corpus/:name/query` (same corpus, same systemPrompt, within the SDK's cache TTL — approximately 5 minutes). If the cache does not hit, Phase 9's reliance on SDK prompt caching is unfounded, and the cost characteristics will be worse than the deleted `prime` / `reprime` path. This gate is tracked in `98-execution-order.md` §Preflight and verified in `99-verification.md` — per `_reference.md` Part 4 row 2 and `_mapping.md` old Plan 05 row ("Phase 7: Prompt-caching cost note | REWRITE | `99-verification.md` §Cost smoke test gate").

**Dependence on `01-data-integrity.md` §Phase 7.** Chroma write-side reconciliation (delete-then-add under `CHROMA_SYNC_FALLBACK_ON_CONFLICT`) is owned by `01-data-integrity.md`. This plan's Phase 5 fail-fast read behavior is independent of that flag — a read-path `503` is correct even while the write-path fallback remains active, because a read-path Chroma error means the reader cannot serve the request, regardless of whether the write path later reconciles successfully.

@@ -0,0 +1,393 @@

# 05 — Hook Surface

## Purpose

Consolidate worker HTTP plumbing across the eight hook handlers, cache settings once per hook process, delete the 20-iteration `curl` retry loops in `plugin/hooks/hooks.json`, delete the 120-second client-side polling loop in `src/cli/handlers/summarize.ts`, and escalate to exit code 2 after N consecutive `ensureWorkerRunning()` failures so the worker's death surfaces to Claude instead of being silently absorbed. The cure is nine moves: delete the shell retry loops; introduce one `executeWithWorkerFallback` helper with eight callers; replace the polling loop with a server-side blocking `/api/session/end` endpoint that awaits the `summaryStoredEvent` emitted by `03-ingestion-path.md` Phase 2; cache settings at module scope; collapse three duplicated exclusion checks into one `shouldTrackProject(cwd)` helper; move cwd validation to the adapter boundary so it runs once; delete the always-init conditional on the agent (init is idempotent); track consecutive failures in a state file and exit 2 after N; and consolidate the alive-heuristic cache into one `ensureWorkerAliveOnce()` call site.

---

## Principles invoked

This plan is measured against `00-principles.md`:

- **Principle 2 — Fail-fast over grace-degrade.** Consecutive hook failures do not degrade silently into "exit 0 and hope next time works." After N consecutive `ensureWorkerRunning == false` results, the hook exits code 2 so Claude Code's hook contract surfaces the problem. No retry inside the hook. No timeout-and-exit-0 papering.
- **Principle 4 — Event-driven over polling.** The 120-second client-side polling loop in `src/cli/handlers/summarize.ts:117-150` is replaced by a single POST to `/api/session/end` that the server holds open until the `summaryStoredEvent` (emitted by `03-ingestion-path.md` Phase 2) fires. One request, one response, no polling on either side.
- **Principle 6 — One helper, N callers.** The eight-handler copy of `ensureWorkerRunning → workerHttpRequest → if (!ok) return { continue: true }` collapses to one exported `executeWithWorkerFallback(url, method, body)`. Three duplicated `isProjectExcluded(cwd, …)` call sites collapse to one `shouldTrackProject(cwd)`. Four per-handler `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` calls collapse to one module-scope `loadFromFileOnce()`.

**Cross-references**:

- `03-ingestion-path.md` Phase 2 emits `summaryStoredEvent` with payload `{ sessionId: string; messageId: number }`. Phase 3 of this plan consumes that event inside the Express handler for `/api/session/end`. The emitter lives inside the worker (`src/services/worker/agents/ResponseProcessor.ts` after its rewrite); the consumer lives inside the HTTP route. Event-bus implementation is left to the implementer per `03-ingestion-path.md` §Known gaps #3.
- `02-process-lifecycle.md` Phase 8 defines the lazy-spawn wrapper (`ensureWorkerRunning` in `src/shared/worker-utils.ts:221-239`) that this plan's `executeWithWorkerFallback` calls as its first step. If the worker is not alive, lazy-spawn attempts to start it; if the port check still fails afterwards, the helper returns `{ continue: true }` and this plan's Phase 8 fail-loud counter increments. The two plans do not duplicate spawn logic — lazy-spawn is defined in 02, consumed here.
- `06-api-surface.md` defines the Zod `validateBody` middleware (Phase 2 of that plan). The blocking `/api/session/end` endpoint introduced in Phase 3 below uses the same middleware to validate its POST body before entering the event-wait loop; no hand-rolled validation lives in the hook-surface plumbing.

---

## Phase 1 — Delete shell retry loops

**Purpose**: Remove the 20-iteration `curl` retry loops wrapping three hook entries in `plugin/hooks/hooks.json`. Shell-level retry is a bash expression of the same anti-pattern principle 2 forbids at the TypeScript layer. `ensureWorkerRunning()` (`02-process-lifecycle.md` Phase 8) is the one check; it either succeeds or the fail-loud counter (Phase 8 below) escalates. A shell loop papers over that signal.

**Anchors** (`_reference.md` Part 1 §Hooks/CLI):
- `plugin/hooks/hooks.json:27` — `for i in 1 2 3 4 5 6 7 …` curl retry wrapper
- `plugin/hooks/hooks.json:32` — same pattern, second hook entry
- `plugin/hooks/hooks.json:43` — same pattern, third hook entry

**Before** (conceptual):
```jsonc
// plugin/hooks/hooks.json:27 (current)
"command": "for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20; do curl -sf http://localhost:37777/health && break; sleep 0.1; done && bun .../observation-hook.js"
```

**After**:
```jsonc
// plugin/hooks/hooks.json:27 (after this phase)
"command": "bun .../observation-hook.js"
```

The handler invokes `executeWithWorkerFallback` (Phase 2) on entry; that helper calls `ensureWorkerRunning()` (`02-process-lifecycle.md` Phase 8), which performs a single port check plus one lazy-spawn attempt. No shell loop.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `plugin/hooks/hooks.json:27, 32, 43` (target call sites).

---

## Phase 2 — `executeWithWorkerFallback(url, method, body)` helper

**Purpose**: Consolidate the eight hook handlers' copy of `ensureWorkerRunning → workerHttpRequest → if (!ok) return { continue: true }` into one exported helper. The helper is added to `src/shared/worker-utils.ts` alongside `ensureWorkerRunning`; every handler imports and calls it instead of reproducing the sequence.

**Anchors**:
- `src/shared/worker-utils.ts:221-239` — `ensureWorkerRunning` (existing, consumed by the new helper)
- `src/cli/handlers/observation.ts:17` — one of eight call sites that reproduces the sequence
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/observation.ts:17, 53-54, 58-61` (current duplicated pattern)

**Contract** (required signature; see the "`executeWithWorkerFallback` signature" section below for the canonical block).

**Behavior**:
1. Call `ensureWorkerRunning()`. If it returns `false`, increment the fail-loud counter (Phase 8) and return `{ continue: true, reason: 'worker_unreachable' }`.
2. If `true`, call `workerHttpRequest(url, method, body)` and return its parsed response typed as `T`.
3. Reset the fail-loud counter on the first success.

**Callers after this plan lands** (all eight):
- `src/cli/handlers/observation.ts`
- `src/cli/handlers/session-init.ts`
- `src/cli/handlers/context.ts`
- `src/cli/handlers/file-context.ts`
- `src/cli/handlers/file-edit.ts`
- `src/cli/handlers/summarize.ts`
- (two additional handlers in `src/cli/handlers/` that reproduce the pattern — see `_reference.md` Part 1 §Hooks/CLI for anchors)

**By principle 6 (one helper, N callers)**: the request/fallback sequence has one implementation; eight handlers import it. No handler reimplements the "worker missing → exit gracefully" path.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/shared/worker-utils.ts:221-239` and `src/cli/handlers/observation.ts:17`. Cross-reference: `02-process-lifecycle.md` Phase 8 for the `ensureWorkerRunning` contract this helper depends on.

---

## Phase 3 — Blocking `/api/session/end` endpoint

**Purpose**: Replace the client-side 120-second polling loop in `src/cli/handlers/summarize.ts:117-150` with a single POST to `/api/session/end` that the server holds open until the summary-stored event fires. By principle 4 (event-driven over polling), the server already knows when the summary is persisted — it just emitted `summaryStoredEvent` in `03-ingestion-path.md` Phase 2 — so there is no reason for the hook to walk back in and ask repeatedly.

**Anchors**:
- `src/cli/handlers/summarize.ts:117-150` — 120-second polling loop (1 s tick, `MAX_WAIT_FOR_SUMMARY_MS`, `POLL_INTERVAL_MS`) — DELETE
- `03-ingestion-path.md` Phase 2 — emits `summaryStoredEvent` with payload `{ sessionId: string; messageId: number }`
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/summarize.ts:117-150` (current polling target)

**Server-side pattern** (Express-level; event bus + per-request timeout + single response):

```ts
// Express route registered in src/services/worker/http/routes/SessionRoutes.ts
// after 06-api-surface.md Phase 2 validateBody middleware runs.
router.post('/api/session/end', validateBody(sessionEndSchema), (req, res) => {
  const { sessionId } = req.body;

  // one-shot listener; cleared on either fulfillment or timeout
  const onStored = (evt: SummaryStoredEvent) => {
    if (evt.sessionId !== sessionId) return;
    cleanup();
    res.status(200).json({ ok: true, messageId: evt.messageId });
  };

  const timer = setTimeout(() => {
    cleanup();
    res.status(504).json({ ok: false, reason: 'summary_not_stored_in_time' });
  }, SERVER_SIDE_SUMMARY_TIMEOUT_MS);

  const cleanup = () => {
    clearTimeout(timer);
    eventBus.off('summaryStoredEvent', onStored);
  };

  eventBus.on('summaryStoredEvent', onStored);

  // request aborted by client (hook process died): drop the listener immediately
  req.on('close', cleanup);
});
```

Per-hook call site:

```ts
// src/cli/handlers/summarize.ts (after this phase)
const result = await executeWithWorkerFallback<SessionEndResponse>(
  '/api/session/end', 'POST', { sessionId },
);
// one POST, one response. No loop.
```

**Delete in the same PR**:
- `src/cli/handlers/summarize.ts:117-150` — polling loop body
- `MAX_WAIT_FOR_SUMMARY_MS` constant
- `POLL_INTERVAL_MS` constant
- Any helper that existed only to drive the loop (`pollUntilSummary`, `waitForSummarySync`, …)

**Cross-reference (load-bearing)**: `03-ingestion-path.md` Phase 2 is the emitter side of the contract. Its `summaryStoredEvent` payload `{ sessionId: string; messageId: number }` is consumed verbatim here. If Phase 2 changes the event name or shape, this phase's route handler changes with it. The event-bus implementation (`EventEmitter` vs a dedicated `src/services/infrastructure/eventBus.ts`) is per `03-ingestion-path.md` §Known gaps #3.

**Cross-reference (validation)**: `06-api-surface.md` Phase 2 defines `validateBody`. The `sessionEndSchema` Zod schema is declared at the top of `SessionRoutes.ts` per `06-api-surface.md` Phase 3.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/summarize.ts:117-150`; `_reference.md` Part 2 row 7 (hook exit-code contract — a 504 returned to the hook flows through `executeWithWorkerFallback` and triggers the fail-loud counter like any other failure).

---

## Phase 4 — Cache settings once per hook process

**Purpose**: Each hook process is short-lived and reads `USER_SETTINGS_PATH` independently. Four handlers currently call `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` on every handler entry; since settings do not mutate during a single hook execution, module-scope caching collapses those reads to one disk read per process.

**Anchors**:
- `src/cli/handlers/context.ts:36` — per-handler `loadFromFile` call
- `src/cli/handlers/session-init.ts:57` — same
- `src/cli/handlers/observation.ts:58` — same
- `src/cli/handlers/file-context.ts:211` — same
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/session-init.ts:57-60` and `src/cli/handlers/observation.ts:17, 53-54, 58-61`
- `_reference.md` Part 3 row "Settings schema" — `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` pattern

**After**: a module-scope `loadFromFileOnce()` in (e.g.) `src/shared/hook-settings.ts` that memoizes the `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` result for the lifetime of the process. Every handler imports `loadFromFileOnce` instead of calling `loadFromFile` directly.

```ts
// src/shared/hook-settings.ts (after this phase)
let cachedSettings: Settings | null = null;

export function loadFromFileOnce(): Settings {
  if (cachedSettings !== null) return cachedSettings;
  cachedSettings = SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH);
  return cachedSettings;
}
```

**Delete in the same PR**: the per-handler `loadFromFile` calls at `context.ts:36`, `session-init.ts:57`, `observation.ts:58`, `file-context.ts:211`. After this phase, the only `SettingsDefaultsManager.loadFromFile` call in `src/cli/handlers/` is inside `loadFromFileOnce` (verification grep below).

**Reference**: `_reference.md` Part 1 §Hooks/CLI (call sites); Part 3 row "Settings schema" (current pattern).

---

## Phase 5 — `shouldTrackProject(cwd)` helper

**Purpose**: Three handlers duplicate the pattern `isProjectExcluded(cwd, settings.CLAUDE_MEM_EXCLUDED_PROJECTS)` — each one reloads settings (fixed by Phase 4) and calls the same exclusion check. Consolidate to one `shouldTrackProject(cwd)` helper that is the single answer to "does this hook run for this cwd?"

**Anchors**:
- `src/cli/handlers/observation.ts:58-61` — exclusion check call site
- `src/cli/handlers/context.ts` — exclusion check call site
- `src/cli/handlers/file-context.ts:211` region — exclusion check call site
- `src/utils/project-name.ts` — `getProjectContext(cwd)` returning `{ primary, allProjects, excluded }` per `_reference.md` Part 3 row "Project scoping"

**After**:
```ts
// src/shared/should-track-project.ts (after this phase)
export function shouldTrackProject(cwd: string): boolean {
  const settings = loadFromFileOnce(); // Phase 4
  return !isProjectExcluded(cwd, settings.CLAUDE_MEM_EXCLUDED_PROJECTS);
}
```

**Callers**: every handler that currently reads `CLAUDE_MEM_EXCLUDED_PROJECTS` imports and calls `shouldTrackProject(cwd)` at the top of its handler body. No handler references the setting key directly after this phase.

**By principle 6 (one helper, N callers)**: three exclusion-check sites → one helper. The verification grep below asserts that `isProjectExcluded` is referenced exactly once in `src/cli/handlers/` (inside `shouldTrackProject`); every other caller routes through the helper.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/observation.ts:58-61`; Part 3 row "Project scoping".

---

## Phase 6 — cwd validation at adapter boundary

**Purpose**: cwd validation currently runs twice on some paths — once after the adapter normalizes input and once inside the handler. Move validation into the adapter's `normalizeInput()` function so it runs exactly once, at the boundary.

**Anchors**:
- `src/cli/handlers/file-edit.ts:50-51` — cwd validation after adapter normalization (DELETE; move to adapter)
- `src/cli/handlers/observation.ts:53-54` — same pattern (DELETE; move to adapter)
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/observation.ts:17, 53-54, 58-61`

**Before**:
```ts
// src/cli/handlers/observation.ts:53-54 (current)
const payload = adapter.normalizeInput(raw);
if (!isValidCwd(payload.cwd)) return { continue: true }; // handler-level check
```

**After**:
```ts
// adapter body (conceptual)
normalizeInput(raw) {
  const payload = this.parse(raw);
  if (!isValidCwd(payload.cwd)) throw new AdapterRejectedInput('invalid_cwd');
  return payload;
}

// handler body — no cwd check remains
const payload = adapter.normalizeInput(raw);
```

**Delete in the same PR**: the two handler-level `isValidCwd` checks at `file-edit.ts:50-51` and `observation.ts:53-54`.

**Reference**: `_reference.md` Part 1 §Hooks/CLI anchors above.

---

## Phase 7 — Always-init agent

**Purpose**: `src/cli/handlers/session-init.ts:120-129` wraps agent initialization in `if (!initResult.contextInjected)`. The conditional exists to avoid re-initializing the agent when context was already injected; but agent init is idempotent (a second call is a no-op), so the conditional adds branching without reducing work. Delete it.

**Anchors**:
- `src/cli/handlers/session-init.ts:120-129` — conditional guard around agent init
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/session-init.ts:57-60, 120-129`

**Before**:
```ts
// src/cli/handlers/session-init.ts:120-129 (current)
if (!initResult.contextInjected) {
  await initAgent(…);
}
```

**After**:
```ts
// src/cli/handlers/session-init.ts (after this phase)
await initAgent(…); // idempotent; safe to always call
```

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/session-init.ts:120-129`.

---

## Phase 8 — Fail-loud after N consecutive failures

**Purpose**: Escalate silent failure to a surfaced failure. When `ensureWorkerRunning()` returns `false`, the hook still exits `0` (first time) to avoid breaking the user's Claude Code session; but the helper increments a counter in a state file, and after N (default 3) consecutive failures, the hook exits code 2. Per `_reference.md` Part 2 row 7, exit code 2 is a **blocking error** that Claude Code feeds back to Claude — it is the correct surface for "the worker has been unreachable 3 times in a row; something is actually broken."

**This counter is NOT a retry.** A retry would reinvoke the failed operation inside the hook to try again; this plan forbids that (see Anti-pattern guards below). The counter records how many consecutive hook invocations have seen the worker unreachable and escalates only the Nth invocation to exit 2 — the first (N−1) invocations still return the graceful-degradation response. A retry drives work forward within one invocation; the fail-loud counter surfaces a persistent outage across invocations. They are disjoint mechanisms.

**Anchors**:
- `src/shared/worker-utils.ts:221-239` — `ensureWorkerRunning` (the call whose `false` return increments the counter)
- `_reference.md` Part 2 row 7 — Claude Code hook exit codes (0 success, 1 non-blocking, 2 blocking)
- `CLAUDE.md` §Exit Code Strategy — claude-mem's philosophy that worker-unreachable alone exits 0 to prevent Windows Terminal tab accumulation, overridden here by the N-th consecutive failure escalating to 2

**Counter location**: the existing claude-mem state directory (the same directory that already holds other per-process state under `~/.claude-mem/`). Place the counter at `~/.claude-mem/state/hook-failures.json`. **Do NOT create a new top-level directory**; use the state directory that already exists. If the state directory does not yet exist (implementer discovers at landing time), the existing state-directory creation path creates it; this plan does not introduce a new creation path.

**File shape**:
```json
{ "consecutiveFailures": 2, "lastFailureAt": 1713830400000 }
```

**Atomic write**: write to `~/.claude-mem/state/hook-failures.json.tmp`, then `rename` over the destination. POSIX rename is atomic within a filesystem; no partial-write window. No `fs.watch` or lock is needed because each hook invocation reads-then-writes as a short sequence, and a race across two simultaneous hooks at most over- or under-counts by one — which is acceptable given the threshold is 3.

**Behavior (in `executeWithWorkerFallback`)**:
1. `ensureWorkerRunning()` returns `true` → reset counter to 0 (atomic write), proceed with request.
2. `ensureWorkerRunning()` returns `false` → read counter, increment by 1, atomic write:
   - If new value < N → exit the hook with code 0 and return `{ continue: true, reason: 'worker_unreachable' }` to the caller.
   - If new value ≥ N → exit the hook with code **2** so Claude Code surfaces the outage. stderr: "claude-mem worker unreachable for <N> consecutive hooks."

**N (threshold)**: default 3. Settings key `CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD` (integer, optional; defaults to 3 if absent).

**Distinguishing from a retry**: the helper does NOT call `ensureWorkerRunning()` twice, does NOT sleep-and-retry the HTTP request, does NOT attempt the operation a second time inside the same hook. It runs the primary path once, records the result in the counter, and either returns or escalates. A retry reinvokes work; the counter records work. If an implementer is tempted to add a "just try once more before incrementing" line, refer to the Anti-pattern guards section and stop.

**Reset**: any successful `ensureWorkerRunning()` resets the counter to 0 in the same atomic write. This is not a retry either — it is a success-path acknowledgment that the outage ended.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/shared/worker-utils.ts:221-239`; `_reference.md` Part 2 row 7 (exit-code contract); `CLAUDE.md` §Exit Code Strategy.

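A minimal sketch of the increment path under the atomic-write rule above. The helper name is illustrative, and the path resolution stands in for the existing `~/.claude-mem/` base-path logic:

```ts
// Sketch: read, increment, write-aside, rename. Helper name is illustrative.
import { readFileSync, writeFileSync, renameSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';

const COUNTER_PATH = join(homedir(), '.claude-mem', 'state', 'hook-failures.json');

export function bumpFailureCounter(): number {
  let failures = 0;
  try {
    failures = JSON.parse(readFileSync(COUNTER_PATH, 'utf8')).consecutiveFailures ?? 0;
  } catch {
    // missing or unreadable file counts as zero prior failures
  }
  const next = { consecutiveFailures: failures + 1, lastFailureAt: Date.now() };
  writeFileSync(COUNTER_PATH + '.tmp', JSON.stringify(next)); // write aside first
  renameSync(COUNTER_PATH + '.tmp', COUNTER_PATH); // POSIX rename is atomic
  return next.consecutiveFailures;
}
```
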
---

## Phase 9 — Delete cache alive heuristic duplication

**Purpose**: Multiple handlers re-derive "is the worker alive?" heuristics (port check, recent-success flag, …) on each invocation. Collapse them into one `ensureWorkerAliveOnce()` with module-scope caching, consumed by `executeWithWorkerFallback` from Phase 2.

**Anchors**:
- `src/shared/worker-utils.ts:221-239` — `ensureWorkerRunning` (the underlying port check; `ensureWorkerAliveOnce` wraps it with one per-process memoization)
- handlers that duplicate alive-heuristic checks — covered by the grep "SettingsDefaultsManager.loadFromFile" (Phase 4) and "isProjectExcluded" (Phase 5) verifications plus this phase's consolidation

**After**:
```ts
// src/shared/worker-utils.ts (after this phase)
let aliveCache: boolean | null = null;

export async function ensureWorkerAliveOnce(): Promise<boolean> {
  if (aliveCache !== null) return aliveCache;
  aliveCache = await ensureWorkerRunning();
  return aliveCache;
}
```

`executeWithWorkerFallback` (Phase 2) calls `ensureWorkerAliveOnce()` instead of `ensureWorkerRunning()`. Within a single hook process, the first call hits the network; subsequent calls return the memoized value. This matters because a single hook invocation may issue multiple requests (e.g., session-init issues several), and the alive-state cannot change mid-invocation without the process exiting.

**By principle 6 (one helper, N callers)**: the memoization lives in one place; eight handlers call the memoized wrapper transparently.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/shared/worker-utils.ts:221-239`.

---

## `executeWithWorkerFallback` signature (verbatim contract)

Phase 2 establishes the single helper consumed by all eight handlers. The discriminated return type makes the degrade-gracefully branch an explicit caller concern rather than an ad-hoc `{ continue: true }` literal scattered across handlers.

```ts
type WorkerFallback = { continue: true } | { continue: true, reason: string };

async function executeWithWorkerFallback<T>(
  url: string,
  method: 'GET' | 'POST' | 'PUT' | 'DELETE',
  body?: unknown,
): Promise<T | WorkerFallback>;
```

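A sketch of how the body could wire Phases 2, 8, and 9 together. `bumpFailureCounter`, `resetFailureCounter`, and `getFailLoudThreshold` are illustrative names for the Phase 8 state-file helpers; `ensureWorkerAliveOnce` and `workerHttpRequest` are the primitives this plan already names:

```ts
// Sketch only: counter helpers are illustrative names for Phase 8 state-file I/O.
export async function executeWithWorkerFallback<T>(
  url: string,
  method: 'GET' | 'POST' | 'PUT' | 'DELETE',
  body?: unknown,
): Promise<T | WorkerFallback> {
  if (!(await ensureWorkerAliveOnce())) {      // Phase 9 memoized liveness check
    const failures = bumpFailureCounter();     // Phase 8 state-file increment
    if (failures >= getFailLoudThreshold()) {  // default 3
      console.error(`claude-mem worker unreachable for ${failures} consecutive hooks.`);
      process.exit(2);                         // blocking error, surfaced to Claude
    }
    return { continue: true, reason: 'worker_unreachable' };
  }
  resetFailureCounter();                       // success-path acknowledgment, not a retry
  return workerHttpRequest<T>(url, method, body);
}
```
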
---

## Fail-loud counter location callout

The fail-loud counter (Phase 8) lives at `~/.claude-mem/state/hook-failures.json` — inside the **existing** state directory under `~/.claude-mem/`. This plan does not create a new directory; it writes to the directory that already holds claude-mem's per-process state. Atomic write via the temp-file + rename pattern (`write hook-failures.json.tmp → rename hook-failures.json.tmp hook-failures.json`). POSIX rename within one filesystem is atomic; no partial-file window.

Reminder: this counter is **not** a retry. See Phase 8's "Distinguishing from a retry" subsection and the Anti-pattern guards below.

---

## Verification grep targets

Each command must return the indicated count after this plan lands.

```
grep -rn "for i in 1 2 3 4 5 6 7" plugin/hooks/hooks.json → 0
grep -rn "SettingsDefaultsManager.loadFromFile" src/cli/handlers/ → 1   # cached location only (loadFromFileOnce)
grep -rn "isProjectExcluded" src/cli/handlers/ → 1                      # inside shouldTrackProject only
grep -rn "MAX_WAIT_FOR_SUMMARY_MS\|POLL_INTERVAL_MS" src/cli/handlers/ → 0
```

**Integration test 1** (fail-loud counter): make the worker unreachable (stop it, or add an `iptables`/`pfctl` reject rule on port 37777). Invoke any hook; assert it exits **0** and writes `{ "consecutiveFailures": 1 }` to `~/.claude-mem/state/hook-failures.json`. Invoke again; assert exit 0 and the counter at 2. Invoke a third time; assert exit **2** with stderr naming the outage. Restore the worker and invoke once more; assert exit 0 and the counter reset to 0.

**Integration test 2** (session end blocks without polling): start a session-end hook while a session is in flight. Assert a single POST to `/api/session/end` is issued from the hook (via tcpdump/strace, or an application-level log asserting request count == 1). The request hangs until the worker stores the summary (triggering `summaryStoredEvent`), then returns 200 in one response. No tick-loop, no repeated requests.

**Six verification targets total**: four greps + two integration tests.

---

## Anti-pattern guards

Reproduced verbatim from `_rewrite-plan.md` §4A:

- Do NOT add a retry loop inside the hook (any kind).
- Do NOT add a timeout-and-exit-0 pattern.
- Do NOT keep the shell retry loops behind a feature flag.

Additional hard rules enforced by this plan:

- Do NOT add polling anywhere in the hook. The session-end summary wait is server-side, single POST, single response.
- Do NOT add a shell-level retry loop in `plugin/hooks/hooks.json`. Phase 1 deletes the existing ones; none may be reintroduced.
- Do NOT treat the fail-loud counter as a retry. It does not reinvoke work; it records work. If tempted to add "one more attempt before incrementing," see Phase 8's distinguishing subsection and stop.
- Do NOT migrate the fail-loud counter to a new directory. It lives at `~/.claude-mem/state/hook-failures.json` inside the existing state directory.
- Do NOT introduce a second `ensureWorkerRunning`-like helper; consumers go through `executeWithWorkerFallback` (Phase 2) or `ensureWorkerAliveOnce` (Phase 9). Both wrap the single primitive from `02-process-lifecycle.md` Phase 8.

---

## Known gaps / deferrals

1. **Event-bus choice.** Phase 3's `/api/session/end` endpoint listens for `summaryStoredEvent` from `03-ingestion-path.md` Phase 2. The event-bus implementation (`node:events` `EventEmitter` vs a dedicated `src/services/infrastructure/eventBus.ts` module) is left to the implementer per `03-ingestion-path.md` §Known gaps #3. This plan specifies only the consumer contract.
2. **Server-side timeout default.** `SERVER_SIDE_SUMMARY_TIMEOUT_MS` for the blocking endpoint is not fixed by this plan; the implementer picks a value bounded by the SDK's worst-case summary latency. A 30-s default is a reasonable starting point; revisit once `03-ingestion-path.md` Phase 2 is in place and we have a measured latency distribution.
3. **Windows counter path.** `~/.claude-mem/state/hook-failures.json` resolves via the existing `~/.claude-mem/` base-path logic. On Windows under WSL the path is Unix-shaped; native-Windows behavior inherits the platform caveat from `02-process-lifecycle.md` §Platform caveat — Windows.

@@ -0,0 +1,224 @@

# 06 — API Surface

**Purpose**: Lock the worker HTTP surface behind one Zod-based validator, delete the rate limiter and the pending-queue diagnostic endpoints, cache `viewer.html` and `/api/instructions` in memory at boot, and consolidate the four overlapping shutdown paths and two failure-marking paths into a single function each. Net effect: fewer handlers, fewer defensive wrappers, one schema per route, and zero second-system endpoints added "for debugging only."

---

## Principles invoked

- **Principle 1 — No recovery code for fixable failures.** The pending-queue diagnostic endpoints exist to poke at rows a correct ingestion path should never leave behind. Deleting them is the cure; shipping them is the hidden-bug engine.
- **Principle 2 — Fail-fast over grace-degrade.** `safeParse` returns a discriminated result; on `success=false` the middleware responds 400 with the Zod `issues` array. No `try/catch` swallow, no coercion, no "best-effort" defaults.
- **Principle 6 — One helper, N callers.** One `validateBody(schema)` middleware wraps every validated POST/PUT; one `performGracefulShutdown` is the only shutdown path; one `transitionMessagesTo(status)` is the only failure/abandon writer.
- **Principle 7 — Delete code in the same PR it becomes unused.** `validateRequired`, `WorkerService.shutdown`, the `runShutdownCascade` wrappers, `markSessionMessagesFailed`, `markAllSessionMessagesAbandoned`, and the rate limiter are deleted in-PR, not `@deprecated`-fenced.

---

## Phase 1 — Preflight: `npm install zod@^3.x`

Add Zod 3.x as a runtime dependency.

**Version pinning rationale**: Zod 3.x is the stable, shipped line (current minor `^3.23`). Zod 4.x is in active rework at the time of writing — breaking changes to the error shape and the `safeParse` return signature are expected. Pinning `^3.x` gives us the ecosystem (tRPC, AI SDK, most Express middleware) without strapping into an experimental release.

Per `_reference.md` Part 4 §Confidence + gaps #4: "Zod is not currently a dep — Plan 06 Phase 1 is `npm install zod@^3.x`."

Cites principle 6 (one helper). After this phase, all runtime validation flows through Zod — no second validator, no Ajv, no hand-rolled type-guards left in `src/services/worker/http/`.

---

## Phase 2 — `validateBody` middleware

Single Express middleware using Zod `safeParse`. Returns 400 with field errors on failure; on success, replaces `req.body` with the parsed (and now typed) value and calls `next()`. Per the `_reference.md` Part 2 row on `safeParse`: the discriminated-union return is the fail-fast contract the middleware is designed around.

Place at `src/services/worker/http/middleware/validateBody.ts`. Every validated POST/PUT route imports this one function.

```ts
import type { RequestHandler } from 'express';
import type { ZodTypeAny } from 'zod';

export const validateBody = <S extends ZodTypeAny>(schema: S): RequestHandler =>
  (req, res, next) => {
    const result = schema.safeParse(req.body);
    if (!result.success) {
      return res.status(400).json({
        error: 'ValidationError',
        issues: result.error.issues.map(i => ({
          path: i.path,
          message: i.message,
          code: i.code,
        })),
      });
    }
    req.body = result.data;
    return next();
  };
```

Cites principle 2 (fail-fast) and principle 6 (one helper, N callers).

---

## Phase 3 — Per-route Zod schemas

One schema per POST/PUT endpoint, defined at the top of the route file that owns the endpoint. Schemas are **not** shared across routes — the `_reference.md` §API surface row shows these routes already have divergent body shapes (`SessionRoutes.ts:148` threshold-check body ≠ `DataRoutes.ts:305` processing-status body ≠ observation-ingest body). A "shared common" schema would paper over real divergence with a union or an optional-everywhere object — the opposite of what Zod buys us.

**Cross-reference `05-hook-surface.md`**: the blocking `/api/session/end` endpoint pattern is defined in plan 05 (Phase 3: server-side wait-for-`summaryStoredEvent`). The Zod body schema for that endpoint lives **here** — it is one of the per-route schemas declared at the top of `SessionRoutes.ts` alongside every other validated POST on that router. Plan 05 owns the endpoint's server-side wait semantics; plan 06 owns its request-shape contract.

Example, in `DataRoutes.ts` (observations ingest):

```ts
import { z } from 'zod';
import { validateBody } from '../middleware/validateBody';

const ObservationBody = z.object({
  session_id: z.string().min(1),
  content: z.string(),
  // ...per-endpoint fields stay colocated with the handler that reads them
});

router.post('/api/observations', validateBody(ObservationBody), handler);
```

Cites principle 6 (one middleware wraps many per-route schemas — not N middlewares).

---

## Phase 4 — Delete hand-rolled validation

Grep-and-delete every `validateRequired(...)` call, every inline `typeof req.body.x !== 'string'` check, and every `coerce*` helper across `src/services/worker/http/routes/`. Each deletion is justified by the `validateBody(schema)` wrapper that now runs before the handler — the handler sees a parsed object, or the request has already been 400'd.

Cites principle 7 (delete in-PR, no `@deprecated` fence) and principle 2 (no coercion in handlers).

---

## Phase 5 — Delete rate limiter

The worker listens on `127.0.0.1:37777`. There is no untrusted caller. Rate limiting a localhost process is a second-system effect — it masks contention from a real concurrency bug rather than fixing the bug. If two callers are actually colliding on a shared resource, the cure is to find the collision (missing `UNIQUE` constraint, non-transactional claim, shared mutable state) and fix it in the relevant plan:

- Claim-side contention → `01-data-integrity.md` Phase 3 (self-healing claim).
- Ingestion duplicates → `01-data-integrity.md` Phase 4 (`UNIQUE(session_id, tool_use_id)` + `ON CONFLICT DO NOTHING`).

Cites principle 1 (no recovery code for fixable failures) and the anti-pattern guard "No new HTTP endpoint for diagnostic / manual-repair purposes" — the rate limiter is the HTTP-handler analogue of that pattern.

---

## Phase 6 — Cache `viewer.html` + `/api/instructions` in memory

At worker boot, read both files into a `Buffer` once and serve the buffered bytes from the route handler. No `fs.watch`, no TTL, no "refresh in background" — the cache has per-process lifecycle. If the build changes the file, the next worker start picks it up; mid-process mutation is not a supported scenario.

```ts
// at module init for ViewerRoutes / instructions handler
const viewerHtmlBytes: Buffer = fs.readFileSync(VIEWER_HTML_PATH);
const instructionsBytes: Buffer = fs.readFileSync(INSTRUCTIONS_MD_PATH);
```

Handlers return the cached `Buffer` with the correct `Content-Type`. Cites principle 1 (no watcher-plus-TTL "cache-invalidation" recovery code) and principle 4 (event-driven — process restart is the event).

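A sketch of the handlers over those buffers; the route paths and `Content-Type` values are assumptions about what the routes currently serve:

```ts
// Sketch: no per-request file I/O; the boot-time Buffers are the response bodies.
router.get('/viewer', (_req, res) => {
  res.type('text/html').send(viewerHtmlBytes);
});

router.get('/api/instructions', (_req, res) => {
  res.type('text/markdown').send(instructionsBytes);
});
```
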
---

## Phase 7 — Delete diagnostic endpoints

Per `_reference.md` Part 1 §API surface at `DataRoutes.ts:305, 475, 510, 529, 548`:

- **DELETE** `/api/pending-queue` GET at `DataRoutes.ts:475` — inspection endpoint. Use the viewer.
- **DELETE** `/api/pending-queue/process` POST at `DataRoutes.ts:510` — manual kick. Correct ingestion does not need a kick; if it does, the bug is in the claim query (fixed by `01-data-integrity.md` Phase 3).
- **DELETE** `/api/pending-queue/failed` DELETE at `DataRoutes.ts:529` — manual purge of failed rows. Retention is a boot-once concern or a user-purge concern, not an always-on endpoint.
- **DELETE** `/api/pending-queue/all` DELETE at `DataRoutes.ts:548` — nuke-the-queue button. Never correct to expose.
- **KEEP** `/api/processing-status` at `DataRoutes.ts:305` — this is observability for a live system, not a repair lever. It reads and reports; it does not mutate.
- **KEEP** `/health` at `ViewerRoutes.ts:32` — liveness check used by `ensureWorkerRunning` in plan 05. It reads and reports; it does not mutate.

Cites principle 1 (recovery endpoints hide primary-path bugs) and the anti-pattern guard "No new HTTP endpoint for diagnostic / manual-repair purposes" — the deletions here are that guard applied retroactively.

---
|
||||
|
||||
## Phase 8 — Consolidate shutdown paths
|
||||
|
||||
Per `_reference.md` Part 1 §Worker / lifecycle, `GracefulShutdown.ts:52-86` owns the canonical 6-step shutdown: HTTP server close → sessions → MCP → Chroma → DB → supervisor. Three wrappers currently front it:
|
||||
|
||||
- `WorkerService.shutdown` — calls `performGracefulShutdown` after clearing timers (`worker-service.ts:1094-1120`).
|
||||
- `runShutdownCascade` at `src/supervisor/shutdown.ts:22-99` — supervisor-side SIGTERM/SIGKILL cascade.
|
||||
- `stopSupervisor` — supervisor teardown wrapper.
|
||||
|
||||
**Delete all three wrappers.** Timer cleanup and process-group teardown move into `performGracefulShutdown` directly (or are deleted entirely by `02-process-lifecycle.md`, which removes the `setInterval` callers at `worker-service.ts:547, 567, 581` that create the timers in the first place).
|
||||
|
||||
**Cross-reference `02-process-lifecycle.md`**: plan 02 Phase 3 defines the process-group teardown (`process.kill(-pgid, 'SIGTERM')` replaces the per-PID cascade in `runShutdownCascade`). Plan 06 must **not** re-wrap that teardown — the canonical call lives inside `performGracefulShutdown`, nowhere else.
|
||||
|
||||
After this phase, there is one shutdown path — `performGracefulShutdown` — called by the worker's `SIGTERM`/`SIGINT` handler and nowhere else. Cites principle 6 (one helper, N callers — but here N=1 caller is correct) and principle 7 (delete the wrappers, don't `@deprecated` them).
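
The resulting wiring is small enough to inline — a sketch, with `performGracefulShutdown`'s exact signature assumed:

```ts
// the only call site left after this phase
for (const signal of ['SIGTERM', 'SIGINT'] as const) {
  process.once(signal, () => {
    void performGracefulShutdown(signal).then(() => process.exit(0));
  });
}
```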

---

## Phase 9 — Consolidate failure-marking paths

Two methods currently mark messages as non-`processing`:

- `markSessionMessagesFailed` at `SessionRoutes.ts:256` — marks a session's messages `failed` (per `_reference.md` Part 1 §API surface).
- `markAllSessionMessagesAbandoned` at `worker-service.ts:943` — marks everything abandoned during shutdown.

Both are thin UPDATE-with-WHERE wrappers. Replace both with one method on `PendingMessageStore`:

```ts
transitionMessagesTo(status: 'failed' | 'abandoned', filter: { session_id?: string }): number
```

Callers pass the target status and the optional session-id filter. One SQL path, one place to add a new terminal status later, zero divergence between the two call sites.
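
A sketch of the consolidated method, assuming a better-sqlite3-style `prepare`/`run` API and the `pending_messages.status` column from plan 01:

```ts
// one UPDATE path for both terminal statuses; returns the affected row count
transitionMessagesTo(
  status: 'failed' | 'abandoned',
  filter: { session_id?: string } = {},
): number {
  const where = filter.session_id !== undefined ? 'AND session_id = ?' : '';
  const stmt = this.db.prepare(
    `UPDATE pending_messages SET status = ? WHERE status = 'processing' ${where}`,
  );
  const result = filter.session_id !== undefined
    ? stmt.run(status, filter.session_id)
    : stmt.run(status);
  return result.changes;
}
```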

Cites principle 6 (one helper, N callers) and principle 7 (delete both wrappers in the same PR).

---

## `validateBody` middleware (copy-paste pattern)

```ts
import type { RequestHandler } from 'express';
import type { ZodTypeAny } from 'zod';

export const validateBody = <S extends ZodTypeAny>(schema: S): RequestHandler =>
  (req, res, next) => {
    const result = schema.safeParse(req.body);
    if (!result.success) {
      return res.status(400).json({
        error: 'ValidationError',
        issues: result.error.issues.map(i => ({
          path: i.path,
          message: i.message,
          code: i.code,
        })),
      });
    }
    req.body = result.data;
    return next();
  };
```

## Example per-route schema (observations)

```ts
import { z } from 'zod';
import { validateBody } from '../middleware/validateBody';

const ObservationBody = z.object({
  session_id: z.string().min(1),
  content: z.string(),
  // ...
});

router.post('/api/observations', validateBody(ObservationBody), handler);
```

---

## Verification

- [ ] `grep -rn "validateRequired\|rateLimit" src/services/worker/http/` → 0
- [ ] `grep -rn "/api/pending-queue" src/` → 0
- [ ] `grep -rn "markSessionMessagesFailed\|markAllSessionMessagesAbandoned" src/` → 0 (or 1, only inside `transitionMessagesTo`)
- [ ] `grep -rn "WorkerService.prototype.shutdown\|runShutdownCascade\|stopSupervisor" src/` → 0 (or 1 at the canonical call site)
- [ ] **Integration test**: `POST /api/observations` with malformed body → 400 response, body contains `{ error: 'ValidationError', issues: [...] }` (not 500, not silent pass).
- [ ] **Integration test**: first request for `viewer.html` after boot, then second request while blocking read on `VIEWER_HTML_PATH` — second request still succeeds (served from memory, no disk read after boot).

---

## Anti-pattern guards (verbatim)

- Do NOT add per-route middleware stacks; one middleware for all validated POST/PUT.
- Do NOT add a diagnostic endpoint "for debugging only."
- Do NOT keep a shutdown wrapper "for backward compat."
@@ -0,0 +1,179 @@
# 07 — Dead Code Sweep

**Purpose**: This is the sweep plan. It catches any dead code the other six plans don't explicitly delete. It runs last in the DAG (see `98-execution-order.md`, to be written in Phase 6 of `_rewrite-plan.md`). Its job is twofold: (1) verify that the deletions scheduled by the other plans have actually landed, and (2) delete anything that slipped through — unused exports, commented-out blocks, `@deprecated` fences, unused spawn helpers, and duplicated migration logic. If this sweep finds something unexpected, that is a signal: an earlier plan missed a coupling, and the finding should be fed back to the plan that owns the subsystem, not patched over here.

---

## Principles invoked

**Primary anchor — Principle 7** from `00-principles.md`:

> **7. Delete code in the same PR it becomes unused.** No `@deprecated` fence, no "remove next release."

This plan is the operational enforcement of Principle 7 across the corpus. Every other plan deletes the specific code it rewrites around; this plan guarantees that the overall tree is free of dead code after the rewrite lands.

**Secondary anchor — Principle 6**:

> **6. One helper, N callers.** Not N copies of a helper. Not a strategy class for each config.

Invoked for the `SessionStore.ts:52-70` duplication: `SessionStore` re-runs every `ensure*` / `add*` migration step that `MigrationRunner` already owns. Two copies of the migration sequence are exactly the "N copies of a helper" that principle 6 forbids. The sweep consolidates to `new MigrationRunner(db).runAllMigrations()`.

---

## Relationship to other plans

The other plans explicitly delete several named dead-code items. This plan does not re-claim them — it verifies each one has landed and only deletes if an earlier plan missed it.

**Rule**: *If earlier plans delete, this plan verifies; if earlier plans miss, this plan deletes.*

| Dead code item | Owning plan | This plan's role |
|---|---|---|
| `TranscriptParser` class at `src/utils/transcript-parser.ts:28-90` | `03-ingestion-path.md` Phase 9 | Verify the file is gone; grep `TranscriptParser` in `src/` returns 0. If still present, delete here and flag the Phase 9 regression. |
| Migration 19 no-op at `src/services/sqlite/migrations/runner.ts:621-628` | `01-data-integrity.md` Phase 8 | Verify the case block is gone and migration 19 is absorbed into the fresh `schema.sql`. If still present, delete here and flag the Phase 8 regression. |
| `@deprecated getExistingChromaIds` | `04-read-path.md` Phase 7 | Verify the function, its JSDoc fence, and every import are gone; grep `getExistingChromaIds` in `src/` returns 0. If still present, delete here and flag the Phase 7 regression. |

---

## Scope — the catch-all list

Items in scope for this sweep (anything below that is still present after plans 01–06 land is deleted here):

1. **Commented-out code** — any `// removed`, `// old`, `// legacy`, `// TODO remove`, or similar commented-out blocks in `src/`.
2. **Unused exports** — anything `ts-prune` (or `knip`) flags as exported but not imported anywhere in `src/` or `tests/`.
3. **Unused spawn / path helpers** — any `bun-resolver.ts`, `bun-path.ts`, `BranchManager.ts`, `runtime.ts` spawn-site or helper that no longer has a caller after plans 02 and 05 land (lazy-spawn consolidation may strip their only callers).
4. **Duplicated migration logic** at `src/services/sqlite/SessionStore.ts:52-70` — the block that re-calls every `ensure*` / `add*` migration method already owned by `MigrationRunner`. Collapse to `new MigrationRunner(db).runAllMigrations()` (see the sketch after this list).
5. **Residual `@deprecated` fences** — any JSDoc `@deprecated` block left in `src/` after the named ones above are handled.
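
After the collapse, the constructor body is a single delegation — a sketch, with the constructor shape assumed:

```ts
// SessionStore no longer re-implements migration steps; MigrationRunner owns them
constructor(private readonly db: Database) {
  new MigrationRunner(db).runAllMigrations();
}
```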

---

## Phase 1 — Tool install + inventory

Install `ts-prune` as the dead-code finder:

```bash
npm install -D ts-prune
```

**Tool choice**: `ts-prune` over `knip`. Rationale: `ts-prune`'s output is a flat `file:line - name` list that's trivial to grep and pipe into the Phase 3 test-import verification. `knip` produces a richer but noisier report (configs, binaries, dependencies) that requires a config file to tune down; for a one-shot sweep against a known TypeScript source tree, `ts-prune`'s single-purpose output is the lower-friction choice. If `ts-prune` misses something the test suite later flags, revisit with `knip`.

Run it and capture the working list:

```bash
mkdir -p .pathfinder-sweep
npx ts-prune --project tsconfig.json > .pathfinder-sweep/ts-prune.txt
```
The contents of `ts-prune.txt` are the starting inventory for Phases 2–4.
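
For orientation, `ts-prune` prints one `file:line - name` entry per unused export (the paths below are illustrative); entries suffixed `(used in module)` are consumed inside their own file and usually just need the `export` keyword dropped:

```
src/utils/bun-path.ts:12 - resolveBunBinary
src/services/worker/SearchManager.ts:88 - findByConcept (used in module)
```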

---

## Phase 2 — Grep for commented-out code patterns

Scan `src/` for the canonical commented-out-block markers:

```bash
grep -rn "^[[:space:]]*// \(removed\|old\|legacy\|TODO remove\)" src/ | head -200
```

Review each hit. Categories:

- **Code the author thought they'd restore**: delete. If it's needed, git history preserves it.
- **A comment that happens to match the pattern but isn't dead code** (e.g., a docstring referring to "the old format"): leave it; these are false positives.
- **A `@deprecated` fence**: carries into Phase 4 for deletion.

Append findings to `.pathfinder-sweep/commented-blocks.txt`.

---

## Phase 3 — Verify against test imports

For every candidate flagged in Phase 1 (unused exports) and Phase 2 (commented-out blocks whose removal might expose something), confirm the symbol is not imported by a test.

```bash
grep -rn "<symbol-name>" tests/
grep -rn --include='*.test.ts' "<symbol-name>" src/
```

**Rule**: if any test imports the symbol, do NOT delete. A test exercising a symbol means either (a) the symbol has a real caller via the test harness, or (b) the test itself is dead and belongs in a different cleanup pass — not this one.

Trim the Phase 1 / Phase 2 lists accordingly. The remaining entries are the deletion queue for Phase 4.

---

## Phase 4 — Delete dead code with rationale

Walk the deletion queue. Batch related deletions (e.g., all four unused exports from `src/utils/bun-path.ts` land together). Each commit uses a one-line message in this form:

```
dead code: <symbol or file> (no importers in src/ or tests/)
```

Examples:

```
dead code: bun-resolver.resolveBunBinary (no importers in src/ or tests/)
dead code: SessionStore.ts:52-70 migration duplication (delegates to MigrationRunner)
dead code: src/utils/transcript-parser.ts file (03-ingestion-path Phase 9 missed it)
```

The commit message is load-bearing: it names the symbol and states the evidence (no importers). If the evidence is something else (e.g., "absorbed into fresh schema.sql"), state that instead.

---

## Phase 5 — Re-run build + tests

After each batched deletion commit:

```bash
npm run build-and-sync
npm test
```

Both must pass. On failure:

1. Revert that commit.
2. Re-investigate. A failure means either (a) a test transitively imports the deleted symbol, which Phase 3's grep missed (unlikely but possible with re-exports), or (b) a runtime path not covered by static analysis.
3. If the symbol really is reachable, leave it and remove it from the deletion queue.
4. If the symbol is reachable only through a `@deprecated` public-API contract with no internal caller, escalate via the Failure escape hatch below — do not force-delete.

---

## Verification

- [ ] `npx ts-prune` shows zero unused exports in `src/`
- [ ] `npm run build-and-sync` passes
- [ ] Test suite passes (`npm test`)
- [ ] `grep -rn "// @deprecated\|// TODO remove\|// old$\|// legacy$" src/` → 0
- [ ] `grep -rn "TranscriptParser" src/` → 0 (verifies `03-ingestion-path` Phase 9)
- [ ] `grep -rn "getExistingChromaIds" src/` → 0 (verifies `04-read-path` Phase 7)
- [ ] `src/services/sqlite/migrations/runner.ts` contains no case block for migration 19 (verifies `01-data-integrity` Phase 8)
- [ ] `src/services/sqlite/SessionStore.ts:52-70` duplication is gone; `SessionStore` delegates to `MigrationRunner`

---

## Anti-pattern guards (verbatim)

- Do NOT delete anything still imported by a test.
- Do NOT delete types still referenced by exported interfaces.

Additional guards specific to this sweep:

- Do NOT add a `@deprecated` fence on anything — by principle 7, it is either dead (delete now) or it is not (leave it).
- Do NOT re-delete what an earlier plan owns; file a regression note against that plan instead.
- Do NOT gate deletions behind a feature flag or environment variable.

---

## Failure escape hatch

If `ts-prune` flags a file that cannot be confidently deleted — e.g., a public API the docs describe, or a symbol referenced by an external plugin consumer — leave it in place and open a follow-up issue recording:

- The symbol and file:line
- Why it appears unused (no internal importers)
- The external contract that keeps it alive (docs link, plugin consumer, marketplace entry)

The acceptance criterion for this plan is "no dead code," not "`ts-prune` exit 0." Force-deleting a public-API symbol to satisfy the grep is a worse outcome than leaving a documented follow-up issue.

---

## DAG position

This plan is **last** in the execution DAG. It depends on every other plan (`00` through `06`) having landed, because its job is to sweep what those plans leave behind. The DAG, preflight gates, and critical path are defined in `98-execution-order.md` (to be written in Phase 6 of `_rewrite-plan.md`); this plan's last-in-DAG position is recorded there as the sink node.
@@ -0,0 +1,215 @@
# 98 — Execution Order

## Purpose

This document is the dependency DAG, preflight gates, critical path, parallel branches, and post-landing verification pointer for the entire `PATHFINDER-2026-04-22/` corpus. It tells an executor which plan to open first, which can run in parallel, which invariants are owned by which plan (so two plans never both change the same contract), and what must be true of the environment before Phase 1 of anything starts. It is consumed by the `/do` orchestrator, by Phase 7 principle-cross-check, and by any engineer executing a phase from a fresh chat. It does not duplicate verification greps — those live in `99-verification.md`.

---

## The DAG

### Bulleted dependency list

- `00-principles.md` — root, no deps. Every other plan cites it.
- `01-data-integrity.md` — deps: `{00}`. Owns schema, UNIQUE constraints, self-healing claim, Chroma table shape.
- `02-process-lifecycle.md` — deps: `{00}`. Owns process-group spawn, `kill(-pgid)`, lazy-spawn, shutdown cascade. Independent of `01`.
- `03-ingestion-path.md` — deps: `{01, 02}`. Needs `UNIQUE(session_id, tool_use_id)` on `pending_messages` (from `01` §Phase 1) and the process-group spawn contract (from `02` §Phase 2) that its SDK children inherit.
- `04-read-path.md` — deps: `{01}`. Needs the Chroma table shape + `chroma_synced` column (from `01` §Phase 2). Does NOT depend on `02` — the read path runs inside the already-spawned worker.
- `05-hook-surface.md` — deps: `{02, 03}`. Needs the lazy-spawn contract (`02` §Phase 8) and `summaryStoredEvent` emission (`03` §Phase 2) for the blocking `/api/session/end` endpoint.
- `06-api-surface.md` — deps: `{05}`. The `/api/session/end` body schema that Zod validates is defined by `05`'s hook-side contract; Zod middleware wraps that contract, it doesn't define it.
- `07-dead-code.md` — deps: `{00, 01, 02, 03, 04, 05, 06}`. Sweep plan: runs only after every other plan has deleted what it knows about. Catches orphaned exports / commented-out blocks / dead migrations the other plans missed.
- `99-verification.md` — does NOT sit in the DAG as a blocking node. It runs **alongside** each plan: each plan's phase-level verification checks live there, and the consolidated grep chain + integration tests run after every plan's phases complete.

### ASCII diagram

```
                00-principles.md
                 /           \
                v             v
  01-data-integrity.md   02-process-lifecycle.md
       |        \              |
       |         \             |
       v          v            v
  04-read-path.md  03-ingestion-path.md
       |                  |
       |                  v
       |           05-hook-surface.md
       |                  |
       |                  v
       |           06-api-surface.md
       |                  |
       +---------+--------+
                 |
                 v
          07-dead-code.md

  99-verification.md  ← runs alongside every plan above
  (acceptance checks; not a blocking node)
```

### Mermaid (equivalent)

```mermaid
graph TD
P00[00-principles.md]
P01[01-data-integrity.md]
P02[02-process-lifecycle.md]
P03[03-ingestion-path.md]
P04[04-read-path.md]
P05[05-hook-surface.md]
P06[06-api-surface.md]
P07[07-dead-code.md]
P99[99-verification.md]

P00 --> P01
P00 --> P02
P01 --> P03
P02 --> P03
P01 --> P04
P02 --> P05
P03 --> P05
P05 --> P06
P00 --> P07
P01 --> P07
P02 --> P07
P03 --> P07
P04 --> P07
P05 --> P07
P06 --> P07

P99 -. alongside .- P00
P99 -. alongside .- P01
P99 -. alongside .- P02
P99 -. alongside .- P03
P99 -. alongside .- P04
P99 -. alongside .- P05
P99 -. alongside .- P06
P99 -. alongside .- P07
```

### Acyclicity check

Node → incoming edges (must contain no cycle):

- `00` ← ∅
- `01` ← {00}
- `02` ← {00}
- `03` ← {01, 02}
- `04` ← {01}
- `05` ← {02, 03}
- `06` ← {05}
- `07` ← {00, 01, 02, 03, 04, 05, 06}
- `99` ← ∅ (runs alongside; not in the blocking DAG)

Topological sort exists: `00, 01, 02, 03, 04, 05, 06, 07`. All edges point strictly forward in this order. No back-edges. **DAG is acyclic.**
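
The claim is mechanically checkable — a standalone sketch running Kahn's algorithm over the edge list above:

```ts
// Kahn's algorithm: if every node drains out of the queue, the graph is acyclic
const edges: Array<[string, string]> = [
  ['00', '01'], ['00', '02'],
  ['01', '03'], ['02', '03'],
  ['01', '04'],
  ['02', '05'], ['03', '05'],
  ['05', '06'],
  ['00', '07'], ['01', '07'], ['02', '07'], ['03', '07'], ['04', '07'], ['05', '07'], ['06', '07'],
];
const nodes = [...new Set(edges.flat())];
const indegree = new Map<string, number>(nodes.map(n => [n, 0]));
for (const [, to] of edges) indegree.set(to, indegree.get(to)! + 1);

const queue = nodes.filter(n => indegree.get(n) === 0);
const order: string[] = [];
while (queue.length > 0) {
  const n = queue.shift()!;
  order.push(n);
  for (const [from, to] of edges) {
    if (from !== n) continue;
    indegree.set(to, indegree.get(to)! - 1);
    if (indegree.get(to) === 0) queue.push(to);
  }
}
console.log(order.length === nodes.length
  ? `acyclic: ${order.join(' → ')}`
  : 'cycle detected');
```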

---

## Preflight gates

These MUST be satisfied before Phase 1 of ANY individual plan starts. They are infra/toolchain preconditions that multiple plans depend on; centralising them here prevents plan-by-plan rediscovery.

| # | Gate | Owner of dependency | Verification |
|---|---|---|---|
| PG-1 | `engines.node >= 20.0.0` in `package.json` | `03-ingestion-path.md` §Phase 5 (recursive `fs.watch`) | `jq -r .engines.node package.json` ≥ `20.0.0` |
| PG-2 | `zod@^3.x` installed | `06-api-surface.md` §Phase 1 (Zod middleware) | `npm ls zod` returns `zod@3.*` |
| PG-3 | Prompt-caching cost smoke test harness exists and passes baseline | `04-read-path.md` §Phase 9 (knowledge-corpus simplification — relies on SDK prompt caching) | Three sequential `/api/corpus/:name/query` calls; calls 2 & 3 return `cache_read_input_tokens > 0` |
| PG-4 | Chroma MCP availability + documented upsert-conflict error-text pattern | `01-data-integrity.md` §Phase 7 (`CHROMA_SYNC_FALLBACK_ON_CONFLICT` flag) | Chroma MCP reachable from worker; error-text regex captured in `01-data-integrity.md` §Phase 7 |

If any gate is red, STOP. Fix the gate (install Node 20, install zod, write the smoke-test harness, document the Chroma error text) before touching any plan.
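
PG-1 and PG-2 are scriptable — a sketch (PG-3 and PG-4 need their own harnesses):

```bash
# exits non-zero if engines.node is not a >=20 constraint or zod@3 is missing
node -e 'const v = require("./package.json").engines.node; process.exit(/>=\s*20/.test(v) ? 0 : 1)'
npm ls zod | grep -q 'zod@3'
```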

---

## Critical path

**Sequence**: `00 → 01 → 02 → 03 → 05 → 06 → 07`

(`04` is not on the critical path — it hangs off `01` in parallel with the `02 → 03 → 05 → 06` spine. `99` runs alongside every node and is not on the linear path.)

### Why this order

- **`00` first**: every other plan cites the seven principles and six anti-pattern guards verbatim. If `00` changes mid-corpus, every downstream plan's citations go stale. Land `00` and freeze it.
- **`01` and `02` are both "foundational"**: plans `03`, `04`, `05` all depend on at least one of them. `01` owns the schema shape (UNIQUE constraints, `worker_pid`, `chroma_synced`) that `03` and `04` read/write against. `02` owns the spawn contract (`detached: true` + `pgid` tracking) that `03`'s SDK children and `05`'s lazy-spawn wrapper both inherit. Neither can be skipped; both must land before anything that reads their contracts.
- **`03` before `05`**: `summaryStoredEvent` is emitted inside the ingestion path (`03` §Phase 2). The blocking `/api/session/end` endpoint in `05` §Phase 3 awaits that event. If `05` lands first, the endpoint awaits an event that nothing fires — it hangs.
- **`05` before `06`**: the Zod schemas in `06` §Phase 3 validate request bodies for the hook-facing endpoints. The shape of those bodies (for `/api/session/end`, `/api/session/start`, `/api/observations`, etc.) is defined by `05`'s hook-side contract. `06` wraps a contract `05` defines; it cannot define it first.
- **`07` last**: the sweep plan uses `ts-prune` / `knip` to catch unused exports. An export is only "unused" after every plan that used to reference it has deleted those references. Running `07` earlier would produce a false-negative list. Running it last produces the real residue.

---

## Parallel branches

- **`04-read-path.md` runs after `01` independently of `02`.** The read path (renderer, search, Chroma fail-fast, knowledge corpus) operates entirely inside the already-spawned worker process. It reads the Chroma table shape (`01`) but never spawns, kills, or supervises processes (`02`). A second engineer can own `04` while the first engineer drives the `02 → 03 → 05 → 06` spine.
- **`07-dead-code.md` has exactly one concurrency mode: last.** It is a whole-tree sweep. Running it in parallel with any of `01`–`06` produces stale results because those plans are still deleting code.
- **Within a single plan, phases may be parallelized** if the plan text does not specify an ordering between them. The plan author's phase numbering is advisory unless a phase explicitly states "depends on Phase N." Most plans are internally ordered; assume sequential unless the plan says otherwise.

---

## Cross-plan invariants

Each invariant below has **exactly one owner**. Consumers reference the owner's contract; they do not redefine it. Derived from `_mapping.md` §Cross-plan coupling points.

| Invariant | Owner (single source of truth) | Consumers |
|---|---|---|
| `UNIQUE(session_id, tool_use_id)` on `pending_messages` | `01-data-integrity.md` §Phase 1 | `03-ingestion-path.md` §Phase 6 (DB-backed tool pairing) |
| `worker_pid` column + self-healing claim query | `01-data-integrity.md` §Phase 3 | All worker claim call sites; kills per-row `started_processing_at_epoch` |
| `chroma_synced` column + boot-once backfill | `01-data-integrity.md` §Phase 2 | Chroma sync module; read-path fail-fast in `04-read-path.md` §Phase 5 |
| `RECENCY_WINDOW_MS` single source | `04-read-path.md` §Phase 4 (consolidation; constant itself in `types.ts:16`) | Every search / filter call site; seven hand-rolled copies in `SearchManager` deleted |
| Process groups / `pgid` spawn + `kill(-pgid)` shutdown | `02-process-lifecycle.md` §Phases 2–3 | `05-hook-surface.md` §Phase 8 (lazy-spawn uses same `detached: true` contract) |
| `summaryStoredEvent` emission | `03-ingestion-path.md` §Phase 2 | `05-hook-surface.md` §Phase 3 (blocking `/api/session/end` awaits this event) |
| `ingestObservation` / `ingestPrompt` / `ingestSummary` direct helpers | `03-ingestion-path.md` §Phase 0 | Transcript watcher (`03` §Phase 7), hook handlers (`05`), worker HTTP routes (`06`) |
| `renderObservations(obs, strategy)` single renderer | `04-read-path.md` §Phase 1 | All formatters (deleted), search results, corpus detail view |
| Zod schemas + `validateBody` middleware | `06-api-surface.md` §Phases 2–3 | All POST/PUT route handlers; hook-side contracts defined by `05` |
| `performGracefulShutdown` single shutdown path | `06-api-surface.md` §Phase 8 | `02-process-lifecycle.md` §Phase 3 (references only, does not duplicate); `WorkerService.shutdown`, `runShutdownCascade`, `stopSupervisor` wrappers all deleted |
| `stripMemoryTags` single-regex alternation | `03-ingestion-path.md` §Phase 8 | All ingestion paths (tag-stripping utility) |
| `transitionMessagesTo(status)` single failure-marking path | `06-api-surface.md` §Phase 9 | Replaces `markSessionMessagesFailed` + `markAllSessionMessagesAbandoned` |

**Invariant discipline**: if Phase 7 principle-cross-check finds two plans defining the same invariant, the non-owner plan gets sent back for revision. Shared ownership is a bug.

---

## Blocking issues

Inherited verbatim from `_rewrite-plan.md` §Known gaps and old `PATHFINDER-2026-04-21/08-reconciliation.md` Part 5. Each issue blocks the phase that depends on it; none block the whole corpus.

1. **Chroma upsert fallback is brittle.** The delete-then-add bridge pattern depends on Chroma's exact error text when a duplicate ID is upserted. **Blocks**: `01-data-integrity.md` §Phase 7. **Resolution**: flag `CHROMA_SYNC_FALLBACK_ON_CONFLICT=true`; document the exact error regex; remove once Chroma MCP adds native upsert. (PG-4 enforces this.)
2. **Prompt-caching TTL assumption.** The knowledge-corpus simplification relies on the SDK's prompt-caching behavior being stable across the 5-min TTL window. **Blocks**: `04-read-path.md` §Phase 9. **Resolution**: cost smoke test (PG-3) must pass before `04` §Phase 9 ships. If caching degrades, the plan reverts to an explicit cache-control strategy.
3. **Windows process-group behavior.** `process.kill(-pgid)` is Unix-only; Windows needs Job Objects. **Blocks**: `02-process-lifecycle.md` on Windows only. **Resolution**: plan `02` documents Windows as a "platform caveat" section with Job Objects as follow-up. Unix ships first; Windows follow-up is tracked but not in this corpus.
4. **`respawn` dep decision.** The lazy-spawn wrapper needs a retry strategy for startup failure. **Resolved** in `02-process-lifecycle.md` §Phase 8: **hand-roll a 3-attempt retry with exponential backoff**. Do NOT adopt the `respawn` npm dep — adds supply-chain surface for 20 lines of retry logic.
5. **Snapshot tests for renderer collapse.** Without byte-equality snapshots of the four old formatters, regressions from collapsing to `renderObservations(obs, strategy)` are invisible. **Blocks**: `04-read-path.md` §Phase 2 (formatter deletion). **Resolution**: MANDATORY — capture snapshots of `AgentFormatter`, `HumanFormatter`, `ResultFormatter`, `CorpusRenderer` output on a fixed input set BEFORE deleting any of them. Snapshot diff = 0 bytes or the phase fails.

---

## Post-landing verification

See `99-verification.md` for:

- Consolidated grep chain (every `grep -rn "..." src/ → 0` target from every plan's verification section, deduplicated)
- Integration test list (kill-mid-claim, SIGTERM worker, Chroma down, malformed POST, consecutive hook failures)
- Prompt-caching cost smoke test procedure
- Viewer regression harness (12 invariants I1–I12, 11 tests T1–T11)
- Final acceptance criteria (net LoC, test pass, viewer regression pass, cost smoke pass)

Do not duplicate verification content here. This document is structural (DAG, gates, ownership). `99-verification.md` is operational (what to run, what must pass).

---

## How to execute a phase from a fresh chat

1. Open a new chat in this repo root (`vivacious-teeth` branch).
2. Load the following files into context (in this order):
   - `PATHFINDER-2026-04-22/_rewrite-plan.md` (master task list)
   - `PATHFINDER-2026-04-22/_reference.md` (code anchors + external API signatures)
   - `PATHFINDER-2026-04-22/_mapping.md` (old → new section map + coupling table)
   - `PATHFINDER-2026-04-22/98-execution-order.md` (this file — for DAG + gates + invariant ownership)
   - `PATHFINDER-2026-04-22/00-principles.md` (principles cited by every plan)
   - Any predecessor plan in the DAG above the one you are executing (e.g., to execute `05`, load `02` and `03`)
   - The plan you are executing
3. Verify all applicable preflight gates (PG-1…PG-4) are green.
4. Execute the plan's phase list **sequentially**, unless the plan explicitly marks phases as parallelizable.
5. After the last phase, run the plan's own verification checklist, then the slice of `99-verification.md` that covers your plan's grep targets and integration tests.
6. Do NOT declare the plan done until every verification item is checked.
7. Commit per-phase (small commits, plan+phase cited in the commit message), not one mega-commit at the end.

---

**Status: READY.** The DAG is acyclic, critical path is single and unambiguous, all four preflight gates are enumerated with owners, twelve cross-plan invariants are documented with single ownership each, and all five known blocking issues from the rewrite plan are carried forward with resolution pointers.
@@ -0,0 +1,214 @@
# 99 — Verification

## Purpose

This is the acceptance-criteria document for the entire PATHFINDER-2026-04-22 refactor. Every grep target, integration test, fuzz test, snapshot test, viewer-regression invariant, and prompt-caching cost smoke test for the refactor is consolidated here. Every plan's own Verification section cites this file as its canonical checklist — individual plans enumerate their local targets; `99-verification.md` is the union, grouped by pattern, with the acceptance gates the refactor ships against. No plan ships independently; the refactor lands when the checklist below is green.

## Timer census

The refactor replaces hand-rolled background supervision with OS-level primitives. The concrete count:

| Timer | File (before) | Status after refactor |
|---|---|---|
| `startOrphanReaper` (repeating `setInterval`) | `src/services/worker/worker-service.ts:537` | **DELETED** (`02-process-lifecycle.md` Phase 4) |
| `staleSessionReaperInterval` (repeating `setInterval`) | `src/services/worker/worker-service.ts:547` | **DELETED** (`02-process-lifecycle.md` Phase 4) |
| `clearFailedOlderThan` interval (repeating `setInterval`) | `src/services/worker/worker-service.ts:567` | **DELETED** (`01-data-integrity.md` Phase 5; `02-process-lifecycle.md` Phase 4) |

**Before**: 3 repeating background timers in `src/services/worker/`.

**After**: 0 repeating background timers in `src/services/worker/`.

**Acceptable exceptions** — the following are **not** counted as "repeating background timers" and are permitted:

- Per-operation one-shot `setTimeout` (e.g., the 5-second shutdown kill-escalation between SIGTERM and SIGKILL in `src/supervisor/shutdown.ts`). These are (a) non-repeating, (b) bound to the lifetime of a specific operation, (c) disposed in the same scope that created them, and (d) never monitored by health checks.
- The `transcripts/watcher.ts` `fs.watch` subscription (per `03-ingestion-path.md` Phase 5). `fs.watch` is event-driven, not a timer.

The acceptance grep `grep -rn "setInterval" src/services/worker/ → 0` enforces the census.

## Polling loops

The refactor replaces the client-side summary-storage poll with a server-side blocking endpoint.

| Polling loop | File (before) | Status after refactor |
|---|---|---|
| Summary-stored client poll | `src/cli/handlers/summarize.ts:117-150` | **DELETED**. Replaced by blocking `POST /api/session/end` that server-side-waits on `summaryStoredEvent` (`05-hook-surface.md` Phase 3; event emission in `03-ingestion-path.md` Phase 2). |

**Before**: 1 polling loop.

**After**: 0 polling loops.

The acceptance grep `grep -rn "MAX_WAIT_FOR_SUMMARY_MS\|POLL_INTERVAL_MS" src/cli/handlers/ → 0` enforces this.

## Full grep target list

Each line is runnable as-is. Expected count appears after `→`. Every target is sourced from the Verification section of the plan listed in the trailing comment.

### Process-lifecycle / timers

```
grep -rn "setInterval" src/services/worker/ → 0                    # 02-process-lifecycle Phase 4
grep -rn "startOrphanReaper" src/ → 0                              # 02-process-lifecycle Phase 4
grep -rn "staleSessionReaperInterval" src/ → 0                     # 02-process-lifecycle Phase 4
grep -rn "recoverStuckProcessing\|killSystemOrphans\|reapStaleSessions\|reapOrphanedProcesses\|killIdleDaemonChildren" src/ → 0   # 01-data-integrity Phase 3 + 02-process-lifecycle Phase 4
grep -rn "abandonedTimer\|evictIdlestSession" src/ → 0             # 02-process-lifecycle Phase 5 + 6
grep -rn "fallbackAgent\|Gemini\|OpenRouter" src/services/worker/ → 0   # 02-process-lifecycle Phase 7
grep -rn "ProcessRegistry" src/services/worker/ → 0                # 02-process-lifecycle Phase 1
```

### Data integrity

```
grep -rn "STALE_PROCESSING_THRESHOLD_MS" src/ → 0                  # 01-data-integrity Phase 3
grep -rn "started_processing_at_epoch" src/ → 0                    # 01-data-integrity Phase 3
grep -rn "DEDUP_WINDOW_MS\|findDuplicateObservation" src/ → 0      # 01-data-integrity Phase 4
grep -rn "repairMalformedSchema" src/ → 0                          # 01-data-integrity Phase 6
grep -n "clearFailedOlderThan" src/services/worker/worker-service.ts → 0   # 01-data-integrity Phase 5
```

### Ingestion path

```
grep -rn "coerceObservationToSummary\|consecutiveSummaryFailures" src/ → 0   # 03-ingestion-path Phase 3 + 4
grep -rn "pendingTools" src/services/transcripts/ → 0              # 03-ingestion-path Phase 6
grep -n "setInterval" src/services/transcripts/watcher.ts → 0      # 03-ingestion-path Phase 5
grep -rn "observationHandler.execute" src/services/transcripts/ → 0   # 03-ingestion-path Phase 7
grep -rn "TranscriptParser" src/ → 0                               # 03-ingestion-path Phase 9 (file deleted)
```

### Read path

```
grep -rn "SearchManager\.findBy" src/ → 0                          # 04-read-path Phase 3
grep -rn "RECENCY_WINDOW_MS" src/services/worker/SearchManager.ts → 0   # 04-read-path Phase 4
grep -rn "fellBack: true\|getExistingChromaIds" src/ → 0           # 04-read-path Phase 6 + 7
```

### Hook surface

```
grep -rn "for i in 1 2 3 4 5 6 7" plugin/hooks/hooks.json → 0      # 05-hook-surface Phase 1
grep -rn "SettingsDefaultsManager.loadFromFile" src/cli/handlers/ → 1   # 05-hook-surface Phase 4 (only inside loadFromFileOnce)
grep -rn "isProjectExcluded" src/cli/handlers/ → 1                 # 05-hook-surface Phase 5 (only inside shouldTrackProject)
grep -rn "MAX_WAIT_FOR_SUMMARY_MS\|POLL_INTERVAL_MS" src/cli/handlers/ → 0   # 05-hook-surface Phase 3
```

### API surface

```
grep -rn "validateRequired\|rateLimit" src/services/worker/http/ → 0   # 06-api-surface Phase 4 + 5
grep -rn "/api/pending-queue" src/ → 0                             # 06-api-surface Phase 7
grep -rn "markSessionMessagesFailed\|markAllSessionMessagesAbandoned" src/ → 0 or 1   # 06-api-surface Phase 9 — "1" only if inside transitionMessagesTo
grep -rn "WorkerService.prototype.shutdown\|runShutdownCascade\|stopSupervisor" src/ → 0 or 1   # 06-api-surface Phase 8 — "1" only at canonical call site
```

### Dead-code sweep

```
grep -rn "// @deprecated\|// TODO remove\|// old$\|// legacy$" src/ → 0   # 07-dead-code
grep -rn "TranscriptParser" src/ → 0                               # 07-dead-code (regression-verifies 03-ingestion-path Phase 9)
grep -rn "getExistingChromaIds" src/ → 0                           # 07-dead-code (regression-verifies 04-read-path Phase 7)
```

**Total: 31 grep targets** after deduplication (expected count varies from 0 to "0 or 1" where a canonical call site is permitted, as noted inline).

## Prompt-caching cost smoke test

The knowledge-corpus phases in `04-read-path.md` (Phase 9) rely on Anthropic prompt caching to amortize the system-prompt cost across consecutive queries against the same corpus. If caching is not actually hitting, the phase's cost model breaks and the simplification does not ship.

### Harness

Issue three **sequential** HTTP calls to `POST /api/corpus/:name/query` against the same `:name`, with three different query bodies that each invoke the same cached system prompt. Collect the `api_usage` object (or equivalent, e.g., `usage`) returned in each response body.

### Assertions

- Each response includes an `api_usage` (or equivalent) field with `input_tokens`, `cache_creation_input_tokens`, and `cache_read_input_tokens`.
- **Call 1** is a cache-write. `cache_creation_input_tokens > 0`. `cache_read_input_tokens` may be `0`.
- **Call 2** and **Call 3**: `cache_read_input_tokens > 0`.
- **Threshold (steady-state)**: on calls 2 and 3, `cache_read_input_tokens / input_tokens ≥ 0.5`.
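
A minimal harness sketch (run with `tsx`); the corpus name and query strings are placeholders, and the usage field names follow the Anthropic Messages API:

```ts
const base = 'http://127.0.0.1:37777';
const queries = ['query one', 'query two', 'query three']; // placeholder bodies
const usages: any[] = [];

for (const q of queries) {
  const res = await fetch(`${base}/api/corpus/refactor/query`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: q }),
  });
  usages.push((await res.json()).api_usage);
}

usages.forEach((u, i) => {
  if (i === 0) {
    console.assert(u.cache_creation_input_tokens > 0, 'call 1 must write the cache');
  } else {
    console.assert(u.cache_read_input_tokens > 0, `call ${i + 1} must hit the cache`);
    console.assert(u.cache_read_input_tokens / u.input_tokens >= 0.5, `call ${i + 1} below threshold`);
  }
});
```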

### Failure mode

If either call 2 or call 3 misses the threshold, the knowledge-corpus phases in `04-read-path.md` (specifically Phase 9: knowledge-corpus simplification + reliance on SDK prompt caching) **do not ship**. Re-investigate the caching path before re-running.

## Viewer regression harness

The viewer UI (`plugin/ui/viewer.html`, served from `src/services/worker/http/ViewerRoutes.ts`) must not regress across the refactor. Since the refactor touches the HTTP surface (`06-api-surface.md`), the read path (`04-read-path.md`), and ingestion semantics (`03-ingestion-path.md`) — all upstream of the viewer — a lockdown harness runs at every plan's start and end.

### Baseline-capture schedule

`tests/viewer-lockdown/` is **captured at phase start**: on the first commit of any plan that modifies files imported by `ViewerRoutes.ts`, `DataRoutes.ts`, or the formatter layer, run the harness to produce a baseline screenshot + DOM snapshot + JSON payload snapshot per test. At phase end, re-run and diff. No DOM diff (modulo timestamps/IDs) ⇒ pass.

If `tests/viewer-lockdown/` does not exist when the refactor begins, it **will be captured at phase start** of the first plan touching viewer-relevant code (that is `03-ingestion-path.md` under the current DAG).

### 12 Invariants

- **I1**: Observation list renders without JavaScript console errors.
- **I2**: The filter pane respects the date-window filter — the rendered row count equals the server-reported filtered count.
- **I3**: Session grouping in the observation list matches server-side `session_id` grouping (no visual merge across sessions).
- **I4**: Tag filters (e.g., `<private>`, concept, file) render the same set of rows the API returns for the same query parameters.
- **I5**: `/health` endpoint returns `200` and the viewer's health indicator reflects it.
- **I6**: Static asset caching — `viewer.html` served from memory after boot (no disk re-read on subsequent GETs; see `06-api-surface.md` Phase 6).
- **I7**: `/api/processing-status` stream renders live counts matching SQLite state (the only non-deleted diagnostic endpoint, per `06-api-surface.md` Phase 7).
- **I8**: Deleted diagnostic endpoints (`/api/pending-queue*`) return `404`, not `200` with a fallback body.
- **I9**: Malformed `POST` bodies surface a `400` response with Zod field errors visible to the viewer's error toast, not a silent `500`.
- **I10**: Chroma-down search renders a `503` error state in the viewer (not an empty result list, not a "fell back" banner).
- **I11**: Observation detail pane renders byte-identical text to the `renderObservations(obs, humanConfig)` snapshot (ties to the `04-read-path.md` byte-equality snapshot test).
- **I12**: Privacy tags (`<private>...</private>`) are stripped at hook layer before reaching the viewer — no `<private>` text appears in any rendered row.

### 11 Tests

- **T1** — load `/` → assert I1 (no console errors) + I5 (health 200).
- **T2** — apply a 7-day date-window filter → assert I2.
- **T3** — load a session with 3 distinct child sessions → assert I3.
- **T4** — query by concept tag → assert I4.
- **T5** — kill Chroma, issue a search → assert I10 (503 rendered, no fallback).
- **T6** — GET `/api/pending-queue` → assert I8 (404).
- **T7** — GET `/api/pending-queue/process` → assert I8 (404).
- **T8** — POST malformed body to `/api/observations` → assert I9 (400 + Zod field errors).
- **T9** — boot worker, GET `viewer.html` twice; block disk read between GETs → assert I6 (second GET succeeds from memory).
- **T10** — render a fixture observation set with a known human-config snapshot → assert I11 (byte-identity).
- **T11** — ingest a transcript line containing `<private>secret</private>` → assert I12 (the substring "secret" is absent from any viewer response body).
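
As an example of how thin these tests stay, T8 as a plain `fetch` assertion — the port comes from plan 06, the body shape is illustrative:

```ts
// T8 — malformed body must surface Zod issues, not a 500
const res = await fetch('http://127.0.0.1:37777/api/observations', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ content: 42 }), // session_id missing, content wrong type
});
console.assert(res.status === 400, 'expected 400');
const body = await res.json();
console.assert(body.error === 'ValidationError' && Array.isArray(body.issues));
```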

`/api/processing-status` is exercised by T1 (load includes the status stream), covering I7 without an additional test.

## Integration tests

Consolidated across all plans. Each test cites the plan that introduces the behavior under test.

- **IT1** — Kill worker mid-claim → next worker picks up the row. Source: `01-data-integrity.md` Phase 3 (self-healing claim query).
- **IT2** — `kill -9 <worker-pid>` → next hook respawns worker; no orphan children remain. Source: `02-process-lifecycle.md` Phase 8 (lazy-spawn wrapper).
- **IT3** — Graceful `SIGTERM` to worker → all SDK children exit within 6s via process-group teardown. Source: `02-process-lifecycle.md` Phase 3 (process-group shutdown cascade).
- **IT4** — Drop JSONL with `tool_use` line and no matching `tool_result` → row stays pending, pairing JOIN returns zero pairs, no observation emitted, no crash. Source: `03-ingestion-path.md` Phase 6 (fuzz test 1).
- **IT5** — Drop JSONL with `tool_result` referencing an unknown `tool_use_id` → row inserted, debug log emitted, no phantom observation, no crash. Source: `03-ingestion-path.md` Phase 6 (fuzz test 2).
- **IT6** — Chroma down → search returns `503` with non-empty error body (not empty result, not `fellBack: true`). Source: `04-read-path.md` Phase 5 + 6.
- **IT7** — `renderObservations` byte-identity snapshot test against `AgentFormatter`/`HumanFormatter`/`ResultFormatter`/`CorpusRenderer` fixtures. Source: `04-read-path.md` Phase 1 + 2.
- **IT8** — Block worker port; hook exits `0` first time, exits `0` second time with `consecutiveFailures: 2` on disk, exits `2` on the third call; unblock and invoke once more → counter reset to `0`. Source: `05-hook-surface.md` Phase 8.
- **IT9** — Session end hook issues a single `POST /api/session/end` that blocks until `summaryStoredEvent` fires; request count == 1, no polling. Source: `05-hook-surface.md` Phase 3.
- **IT10** — Malformed `POST /api/observations` body → `400` with `{ error: 'ValidationError', issues: [...] }` (not 500, not silent pass). Source: `06-api-surface.md` Phase 2 + 3.
- **IT11** — First request for `viewer.html` after boot loads from disk; second request while disk-read is blocked still succeeds from memory. Source: `06-api-surface.md` Phase 6.

## Acceptance criteria

The refactor ships when **all** of the following pass:

1. Every grep target in §"Full grep target list" returns its expected count (0, 1, or "0 or 1" per the inline spec). No exceptions.
2. Every integration test in §"Integration tests" (IT1 through IT11) passes.
3. The prompt-caching cost smoke test in §"Prompt-caching cost smoke test" passes: `cache_read_input_tokens > 0` on calls 2 and 3, and `cache_read_input_tokens / input_tokens ≥ 0.5` on calls 2 and 3.
4. The viewer regression harness in §"Viewer regression harness" passes: all 12 invariants hold, all 11 tests green, DOM diff modulo timestamps/IDs is empty against the captured baseline.
5. `npm run build` succeeds.
6. The full unit test suite (`tests/`) passes.
7. **Net lines deleted ≥ ~3,800** across the new corpus compared to the pre-refactor baseline (target from `_rewrite-plan.md` line 21).

If any one criterion fails, the refactor does not ship. Plans whose verification greps or integration tests regress are sent back for revision per the DAG in `98-execution-order.md`.
@@ -0,0 +1,243 @@
# PATHFINDER-2026-04-22 Mapping

Section-by-section mapping from the old `PATHFINDER-2026-04-21/` corpus to the new `PATHFINDER-2026-04-22/` corpus. Every plan author cites this document to know what old content flows where, what mutates, what gets deleted.

**Verification date**: 2026-04-22. Produced by Phase 0 Agent A after full read of all 12 old plans + 9 supporting docs.

---

## Legend

- **KEEP** — flows into new plan as-is (or near-as-is)
- **REWRITE** — concept migrates but under cleaner principles
- **DELETE** — no longer needed (second-system effect, happy-path violation, obsolete)
- **SPLIT** — portions go to multiple new plans

---

## Old Plan 01: privacy-tag-filtering

| Old section | Verdict | New location |
|---|---|---|
| Overview | KEEP | `00-principles.md` §Fail-fast tag-stripping closure |
| Dependencies | KEEP | `03-ingestion-path.md` §Dependencies |
| Verified facts V7a-V7k | REWRITE | `03-ingestion-path.md` §Concrete findings (citing `_reference.md`) |
| Concrete target signatures | KEEP | `03-ingestion-path.md` §Phase 1 (single-regex alternation) |
| Phase 1: Write parseAgentXml | KEEP | `03-ingestion-path.md` §Phase 1 |
| Phase 1b: Update agent contract | KEEP | `03-ingestion-path.md` §Phase 1b |
| Phase 2: Replace parse path in ResponseProcessor | KEEP | `03-ingestion-path.md` §Phase 2 |
| Phase 3: Remove `consecutiveSummaryFailures` | KEEP | `03-ingestion-path.md` §Phase 3 |
| Phase 4: Verification sweep | KEEP | `03-ingestion-path.md` §Phase 4 |
| Blast radius | REWRITE | `03-ingestion-path.md` §Files modified (condensed) |

**Net**: ~135 LoC deleted, ~35 LoC added.

---

## Old Plan 02: sqlite-persistence

| Old section | Verdict | New location |
|---|---|---|
| Overview / Scope | REWRITE | `01-data-integrity.md` §Scope |
| Dependencies | KEEP | `01-data-integrity.md` §Dependencies |
| Verified facts | REWRITE | `01-data-integrity.md` §Concrete findings |
| Phase 1: Add `schema.sql` | KEEP | `01-data-integrity.md` §Phase 1 (fresh schema, constraints, triggers) |
| Phase 2: Add `chroma_synced` | KEEP | `01-data-integrity.md` §Phase 2 |
| Phase 3: Migrate to UNIQUE | KEEP | `01-data-integrity.md` §Phase 3 |
| Phase 4: Boot-once `recoverStuckProcessing` | **DELETE** | Violates "no recovery code" principle. Replaced by the self-healing claim query in `01-data-integrity.md` §Phase 3. |
|
||||
| Phase 5: WAL housekeeping deletion | KEEP | `01-data-integrity.md` §Phase 5 (rely on SQLite default `wal_autocheckpoint=1000`) |
|
||||
|
||||
**Net**: ~140 LoC source-only reduction, +~295 LoC for fresh `schema.sql`.
|
||||
|
||||
---
|
||||
|
||||
## Old Plan 03: response-parsing-storage
|
||||
|
||||
**Heavy overlap with Plan 01.** Plans 01 and 03 both define `parseAgentXml` and touch `ResponseProcessor`. Recommendation: **consolidate Plan 03's unique content (atomic TX, `summaryStoredEvent` wiring) into the new `03-ingestion-path.md`, delete Plan 03 as a standalone artifact.**
|
||||
|
||||
| Old section | Verdict | New location |
|
||||
|---|---|---|
|
||||
| Overview / Dependencies | KEEP | `03-ingestion-path.md` §Dependencies |
|
||||
| Verified facts V7a-V7k | REWRITE | `03-ingestion-path.md` §Concrete findings (deduplicated with Plan 01) |
|
||||
| Phase 1: parseAgentXml in parser.ts | **DELETE** | Duplicate of old Plan 01 Phase 1 |
|
||||
| Phase 1b: Agent contract update | **DELETE** | Duplicate of old Plan 01 Phase 1b |
|
||||
| Phase 2: Replace parse path | REWRITE | Merged into `03-ingestion-path.md` §Phase 2 (add `summaryStoredEvent` emission) |
|
||||
| Phase 3: Remove `consecutiveSummaryFailures` | **DELETE** | Duplicate of old Plan 01 Phase 3 |
|
||||
| Phase 4: Verification sweep | REWRITE | Merged with Plan 01 sweep into `03-ingestion-path.md` §Phase 4 |
|
||||
|
||||
---
|
||||
|
||||
## Old Plan 04: vector-search-sync
|
||||
|
||||
| Old section | Verdict | New location |
|
||||
|---|---|---|
|
||||
| Overview / Scope | REWRITE | `01-data-integrity.md` §Chroma sync |
|
||||
| Dependencies | KEEP | `01-data-integrity.md` §Dependencies |
|
||||
| All 6 phases | REWRITE | `01-data-integrity.md` §Phase 6-8 (one-doc-per-observation, upsert-not-delete, `chroma_synced` column, backfill at boot) |
|
||||
| `getExistingChromaIds` `@deprecated` fence | **DELETE** | Violates "no dead code" principle. Gone in same PR. |
|
||||
|
||||
**Net**: ~320 LoC deleted, ~60 LoC added.
|
||||
|
||||
---
|
||||
|
||||
## Old Plan 05: context-injection-engine
|
||||
|
||||
| Old section | Verdict | New location |
|
||||
|---|---|---|
|
||||
| Overview | REWRITE | `04-read-path.md` §Unified rendering |
|
||||
| Dependencies | KEEP | `04-read-path.md` §Dependencies |
|
||||
| Four RenderStrategy classes | **DELETE** | Strategies collapse to ONE config object with four literals — violates "no speculative abstraction" principle |
|
||||
| Phase 1: Create `renderObservations(obs, strategy)` | KEEP | `04-read-path.md` §Phase 1 (extract common walk, accept `RenderStrategy` config) |
|
||||
| Phases 2-5: Delete old formatters, wire consumers | KEEP | `04-read-path.md` §Phases 2-5 |
|
||||
| Phase 6: Verification | KEEP | `04-read-path.md` §Verification (byte-equality snapshot) |
|
||||
| Phase 7: Prompt-caching cost note | REWRITE | `99-verification.md` §Cost smoke test gate |
|
||||
|
||||
**Net**: ~1,250 LoC deleted, ~320 LoC added.
|
||||
|
||||
---
|
||||
|
||||
## Old Plan 06: hybrid-search-orchestration
|
||||
|
||||
| Old section | Verdict | New location |
|
||||
|---|---|---|
|
||||
| Overview | REWRITE | `04-read-path.md` §Search consolidation |
|
||||
| Dependencies | KEEP | `04-read-path.md` §Dependencies |
|
||||
| Verified facts | REWRITE | `04-read-path.md` §Concrete findings |
|
||||
| All 7 phases | KEEP | `04-read-path.md` §Phases 6-12 (delete `SearchManager.findBy*`, consolidate recency filter, route through `SearchOrchestrator`) |
|
||||
| Silent-fallback to filter-only | **DELETE** | Violates "fail-fast" — Plan 04 §Phase 6 throws 503 on Chroma error |
|
||||
|
||||
**Net**: ~1,700 LoC deleted, ~40 LoC added.
|
||||
|
||||
---
|
||||
|
||||
## Old Plan 07: session-lifecycle-management — NEEDS REWRITE WHOLESALE
|
||||
|
||||
This is the plan that carried all the lifecycle debt. Almost every section maps to DELETE or REWRITE.
|
||||
|
||||
| Old section | Verdict | New location |
|
||||
|---|---|---|
|
||||
| Overview / Scope | REWRITE | `02-process-lifecycle.md` §Scope (lazy-spawn from hooks, process groups, no supervisor, no reapers, no idle-shutdown) |
|
||||
| Dependencies | KEEP | `02-process-lifecycle.md` §Dependencies |
|
||||
| Concrete findings (ProcessRegistry, SessionManager) | REWRITE | `02-process-lifecycle.md` §Concrete findings |
|
||||
| Mechanism A: Exit handlers | KEEP | `02-process-lifecycle.md` §Mechanism A (retains `child.on('exit')` as authoritative) |
|
||||
| Mechanism B: Per-session `abandonedTimer` setTimeout | **DELETE** | Polling loop in timer clothing. Replaced by synchronous cleanup in `generatorPromise.finally` |
|
||||
| Mechanism C: Boot-once reconciliation block | **DELETE** | `recoverStuckProcessing`, `killSystemOrphans`, `pruneDeadEntries`, `clearFailedOlderThan` — all violate "no recovery code" |
|
||||
| Phase 1: Ingest helpers | SPLIT | Helpers (`ingestObservation`, `ingestPrompt`, `ingestSummary`) move to `03-ingestion-path.md` §Phase 0 (prerequisite) |
|
||||
| Phase 2-7: Process lifecycle | REWRITE | `02-process-lifecycle.md` §Phases 1-8 |
|
||||
| Phase 8: Verification | KEEP | `02-process-lifecycle.md` §Verification (zero setInterval grep, process-group kill test) |
|
||||
|
||||
**Net**: ~900 LoC deleted, ~400 LoC added, massive cleanup of second-system content.
|
||||
|
||||
---
|
||||
|
||||
## Old Plan 08: transcript-watcher-integration
|
||||
|
||||
| Old section | Verdict | New location |
|
||||
|---|---|---|
|
||||
| All content | KEEP | `03-ingestion-path.md` §Phases 5-9 (recursive `fs.watch`, `pendingTools` → DB UNIQUE, HTTP loopback → direct `ingestObservation`) |
|
||||
|
||||
**Net**: ~161 LoC deleted, ~75 LoC added.
|
||||
|
||||
---
|
||||
|
||||
## Old Plan 09: lifecycle-hooks

| Old section | Verdict | New location |
|---|---|---|
| Overview / Scope | REWRITE | `05-hook-surface.md` §Scope (10 endpoints → 4, cache alive once, blocking `/api/session/end`) |
| Endpoint reality check | KEEP | `05-hook-surface.md` §Endpoint inventory |
| Hook → endpoint mapping | KEEP | `05-hook-surface.md` §Mapping table |
| Phases 1-7: Delete legacy endpoints, consolidate | KEEP | `05-hook-surface.md` §Phases 1-7 |
| Summarize polling loop | **DELETE** | Violates "fail-fast" — `05-hook-surface.md` §Phase 3 replaces it with a blocking endpoint |
| Shell retry loops in hooks.json | **DELETE** | Violates DRY + "no retry in hooks" — `05-hook-surface.md` §Phase 1 deletes them |

**Net**: ~487 LoC deleted, ~25 LoC added.

---
## Old Plan 10: knowledge-corpus-builder

| Old section | Verdict | New location |
|---|---|---|
| All content | KEEP | `04-read-path.md` §Phases 13-18 (delete session_id, delete the prime/reprime auto-reprime regex, rewrite `/query` with systemPrompt) |

**Net**: ~228 LoC deleted, ~30 LoC added.

---
## Old Plan 11: http-server-routes

| Old section | Verdict | New location |
|---|---|---|
| Overview | REWRITE | `06-api-surface.md` §Scope (Zod middleware, delete rate limiter, cache static files) |
| Anti-patterns | KEEP | `06-api-surface.md` §Anti-patterns |
| Phase 1: Zod dependency | KEEP | `06-api-surface.md` §Phase 1 (preflight: `npm install zod@^3.x`) |
| Phases 2-8: validateBody middleware, schemas, cache, oversize, verification | KEEP | `06-api-surface.md` §Phases 2-8 |
| Diagnostic endpoint deletions | SPLIT | `/api/pending-queue/*` deletions move to `06-api-surface.md` §Phase 9 |

**Net**: ~180 LoC deleted, ~60 LoC added.

---
## Old Plan 12: viewer-ui-layer

| Old section | Verdict | New location |
|---|---|---|
| Plan type (lockdown/regression) | KEEP | `99-verification.md` §Viewer lockdown |
| Phases 1-6: Inventory, invariants, regression tests | KEEP | `99-verification.md` §Phases 1-6 |

**Net**: 0 LoC source change; 12 regression artifacts under `tests/viewer-lockdown/`.

---
## Supporting documents

| Old file | Verdict | New location |
|---|---|---|
| `00-features.md` | KEEP as audit trail | Archive to `PATHFINDER-2026-04-21/_archive/` (reference only) |
| `02-duplication-report.md` | KEEP as audit trail | Archive |
| `03-unified-proposal.md` | KEEP as audit trail | Archive |
| `04-handoff-prompts.md` | REWRITE | Becomes per-plan "how to run this" blocks in each new plan |
| `05-clean-flowcharts.md` | KEEP as source of truth | Flowcharts cited by new plans; the file itself is archived |
| `06-implementation-plan.md` Phase 0 (V1-V20) | KEEP | Merged into `_reference.md` |
| `06-implementation-plan.md` Phases 1-15 | **DELETE** | Superseded by the per-plan structure |
| `07-master-plan.md` | REWRITE | Becomes `98-execution-order.md` |
| `08-reconciliation.md` | REWRITE | Merged into `98-execution-order.md` |
| `09-execution-runbook.md` | REWRITE | Merged into `98-execution-order.md` (DAG + preflight + post-landing grep) |

---
## Orphan content

**Archive `PATHFINDER-2026-04-21/` wholesale once the new corpus lands.** No orphans — every section either maps to a new plan or goes to the archive. If the new corpus passes the Phase 7 principle cross-check, the old directory becomes pure history.

---
## Cross-plan coupling points

| Shared invariant | Owner (new corpus) | Consumers |
|---|---|---|
| `stripMemoryTags` single-regex | `03-ingestion-path.md` §Phase 1 | All ingestion paths |
| `ingestObservation`/`ingestPrompt`/`ingestSummary` helpers | `03-ingestion-path.md` §Phase 0 | Transcript watcher, hook handlers, worker routes |
| `chroma_synced` column + boot-once backfill | `01-data-integrity.md` §Phase 2 | Chroma sync module |
| `UNIQUE(session_id, tool_use_id)` | `01-data-integrity.md` §Phase 3 | `PendingMessageStore`, transcript processor |
| `summaryStoredEvent` emission | `03-ingestion-path.md` §Phase 2 | `05-hook-surface.md` §Phase 3 (blocking endpoint awaits this event) |
| `renderObservations(obs, strategy)` | `04-read-path.md` §Phase 1 | All formatters, search results, corpus detail |
| `RECENCY_WINDOW_MS` constant | `types.ts:16` (already exists; consolidation in `04-read-path.md` §Phase 3) | Every search/filter call site |
| Process-group spawn + `kill(-pgid)` | `02-process-lifecycle.md` §Mechanism A | `ProcessRegistry` (deleted), `supervisor/process-registry.ts` (kept) |
| Zod schemas + `validateBody` middleware | `06-api-surface.md` §Phase 2 | All POST/PUT route handlers |

---
## Gaps to resolve before plan authoring

1. **Plan 01 / Plan 03 overlap** — the new `03-ingestion-path.md` must merge their unique content cleanly. Authoring checkpoint: one `parseAgentXml` definition, one `ResponseProcessor` modification path.
2. **Plan 07 Phase 1 co-ownership** — the ingest helpers land BEFORE `03-ingestion-path`'s other phases. Mark them as Phase 0 of `03-ingestion-path`.
3. **Prompt-caching cost smoke test** — gate before the `04-read-path` knowledge-corpus phases land. Verification lives in `99-verification.md`.
4. **`engines.node >= 20.0.0` bump** — preflight for `03-ingestion-path` recursive `fs.watch`.
5. **`npm install zod@^3.x`** — preflight for the `06-api-surface` Zod middleware.
6. **Chroma upsert fallback flag** — `01-data-integrity.md` §Chroma must gate behind a flag documented here.

---

**Status: READY FOR CORPUS AUTHORING.** Every new-plan author knows their scope, sources, and cross-plan couplings.
@@ -0,0 +1,270 @@

# Phase 7 — Principle Cross-Check

**Reviewer**: Phase 7 meta-review subagent
**Date**: 2026-04-22
**Scope**: Corpus files in `PATHFINDER-2026-04-22/`, excluding `_rewrite-plan.md`, `_reference.md`, `_mapping.md`.
**Corpus under review**: `00-principles.md`, `01-data-integrity.md`, `02-process-lifecycle.md`, `03-ingestion-path.md`, `04-read-path.md`, `05-hook-surface.md`, `06-api-surface.md`, `07-dead-code.md`, `98-execution-order.md`, `99-verification.md`.

## Summary verdict

**PASS** — 0 violations across all 7 checks.

---
## Check 1 — Dangerous identifiers (`recover|reap|heal|repair|orphan|coerce|fallback`, case-insensitive)

**Total hits**: 96 across the corpus (the 10 corpus files plus supporting docs). Every hit in a corpus file falls into one of: DELETE context, NEVER-ADD guard, canonical example, glossary definition, or an invariant (the self-healing claim) that is explicitly the new primary path. No hit advocates a new recovery / coerce / silent-fallback pattern.
### 00-principles.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 9 | "No **recovery** code" | Principle 1 statement | OK (principle) |
| 10 | "circuit-break, **coerce**, or silently fall back" | Principle 2 statement | OK (principle) |
| 13 | "process groups over hand-rolled **reapers**" + "**orphan** sweeps" | Principle 5 statement | OK (principle) |
| 22 | "No new `**coerce***`, `**recover***`, `**heal***`, `**repair***`, `**reap***`, `kill*Orphans*` function names" | Anti-pattern guard | OK (NEVER-ADD) |
| 23 | "try/catch that swallows errors and returns a **fallback** value" | Anti-pattern guard | OK (NEVER-ADD) |
| 24 | "new schema column whose only purpose is to feed a **recovery** query" | Anti-pattern guard | OK (NEVER-ADD) |
| 26 | "HTTP endpoint for diagnostic / manual-**repair** purposes" | Anti-pattern guard | OK (NEVER-ADD) |
| 40 | "**Orphan** **reapers**, idle-evictors, **fallback** agents" | Inventory of DELETEd mechanisms | OK (DELETE) |
| 41 | "`**repair**MalformedSchema`" + "self-**heal**ing claim" | DELETE target + canonical example (self-heal is the new invariant) | OK (DELETE + canonical) |
| 43 | "`**coerce**ObservationToSummary`, circuit breaker" | DELETE target | OK (DELETE) |
| 44 | "`@deprecated` dead classes" + "**repair**MalformedSchema" | DELETE targets | OK (DELETE) |
| 51–53 | Glossary: "lease pattern," "self-**healing** claim," "fail-fast contract" | Definitions of canonical new patterns (the self-healing claim is the approved replacement invariant; the lease pattern is a concept definition) | OK (canonical example / glossary) |

### 01-data-integrity.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 11 | "`**recover**StuckProcessing`, `clearFailedOlderThan` interval, `**repair**MalformedSchema` all hide bugs. They are deleted" | Principle 1 application | OK (DELETE) |
| 12 | "Chroma conflict errors surface through a narrow, flagged **fallback**; the rest throws" | Scoped + flagged bridge, documented as non-permanent | OK (canonical bridge, gated by the `CHROMA_SYNC_FALLBACK_ON_CONFLICT` flag with a removal commitment at line 282) |
| 15 | "self-**heal**ing claim is event-driven" | Canonical new invariant name | OK (canonical) |
| 30, 37, 71, 96, 98, 104, 106, 127, 129 | "self-**heal**ing claim" / "self-**heal** block" | Canonical invariant naming; the "self-heal block" is the DELETE target within `claimNextMessage` | OK (canonical + DELETE) |
| 165, 180 | `clearFailedOlderThan` interval | DELETE target | OK (DELETE) |
| 187, 189, 192–197, 262 | `**repair**MalformedSchema` | DELETE target (Phase 6) | OK (DELETE) |
| 206, 239, 282 | "Chroma upsert **fallback**" + `CHROMA_SYNC_FALLBACK_ON_CONFLICT` | Flag-gated, bridge-only, documented for removal | OK (justified bridge) |
| 274 | "Do NOT keep `**recover**StuckProcessing()` … any identifier matching `**recover***`, `**heal***`, or `**repair***` that survives must be in a DELETE context" | NEVER-ADD guard | OK (NEVER-ADD) |
| 275 | "No `setInterval`, no `setTimeout` loop" | Backfill design constraint | OK (NEVER-ADD) |
| 276 | "Do NOT add '**repair**' CLI commands" | NEVER-ADD guard | OK (NEVER-ADD) |
### 02-process-lifecycle.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 11 | "**Orphan** sweeps, idle-evictors, stale-session **reapers** are recovery code papering over a spawn bug" | Principle 1 application | OK (DELETE rationale) |
| 12 | "Gemini → OpenRouter **fallback** chain hides SDK failures. Delete it" | DELETE target | OK (DELETE) |
| 13 | "Delete the 30-second **orphan**-**reaper** interval, the stale-session **reaper** interval" | DELETE targets | OK (DELETE) |
| 14 | "`killSystemOrphans`, `killIdleDaemonChildren`, `**reap**OrphanedProcesses`, `**reap**StaleSessions`" | DELETE list | OK (DELETE) |
| 27 | "`**reap**OrphanedProcesses`" | DELETE target (file anchor) | OK (DELETE) |
| 37 | "`**reap**OrphanedProcesses() { /* three-layer sweep */ }`" | Before-snippet in a DELETE diff | OK (DELETE) |
| 46 | "There is no ppid sweep, no **orphan** **reaper**, no 'shadow' registry" | After-state assertion | OK (NEVER-ADD) |
| 55 | "OS primitive that makes **orphan** **reap**ing unnecessary" | Rationale | OK (rationale) |
| 118, 120, 128, 129, 136 | "Delete all **reaper** intervals" + `**reap**OrphanedProcesses` / `**reap**StaleSessions` / `reapStaleSessions()` | DELETE targets | OK (DELETE) |
| 146–147 | "no **reap**ers" + "Phase 2 process groups prevent **orphan**s" | After-state comment | OK (NEVER-ADD) |
| 208, 210, 213 | "Delete **fallback** agent chain (Gemini → OpenRouter)" + `**fallback**Agent` references | DELETE target (Phase 7) | OK (DELETE) |
| 233 | "no silent **fallback**s" | Reference to principle 2 | OK (NEVER-ADD) |
| 243 | "`detached: true` **fallback**" | Documented OS-level spawn primitive (daemon spawn pattern reference, not a silent-error fallback) | OK (canonical spawn primitive) |
| 341, 346, 347, 350, 353, 361 | Grep-zero checks for `**reap**StaleSessions`, `**reap**OrphanedProcesses`, `**fallback**Agent\|Gemini\|OpenRouter`, `**orphan** children`; "Do NOT keep `killSystemOrphans`" | Verification / NEVER-ADD | OK (DELETE verification + NEVER-ADD) |

### 03-ingestion-path.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 13 | "`**coerce**ObservationToSummary` exists only to **recover** from LLM contract violations. Fix the contract, delete the coercion helper" | Principle 1 application | OK (DELETE) |
| 18 | "`**coerce**ObservationToSummary`, `pendingTools` Map, `TranscriptParser` class — all delete in the same PR" | DELETE list | OK (DELETE) |
| 64, 68, 84, 101, 129, 153, 155, 158, 163, 362 | `**coerce**ObservationToSummary` | DELETE target (Phase 4) + before-snippet + verification grep-zero | OK (DELETE) |
| 166 | "no `@deprecated` fence, no 'remove next release'" | Anti-pattern reminder | OK (NEVER-ADD) |
| 316 | "the dead class deletes now — not fenced with `@deprecated`" | Anti-pattern reminder | OK (NEVER-ADD) |
| 371 | "fuzz test: drop a JSONL file with an **orphan** tool_use" | Test-case name describing input data, not a pattern to implement | OK (test vocabulary) |
| 384 | "Do NOT ship a polling **fallback** for `fs.watch`" | NEVER-ADD guard | OK (NEVER-ADD) |
| 389 | "No new `**coerce***`, `**heal***`, `**recover***`, `**repair***` function name" | NEVER-ADD guard | OK (NEVER-ADD) |
### 04-read-path.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 11 | "`SearchOrchestrator` throws `503` on Chroma error. … three try/catch **fallback**s that returned metadata-only are deleted" | Principle 2 application | OK (DELETE) |
| 13 | "the `fell**Back**: true` flag path, the `@deprecated getExistingChromaIds` fence … all delete in the PR" | DELETE list | OK (DELETE) |
| 86, 94, 98, 103, 108, 178, 183, 194 | "fell**Back**: true" / "silent **fallback**s" / "three near-identical methods … try/catch **fallback** to metadata-only" / "metadata-only **fallback**" / "Do NOT add a feature flag to 'disable fail-fast Chroma'" | DELETE targets + NEVER-ADD guard | OK (DELETE + NEVER-ADD) |
| 126 | "After Phase 2 deletes both classes, their `estimateTokens` helpers would **orphan**" | English verb (the consolidated helpers would be orphaned), not a pattern | OK (narrative language) |
| 198–200 | "No new `**coerce***`, `**recover***`, `**heal***`, `**repair***` function names" / "try/catch that swallows errors and returns a **fallback** value" | NEVER-ADD guards | OK (NEVER-ADD) |
| 208 | "read-path `503` is correct even while the write-path **fallback** remains active" | Explicitly scoped to the write-path Chroma bridge (owned by 01) | OK (canonical bridge) |

### 05-hook-surface.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 77 | "the request/**fallback** sequence has one implementation; eight handlers import it. No handler reimplements the 'worker missing → exit gracefully' path" | Describes the single helper that handles worker-unreachable — explicitly the non-silent path that escalates to exit code 2. "Fallback" here means "alternative path," not "silent recovery" | OK (canonical single-helper description; the handler uses exit-code escalation per principle 2) |

### 06-api-surface.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 9 | "pending-queue diagnostic endpoints exist to poke at rows a correct ingestion path should never leave behind. Deleting them is the cure" | Principle 1 application | OK (DELETE) |
| 87 | "grep-and-delete every … `**coerce***` helper across route files" | DELETE directive | OK (DELETE) |
| 97 | "Claim-side contention → `01-data-integrity.md` Phase 3 (self-**heal**ing claim)" | Canonical invariant reference | OK (canonical) |
| 100 | "'No new HTTP endpoint for diagnostic / manual-**repair** purposes' — the rate limiter is the HTTP-handler analogue" | NEVER-ADD guard citation | OK (NEVER-ADD) |
| 114 | "principle 1 (no watcher-plus-TTL 'cache-invalidation' **recovery** code)" | Principle 1 rationale | OK (NEVER-ADD) |
| 126 | "KEEP `/api/processing-status` … not a **repair** lever. It reads and reports" | Definition of what is kept (non-repair) | OK (boundary statement) |
| 129 | "'No new HTTP endpoint for diagnostic / manual-**repair** purposes' — the deletions here are that guard applied retroactively" | NEVER-ADD guard citation | OK (NEVER-ADD) |

### 07-dead-code.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 33, 45, 81, 135, 144, 159 | `@deprecated` identifiers / fences | All DELETE directives or NEVER-ADD guards | OK (DELETE + NEVER-ADD) |

### 98-execution-order.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 14 | "self-**heal**ing claim" | Canonical invariant name | OK (canonical) |
| 20 | "catches **orphan**ed exports / commented-out blocks / dead migrations" | Sweep-plan scope language | OK (narrative about the dead-code sweep, not a pattern) |
| 154 | "self-**heal**ing claim query" | Canonical invariant reference | OK (canonical) |
| 174 | "Chroma upsert **fallback** is brittle" | Documented bridge with a flag + removal commitment | OK (justified bridge) |
| 177 | "lazy-spawn wrapper needs a retry **strategy**" — resolved to a hand-rolled 3-attempt retry | Describes the decision (hand-rolled logic, no new class) | OK (narrative describing the resolution) |
### 99-verification.md

| Line | Matched text | Context | Verdict |
|---|---|---|---|
| 50, 53, 54, 58, 59, 71, 78, 79 | Grep-zero checks for `**recover**StuckProcessing`, `killSystem**Orphan**s`, `**reap**StaleSessions`, `**reap**OrphanedProcesses`, `killIdleDaemonChildren`, `**fallback**Agent`, `**repair**MalformedSchema`, `**coerce**ObservationToSummary` | Verification (must return 0) | OK (DELETE verification) |
| 161 | "I5: `/health` endpoint" — "**heal**" appears only inside the word "health" | Substring match on the endpoint name, not a recovery/heal pattern | OK (substring, not the pattern the rule targets) |
| 164 | "Deleted diagnostic endpoints return `404`, not `200` with a **fallback** body" | Verification that NO silent fallback exists | OK (NEVER-ADD verification) |
| 176 | "kill Chroma, issue a search → 503 rendered, no **fallback**" | Verification of no silent fallback | OK (NEVER-ADD verification) |
| 191 | "no **orphan** children remain" | Integration-test assertion | OK (verification) |

**Verdict**: PASS. Every hit is a DELETE context, a NEVER-ADD guard, a canonical example (the self-healing claim, lease pattern, and fail-fast contract in the glossary), or the scoped + flagged Chroma-upsert bridge with documented removal.

---
## Check 2 — Timers (`setInterval|setTimeout`)

**Total hits**: 35 across the corpus (excluding support docs). Every hit is a DELETE target OR the explicitly justified per-operation kill-escalation `setTimeout` in `src/supervisor/shutdown.ts` (the SIGTERM→SIGKILL 5-second escalator — non-repeating, bound to a specific operation, disposed in-scope).

### Per-file breakdown

| File | Hits | Classification |
|---|---|---|
| 00-principles.md | 2 (lines 12, 21) | Principle 4 statement + NEVER-ADD guard for `src/services/worker/` |
| 01-data-integrity.md | 3 (lines 165, 180, 275) | DELETE + NEVER-ADD ("no `setInterval`, no `setTimeout` loop" for the Chroma backfill) |
| 02-process-lifecycle.md | 7 (lines 13, 124, 135, 138, 155, 166, 341) | All DELETE targets (reaper intervals, `abandonedTimer` setTimeout) + verification grep-zero |
| 03-ingestion-path.md | 8 (lines 16, 174, 177, 194, 209, 346, 365, 390) | DELETE targets (the 5-second rescan `setInterval` at watcher.ts:124-132) + verification + NEVER-ADD guard |
| 05-hook-surface.md | 1 (line 107) | `const timer = setTimeout(...)` in the consecutive-failure-counter code snippet — the narrowly scoped per-operation timer in the hook (see below) |
| 06-api-surface.md | 1 (line 141) | DELETE directive for shutdown wrappers that create `setInterval` callers |
| 99-verification.md | 7 (lines 13, 14, 15, 22, 25, 47, 82) | DELETE targets in the census + explicit justification for the per-operation one-shot `setTimeout` in `src/supervisor/shutdown.ts` (kill escalation) |

**Line 107 of 05-hook-surface.md** — `const timer = setTimeout(...)`: a per-operation timer inside the consecutive-failure escalation code (bounded scope, cleared synchronously, not a repeating background sweep). It matches the "narrowly-justified per-operation" allowance at `99-verification.md:22`.

**Verdict**: PASS. No hit proposes a new repeating background timer in `src/services/worker/` or equivalent. Every repeating timer is a DELETE target. The only non-DELETE mentions are (a) the 5-second shutdown kill-escalation explicitly called out in 99, and (b) the per-operation timer at `05-hook-surface.md:107` (bounded to the request lifecycle).

---
## Check 3 — Strategy/Factory/Builder

**Total hits**: 27 across the corpus (case-insensitive). Every hit is one of: (a) `RenderStrategy` as a **config type** (not a class — explicitly enforced by `04-read-path.md` lines 33, 193); (b) the existing module paths `ChromaSearchStrategy` / `HybridSearchStrategy` (file-system names from existing code); (c) DELETE directives for the four old formatter "strategy classes"; (d) narrative descriptions (e.g., "retry strategy" for hand-rolled retry logic).

### Per-file breakdown

| File | Hits | Classification | Verdict |
|---|---|---|---|
| 00-principles.md | 3 (lines 14, 25, 42) | Principle 6 statement + NEVER-ADD guard + "four formatter classes" = DELETE inventory | OK |
| 04-read-path.md | 15+ | `RenderStrategy` as a config type (not a class) — enforced explicitly at line 33 ("NO abstract class. NO factory. NO `RenderStrategyBase`") and line 193 ("Config object only. No `abstract class RenderStrategy`, no subclass-per-formatter, no factory, no registry"). `ChromaSearchStrategy` / `HybridSearchStrategy` are existing module paths from `src/services/worker/search/strategies/`. DELETE directives for the old per-formatter strategies at lines 100, 103. | OK |
| 05-hook-surface.md | 2 (lines 275, 298) | "CLAUDE.md §Exit Code **Strategy**" — the name of the existing CLAUDE.md section, not a new class | OK |
| 06-api-surface.md | 0 | — | — |
| 07-dead-code.md | 1 (line 17) | Principle 6 quote — NEVER-ADD guard | OK |
| 98-execution-order.md | 4 (lines 160, 175, 177, 178) | `renderObservations(obs, strategy)` references the config type; "retry **strategy**" at 177 resolves to "hand-roll a 3-attempt retry" (no new class); "explicit cache-control **strategy**" at 175 describes a fallback plan, not a proposed abstraction | OK |

**Verdict**: PASS. No hit proposes a new abstract-class / factory / builder layer. `RenderStrategy` is a `type` (object literal), and this is guarded three times in `04-read-path.md`.

---
## Check 4 — Forbidden phrases (`for backward compat|for one release|@deprecated`)

**Total hits**: 24 across the corpus. Every hit is a DELETE directive, a NEVER-ADD guard, or a reference to principle 7.

### Per-file breakdown

| File | Hits | Classification | Verdict |
|---|---|---|---|
| 00-principles.md | 2 (lines 15, 44) | Principle 7 statement + DELETE inventory | OK (NEVER-ADD) |
| 03-ingestion-path.md | 2 (lines 166, 316) | "no `@deprecated` fence, no 'remove next release'" — NEVER-ADD reminder | OK |
| 04-read-path.md | 6 (lines 13, 112, 114, 117, 120, 179) | Phase 7 section DELETES `@deprecated getExistingChromaIds` | OK (DELETE) |
| 06-api-surface.md | 4 (lines 12, 89, 145, 224) | DELETE wrappers in-PR, "not `@deprecated`-fenced"; "Do NOT keep a shutdown wrapper 'for backward compat'" | OK (NEVER-ADD) |
| 07-dead-code.md | 7 (lines 11, 33, 45, 81, 135, 144, 159) | Principle 7 quote + DELETE of residual `@deprecated` fences + NEVER-ADD guard | OK (DELETE + NEVER-ADD) |
| 99-verification.md | 1 (line 119) | Verification grep-zero for `// @deprecated\|// TODO remove\|// old$\|// legacy$` | OK (verification) |

**Verdict**: PASS. Zero advocacy for deprecated-fence or backward-compat retention; every mention is a DELETE directive or a NEVER-ADD guard.

---
## Check 5 — `_reference.md` citations per plan

| Plan | `_reference.md` citations | Verdict |
|---|---|---|
| 00-principles.md | 0 | OK — 00 is the root principles doc; it defines anti-patterns and is cited by every downstream plan. It does not need to cite `_reference.md` because it asserts rules, not facts about specific code anchors. |
| 01-data-integrity.md | 10 | OK |
| 02-process-lifecycle.md | 17 | OK |
| 03-ingestion-path.md | 15 | OK |
| 04-read-path.md | 12 | OK |
| 05-hook-surface.md | 20 | OK |
| 06-api-surface.md | 6 | OK |
| 07-dead-code.md | 0 | ACCEPTABLE — 07 is the dead-code sweep plan. Its targets are identified by downstream DELETE directives in plans 01-06 (each of which cites `_reference.md`). 07 cites `_mapping.md` DELETE rows and runs `ts-prune`/`knip` for residue. Sweeping unused exports does not require line anchors — if a symbol has no callers after 01-06 land, it is dead. |
| 98-execution-order.md | 1 | OK (structural doc; cites it as part of the "how to execute a phase" load list) |
| 99-verification.md | 0 | ACCEPTABLE — 99 is the verification-operational doc. It runs greps and integration tests whose targets are defined by the plans that cite `_reference.md`. Verification targets (e.g., `coerceObservationToSummary` grep → 0) are inherited from plans 01-06, which cite the anchors. |

**Verdict**: PASS. Every plan that touches existing code anchors cites `_reference.md` at least 6 times. The three plans with zero citations (00, 07, 99) are structurally correct: 00 asserts rules, 07 sweeps residue from plans that already cited, and 99 verifies grep-zero against targets already cited.

---
## Check 6 — Mapping completeness

`_mapping.md` accounts for every old `PATHFINDER-2026-04-21` plan (Plans 01 through 12) and every supporting document (`00-features.md`, `02-duplication-report.md`, `03-unified-proposal.md`, `04-handoff-prompts.md`, `05-clean-flowcharts.md`, `06-implementation-plan.md` Phase 0 + Phases 1-15, `07-master-plan.md`, `08-reconciliation.md`, `09-execution-runbook.md`). Each row has a verdict (KEEP / REWRITE / DELETE / SPLIT) and a new-plan destination or an explicit archive location.

Lines 210-212 of `_mapping.md` explicitly assert: "**Archive `PATHFINDER-2026-04-21/` wholesale once the new corpus lands. No orphans** — every section either maps to a new plan or goes to the archive."

No orphan old sections were identified. Plan 03 (response-parsing-storage) is flagged as heavily duplicating Plan 01 — its unique content is consolidated into `03-ingestion-path.md` and the duplicate phases are explicitly DELETE'd (lines 62-66). Plan 07 (session-lifecycle-management) — the heaviest-debt plan — has every mechanism line-item accounted for (Mechanism A KEEP, Mechanisms B/C DELETE, Phase 1 SPLIT to 03 Phase 0, Phases 2-7 REWRITE to 02, Phase 8 KEEP).

**Verdict**: PASS.

---
## Check 7 — DAG in 98-execution-order.md

### Node → incoming edges

- `00` ← ∅
- `01` ← {00}
- `02` ← {00}
- `03` ← {01, 02}
- `04` ← {01}
- `05` ← {02, 03}
- `06` ← {05}
- `07` ← {00, 01, 02, 03, 04, 05, 06}
- `99` ← ∅ (runs alongside, not blocking)

### Confirmations

- **No edge references a non-existent node**: every source of an incoming edge is in the node set {00, 01, 02, 03, 04, 05, 06, 07, 99}. ✓
- **A topological sort exists and is emitted**: `00 → 01 → 02 → 03 → 04 → 05 → 06 → 07`. All edges point strictly forward. ✓
- **All plans 00-07 appear as DAG nodes**: confirmed. ✓
- **99 listed as "runs alongside"**: confirmed (98-execution-order.md lines 21, 102). ✓
- **Acyclicity**: confirmed by the explicit check at line 104: "No back-edges. DAG is acyclic." ✓

**Verdict**: PASS.

---
## Revisions needed

**None.** Every check passes. No plan requires revision before ship.

---
## Overall recommendation

**Ship as-is.** The corpus passes all seven Phase 7 cross-checks with zero violations. Every dangerous-identifier mention (`recover`, `reap`, `heal`, `repair`, `orphan`, `coerce`, `fallback`) is either a DELETE target, a NEVER-ADD guard, a canonical-example glossary entry, or the single flagged + scoped + removal-committed Chroma upsert bridge. Every `setInterval`/`setTimeout` is either a DELETE target or a narrowly scoped per-operation timer justified at `99-verification.md:22`. Every `strategy`/`factory`/`builder` mention either (a) is guarded against class-hierarchy expansion (`04-read-path.md` lines 33, 193), (b) refers to an existing module-path filename, or (c) quotes principle 6 in a NEVER-ADD context. Every `@deprecated` mention is a DELETE directive or a NEVER-ADD guard. Every plan that touches existing code anchors cites `_reference.md` extensively. The mapping accounts for every old section with explicit verdicts. The execution DAG is acyclic with a clean topological sort.

The only residual items that remain operational risks (not review violations) are the five blocking issues already enumerated in `98-execution-order.md` §Blocking issues — these are carried forward with resolution pointers and are not Phase 7 concerns.

**Confidence: HIGH** that this corpus is ready to enter the execution DAG.
@@ -0,0 +1,269 @@

# PATHFINDER-2026-04-22 Reference

Verified API signatures, current-code anchors, and canonical snippets. Every plan in this corpus cites this document for exact file:line anchors and verified APIs.

**Verification date**: 2026-04-22. Anchors verified by direct file read. External APIs verified against documentation and usage patterns.

---
## Correction to prior conversation assumptions

1. **Bun.spawn does NOT support the `detached` option.** `detached: true` is a Node `child_process.spawn` option, not a Bun one.
2. **claude-mem uses Node's `child_process`, not `Bun.spawn`.** Every subprocess spawn in the codebase uses `node:child_process.spawn`/`spawnSync` (verified by cross-check with the Deno migration audit). So `detached: true` + `setsid` IS available to us — through the Node API, not through Bun.
3. **The `respawn` npm package is NOT currently a dependency.** Adding it is a new-dep decision.
4. **`fs.watch(dir, { recursive: true })` on Linux requires Node 20+.** `package.json` currently pins `>=18.0.0`. Preflight: bump to `>=20.0.0`.

---
## Part 1: Current-code anchors

### Data layer
**`src/services/sqlite/PendingMessageStore.ts:99-145` — `claimNextMessage`**

Transaction-wrapped claim. Resets stale rows (`status='processing'` older than `STALE_PROCESSING_THRESHOLD_MS=60_000`) INSIDE the claim transaction. The self-heal block (lines 107-115) is the target of Plan `01-data-integrity` Phase 3.

**`src/services/sqlite/PendingMessageStore.ts:486-495` — `clearFailedOlderThan`**

`DELETE FROM pending_messages WHERE status='failed' AND COALESCE(failed_at_epoch,…) < ?`. Currently called from the 2-minute interval at `worker-service.ts:567`. Moves to boot-once OR gets deleted entirely per Plan `01-data-integrity` Phase 5 (if nothing needs purge, don't purge).
**`src/services/sqlite/PendingMessageStore.ts:349-374` — `markFailed`**

Retry ladder: reads `retry_count`, bumps back to `pending` if `< maxRetries`, marks `failed` otherwise. Principle decision for Plan 01: retry exists for a reason (transient SDK failures); KEEP the ladder but verify `maxRetries` is reasonable (currently 3).

**`src/services/sqlite/Database.ts:37-130` — `repairMalformedSchema`**

Python subprocess fallback when SQLite reports `malformed database schema`. Writes a script to a tempfile, execFileSync. Closes the connection first to avoid lock conflicts. Target of the Plan `01-data-integrity` Phase 6 deletion — this is cross-machine WAL corruption that should be root-caused, not repaired.

**`src/services/sqlite/migrations/runner.ts:621-628` — Migration 19 (DEPRECATED)**

No-op after migration 17 made renames idempotent. Records itself as applied, does nothing. Dead code. Plan `07-dead-code` deletes it with the next schema.sql regeneration.

**`src/services/sqlite/migrations/runner.ts:658-837` — Migration 21 (FK cascade fix)**

Recreates the `observations` + `session_summaries` tables to add `ON UPDATE CASCADE`. Exists because an earlier design allowed `memory_session_id` mutations. Plan `01-data-integrity` §Invariants: `memory_session_id` must be immutable post-creation; if this holds, migration 21 is a one-time historical fix, safe to absorb into `schema.sql`.
**`src/services/sqlite/observations/store.ts:13-46` — `DEDUP_WINDOW_MS` + `findDuplicateObservation`**

30-second content-hash dedup window. Plan `01-data-integrity` Phase 2 replaces it with a DB `UNIQUE(memory_session_id, content_hash)` constraint + `ON CONFLICT DO NOTHING`.
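A minimal sketch of the replacement, assuming a `bun:sqlite`-style synchronous driver (the actual `Database.ts` wrapper may differ, and the `body` column name is illustrative):

```ts
import { Database } from 'bun:sqlite';

const db = new Database('claude-mem.sqlite');

// One-time: enforce dedup in the schema instead of in application code.
db.exec(`
  CREATE UNIQUE INDEX IF NOT EXISTS ux_obs_session_hash
  ON observations (memory_session_id, content_hash)
`);

// The insert becomes idempotent: SQLite silently skips a duplicate row,
// so the 30-second window and its lookup query have nothing left to do.
const insertObservation = db.prepare(`
  INSERT INTO observations (memory_session_id, content_hash, body)
  VALUES (?, ?, ?)
  ON CONFLICT (memory_session_id, content_hash) DO NOTHING
`);

insertObservation.run('sess-123', 'sha256:abc123', '<observation>…</observation>');
```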
**`src/services/sqlite/SessionStore.ts:52-70` — Duplicated migration logic**

Re-calls every `ensure*` / `add*` migration method already owned by `MigrationRunner`. Plan `07-dead-code`: SessionStore delegates to a single `new MigrationRunner(db).runAllMigrations()`.

**`src/services/sync/ChromaSync.ts:290-318` — Delete-then-add reconciliation**

Chroma MCP has no upsert. On an `already exist` error, the code deletes the IDs then re-adds them. Plan `01-data-integrity` §Chroma: document the brittle error-text match; consider guarding behind a flag until Chroma exposes upsert natively.

### Worker / lifecycle

**`src/services/worker/ProcessRegistry.ts:244-309` — `killIdleDaemonChildren`**

Walks `ps -eo` output, filters by `ppid == daemonPid`, kills any child idle > 1 minute. Used by the 30s-interval `startOrphanReaper`. Plan `02-process-lifecycle` DELETES the function body — replaced by process-group teardown.

**`src/services/worker/ProcessRegistry.ts:315-344` — `killSystemOrphans`**

ppid=1 sweep matching `claude.*haiku|claude.*output-format`. Plan `02-process-lifecycle` DELETES — orphans are prevented by process-group spawning, not swept.

**`src/services/worker/ProcessRegistry.ts:349-382` — `reapOrphanedProcesses`**

Three-layer cleanup (registry-tracked, ppid=1, idle daemon children). DELETES wholesale.

**`src/services/worker/ProcessRegistry.ts:452-465` — spawn site for Claude SDK children**

Currently uses `spawn(command, args, { stdio: 'pipe', … })` with NO `detached` and NO process group. Plan `02-process-lifecycle` Phase 2: change to `spawn(cmd, args, { detached: true, stdio: ['ignore','pipe','pipe'] })` and track via `pgid`.
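A sketch of the Phase 2 shape, using only documented Node `child_process` behavior (the helper name and return shape are illustrative):

```ts
import { spawn } from 'node:child_process';

// detached: true gives the child its own process group on Unix (setpgid),
// so the whole SDK subtree can later be signalled with one kill(-pgid).
function spawnSdkChild(command: string, args: string[]) {
  const child = spawn(command, args, {
    detached: true,                     // new process group on Unix
    stdio: ['ignore', 'pipe', 'pipe'],  // no stdin; capture stdout/stderr
  });
  // For a detached child on Unix, the pgid equals the child's pid;
  // record it on the registry entry for group teardown later.
  const pgid = child.pid;
  return { child, pgid };
}
```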
**`src/services/worker/worker-service.ts:537, 547, 567, 581, 1094-1120`**

- `:537` — `startOrphanReaper` call
- `:547` — `staleSessionReaperInterval = setInterval(…)`
- `:567` — `clearFailedOlderThan` interval
- `:581` — explicit `PRAGMA wal_checkpoint(PASSIVE)` interval
- `:1094-1120` — shutdown sequence (clears intervals, calls `performGracefulShutdown`)

Plan `02-process-lifecycle` deletes all interval setup and collapses shutdown.

**`src/supervisor/process-registry.ts:85-173` — `captureProcessStartToken`**

Reads `/proc/<pid>/stat` field 22 on Linux, `ps -o lstart=` on macOS, returns `null` on Windows. Used for PID-reuse detection (commit 99060bac). Plan `02-process-lifecycle` KEEPS — legitimate primary-path correctness.

**`src/supervisor/shutdown.ts:22-99, 116, 163` — `runShutdownCascade`**

5-phase: SIGTERM all → wait 5s → SIGKILL survivors → wait 1s → unregister + rm PID file. Uses `process.kill(pid, signal)` — SINGLE-PID, not process group. Plan `02-process-lifecycle` Phase 3: change to `process.kill(-pgid, signal)` where children have their own process groups.
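A sketch of the group-kill step under that change — the negative-PID form and the SIGTERM→SIGKILL escalation are standard POSIX/Node behavior, but the helper itself is hypothetical (the real shutdown.ts keeps its 5-phase structure):

```ts
import { setTimeout as sleep } from 'node:timers/promises';

async function killProcessGroup(pgid: number): Promise<void> {
  try {
    process.kill(-pgid, 'SIGTERM');  // polite: signal the whole group at once
  } catch { /* ESRCH: group already gone */ }

  await sleep(5_000);                // one-shot, per-operation grace window

  try {
    process.kill(-pgid, 'SIGKILL');  // forceful: survivors only
  } catch { /* ESRCH: everything exited during the grace window */ }
}
```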
**`src/services/worker/SessionManager.ts:397, 477, 516-568, 573-579, 631-670`**

- `:397` — `deleteSession(sessionDbId)` — awaits generator + subprocess exit
- `:477-506` — `evictIdlestSession` (pool eviction, candidate for DELETE per Tier 1 #11)
- `:516-568` — `reapStaleSessions` (DELETE per Plan 02)
- `:573-579` — `shutdownAll`
- `:631-670` — `getMessageIterator` (the idle-timer callback is second-system per the earlier audit)

**`src/services/worker/SessionQueueProcessor.ts:6, 51-52, 62-63, 130, 145`**

Per-iterator idle `setTimeout` (3-min). Plan `02-process-lifecycle` §Invariants: this is per-session, not a global scanner. KEEP as the only runtime defense against hung SDK generators.

**`src/services/infrastructure/GracefulShutdown.ts:52-86` — `performGracefulShutdown`**

6-step canonical shutdown (HTTP server close → sessions → MCP → Chroma → DB → supervisor). Plan `06-api-surface` CONSOLIDATES — the current four shutdown functions (`WorkerService.shutdown`, `performGracefulShutdown`, `runShutdownCascade`, `stopSupervisor`) collapse to this one.

**`src/services/infrastructure/ProcessManager.ts:1013-1032, 1053-1075`**

Daemon spawn + liveness. `:1013` uses `setsid` on Unix, `:1028` falls back to `detached: true` on macOS. Liveness at `:1053-1075` is plain `process.kill(pid, 0)`. Plan `02-process-lifecycle` KEEPS the daemon spawn pattern; extends it to SDK children.

### Ingestion

**`src/sdk/parser.ts:33-111` — `parseObservations`**

Parses `<observation>` blocks. The fallback type logic (lines 54-69) is legitimate (the type field is optional per the schema). KEEP.

**`src/sdk/parser.ts:122-259` — `parseSummary` + `coerceObservationToSummary`**

`coerceObservationToSummary` at lines 222-259 is a second-system effect (maps `<observation>` fields to `<summary>` when the LLM violates the contract). Plan `03-ingestion-path` DELETES the coerce function. Contract violations must fail-fast to `markFailed`, not coerce.

**`src/services/worker/agents/ResponseProcessor.ts:96-200` — Circuit breaker**

`consecutiveSummaryFailures` + `MAX_CONSECUTIVE_SUMMARY_FAILURES`. Plan `03-ingestion-path` DELETES the field, constant, and guard.

**`src/services/transcripts/processor.ts:23, 202, 232-236, 252, 275-285, 317`**

- `:23` — `pendingTools` Map (per-session toolId → toolInput)
- `:202, :232-236` — dispatcher pairing `tool_use` with `tool_result`
- `:252` — HTTP loopback (`observationHandler.execute()` → `workerHttpRequest` → same worker)
- `:275-285` — `maybeParseJson` silent passthrough

Plan `03-ingestion-path` Phase 1 deletes the Map; Phase 2 routes through the direct function call `ingestObservation(payload)` (no HTTP loopback); Phase 3 changes `maybeParseJson` to fail-fast. A sketch of the Phase 2 change follows.
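The payload type and call site below are illustrative; the import path follows the Part 3 table, which pencils the helper into `src/services/worker/http/shared.ts`:

```ts
import { ingestObservation } from '../worker/http/shared'; // future shared helper

// Illustrative payload shape — the real one carries whatever the
// tool_use/tool_result pair produced.
interface ObservationPayload {
  memorySessionId: string;
  contentHash: string;
  xml: string;
}

async function onToolPair(payload: ObservationPayload): Promise<void> {
  // Before: observationHandler.execute() → workerHttpRequest() → same process.
  // After: one in-process call; errors propagate to the caller (fail-fast).
  await ingestObservation(payload);
}
```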
**`src/services/transcripts/watcher.ts:124-132, 156-159, 183-188`**

- `:124-132` — 5-second `setInterval` rescan
- `:156-159` — `resolveWatchFiles` silent empty-return on stat() failure
- `:183-188` — `startAtEnd` offset fallback (benign, KEEP)

Plan `03-ingestion-path` replaces the rescan with `fs.watch(dir, { recursive: true })`.
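A sketch of the event-driven replacement (callback and root names are illustrative; recursive `fs.watch` on Linux needs Node 20+, hence the engines preflight):

```ts
import { watch } from 'node:fs';
import { join } from 'node:path';

function watchTranscripts(
  transcriptsRoot: string,
  onTranscriptChange: (path: string) => void,
) {
  // One watcher over the whole tree — no periodic rescan, no timer.
  const watcher = watch(transcriptsRoot, { recursive: true }, (_event, filename) => {
    if (filename && filename.endsWith('.jsonl')) {
      onTranscriptChange(join(transcriptsRoot, filename));
    }
  });
  return () => watcher.close(); // caller disposes on shutdown
}
```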
**`src/utils/tag-stripping.ts:37-44, 63-69` — `countTags`, `stripTagsInternal`**

Six separate `.replace()` / `.match()` calls for six tag types. Plan `03-ingestion-path` §Tag stripping: one regex with alternation, single-pass.
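A sketch of the single-pass strip. Only three of the six tag names are confirmed in this reference, so the list is partly a placeholder:

```ts
// Tag names: first three confirmed in Part 3; the rest live in tag-stripping.ts.
const MEMORY_TAGS = [
  'private',
  'claude-mem-context',
  'system-reminder',
  // …the remaining three tag names from tag-stripping.ts
].join('|');

// One alternation, one pass: matches an opening tag (attributes allowed),
// its content, and the matching closing tag (\1), non-greedily, across newlines.
const MEMORY_TAG_RE = new RegExp(
  `<(${MEMORY_TAGS})(?:\\s[^>]*)?>[\\s\\S]*?</\\1>`,
  'g',
);

export function stripMemoryTags(text: string): string {
  return text.replace(MEMORY_TAG_RE, '');
}
```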
**`src/utils/transcript-parser.ts:28-90` — DEAD CLASS**

`TranscriptParser` class exists but has no active imports. Plan `07-dead-code` DELETES.

**`src/shared/transcript-parser.ts:41-144` — Active function**

`extractLastMessage(path, role, opts)` — the active parser. KEEP.

### Search / read path

**`src/services/worker/search/SearchOrchestrator.ts:85-110` — Silent fallback**

Three paths: (1) filter-only → SQLite; (2) query + Chroma → try Chroma, and on `usedChroma=false` strip the query and re-query SQLite; (3) no Chroma → silently returns empty. Plan `04-read-path` Phase 1: DELETE the stripping branch. On Chroma failure, throw 503.
**`src/services/worker/search/strategies/ChromaSearchStrategy.ts:76-86`**

`try { … } catch { return usedChroma: false }` swallows real errors. Plan `04-read-path` Phase 1: only return `usedChroma: false` when Chroma is explicitly not initialized; propagate real errors.
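A sketch of the distinction, with an illustrative client type — the only handled case is "Chroma not configured"; everything else propagates:

```ts
// Illustrative shape; the real strategy wraps the Chroma MCP client.
declare const chromaClient: { query(q: string): Promise<{ ids: string[] }> } | null;

async function searchChroma(query: string): Promise<{ usedChroma: boolean; ids: string[] }> {
  if (!chromaClient) {
    // Chroma explicitly not initialized — the one legitimate "didn't use it" path.
    return { usedChroma: false, ids: [] };
  }
  // No try/catch: a real Chroma error propagates to SearchOrchestrator,
  // which maps it to HTTP 503 instead of silently degrading to SQLite-only.
  const result = await chromaClient.query(query);
  return { usedChroma: true, ids: result.ids };
}
```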
**`src/services/worker/search/strategies/HybridSearchStrategy.ts:64-185`**

Three near-identical methods (`findByConcept`, `findByType`, `findByFile`), each with its own try/catch fallback to metadata-only. Plan `04-read-path` Phase 2: propagate errors, don't silently degrade to metadata-only.

**`src/services/worker/SearchManager.ts:230, 247-259, 488, 978-985, 1064-1071, 1150-1157, 1209-1310, 1277, 1399, 1840-1847`**

- Seven duplicated recency-filter call sites
- `findByConcept/File/Type` implementations that duplicate `HybridSearchStrategy`

Plan `04-read-path` Phase 3: import `RECENCY_WINDOW_MS` from `types.ts:16`, delete the seven copies; delete the `SearchManager.findBy*` methods and route through `SearchOrchestrator`.

**`src/services/worker/search/ResultFormatter.ts:264` vs `src/services/worker/knowledge/CorpusRenderer.ts:90`**

Two different token estimates. Plan `04-read-path` §Utilities: one shared `estimateTokens(obs)` in `src/shared/`.

**`src/services/context/formatters/`** — four formatters (AgentFormatter, HumanFormatter, ResultFormatter, CorpusRenderer) share a common walk with four strategy knobs (header, grouping, row density, colors). Plan `04-read-path` Phase 4: single `renderObservations(obs, strategy: RenderStrategy)`.

### Hooks / CLI

**`src/shared/worker-utils.ts:221-239` — `ensureWorkerRunning`**

Single health check, returns false on failure. Caller decides whether to proceed. Plan `05-hook-surface` §Primary path: KEEP the check; REPLACE "proceed gracefully" with a consecutive-failure counter that exits code 2 after N failures (surface worker death instead of hiding it).
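A sketch of the escalation. N, the state-file location, and persisting the count across hook processes are all assumptions — the plan only specifies "exit code 2 after N failures":

```ts
import { readFileSync, writeFileSync } from 'node:fs';

const MAX_CONSECUTIVE_FAILURES = 3;              // illustrative N
const STATE_FILE = '/tmp/claude-mem-worker-failures'; // illustrative location

export function recordHealthCheck(ok: boolean): void {
  // Each hook runs as a fresh process, so the counter lives in a file.
  const failures = ok ? 0 : readCount() + 1;
  writeFileSync(STATE_FILE, String(failures));
  if (failures >= MAX_CONSECUTIVE_FAILURES) {
    // Exit code 2 is the blocking-error code: stderr is fed back to Claude.
    console.error(`claude-mem worker unreachable ${failures} times; giving up`);
    process.exit(2);
  }
}

function readCount(): number {
  try { return Number(readFileSync(STATE_FILE, 'utf8')) || 0; }
  catch { return 0; } // missing state file on first run is expected
}
```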
**`src/cli/handlers/summarize.ts:117-150` — 120s polling loop**

Polls every 1s for 120s waiting for summary completion; logs `timeout` on failure but exits 0. Plan `05-hook-surface` Phase 3: replace with a blocking `/api/session/end` endpoint (server-side wait, single HTTP POST with a server-side timeout). Delete the polling loop.
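A sketch of the server side, assuming the `summaryStoredEvent` emitter from the coupling table; the event name and 120 s timeout are illustrative:

```ts
import { EventEmitter, once } from 'node:events';

// The ingestion path (Plan 03 Phase 2) emits on this after a summary row lands.
declare const summaryStoredEvent: EventEmitter;

// POST /api/session/end — resolves once the summary for this session is
// stored; aborts after a server-side timeout the route maps to HTTP 504.
async function handleSessionEnd(sessionId: string): Promise<void> {
  const ac = new AbortController();
  const timer = setTimeout(() => ac.abort(), 120_000); // one-shot, per-request
  try {
    await once(summaryStoredEvent, `summary-stored:${sessionId}`, { signal: ac.signal });
  } finally {
    clearTimeout(timer); // disposed in-scope, per the timer policy
  }
}
```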
**`src/cli/handlers/session-init.ts:57-60, 120-129`**

Settings loaded per-handler. Agent init conditional on `initResult.contextInjected` → skips the agent spawn when context is already present. Plan `05-hook-surface` Phase 1: settings cached once per hook process. Phase 3: agent init is idempotent (always call).

**`src/cli/handlers/observation.ts:17, 53-54, 58-61`**

HTTP loopback + cwd validation after adapter normalization + project exclusion. Plan `05-hook-surface` §DRY: `executeWithWorkerFallback()` helper; cwd validation moves to the adapter boundary.

**`plugin/hooks/hooks.json:27, 32, 43` — Shell retry loops**

20-iteration `curl` health-check retries across three hook entries. Plan `05-hook-surface` Phase 1: delete the shell retries; `ensureWorkerRunning()` does the one check.

### API surface

**`src/services/worker/http/routes/DataRoutes.ts:305, 475, 510, 529, 548`**

- `:305` — `/api/processing-status` (KEEP)
- `:475` — `/api/pending-queue` GET inspection (DELETE)
- `:510` — `/api/pending-queue/process` POST (convert to an internal startup call or DELETE)
- `:529` — `/api/pending-queue/failed` DELETE (DELETE)
- `:548` — `/api/pending-queue/all` DELETE (DELETE)

Plan `06-api-surface` Phase 1: delete the diagnostic endpoints.

**`src/services/worker/http/routes/SessionRoutes.ts:148, 256`** — threshold check + `markSessionMessagesFailed`. Plan `06-api-surface` consolidates the failure-marking paths.

---

## Part 2: External API verification

| API | Verified | Signature | Canonical use | Source |
|---|---|---|---|---|
| **Node `child_process.spawn({ detached: true })`** | ✅ yes | `spawn(cmd, args, { detached: true, stdio: ['ignore','pipe','pipe'] })` | Creates a new process group on Unix (`setpgid`). The child survives parent death unless the parent signals the group. | Node docs: https://nodejs.org/api/child_process.html#optionsdetached |
| **Node `process.kill(-pgid, signal)`** | ✅ yes | Negative PID signals the whole process group on Unix. Works in Bun (uses libuv). | `process.kill(-pgid, 'SIGTERM')` tears down the whole child subtree. | POSIX kill(2); Node docs. |
| **Bun.spawn `detached`** | ❌ NOT SUPPORTED | No `detached` option. Use `proc.unref()` for detach-from-parent-exit behavior only. | Not applicable to claude-mem — claude-mem uses the Node API. | Bun docs: https://bun.com/docs/runtime/child-process |
| **SQLite `INSERT OR IGNORE` / `ON CONFLICT DO NOTHING`** | ✅ yes | `INSERT INTO t (a,b) VALUES (?,?) ON CONFLICT(a,b) DO NOTHING` | Idempotent insert; silently skips the row on a UNIQUE violation. | SQLite core docs. |
| **SQLite UNIQUE on an added column** | ✅ yes, with caveat | `ALTER TABLE t ADD COLUMN c TEXT` then `CREATE UNIQUE INDEX ux_t_c ON t(c)` | Must backfill `c` before creating the unique index, or backfill with unique random values. See the migration 22 precedent in runner.ts. | SQLite ALTER TABLE limitations doc. |
| **`fs.watch(dir, { recursive: true })` on Linux** | ✅ Node 20+ only | Recursive mode works on Linux in Node 20+ (it was macOS/Windows-only earlier). | `fs.watch(transcriptsRoot, { recursive: true }, (eventType, filename) => {…})` | Node 20 release notes. **Preflight: bump `engines.node` to `>=20.0.0`.** |
| **Claude Code hook exit codes** | ✅ per claude-mem CLAUDE.md | 0 = success / graceful shutdown; 1 = non-blocking error (stderr to user); 2 = blocking error (stderr fed back to Claude) | `process.exit(0)` default; `process.exit(2)` to surface consecutive failures. | `CLAUDE.md` §Exit Code Strategy. |
| **launchd user LaunchAgent plist** | ✅ (not currently used) | `<key>KeepAlive</key><true/>` + `<key>ProgramArguments</key>…` in `~/Library/LaunchAgents/ai.cmem.worker.plist` | Documented for a future installer if/when we adopt an OS-supervised fallback. | Apple: launchd.plist(5). |
| **systemd user unit** | ✅ (not currently used) | `[Service]\nType=simple\nExecStart=/path/to/bun worker.js\nRestart=on-failure\nKillMode=control-group` | Documented for a future installer. | systemd.service(5), systemd.kill(5). |
| **`respawn` npm package** | ✅ exists, NOT currently a dep | `respawn(command, opts).start()` with `maxRestarts`, `sleep`, `kill`. ~200 LOC pure JS. | Optional — only needed in the lazy-spawn wrapper for startup-crash retries. | https://github.com/mafintosh/respawn |
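To illustrate the UNIQUE-on-added-column caveat from the table above, a hypothetical migration sketch (bun:sqlite-style API assumed; the table and column names follow the `pending_messages` example used elsewhere in this reference):

```ts
import { Database } from 'bun:sqlite';

const db = new Database('claude-mem.sqlite');

// 1. Add the column (SQLite ALTER TABLE can only append).
db.exec('ALTER TABLE pending_messages ADD COLUMN tool_use_id TEXT');

// 2. Backfill tool_use_id for existing rows here (source elided — it would
//    come from the stored message payload in the real migration).

// 3. Dedup backfilled values BEFORE indexing: NULLs compare distinct in
//    SQLite unique indexes, but duplicated real values would make step 4 fail.
db.exec(`
  DELETE FROM pending_messages
  WHERE tool_use_id IS NOT NULL
    AND rowid NOT IN (
      SELECT MIN(rowid) FROM pending_messages
      WHERE tool_use_id IS NOT NULL
      GROUP BY session_id, tool_use_id
    )
`);

// 4. Now the unique index can be created safely.
db.exec(`
  CREATE UNIQUE INDEX IF NOT EXISTS ux_pending_session_tool
  ON pending_messages (session_id, tool_use_id)
`);
```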
---

## Part 3: Plugin conventions

| Concern | File | Pattern |
|---|---|---|
| Hook manifest | `plugin/hooks/hooks.json` | Setup, SessionStart, UserPromptSubmit, PreToolUse (Read matcher), PostToolUse, Stop, SessionEnd. Each shell-wraps `bun-runner.js` → `worker-service.cjs`. |
| Hook build targets | `plugin/scripts/*-hook.js` | TS source in `src/hooks/` and `src/cli/handlers/` → esbuild → `plugin/scripts/*-hook.js` (ESM). |
| Settings schema | `src/services/domain/SettingsDefaultsManager.ts` | `loadFromFile(USER_SETTINGS_PATH)`. Flat key-value schema. Accepts the `'true'` string OR boolean `true`. |
| Privacy tags | `src/utils/tag-stripping.ts` | Six tag types: `<private>`, `<claude-mem-context>`, `<system-reminder>`, etc. Single-pass strip at every ingress (after Plan 03). |
| HTTP loopback replacement | (future) `src/services/worker/http/shared.ts` | `ingestObservation(payload)` → direct function call. Hooks still use HTTP (cross-process); worker→worker uses a function call. |
| Observation XML | `src/sdk/parser.ts` | `<observation type="…"><title/><narrative/><facts><fact/>…</facts>…</observation>`. |
| Summary XML | `src/sdk/parser.ts` | `<summary><request/><investigated/><learned/><completed/><next_steps/><notes/></summary>`. Optional `<skip_summary reason="…"/>` bypass. |
| Project scoping | `src/utils/project-name.ts` | `getProjectContext(cwd)` → `{ primary, allProjects, excluded }`. Excluded list comes from settings. |

---

## Part 4: Confidence + gaps

**Confidence: HIGH (95%)** — all anchors verified by direct read, all external APIs verified against docs.

**Known gaps to flag in plans**:

1. **Chroma upsert fallback is brittle** — error-text match for "already exist". Plan 01 must guard it behind a flag until Chroma exposes upsert natively.
2. **Prompt-caching TTL assumption** — Plan 04 depends on SDK cache TTL ≈ 5 min. Run a cost smoke test before the `04-read-path` knowledge-corpus phases land.
3. **Node 20+ requirement** — Plan 03 Phase 1 requires recursive `fs.watch` on Linux. Preflight: `engines.node` bump.
4. **Zod is not currently a dep** — Plan 06 Phase 1 is `npm install zod@^3.x`.
5. **`respawn` dep is optional** — Plan 02 §Lazy-spawn wrapper: decide in that plan whether to add `respawn` or hand-roll a 3-attempt startup retry.
6. **Two registries today** — `src/services/worker/ProcessRegistry.ts` + `src/supervisor/process-registry.ts`. Plan 02 consolidates to supervisor-only.

---

**Status: READY FOR CORPUS AUTHORING.** All plans in `PATHFINDER-2026-04-22/` may cite this file directly.

@@ -0,0 +1,420 @@
# PATHFINDER-2026-04-22 Rewrite Plan

**Purpose**: Execute a clean rewrite of the claude-mem refactor corpus, replacing `PATHFINDER-2026-04-21/` with a principle-driven 8-plan corpus. Each phase can be executed consecutively in a fresh chat context.

**Inputs** (already in this directory):

- `_reference.md` — verified current-code anchors + external API signatures
- `_mapping.md` — section-by-section migration map from old → new

**Outputs** (to be produced by executing this plan):

- `00-principles.md` — unifying criteria every plan is measured against
- `01-data-integrity.md` — UNIQUE constraints, idempotency, self-healing claim
- `02-process-lifecycle.md` — delete supervisor, lazy-spawn, process groups
- `03-ingestion-path.md` — fail-fast parser, direct ingest, recursive fs.watch
- `04-read-path.md` — 1 renderer, 1 search path, delete SearchManager.findBy*
- `05-hook-surface.md` — fail-loud hooks, blocking endpoint, cached alive
- `06-api-surface.md` — Zod middleware, delete diagnostic endpoints
- `07-dead-code.md` — TranscriptParser class, migration 19, @deprecated sweep
- `98-execution-order.md` — DAG + preflight gates + post-landing greps
- `99-verification.md` — grep targets, acceptance criteria, viewer lockdown

**Target lines deleted across the corpus**: ~3,800 LoC net, after double-count correction.

---

## Global principles (cite in every plan)

1. **No recovery code for fixable failures.** If the primary path is correct, recovery never runs. If it's broken, recovery hides the bug.
2. **Fail-fast over grace-degrade.** Local code does not circuit-break, coerce, or silently fall back. It throws and lets the caller decide.
3. **UNIQUE constraint over dedup window.** The DB schema prevents duplicates; don't time-gate them.
4. **Event-driven over polling.** `fs.watch` over `setInterval` rescan. Server-side wait over client-side poll. `child.on('exit')` over periodic scan.
5. **OS-supervised process groups over hand-rolled reapers.** `detached: true` + `kill(-pgid)` replaces orphan sweeps.
6. **One helper, N callers.** Not N copies of a helper. Not a strategy class for each config.
7. **Delete code in the same PR it becomes unused.** No `@deprecated` fence, no "remove next release."

These are repeated verbatim in `00-principles.md`. Every other plan cites them.

---
## Anti-pattern guards (check in every plan)

- No new `setInterval` in `src/services/worker/` or the plan text (plan 99 greps for this)
- No new `coerce*`, `recover*`, `heal*`, `repair*`, `reap*`, `kill*Orphans*` function names
- No new try/catch that swallows errors and returns a fallback value
- No new schema column whose only purpose is to feed a recovery query
- No new strategy class when a config object would do
- No new HTTP endpoint for diagnostic / manual-repair purposes

---

## Phase 0 — Documentation discovery (DONE)

**Status**: Complete. See `_reference.md` (API + code anchors) and `_mapping.md` (old→new section mapping). The Phase 0 subagents verified all 12 old plans, every audit-cited file:line, and every external API in use.

---

## Phase 1 — Write `00-principles.md`

**Task**: Draft the principles document that every other plan cites.

**Sections**:

1. The seven principles (copied verbatim from "Global principles" above)
2. The six anti-pattern guards (copied verbatim from "Anti-pattern guards" above)
3. The unifying diagnosis (one paragraph): missing primary-path correctness gets papered over with defensive code; defensive code hides bugs in the primary path; hidden bugs spawn more defensive code. Same disease, five organs.
4. Five-cures table: one row per subsystem (lifecycle, data, search, ingestion, hooks) stating the concrete cure from the principles.
5. Glossary: "second-system effect," "lease pattern," "self-healing claim," "fail-fast contract" — one-sentence definitions with the canonical example.

**Doc refs**: none outside this plan — `00-principles.md` is the anchor every other plan cites.

**Verification**:

- [ ] File exists at `PATHFINDER-2026-04-22/00-principles.md`
- [ ] Seven principles are numbered and quotable
- [ ] Five-cures table has all five subsystems
- [ ] Glossary has one-sentence definitions for the four terms

**Anti-pattern guards for this phase**:

- Don't add principles that don't have a cure in the table
- Don't add cures for problems not in the audit
- Don't add a "see also" subsection — principles stand alone

---

## Phase 2 — Write `01-data-integrity.md` + `02-process-lifecycle.md`

These two plans define the tectonic primitives other plans depend on. Both run in the same phase because they're the foundation.

### 2A. `01-data-integrity.md`

**Task**: Draft the data-layer plan covering schema UNIQUE constraints, idempotency tokens, the self-healing claim query, Chroma sync, and migration cleanup.
**Phases inside this plan**:
|
||||
1. **Fresh `schema.sql`** — regenerate from current migrations, remove `started_processing_at_epoch` column, add `worker_pid INTEGER`, add `UNIQUE(session_id, tool_use_id)` on `pending_messages`, add `UNIQUE(memory_session_id, content_hash)` on `observations`.
|
||||
2. **Migrate existing databases** — ALTER TABLE for the new columns, backfill, create UNIQUE indexes.
|
||||
3. **Self-healing claim query** — replace 60-s stale-reset-inside-claim with `UPDATE pending_messages SET worker_pid=?, status='processing' WHERE status='pending' OR (status='processing' AND worker_pid NOT IN live_worker_pids) ORDER BY created_at_epoch LIMIT 1`. Delete `STALE_PROCESSING_THRESHOLD_MS`, delete `started_processing_at_epoch` column.
|
||||
4. **Delete dedup window** — remove `DEDUP_WINDOW_MS` + `findDuplicateObservation`; replace with `INSERT … ON CONFLICT DO NOTHING`.
|
||||
5. **Delete `clearFailedOlderThan` interval** — failed rows are a retention policy question. Make them a query-time filter (`WHERE status != 'failed' OR updated_at > now-1h`) or just let them accumulate until a user explicitly purges.
|
||||
6. **Delete `repairMalformedSchema` Python subprocess** — root-cause WAL corruption if it recurs; do not ship repair code.
|
||||
7. **Chroma sync — upsert semantics** — document delete-then-add as a bridge pattern; gate behind `CHROMA_SYNC_FALLBACK_ON_CONFLICT=true` flag; remove once Chroma MCP adds upsert natively.
|
||||
8. **Delete migration 19 no-op** — absorbed into the fresh `schema.sql`.
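
A minimal sketch of phases 3 and 4, assuming Bun's built-in SQLite driver, the column names used in this plan, an `id` primary key on `pending_messages`, and a caller-supplied `livePids` list (how live worker PIDs are enumerated is not specified here). The subselect form is deliberate: `UPDATE … ORDER BY … LIMIT 1` only works when SQLite is compiled with `SQLITE_ENABLE_UPDATE_DELETE_LIMIT`, while `WHERE id = (SELECT …)` is portable.

```ts
import { Database } from "bun:sqlite";

const db = new Database("claude-mem.sqlite");

// One atomic UPDATE ... RETURNING replaces the stale-reset timer: a row is
// claimable if it was never claimed or if its claimant is no longer alive,
// so there is no separate "recover stuck rows" step to schedule. SQLite's
// single-writer lock makes the claim race-free across workers.
export function claimNextMessage(workerPid: number, livePids: number[]) {
  const pidList = livePids.length ? livePids.map(() => "?").join(",") : "-1";
  return db
    .query(
      `UPDATE pending_messages
          SET worker_pid = ?, status = 'processing'
        WHERE id = (
          SELECT id FROM pending_messages
           WHERE status = 'pending'
              OR (status = 'processing' AND worker_pid NOT IN (${pidList}))
           ORDER BY created_at_epoch
           LIMIT 1)
      RETURNING *`,
    )
    .get(workerPid, ...livePids);
}

// Phase 4's dedup replacement: UNIQUE(memory_session_id, content_hash)
// makes the insert idempotent, so no time-window duplicate check is needed.
export function insertObservation(sessionId: string, hash: string, body: string) {
  db.query(
    `INSERT INTO observations (memory_session_id, content_hash, content)
     VALUES (?, ?, ?)
     ON CONFLICT DO NOTHING`,
  ).run(sessionId, hash, body);
}
```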

**Doc refs**: `_reference.md` Part 1 §Data layer + §Chroma sync; SQLite docs on `ON CONFLICT DO NOTHING` + UNIQUE on added columns; migration 22 precedent in `runner.ts:658-837`.

**Verification**:

- [ ] `grep -rn STALE_PROCESSING_THRESHOLD_MS src/` → 0
- [ ] `grep -rn started_processing_at_epoch src/` → 0
- [ ] `grep -rn DEDUP_WINDOW_MS src/` → 0
- [ ] `grep -rn findDuplicateObservation src/` → 0
- [ ] `grep -rn repairMalformedSchema src/` → 0
- [ ] `grep -n clearFailedOlderThan src/services/worker-service.ts` → 0 (interval deletion)
- [ ] Integration test: kill worker mid-claim; next worker's claim succeeds and the row is re-processed

**Anti-pattern guards**:

- Do NOT keep `recoverStuckProcessing()` as a boot-once function. The self-healing claim replaces it entirely.
- Do NOT add a new timer for Chroma backfill. Backfill runs once at boot OR on demand when a downstream reader requests it.
- Do NOT add "repair" CLI commands.

### 2B. `02-process-lifecycle.md`

**Task**: Draft the lifecycle plan: delete `src/services/worker/ProcessRegistry.ts`, lazy-spawn from hooks, process groups for SDK children, no reapers, no idle-shutdown.

**Phases inside this plan**:

1. **Delete `src/services/worker/ProcessRegistry.ts`** (the worker-side parallel registry). Consolidate to `src/supervisor/process-registry.ts`.
2. **Change SDK spawn to use process groups** — `src/services/worker/ProcessRegistry.ts:452-465` (to be moved to the supervisor): `spawn(cmd, args, { detached: true, stdio: ['ignore','pipe','pipe'] })`. Track `pgid = proc.pid`. (See the sketch after this list.)
3. **Change shutdown cascade to kill groups** — `src/supervisor/shutdown.ts:116, 163`: `process.kill(-record.pgid, 'SIGTERM')` → wait 5s → `process.kill(-record.pgid, 'SIGKILL')`.
4. **Delete all reaper intervals** — `startOrphanReaper`, `staleSessionReaperInterval`, the `clearFailedOlderThan` interval at `worker-service.ts:537, 547, 567`. Delete `killSystemOrphans`, `killIdleDaemonChildren`, `reapOrphanedProcesses`, `reapStaleSessions`.
5. **Delete the `abandonedTimer` per-session setTimeout** — replace with synchronous cleanup in `generatorPromise.finally` at the session itself.
6. **Delete idle-eviction** — `SessionManager.evictIdlestSession` at `:477-506`. Pool backpressure via queue depth instead.
7. **Delete the fallback agent chain** (Gemini → OpenRouter) in SessionManager. Fail fast on SDK failure; surface to the hook via exit 2.
8. **Lazy-spawn wrapper** — every hook's `ensureWorkerRunning()` (`src/shared/worker-utils.ts:221-239`): check port → if dead, `spawn(bunPath, [workerPath], { detached: true, stdio: ['ignore','ignore','ignore'] })` → `proc.unref()` → return. Optional `respawn` dep for a 3-attempt startup retry with backoff.
9. **Delete worker self-shutdown** — no idle timer. Worker runs until killed.
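
A minimal sketch of phases 2–3, assuming Node's `child_process` on a Unix host. With `detached: true` the child becomes the leader of a new process group, so its pgid equals its pid, and `process.kill(-pgid)` signals the whole group, including any grandchildren the SDK spawns. The record shape here is illustrative, not the real `ManagedProcessInfo`.

```ts
import { spawn } from "node:child_process";

interface SpawnedGroup {
  pid: number;
  pgid: number; // equal to pid because the detached child leads its own group
}

export function spawnSdkChild(cmd: string, args: string[]): SpawnedGroup {
  const proc = spawn(cmd, args, {
    detached: true, // new process group: orphans are prevented, not swept
    stdio: ["ignore", "pipe", "pipe"],
  });
  if (proc.pid === undefined) throw new Error(`failed to spawn ${cmd}`);
  return { pid: proc.pid, pgid: proc.pid };
}

export async function killGroup(rec: SpawnedGroup): Promise<void> {
  // Negative pid = signal the whole process group (POSIX semantics).
  try { process.kill(-rec.pgid, "SIGTERM"); } catch { return; } // already gone
  await new Promise((r) => setTimeout(r, 5_000)); // grace period from phase 3
  try { process.kill(-rec.pgid, "SIGKILL"); } catch { /* exited in time */ }
}
```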

**Doc refs**: `_reference.md` Part 1 §Worker/lifecycle + Part 2 API verification rows 1-3 (Node detached, `kill(-pgid)`); commit 99060bac for the PID-reuse pattern.

**Verification**:

- [ ] `grep -rn setInterval src/services/worker/` → 0
- [ ] `grep -rn startOrphanReaper src/` → 0
- [ ] `grep -rn staleSessionReaperInterval src/` → 0
- [ ] `grep -rn killSystemOrphans src/` → 0
- [ ] `grep -rn killIdleDaemonChildren src/` → 0
- [ ] `grep -rn reapStaleSessions src/` → 0
- [ ] `grep -rn reapOrphanedProcesses src/` → 0
- [ ] `grep -rn evictIdlestSession src/` → 0
- [ ] `grep -rn abandonedTimer src/` → 0
- [ ] `grep -rn "fallbackAgent\|Gemini\|OpenRouter" src/services/worker/SessionManager.ts` → 0
- [ ] `src/services/worker/ProcessRegistry.ts` file does NOT exist
- [ ] `src/supervisor/` directory DOES still exist (canonical registry + shutdown)
- [ ] Integration test: kill worker via `kill -9 <pid>`; next hook respawns the worker; no orphan children remain
- [ ] Integration test: graceful SIGTERM to worker; all SDK children exit within 6s

**Anti-pattern guards**:

- Do NOT keep `killSystemOrphans` as a boot-once function — orphans are PREVENTED by process groups, not swept.
- Do NOT add idle-timer self-shutdown to the worker.
- Do NOT introduce a third process registry during the migration.

---

## Phase 3 — Write `03-ingestion-path.md` + `04-read-path.md`

### 3A. `03-ingestion-path.md`

**Task**: Draft the ingestion plan: fail-fast parser, direct `ingestObservation()` call, recursive `fs.watch`, DB-backed tool pairing, single-regex tag strip, and deletion of the dead `TranscriptParser` class.

**Phases inside this plan**:

0. **Ingest helpers** (prerequisite for plans 05, 06, 07) — `ingestObservation(payload)`, `ingestPrompt(payload)`, `ingestSummary(payload)` as direct functions on the worker. No HTTP loopback.
1. **`parseAgentXml`** — single entry point returning a `{ valid: true, data } | { valid: false, reason }` discriminated union. Replaces `parseObservations` + `parseSummary` + `coerceObservationToSummary`. (See the sketch after this list.)
2. **ResponseProcessor migration** — call `parseAgentXml` once; on invalid, `markFailed(messageId, reason)`. On a valid summary, emit `summaryStoredEvent` (consumed by the `05-hook-surface.md` blocking endpoint).
3. **Delete circuit breaker** — `consecutiveSummaryFailures`, `MAX_CONSECUTIVE_SUMMARY_FAILURES`, and the SessionManager guards on it.
4. **Delete coerce function** — `coerceObservationToSummary` in `src/sdk/parser.ts:222-259` removed entirely.
5. **Recursive `fs.watch`** — `src/services/transcripts/watcher.ts:124-132` replaces the 5-s rescan `setInterval` with `fs.watch(transcriptsRoot, { recursive: true })`. Preflight: `engines.node >= 20.0.0`.
6. **DB-backed tool pairing** — delete the `pendingTools` Map at `processor.ts:23`. Insert both `tool_use` and `tool_result` rows into `pending_messages` under the `UNIQUE(session_id, tool_use_id)` constraint. Pair by JOIN at read time.
7. **Direct `ingestObservation`** — `processor.ts:252` calls the helper from Phase 0, not `observationHandler.execute()`.
8. **Single-regex tag strip** — consolidate `src/utils/tag-stripping.ts` `countTags`/`stripTagsInternal` into one regex with alternation.
9. **Delete the dead `TranscriptParser` class** — `src/utils/transcript-parser.ts:28-90`.
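
A minimal sketch of the phase 1 fail-fast contract and the phase 5 watcher, assuming Node 20+. Only the discriminated union itself is specified by this plan; the `ParsedAgentOutput` payload shape and the tag handling are placeholders.

```ts
import { watch } from "node:fs";

interface ParsedAgentOutput {
  kind: "observation" | "summary";
  body: string;
}

type ParseResult =
  | { valid: true; data: ParsedAgentOutput }
  | { valid: false; reason: string };

// One entry point, no coercion: anything that doesn't match the expected
// shape is reported with a reason, and the caller marks the row failed.
export function parseAgentXml(raw: string): ParseResult {
  if (!raw.trim().startsWith("<")) {
    return { valid: false, reason: "not XML" };
  }
  // ... real tag handling goes here ...
  return { valid: true, data: { kind: "observation", body: raw } };
}

// Recursive fs.watch replaces the 5-second rescan interval; Node 20+
// supports { recursive: true } on Linux as well as macOS and Windows.
export function watchTranscripts(root: string, onChange: (f: string) => void) {
  return watch(root, { recursive: true }, (_event, filename) => {
    if (filename) onChange(filename);
  });
}
```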

**Doc refs**: `_reference.md` Part 1 §Ingestion; old Plans 01/03/08 for prior work; `fs.watch` Node 20+ release notes.

**Verification**:

- [ ] `grep -rn coerceObservationToSummary src/` → 0
- [ ] `grep -rn consecutiveSummaryFailures src/` → 0
- [ ] `grep -rn "pendingTools" src/services/transcripts/` → 0
- [ ] `grep -n "setInterval" src/services/transcripts/watcher.ts` → 0
- [ ] `grep -rn "observationHandler.execute" src/services/transcripts/` → 0
- [ ] `src/utils/transcript-parser.ts` → file does not exist
- [ ] `package.json` engines.node ≥ 20.0.0
- [ ] Fuzz test: drop JSONL with `tool_use` but no `tool_result` → row stays pending, no pair emitted, no crash
- [ ] Fuzz test: drop JSONL with `tool_result` referencing an unknown `tool_use_id` → debug log, no crash, no phantom observation

**Anti-pattern guards**:

- Do NOT keep coercion as a "lenient mode" flag.
- Do NOT ship a polling fallback for `fs.watch` — Node 20+ handles recursive watch on Linux natively.
- Do NOT preserve the in-memory Map behind a feature flag.

### 3B. `04-read-path.md`

**Task**: Draft the read-path plan: one renderer with strategy config, one search path, delete `SearchManager.findBy*`, consolidate the recency filter, throw 503 on Chroma failure.

**Phases inside this plan**:

1. **`renderObservations(obs, strategy: RenderStrategy)`** — single function replacing `AgentFormatter`, `HumanFormatter`, `ResultFormatter`, `CorpusRenderer`. `RenderStrategy` is a config object with knobs: `header`, `grouping`, `rowDensity`, `colors`, `columns`. (See the sketch after this list.)
2. **Delete four formatter classes** — `src/services/context/formatters/*.ts` replaced by four configs passed to `renderObservations`.
3. **Delete SearchManager duplicated methods** — `findByConcept`, `findByFile`, `findByType` at `SearchManager.ts:1209-1310, 1277, 1399`. Route all calls through `SearchOrchestrator`.
4. **Consolidate the recency filter** — import `RECENCY_WINDOW_MS` from `types.ts:16` at every call site. Delete all seven hand-rolled copies in SearchManager.
5. **Fail-fast Chroma** — `SearchOrchestrator.ts:85-110` throws 503 on Chroma error instead of stripping the query and re-querying SQLite. `ChromaSearchStrategy.ts:76-86` returns `usedChroma: false` only when Chroma is explicitly uninitialized; real errors propagate.
6. **Delete hybrid silent fallbacks** — `HybridSearchStrategy.ts:82-95, 120-134, 161-173`: propagate errors instead of returning metadata-only results.
7. **Delete `@deprecated getExistingChromaIds`** — dead code fence removed in the same PR.
8. **Single `estimateTokens` utility** — `src/shared/estimate-tokens.ts`. Delete the duplicates in `ResultFormatter.ts:264` and `CorpusRenderer.ts:90`.
9. **Knowledge-corpus simplification** — delete `session_id` persistence, the `prime`/`reprime` operations, and the auto-reprime regex in KnowledgeAgent; rewrite `/query` as a fresh SDK call with a systemPrompt; rely on SDK prompt caching.
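
A minimal sketch of the phase 1 shape, assuming the knob names listed above; the knob value types and the rendering logic are illustrative guesses, not this plan's final signatures. The point of the config-object form is that each old formatter collapses into a plain constant.

```ts
export interface RenderStrategy {
  header: "full" | "compact" | "none";
  grouping: "session" | "file" | "flat";
  rowDensity: "dense" | "airy";
  colors: boolean;
  columns: string[];
}

// One function, N configs — instead of one class per output shape.
export function renderObservations(
  obs: { title: string; body: string }[],
  strategy: RenderStrategy,
): string {
  const rows = obs.map((o) =>
    strategy.rowDensity === "dense" ? o.title : `${o.title}\n  ${o.body}`,
  );
  const sep = strategy.rowDensity === "dense" ? "\n" : "\n\n";
  return strategy.header === "none"
    ? rows.join(sep)
    : ["OBSERVATIONS", ...rows].join(sep);
}

// What used to be an AgentFormatter class becomes nothing but data:
export const AGENT_STRATEGY: RenderStrategy = {
  header: "compact",
  grouping: "session",
  rowDensity: "dense",
  colors: false,
  columns: ["title"],
};
```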

**Doc refs**: `_reference.md` Part 1 §Search + §Context; old Plans 05, 06, 10.

**Verification**:

- [ ] `grep -rn "SearchManager\.findBy" src/` → 0 (definitions deleted)
- [ ] `grep -rn "RECENCY_WINDOW_MS" src/services/worker/SearchManager.ts` → 0 (the seven inlined copies deleted)
- [ ] `grep -rn "fellBack: true" src/` → 0 (silent-fallback flag deleted)
- [ ] `grep -rn "getExistingChromaIds" src/` → 0
- [ ] `ls src/services/context/formatters/` → empty or deleted
- [ ] Integration test: Chroma down → request fails with 503 (not an empty result)
- [ ] Snapshot test: `renderObservations` with the agent config produces byte-identical output to the old `AgentFormatter` on the same input

**Anti-pattern guards**:

- Do NOT create a `RenderStrategy` class hierarchy. Config object only.
- Do NOT add a feature flag to "disable fail-fast Chroma" — callers either handle 503 or they don't.

---

## Phase 4 — Write `05-hook-surface.md` + `06-api-surface.md`

### 4A. `05-hook-surface.md`

**Task**: Draft the hook plan: consolidate worker HTTP plumbing, cache settings, delete the shell retry loops, delete polling in summarize, fail loud after N consecutive failures.

**Phases inside this plan**:

1. **Delete shell retry loops** — `plugin/hooks/hooks.json:27, 32, 43` — remove the 20-iteration `curl` retry loops. `ensureWorkerRunning()` does the one check.
2. **`executeWithWorkerFallback(url, method, body)` helper** — consolidate the copy, repeated across 8 handlers, of `ensureWorkerRunning → workerHttpRequest → if (!ok) return { continue: true }`. Move it to `src/shared/worker-utils.ts` as a new export. (See the sketch after this list.)
3. **Blocking `/api/session/end` endpoint** — server-side wait for `summaryStoredEvent` (emitted by `03-ingestion-path` Phase 2). Single POST, single response. Delete the `src/cli/handlers/summarize.ts:117-150` polling loop.
4. **Cache settings once per hook process** — a module-scope `loadFromFileOnce()` replaces the per-handler `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` calls at `context.ts:36`, `session-init.ts:57`, `observation.ts:58`, `file-context.ts:211`.
5. **`shouldTrackProject(cwd)` helper** — consolidate the three duplicated `isProjectExcluded(cwd, settings.CLAUDE_MEM_EXCLUDED_PROJECTS)` call sites.
6. **cwd validation at the adapter boundary** — move it from `file-edit.ts:50-51` and `observation.ts:53-54` into the adapter's `normalizeInput()` function. Validation happens once.
7. **Always-init agent** — delete the conditional in `session-init.ts:120-129`. Agent init is idempotent.
8. **Fail loud after N consecutive failures** — track consecutive `ensureWorkerRunning == false` results in the settings file; after N (e.g., 3), exit code 2 to surface the failure to Claude. Reset on the first success.
9. **Delete duplicated worker-alive heuristics** — a single `ensureWorkerAliveOnce()` with a module-scope cache.
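
A minimal sketch of the phase 2 helper. The two injected functions stand in for the real `ensureWorkerRunning`/`workerHttpRequest` exports in `worker-utils.ts`, whose signatures this plan doesn't pin down; `{ continue: true }` is the soft-fail value the eight handlers already return.

```ts
type HookResult = unknown | { continue: true };

export function makeExecuteWithWorkerFallback(deps: {
  ensureWorkerRunning(): Promise<boolean>;
  workerHttpRequest(
    url: string,
    init: { method: string; body?: string },
  ): Promise<Response>;
}) {
  // One copy of the check → request → soft-fail dance instead of eight.
  return async function executeWithWorkerFallback(
    url: string,
    method: string,
    body?: unknown,
  ): Promise<HookResult> {
    if (!(await deps.ensureWorkerRunning())) return { continue: true };
    const res = await deps.workerHttpRequest(url, {
      method,
      body: body === undefined ? undefined : JSON.stringify(body),
    });
    if (!res.ok) return { continue: true };
    return res.json();
  };
}
```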

**Doc refs**: `_reference.md` Part 1 §Hooks/CLI; old Plan 09 for endpoint consolidation (10 → 4).

**Verification**:

- [ ] `grep -rn "for i in 1 2 3 4 5 6 7" plugin/hooks/hooks.json` → 0
- [ ] `grep -rn "SettingsDefaultsManager.loadFromFile" src/cli/handlers/` → 1 (cached location only)
- [ ] `grep -rn "isProjectExcluded" src/cli/handlers/` → 1 (inside `shouldTrackProject` only)
- [ ] `grep -rn "MAX_WAIT_FOR_SUMMARY_MS\|POLL_INTERVAL_MS" src/cli/handlers/` → 0
- [ ] Integration test: block worker port → hook exits 0 first time, exits 2 after 3 consecutive failures
- [ ] Integration test: session-end hook blocks until summary stored (single POST, no polling)

**Anti-pattern guards**:

- Do NOT add a retry loop inside the hook (any kind).
- Do NOT add a timeout-and-exit-0 pattern.
- Do NOT keep the shell retry loops behind a feature flag.

### 4B. `06-api-surface.md`

**Task**: Draft the API-surface plan: Zod middleware, delete the rate limiter, delete diagnostic endpoints, cache static files, consolidate shutdown paths.

**Phases inside this plan**:

1. **Preflight: `npm install zod@^3.x`**.
2. **`validateBody` middleware** — a single Express middleware using Zod `safeParse`. Returns 400 with field errors on failure. (See the sketch after this list.)
3. **Per-route Zod schemas** — one per POST/PUT endpoint, defined at the top of the route file.
4. **Delete hand-rolled validation** — grep-and-delete `validateRequired`, inline `typeof` checks, and coerce helpers across route files.
5. **Delete the rate limiter** — the worker is localhost-only; rate limiting is a second-system effect masking a real concurrency bug (if one exists, find it).
6. **Cache viewer.html + /api/instructions** — load them at boot into a Buffer, serve from memory. Per-process lifecycle.
7. **Delete diagnostic endpoints** — `/api/pending-queue` GET, `/api/pending-queue/process`, `/api/pending-queue/failed` DELETE, `/api/pending-queue/all` DELETE at `DataRoutes.ts:475, 510, 529, 548`. Keep `/api/processing-status` at `:305` and `/health` at `ViewerRoutes.ts:32`.
8. **Consolidate shutdown paths** — delete the `WorkerService.shutdown`, `runShutdownCascade`, and `stopSupervisor` wrappers. The single `performGracefulShutdown` at `GracefulShutdown.ts:52-86` is the only shutdown path.
9. **Consolidate failure-marking paths** — delete `markSessionMessagesFailed` at `SessionRoutes.ts:256` and `markAllSessionMessagesAbandoned` at `worker-service.ts:943`. A single `transitionMessagesTo(status)` method on `PendingMessageStore` replaces both.
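
A minimal sketch of the phase 2 middleware, assuming Express and zod@^3. The observation schema at the bottom is a made-up example; the real schemas are defined per route in phase 3.

```ts
import type { NextFunction, Request, RequestHandler, Response } from "express";
import { z, type ZodSchema } from "zod";

export function validateBody(schema: ZodSchema): RequestHandler {
  return (req: Request, res: Response, next: NextFunction) => {
    const result = schema.safeParse(req.body);
    if (!result.success) {
      // 400 with per-field errors, instead of a 500 from an inline typeof check.
      res.status(400).json({ errors: result.error.flatten().fieldErrors });
      return;
    }
    req.body = result.data; // parsed (and coerced) body flows onward
    next();
  };
}

// Hypothetical usage on a route:
const observationSchema = z.object({
  sessionId: z.string().min(1),
  content: z.string().min(1),
});
// app.post("/api/observations", validateBody(observationSchema), handler);
```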

**Doc refs**: `_reference.md` Part 1 §API surface; old Plan 11 for Zod strategy.

**Verification**:

- [ ] `grep -rn "validateRequired\|rateLimit" src/services/worker/http/` → 0
- [ ] `grep -rn "/api/pending-queue" src/` → 0
- [ ] `grep -rn "markSessionMessagesFailed\|markAllSessionMessagesAbandoned" src/` → 0 (or 1, only inside `transitionMessagesTo`)
- [ ] `grep -rn "WorkerService.prototype.shutdown\|runShutdownCascade\|stopSupervisor" src/` → 0 (or 1 at the canonical call site)
- [ ] Integration test: POST /api/observations with malformed body → 400 with field errors (not 500)
- [ ] Integration test: viewer.html served from memory (no disk read after boot)

**Anti-pattern guards**:

- Do NOT add per-route middleware stacks; one middleware for all validated POST/PUT.
- Do NOT add a diagnostic endpoint "for debugging only."
- Do NOT keep a shutdown wrapper "for backward compat."

---

## Phase 5 — Write `07-dead-code.md`

**Task**: Draft the sweep plan that catches everything the other plans don't explicitly delete.

**Scope**:

- `TranscriptParser` class at `src/utils/transcript-parser.ts:28-90` (no active importers)
- Migration 19 no-op at `src/services/sqlite/migrations/runner.ts:621-628` (absorbed into the fresh schema)
- `@deprecated getExistingChromaIds` (noted in `04-read-path` but deleted here if missed)
- Any `// removed`, `// old`, or `// legacy` commented-out blocks
- Any unused exports (grep for exports never imported)
- Any `bun-resolver.ts`, `bun-path.ts`, `BranchManager.ts`, `runtime.ts` spawn sites that are unused
- Migration logic duplicated in `SessionStore.ts:52-70` (delegate to `MigrationRunner`)

**Phases**:

1. Run `ts-prune` or `knip` to identify unused exports.
2. Grep for commented-out code patterns.
3. Delete the identified dead code, with the rationale in the commit message.
4. Re-run build + tests to verify nothing was removed by accident.

**Doc refs**: `_reference.md` Part 1 §Data layer (SessionStore duplication), §Ingestion (TranscriptParser).

**Verification**:

- [ ] `npx ts-prune` or equivalent shows zero unused exports in `src/`
- [ ] Build passes
- [ ] Test suite passes
- [ ] `grep -rn "// @deprecated\|// TODO remove\|// old\|// legacy" src/` → 0

**Anti-pattern guards**:

- Do NOT delete anything still imported by a test.
- Do NOT delete types still referenced by exported interfaces.

---

## Phase 6 — Write `98-execution-order.md` + `99-verification.md`

### 6A. `98-execution-order.md`

**Task**: Produce the dependency DAG, preflight gates, critical path, parallel branches, and blocking issues.

**Contents**:

1. **DAG**: `00` is the root (no deps). `01` + `02` are foundational. `03` depends on `01` (UNIQUE constraint) + `02` (process groups implied in the spawn refactor). `04` depends on `01` (Chroma table shape). `05` depends on `02` (lazy-spawn) + `03` (`summaryStoredEvent`). `06` depends on `05` (Zod schemas for hook endpoints). `07` runs last (the sweep, after everything else deletes its code). `99` runs alongside each plan (acceptance checks). (A machine-checkable form is sketched after this list.)
2. **Preflight gates**:
   - `engines.node >= 20.0.0` bump
   - `npm install zod@^3.x`
   - Prompt-caching cost smoke test (for the `04` knowledge-corpus phases)
   - Chroma MCP availability + error-text pattern documented
3. **Critical path**: `00 → 01 → 02 → 03 → 05 → 06 → 07` (seven sequential plans).
4. **Parallel branches**: `04` runs after `01`, independently of `02`. `07` runs after everything.
5. **Blocking issues**: carried forward from old `08-reconciliation.md` Part 5.
6. **Post-landing verification**: grep chains from every plan's verification section.
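
An illustrative, machine-checkable rendering of the DAG above, not part of the plan corpus itself. Encoding the edges as data lets Phase 7's "DAG is acyclic and covers all plans" check run as a unit test rather than a by-eye review; the exact edge set for `07` is an assumption read off the prose.

```ts
const deps: Record<string, string[]> = {
  "00": [],
  "01": ["00"],
  "02": ["00"],
  "03": ["01", "02"],
  "04": ["01"],
  "05": ["02", "03"],
  "06": ["05"],
  "07": ["01", "02", "03", "04", "05", "06"], // sweep runs after everything
};

// Depth-first cycle detection: throws if any plan transitively depends on itself.
function assertAcyclic(graph: Record<string, string[]>): void {
  const visiting = new Set<string>();
  const done = new Set<string>();
  const visit = (node: string): void => {
    if (done.has(node)) return;
    if (visiting.has(node)) throw new Error(`cycle through plan ${node}`);
    visiting.add(node);
    for (const dep of graph[node] ?? []) visit(dep);
    visiting.delete(node);
    done.add(node);
  };
  for (const node of Object.keys(graph)) visit(node);
}

assertAcyclic(deps); // passes for the DAG as written above
```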

**Doc refs**: `_mapping.md` cross-plan coupling table; old `07-master-plan.md` + `08-reconciliation.md`.

### 6B. `99-verification.md`

**Task**: The acceptance-criteria document for the whole refactor.

**Contents**:

1. **Timer census**: 3 → 0 repeating background timers.
2. **Polling loops**: 1 → 0.
3. **Full grep target list**: consolidated from every plan's verification section, grouped by pattern:
   - `grep -rn "setInterval" src/services/worker/` → 0
   - `grep -rn "coerceObservationToSummary\|consecutiveSummaryFailures" src/` → 0
   - `grep -rn "recoverStuckProcessing\|killSystemOrphans\|reapStaleSessions\|reapOrphanedProcesses\|killIdleDaemonChildren" src/` → 0
   - `grep -rn "ProcessRegistry" src/services/worker/` → 0
   - `grep -rn "/api/pending-queue" src/` → 0
   - `grep -rn "DEDUP_WINDOW_MS\|findDuplicateObservation" src/` → 0
   - `grep -rn "abandonedTimer\|evictIdlestSession" src/` → 0
   - `grep -rn "fallbackAgent\|Gemini\|OpenRouter" src/services/worker/` → 0
4. **Prompt-caching cost smoke test**: three sequential `/api/corpus/:name/query` calls assert `cache_read_input_tokens > 0` on calls 2 and 3.
5. **Viewer regression harness**: 12 invariants (I1–I12), 11 tests (T1–T11), baseline capture + re-run schedule.
6. **Integration tests** (consolidated from per-plan verification):
   - Kill worker mid-claim → the next worker picks up the row
   - SIGTERM the worker → all SDK children exit within 6s (process-group teardown)
   - Chroma down → search returns 503 (no silent fallback)
   - Malformed POST → 400 with field errors (Zod)
   - Consecutive hook failures → exit 2 after N
7. **Acceptance criteria** — final net line count, full test pass, viewer regression pass, cost smoke pass.

**Doc refs**: every other plan's verification section.

**Verification**:

- [ ] Every grep target is sourced from at least one plan
- [ ] Every integration test has a corresponding plan that introduces the behavior
- [ ] Viewer lockdown section cites `tests/viewer-lockdown/` artifacts

---

## Phase 7 — Principle cross-check

**Task**: Before the new corpus ships, verify each new plan passes its own principles. Run as a meta-review; a runnable form of the grep checks is sketched after this list.

**Checks**:

1. `grep -rn "recover\|reap\|heal\|repair\|orphan\|coerce\|fallback" PATHFINDER-2026-04-22/*.md` — every hit must be in a "DELETE" or "NEVER add" context, never an acceptable future pattern.
2. `grep -rn "setInterval\|setTimeout" PATHFINDER-2026-04-22/*.md` — every hit must be either a deletion target or a narrowly justified per-operation timer.
3. `grep -rn "strategy\|factory\|builder" PATHFINDER-2026-04-22/*.md` — every hit must justify why a config object won't do.
4. `grep -rn "for backward compat\|for one release\|@deprecated" PATHFINDER-2026-04-22/*.md` — must be 0.
5. Verify every plan cites `_reference.md` Part 1 for its code anchors and Part 2 for its external APIs.
6. Verify `_mapping.md` accounts for every old section (no orphans).
7. Verify the `98-execution-order.md` DAG is acyclic and covers all plans.
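
A rough sketch of checks 1–4 as a script, assuming Bun (or Node 18+) and a grep with `--include` support on PATH. The zero-tolerance patterns (check 4) fail the run on any hit; the reviewed patterns (checks 1–3) only print their hits, since deciding whether a hit sits in a "DELETE" context needs a human or agent judgment.

```ts
import { spawnSync } from "node:child_process";

const CORPUS = "PATHFINDER-2026-04-22";
const reviewed = [
  "recover\\|reap\\|heal\\|repair\\|orphan\\|coerce\\|fallback",
  "setInterval\\|setTimeout",
  "strategy\\|factory\\|builder",
];
const forbidden = ["for backward compat\\|for one release\\|@deprecated"];

// grep exits 1 on "no matches", which is the desired outcome for the
// forbidden patterns, so only stdout is inspected here.
function grep(pattern: string): string {
  const res = spawnSync("grep", ["-rn", "--include=*.md", pattern, CORPUS], {
    encoding: "utf8",
  });
  return res.stdout ?? "";
}

for (const p of reviewed) {
  const hits = grep(p);
  if (hits) console.log(`REVIEW NEEDED for /${p}/:\n${hits}`);
}
for (const p of forbidden) {
  const hits = grep(p);
  if (hits) {
    console.error(`FORBIDDEN pattern /${p}/ found:\n${hits}`);
    process.exit(1);
  }
}
console.log("cross-check greps complete");
```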

**Deliverable**: a short `_principle-crosscheck.md` in the new corpus directory logging the results. If ANY check fails, the corresponding plan goes back for revision before ship.

---

## Execution instructions

Each phase (1 through 7) can be executed in a fresh chat context. To execute phase N:

1. Open a new chat
2. Load `PATHFINDER-2026-04-22/_reference.md`, `_mapping.md`, and this file
3. Scroll to "Phase N" and execute its tasks verbatim
4. Commit each new plan file as it's produced (`git add PATHFINDER-2026-04-22/<plan>.md`)
5. Run the verification checklist; if any check fails, revise the plan before moving on

**Total estimated effort**: 4 engineer-days for Phases 1–6 (plan authoring), 2 engineer-days for Phase 7 (cross-check + revisions); the plans themselves then execute the refactor over ~3 weeks.

---

## Confidence + known gaps

**Confidence: HIGH.** Phase 0 agents verified every anchor against live code. The principle list is derived from five independent audits that converged on the same diagnosis. The DAG is internally consistent (every new plan has exactly one owner for each cross-plan invariant — see the `_mapping.md` coupling table).

**Known gaps**:

1. **Chroma upsert fallback is brittle** — document the error-text pattern in `01-data-integrity.md` §Chroma; gate it behind a flag.
2. **Prompt-caching TTL assumption** — the cost smoke test must pass before the `04-read-path` knowledge-corpus phases ship.
3. **Windows process-group behavior** — `process.kill(-pgid)` is Unix-only; document Windows Job Objects as a gap-to-fix in `02-process-lifecycle.md`.
4. **`respawn` dep decision** — `02-process-lifecycle.md` must decide: adopt `respawn` or hand-roll a 3-attempt retry in the lazy-spawn wrapper.
5. **Snapshot tests for the renderer collapse** — `04-read-path.md` §Phase 1 must freeze byte-equality snapshots BEFORE deleting the four formatters; otherwise regressions are undetectable.

---

**Status: READY FOR PHASE 1.** Next action: open a fresh chat, load this file + `_reference.md` + `_mapping.md`, and execute Phase 1 to produce `00-principles.md`.