94d592f212
* docs: pathfinder refactor corpus + Node 20 preflight
Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 01 — data integrity
Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.
- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
UNIQUE(memory_session_id, content_hash) on observations; dedup
duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
and the 60-s stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/01-data-integrity.md
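
A minimal sketch of the dedup shape Phases 2 and 4 describe (illustrative table and column names mirror this message, not the actual migration SQL):

```ts
// Illustrative sketch of the UNIQUE + ON CONFLICT DO NOTHING dedup shape
// described above; not the real schema or migration.
import { Database } from "bun:sqlite";

const db = new Database(":memory:");
db.run(`
  CREATE TABLE observations (
    id INTEGER PRIMARY KEY,
    memory_session_id TEXT NOT NULL,
    content_hash TEXT NOT NULL,
    body TEXT NOT NULL,
    UNIQUE (memory_session_id, content_hash)
  )
`);

// The database, not application code (findDuplicateObservation), is now
// the dedup authority.
const insert = db.prepare(`
  INSERT INTO observations (memory_session_id, content_hash, body)
  VALUES (?, ?, ?)
  ON CONFLICT DO NOTHING
`);
insert.run("s1", "h1", "first write");
insert.run("s1", "h1", "replayed write"); // silently suppressed by the constraint
```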
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 02 — process lifecycle
OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).
- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
canonical registry at src/supervisor/process-registry.ts is the
sole survivor; SDK spawn site consolidated into it via new
createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
process.kill(-pgid, signal) on Unix when pgid is recorded;
Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
staleSessionReaperInterval setInterval (including the co-located
WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
WAL growth without an app-level timer), killIdleDaemonChildren,
killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
via generatorPromise.finally() already lives in worker-service
startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
lazy-spawn — consults isWorkerPortAlive (which gates
captureProcessStartToken for PID-reuse safety via commit
99060bac), then spawns detached with unref(), then
waitForWorkerPort({ attempts: 3, backoffMs: 250 }) hand-rolled
exponential backoff 250→500→1000ms. No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
only on external SIGTERM via supervisor signal handlers.
Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.
All 10 verification greps return 0. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md
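
A sketch of the Phase 8 hand-rolled backoff; only the attempt/backoff numbers and helper names come from this message, the internals are assumed:

```ts
// Assumed internals; the 250 -> 500 -> 1000ms cadence is from the message above.
declare function isWorkerPortAlive(): Promise<boolean>; // existing check, per commit 99060bac

async function waitForWorkerPort(
  { attempts, backoffMs }: { attempts: number; backoffMs: number },
): Promise<boolean> {
  for (let attempt = 0; attempt < attempts; attempt++) {
    if (await isWorkerPortAlive()) return true;
    // hand-rolled exponential backoff: 250, 500, 1000ms for attempts=3
    await new Promise((resolve) => setTimeout(resolve, backoffMs * 2 ** attempt));
  }
  return isWorkerPortAlive();
}
```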
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast
Phases 3, 5, 6 only. Plan-doc inaccuracies for phases 1/2/4/7/8/9
deferred for plan reconciliation:
- Phase 1/2: ObservationRow type doesn't exist; the four
"formatters" operate on three incompatible types.
- Phase 4: RECENCY_WINDOW_MS already imported from
SEARCH_CONSTANTS at every call site.
- Phase 7: getExistingChromaIds is NOT @deprecated and has an
active caller in ChromaSync.backfillMissingSyncs.
- Phase 8: estimateTokens already consolidated.
- Phase 9: knowledge-corpus rewrite blocked on PG-3
prompt-caching cost smoke test.
Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.
Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicit-
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).
Tests updated (Principle 7 — delete in same PR):
- search-orchestrator.test.ts: "fall back to SQLite" rewritten
as "throw ChromaUnavailableError (HTTP 503)".
- chroma/hybrid/sqlite-search-strategy tests: rewritten to
rejects.toThrow; removed fellBack assertions.
Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 03 — ingestion path
Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.
- Phase 0: Created src/services/worker/http/shared.ts exporting
ingestObservation/ingestPrompt/ingestSummary as direct
in-process functions plus ingestEventBus (Node EventEmitter,
reusing existing pattern — no third event bus introduced).
setIngestContext wires the SessionManager dependency from
worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
returning { valid:true; kind: 'observation'|'summary'; data }
| { valid:false; reason: string }. Inspects root element;
<skip_summary reason="…"/> is a first-class summary case
with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
branches on the discriminated union. On invalid → markFailed
+ logger.warn(reason). On observation → ingestObservation.
On summary → ingestSummary then emit summaryStoredEvent
{ sessionId, messageId } (consumed by Plan 05's blocking
/api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
(ResponseProcessor + SessionManager + worker-types) and
MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
replaced with fs.watch(transcriptsRoot, { recursive: true,
persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
Map deleted. tool_use rows insert with INSERT OR IGNORE on
UNIQUE(session_id, tool_use_id) (added by Plan 01). New
pairToolUsesByJoin query in PendingMessageStore for read-time
pairing (UNIQUE INDEX provides idempotency; explicit consumer
not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
direct ingestObservation call. maybeParseJson silent-passthrough
rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
class) deleted. The active extractLastMessage at
src/shared/transcript-parser.ts:41-144 is the sole survivor.
Tests updated (Principle 7 — same-PR delete):
- tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
to assert discriminated-union shape; coercion-specific
scenarios collapse into { valid:false } assertions.
- tests/worker/agents/response-processor.test.ts: circuit-breaker
describe block skipped; non-XML/empty-response tests assert
fail-fast markFailed behavior.
Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.
Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.
Plan: PATHFINDER-2026-04-22/03-ingestion-path.md
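
The Phase 1 discriminated union, sketched from the description above (payload fields are assumed; only the tag shape is from this message):

```ts
// Sketch of the Phase 1 parse result; exact payload fields are assumed.
type ParsedObservation = Record<string, unknown>; // placeholder payload shapes
type ParsedSummary = Record<string, unknown>;
declare function markFailed(reason: string): void;                  // Phase 2 failure path
declare function ingestObservation(data: ParsedObservation): void;  // Phase 0 helper
declare function ingestSummary(data: ParsedSummary): void;          // Phase 0 helper

type ParseResult =
  | { valid: true; kind: 'observation'; data: ParsedObservation }
  | { valid: true; kind: 'summary'; data: ParsedSummary & { skipped?: boolean } }
  | { valid: false; reason: string };

// Phase 2's single call site branches once on the tag: no undefined, no coercion.
function route(result: ParseResult): void {
  if (!result.valid) return markFailed(result.reason); // fail-fast path
  if (result.kind === 'observation') return ingestObservation(result.data);
  return ingestSummary(result.data); // includes the skip_summary case
}
```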
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 05 — hook surface
Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.
- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
1..20; do curl -sf .../health && break; sleep 0.1; done` shell
retry wrappers deleted. Hook commands invoke their bun entry
point directly.
- Phase 2: src/shared/worker-utils.ts — added
executeWithWorkerFallback<T>(url, method, body) returning
T | { continue: true; reason?: string }. All 8 hook handlers
(observation, session-init, context, file-context, file-edit,
summarize, session-complete, user-message) rewritten to use
it instead of duplicating the ensureWorkerRunning →
workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
using validateBody + sessionEndSchema (z.object({sessionId})).
One-shot ingestEventBus.on('summaryStoredEvent') listener,
30 s timer, req.aborted handler — all share one cleanup so
the listener cannot leak. summarize.ts polling loop, plus
MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
memoizes SettingsDefaultsManager.loadFromFile per process.
Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
check entry; isProjectExcluded no longer referenced from
src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
(all 6 adapters: claude-code, cursor, raw, gemini-cli,
windsurf). New AdapterRejectedInput error in
src/cli/adapters/errors.ts. Handler-level isValidCwd checks
deleted from file-edit.ts and observation.ts. hook-command.ts
catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
initAgent is idempotent. tests/hooks/context-reinjection-guard
test (validated the deleted conditional) deleted in same PR
per Principle 7.
- Phase 8: fail-loud counter at ~/.claude-mem/state/hook-failures.json.
Atomic write via .tmp + rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD
setting (default 3). On consecutive worker-unreachable ≥ N:
process.exit(2). On success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
wrapping ensureWorkerRunning. executeWithWorkerFallback calls
the memoized version.
Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.
Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.
Plan: PATHFINDER-2026-04-22/05-hook-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 06 — API surface
One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted. Failure-
marking consolidated to one helper.
- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
in src/services/worker/http/middleware/validateBody.ts —
safeParse → 400 { error: 'ValidationError', issues: [...] }
on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
route file. 24 POST endpoints across SessionRoutes,
CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
LogsRoutes, SettingsRoutes now wrap with validateBody().
/api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
along with every call site. Inline coercion helpers
(coerceStringArray, coercePositiveInteger) and inline
if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
from src/services/worker/http/middleware.ts. Worker binds
127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
via fs.readFileSync; served as Buffer with text/html content
type. SKILL.md + per-operation .md files cached in
Server.ts as Map<string, string>; loadInstructionContent
helper deleted. NO fs.watch, NO TTL — process restart is the
cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
— /api/pending-queue (GET), /api/pending-queue/process (POST),
/api/pending-queue/failed (DELETE), /api/pending-queue/all
(DELETE). Helper methods that ONLY served them
(getQueueMessages, getStuckCount, getRecentlyProcessed,
clearFailed, clearAll) deleted from PendingMessageStore.
KEPT: /api/processing-status (observability), /health
(used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
GracefulShutdown now calls getSupervisor().stop() directly.
Two functions retained with clear roles:
- performGracefulShutdown — worker-side 6-step shutdown
- runShutdownCascade — supervisor-side child teardown
(process.kill(-pgid), Windows tree-kill, PID-file cleanup)
Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
failure-marking path on PendingMessageStore. Old methods
markSessionMessagesFailed and markAllSessionMessagesAbandoned
deleted along with all callers (worker-service,
SessionCompletionHandler, tests/zombie-prevention).
Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.
Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.
Plan: PATHFINDER-2026-04-22/06-api-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 07 — dead code sweep
ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.
Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments
Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
builders, ParsedObservation, ParsedSummary, ParseResult,
SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
via dynamic await import('../../../context-generator.js') in
worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
— used via dynamic await import in npx-cli/install.ts +
uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
orphan-recovery caller in worker-service.ts plus
zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
in same file.
- All Database.ts barrel re-exports — used downstream.
Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
the methods are not thin wrappers but ~900 LoC of bodies, and
two methods are documented as intentional mirrors so the
context-generator.cjs bundle stays schema-consistent without
pulling MigrationRunner. Deserves its own plan, not a sweep.
Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.
Plan: PATHFINDER-2026-04-22/07-dead-code.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove residual ProcessRegistry comment reference
Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review (P1 + 2× P2)
P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
- Added optional timeoutMs to executeWithWorkerFallback,
forwarded to workerHttpRequest.
- summarize.ts call site now passes 35_000 (5 s above server
hold window).
P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
- ResponseProcessor now calls ingestSummary({ kind: 'parsed',
sessionDbId, messageId, contentSessionId, parsed }) so the
event-emission path is single-sourced.
- ingestSummary's requireContext() resolution moved inside the
'queue' branch (the only branch that needs sessionManager /
dbManager). 'parsed' is a pure event-bus emission and
doesn't need worker-internal context — fixes mocked
ResponseProcessor unit tests that don't call
setIngestContext.
P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
- Added a Symbol.for('claude-mem/worker-fallback') brand to
WorkerFallback. isWorkerFallback now checks the brand, not
a duck-typed property name.
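
A sketch of the brand check; the Symbol.for key is from this message, the object layout is assumed:

```ts
// Sketch; exact object layout is assumed, the Symbol.for key is from above.
const WORKER_FALLBACK_BRAND = Symbol.for('claude-mem/worker-fallback');

function makeWorkerFallback(reason?: string) {
  return { [WORKER_FALLBACK_BRAND]: true, continue: true as const, reason };
}

function isWorkerFallback(value: unknown): boolean {
  // Brand check: an API response that merely contains { continue: true }
  // no longer matches.
  return typeof value === 'object' && value !== null &&
    (value as Record<symbol, unknown>)[WORKER_FALLBACK_BRAND] === true;
}
```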
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 2 (P1 + P2)
P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.
- Gate ingestSummary call on (parsed.data.skipped ||
session.lastSummaryStored). Skipped summaries are an explicit
no-op bypass and still confirm; real summaries only confirm
when storage actually wrote a row.
- Non-skipped + summaryId === null path logs a warn and lets
the server-side timeout (504) surface to the hook instead of
a false ok:true.
P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 1). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.
- Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
log instead of the misleading ENQUEUED line. No behavior
change — the duplicate is still correctly suppressed by the
DB (Principle 3); only the log surface is corrected.
- confirmProcessed is never called with the enqueue() return
value (it operates on session.processingMessageIds[] from
claimNextMessage), so no caller is broken; the visibility
fix prevents future misuse.
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 3 (P1 + 2× P2)
- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
context after SessionRoutes is constructed. setIngestContext runs
before routes exist, so transcript-watcher observations queued via
ingestObservation() had no way to auto-start the SDK generator.
Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
/api/session/end calls register one listener each and clean up on
completion, so the default-10 listener warning would otherwise fire
spuriously under normal load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
ingestObservation() instead of duplicating skip-tool / meta /
privacy / queue logic. Single helper, matching the Plan 03 goal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)
- processor.handleToolResult: restore in-memory tool-use→tool-result
pairing via session.pendingTools for schemas (e.g. Codex) whose
tool_result events carry only tool_use_id + output. Without this,
neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
of throwing. Previously a single malformed JSON-shaped field caused
handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
for purely-glob inputs so the caller skips the watch instead of
anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
log on the returned id; the SessionManager branches on id === 0.
* fix: forward tool_use_id through ingestObservation (Greptile iter 5)
P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.
- shared.ingestObservation: forward payload.toolUseId to
queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
tool_use_id (HTTP convention) and toolUseId (JS convention) from
req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
validator doesn't rely on .passthrough() alone.
* fix: drop dead pairToolUsesByJoin, close session-end listener race
- PendingMessageStore: delete pairToolUsesByJoin. The method was never
called and its self-join semantics are structurally incompatible
with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
collapses any second row with the same pair, so a self-join can
only ever match a row to itself. In-memory pendingTools in
processor.ts remains the pairing path for split-event schemas.
- IngestEventBus: retain a short-lived (60s) recentStored map keyed
by sessionId. Populated on summaryStoredEvent emit, evicted on
consume or TTL.
- handleSessionEnd: drain the recent-events buffer before attaching
the listener. Closes the register-after-emit race where the summary
can persist between the hook's summarize POST and its session/end
POST — previously that window returned 504 after the 30s timeout.
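
A sketch of the buffer-then-drain shape; the 60s TTL and the drain-before-listen ordering are from this message, the class internals are assumed:

```ts
import { EventEmitter } from 'node:events';

// Internals assumed; the 60s TTL map and drain-before-listen order are from above.
type SummaryStoredEvent = { sessionId: string; messageId: number };

class IngestEventBus extends EventEmitter {
  private recentStored = new Map<string, SummaryStoredEvent>();

  emitSummaryStored(evt: SummaryStoredEvent): void {
    this.recentStored.set(evt.sessionId, evt);
    setTimeout(() => this.recentStored.delete(evt.sessionId), 60_000).unref();
    this.emit('summaryStoredEvent', evt);
  }

  takeRecentSummaryStored(sessionId: string): SummaryStoredEvent | undefined {
    const evt = this.recentStored.get(sessionId);
    if (evt) this.recentStored.delete(sessionId); // evicted on consume (or by TTL)
    return evt;
  }
}

// handleSessionEnd drains the buffer BEFORE attaching its listener, so a summary
// persisted between the summarize POST and the session/end POST is not missed.
```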
* chore: merge origin/main into vivacious-teeth
Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).
Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
summaryStoredEvent supersedes main's SessionCompletionHandler DI
refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
reason; generator .finally() Stop-hook self-clean is a guard for a
path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
#2084) while preserving our Zod validateBody schema.
Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings
1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
in wrapHandler — synchronous exceptions would hang the client rather
than surfacing as 500s. Wrap it like every other handler.
2) processor.handleToolResult only consumed the session.pendingTools
entry when the tool_result arrived without a toolName. In the
split-schema path where tool_result carries both toolName and toolId,
the entry was never deleted and the map grew for the life of the
session. Consume the entry whenever toolId is present.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: typing cleanup and viewer tsconfig split for PR feedback
- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings (iter 2)
- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
the unscoped-drain branch that would nuke every pending/processing
row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
cached event until TTL eviction so a retried Stop hook's second
/api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
already tailed (JSONL appends fire on every line; only unknown
paths warrant a rescan).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: call finalizeSession in terminal session paths (Greptile iter 3)
terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.
Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: GC failed pending_messages rows at startup (Greptile iter 4)
Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.
Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)
1. startSessionProcessor success branch now calls completionHandler.
finalizeSession before removeSessionImmediate. Hooks-disabled installs
(and any Stop hook that fails before POST /api/sessions/complete) no
longer leave sdk_sessions rows as status='active' forever. Idempotent
— a subsequent /api/sessions/complete is a no-op.
2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
closures that reference it (TDZ safety; safe at runtime today but
fragile if timeout ever shrinks).
3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
instead of constructing its own — prevents silent divergence if the
handler ever becomes stateful.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: stop runaway crash-recovery loop on dead sessions
Two distinct bugs were combining to keep a dead session restarting forever:
Bug 1 (uncaught "The operation was aborted."):
child_process.spawn emits 'error' asynchronously for ENOENT/EACCES/abort
signal aborts. spawnSdkProcess() never attached an 'error' listener, so
any async spawn failure became uncaughtException and escaped to the
daemon-level handler. Attach an 'error' listener immediately after spawn,
before the !child.pid early-return, so async spawn errors are logged
(with errno code) and swallowed locally.
Bug 2 (sliding-window limiter never trips on slow restart cadence):
RestartGuard tripped only when restartTimestamps.length exceeded
MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
session that fail-restart-fail-restart on 8s cycles would loop forever
(consecutiveRestarts climbing past 30+ in observed logs). Add a
consecutiveFailures counter that increments on every restart and resets
only on recordSuccess(). Trip when consecutive failures exceed
MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
processing in between proves the session is dead. Both guards now run in
parallel: tight loops still trip the windowed cap; slow loops trip the
consecutive-failure cap.
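
A sketch of the two guards running side by side; constants and trip conditions are from this message, the class shape is assumed:

```ts
// Class shape assumed; constants and trip conditions are from the message above.
const RESTART_WINDOW_MS = 60_000;
const MAX_WINDOWED_RESTARTS = 10;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuard {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  /** Returns true when either guard trips and the session must be abandoned. */
  recordRestart(now = Date.now()): boolean {
    this.restartTimestamps = this.restartTimestamps.filter((t) => now - t < RESTART_WINDOW_MS);
    this.restartTimestamps.push(now);
    this.consecutiveFailures += 1;
    // Tight loops trip the windowed cap; slow 8s-backoff loops trip the consecutive cap.
    return this.restartTimestamps.length > MAX_WINDOWED_RESTARTS
        || this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES;
  }

  recordSuccess(): void {
    this.consecutiveFailures = 0; // only real processing resets the consecutive cap
  }
}
```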
Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* perf: streamline worker startup and consolidate database connections
1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)
* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations
Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.
- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
when shouldTrackProject(cwd) is false, so the observer's own hooks
cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
on observations) inline so bundled artifacts (worker-service.cjs,
context-generator.cjs) stay schema-consistent — without it, the
ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
supervisor can actually feed the observer's stdin.
Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.
* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)
Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
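
A sketch of the walk-back; the continuation-byte test (0b10xxxxxx) is from this message, the helper name is assumed:

```ts
// Helper name assumed; the continuation-byte test is from the message above.
function truncateToUtf8Boundary(buf: Buffer, maxBytes: number): string {
  if (buf.length <= maxBytes) return buf.toString('utf8');
  let end = maxBytes;
  // Walk back over continuation bytes so the cut never lands mid-codepoint
  // and the decoder never substitutes U+FFFD.
  while (end > 0 && (buf[end] & 0b1100_0000) === 0b1000_0000) end--;
  return buf.subarray(0, end).toString('utf8');
}
```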
* fix: cross-platform observer-dir containment; clarify SDK stdin pipe
claude-review feedback on PR #2124.
- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
hard-coded a POSIX separator and missed Windows backslash paths plus any
trailing-slash variance. Switched to a path.relative-based isWithin()
helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
consumes that pipe; 'ignore' would null it and the null-check below
would tear the child down on every spawn.
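
A sketch of the path.relative-based containment check described above:

```ts
import path from 'node:path';

// isWithin per the message above: path.relative returns a '..'-prefixed (or, on
// Windows across drives, absolute) result only when child escapes parent, which
// sidesteps separator and trailing-slash variance entirely.
function isWithin(parent: string, child: string): boolean {
  const rel = path.relative(parent, child);
  return rel === '' || (!rel.startsWith('..') && !path.isAbsolute(rel));
}
```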
* fix: make Stop hook fire-and-forget; remove dead /api/session/end
The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed). Followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.
The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.
- summarize.ts: drop the /api/session/end long-poll and the trailing
/api/sessions/complete await; ~40 lines removed; unused
SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
route registration. Drop the now-unused ingestEventBus and
SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
comments that referenced the dead endpoint. The IngestEventBus is
left in place dormant (no listeners) for follow-up cleanup so this
PR stays focused on the blocker.
Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.
Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* deps: bump all dependencies to latest including majors
Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.
Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
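
A sketch of the handler-before-listen ordering; the wiring details are assumed:

```ts
import http from 'node:http';

// Wiring assumed; the point is that both handlers exist before listen() is
// invoked, so an immediate EADDRINUSE rejects the promise instead of escaping
// as an unhandled 'error' event under Express 5.
function listenSafely(app: http.RequestListener, port: number): Promise<http.Server> {
  return new Promise((resolve, reject) => {
    const server = http.createServer(app);
    server.once('error', reject);                    // port conflict lands here
    server.once('listening', () => resolve(server)); // success path
    server.listen(port, '127.0.0.1');
  });
}
```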
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: surface real chroma errors and add deep status probe
Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.
Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.
Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: rebuild worker-service bundle to match merged src
Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: address coderabbit feedback on PLAN-fix-mcp-search.md
- replace machine-specific /Users/alexnewman absolute paths with portable
<repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# 05 — Hook Surface

## Purpose

Consolidate worker HTTP plumbing across the eight hook handlers, cache settings once per hook process, delete the 20-iteration `curl` retry loops in `plugin/hooks/hooks.json`, delete the 120-second client-side polling loop in `src/cli/handlers/summarize.ts`, and escalate to exit code 2 after N consecutive `ensureWorkerRunning()` failures so the worker's death surfaces to Claude instead of being silently absorbed. The cure is nine moves: delete the shell retry loops; introduce one `executeWithWorkerFallback` helper with eight callers; replace the polling loop with a server-side blocking `/api/session/end` endpoint that awaits the `summaryStoredEvent` emitted by `03-ingestion-path.md` Phase 2; cache settings at module scope; collapse three duplicated exclusion checks into one `shouldTrackProject(cwd)` helper; move cwd validation to the adapter boundary so it runs once; delete the always-init conditional on the agent (init is idempotent); track consecutive failures in a state file and exit 2 after N; and consolidate the alive-heuristic cache into one `ensureWorkerAliveOnce()` call site.

---

## Principles invoked

This plan is measured against `00-principles.md`:

- **Principle 2 — Fail-fast over grace-degrade.** Consecutive hook failures do not degrade silently into "exit 0 and hope next time works." After N consecutive `ensureWorkerRunning == false` results, the hook exits code 2 so Claude Code's hook contract surfaces the problem. No retry inside the hook. No timeout-and-exit-0 papering.
- **Principle 4 — Event-driven over polling.** The 120-second client-side polling loop in `src/cli/handlers/summarize.ts:117-150` is replaced by a single POST to `/api/session/end` that the server holds open until the `summaryStoredEvent` (emitted by `03-ingestion-path.md` Phase 2) fires. One request, one response, no polling on either side.
- **Principle 6 — One helper, N callers.** The eight-handler copy of `ensureWorkerRunning → workerHttpRequest → if (!ok) return { continue: true }` collapses to one exported `executeWithWorkerFallback(url, method, body)`. Three duplicated `isProjectExcluded(cwd, …)` call sites collapse to one `shouldTrackProject(cwd)`. Four per-handler `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` calls collapse to one module-scope `loadFromFileOnce()`.

**Cross-references**:

- `03-ingestion-path.md` Phase 2 emits `summaryStoredEvent` with payload `{ sessionId: string; messageId: number }`. Phase 3 of this plan consumes that event inside the Express handler for `/api/session/end`. The emitter lives inside the worker (`src/services/worker/agents/ResponseProcessor.ts` after its rewrite); the consumer lives inside the HTTP route. Event-bus implementation is left to the implementer per `03-ingestion-path.md` §Known gaps #3.
- `02-process-lifecycle.md` Phase 8 defines the lazy-spawn wrapper (`ensureWorkerRunning` in `src/shared/worker-utils.ts:221-239`) that this plan's `executeWithWorkerFallback` calls as its first step. If the worker is not alive, lazy-spawn attempts to start it; if the port check still fails afterwards, the helper returns `{ continue: true }` and this plan's Phase 8 fail-loud counter increments. The two plans do not duplicate spawn logic — lazy-spawn is defined in 02, consumed here.
- `06-api-surface.md` defines the Zod `validateBody` middleware (Phase 2 of that plan). The blocking `/api/session/end` endpoint introduced in Phase 3 below uses the same middleware to validate its POST body before entering the event-wait loop; no hand-rolled validation lives in the hook-surface plumbing.

---

## Phase 1 — Delete shell retry loops

**Purpose**: Remove the 20-iteration `curl` retry loops wrapping three hook entries in `plugin/hooks/hooks.json`. Shell-level retry is a bash expression of the same anti-pattern principle 2 forbids at the TypeScript layer. `ensureWorkerRunning()` (`02-process-lifecycle.md` Phase 8) is the one check; it either succeeds or the fail-loud counter (Phase 8 below) escalates. A shell loop papers over that signal.

**Anchors** (`_reference.md` Part 1 §Hooks/CLI):

- `plugin/hooks/hooks.json:27` — `for i in 1 2 3 4 5 6 7 …` curl retry wrapper
- `plugin/hooks/hooks.json:32` — same pattern, second hook entry
- `plugin/hooks/hooks.json:43` — same pattern, third hook entry

**Before** (conceptual):

```jsonc
// plugin/hooks/hooks.json:27 (current)
"command": "for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20; do curl -sf http://localhost:37777/health && break; sleep 0.1; done && bun .../observation-hook.js"
```

**After**:

```jsonc
// plugin/hooks/hooks.json:27 (after this phase)
"command": "bun .../observation-hook.js"
```

The handler invokes `executeWithWorkerFallback` (Phase 2) on entry; that helper calls `ensureWorkerRunning()` (`02-process-lifecycle.md` Phase 8) which performs a single port check plus one lazy-spawn attempt. No shell loop.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `plugin/hooks/hooks.json:27, 32, 43` (target call sites).

---

## Phase 2 — `executeWithWorkerFallback(url, method, body)` helper

**Purpose**: Consolidate the eight hook handlers' copy of `ensureWorkerRunning → workerHttpRequest → if (!ok) return { continue: true }` into one exported helper. The helper is added to `src/shared/worker-utils.ts` alongside `ensureWorkerRunning`; every handler imports and calls it instead of reproducing the sequence.

**Anchors**:

- `src/shared/worker-utils.ts:221-239` — `ensureWorkerRunning` (existing, consumed by the new helper)
- `src/cli/handlers/observation.ts:17` — one of eight call sites that reproduces the sequence
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/observation.ts:17, 53-54, 58-61` (current duplicated pattern)

**Contract** (required signature, see "`executeWithWorkerFallback` signature" section below for the canonical block).

**Behavior**:

1. Call `ensureWorkerRunning()`. If it returns `false`, increment the fail-loud counter (Phase 8) and return `{ continue: true, reason: 'worker_unreachable' }`.
2. If `true`, call `workerHttpRequest(url, method, body)` and return its parsed response typed as `T`.
3. Reset the fail-loud counter on the first success.

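A minimal sketch consistent with the behavior above; the canonical block lives in the "`executeWithWorkerFallback` signature" section, and the two counter helpers are Phase 8 stand-ins rather than confirmed names:

```ts
// Sketch only; the canonical signature is in the signature section of this plan.
declare function ensureWorkerRunning(): Promise<boolean>; // 02-process-lifecycle.md Phase 8
declare function workerHttpRequest(url: string, method: string, body?: unknown): Promise<unknown>;
declare function recordFailureAndMaybeEscalate(): void;   // Phase 8 stand-in name
declare function resetFailureCounter(): void;             // Phase 8 stand-in name

export type WorkerFallback = { continue: true; reason?: string };

export async function executeWithWorkerFallback<T>(
  url: string,
  method: 'GET' | 'POST',
  body?: unknown,
): Promise<T | WorkerFallback> {
  const alive = await ensureWorkerRunning(); // single check + one lazy-spawn attempt
  if (!alive) {
    recordFailureAndMaybeEscalate(); // Phase 8: exits 2 on the Nth consecutive failure
    return { continue: true, reason: 'worker_unreachable' };
  }
  const response = await workerHttpRequest(url, method, body);
  resetFailureCounter(); // Phase 8: any success resets the counter to 0
  return response as T;
}
```
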
**Callers after this plan lands** (all eight):

- `src/cli/handlers/observation.ts`
- `src/cli/handlers/session-init.ts`
- `src/cli/handlers/context.ts`
- `src/cli/handlers/file-context.ts`
- `src/cli/handlers/file-edit.ts`
- `src/cli/handlers/summarize.ts`
- (two additional handlers in `src/cli/handlers/` that reproduce the pattern — see `_reference.md` Part 1 §Hooks/CLI for anchors)

**By principle 6 (one helper, N callers)**: the request/fallback sequence has one implementation; eight handlers import it. No handler reimplements the "worker missing → exit gracefully" path.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/shared/worker-utils.ts:221-239` and `src/cli/handlers/observation.ts:17`. Cross-reference: `02-process-lifecycle.md` Phase 8 for the `ensureWorkerRunning` contract this helper depends on.

---

## Phase 3 — Blocking `/api/session/end` endpoint

**Purpose**: Replace the client-side 120-second polling loop in `src/cli/handlers/summarize.ts:117-150` with a single POST to `/api/session/end` that the server holds open until the summary-stored event fires. By principle 4 (event-driven over polling), the server already knows when the summary is persisted — it just emitted `summaryStoredEvent` in `03-ingestion-path.md` Phase 2 — so there is no reason for the hook to walk back in and ask repeatedly.

**Anchors**:

- `src/cli/handlers/summarize.ts:117-150` — 120-second polling loop (1 s tick, `MAX_WAIT_FOR_SUMMARY_MS`, `POLL_INTERVAL_MS`) — DELETE
- `03-ingestion-path.md` Phase 2 — emits `summaryStoredEvent` with payload `{ sessionId: string; messageId: number }`
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/summarize.ts:117-150` (current polling target)

**Server-side pattern** (Express-level; event bus + per-request timeout + single response):

```ts
// Express route registered in src/services/worker/http/routes/SessionRoutes.ts
// after 06-api-surface.md Phase 2 validateBody middleware runs.
router.post('/api/session/end', validateBody(sessionEndSchema), (req, res) => {
  const { sessionId } = req.body;

  // declared before the closures that reference it (TDZ safety)
  const cleanup = () => {
    clearTimeout(timer);
    eventBus.off('summaryStoredEvent', onStored);
  };

  // one-shot listener; cleared on either fulfillment or timeout
  const onStored = (evt: SummaryStoredEvent) => {
    if (evt.sessionId !== sessionId) return;
    cleanup();
    res.status(200).json({ ok: true, messageId: evt.messageId });
  };

  const timer = setTimeout(() => {
    cleanup();
    res.status(504).json({ ok: false, reason: 'summary_not_stored_in_time' });
  }, SERVER_SIDE_SUMMARY_TIMEOUT_MS);

  eventBus.on('summaryStoredEvent', onStored);

  // request aborted by client (hook process died): drop the listener immediately
  req.on('close', cleanup);
});
```

Per-hook call site:

```ts
// src/cli/handlers/summarize.ts (after this phase)
const result = await executeWithWorkerFallback<SessionEndResponse>(
  '/api/session/end', 'POST', { sessionId },
);
// one POST, one response. No loop.
```

**Delete in the same PR**:

- `src/cli/handlers/summarize.ts:117-150` — polling loop body
- `MAX_WAIT_FOR_SUMMARY_MS` constant
- `POLL_INTERVAL_MS` constant
- Any helper that existed only to drive the loop (`pollUntilSummary`, `waitForSummarySync`, …)

**Cross-reference (load-bearing)**: `03-ingestion-path.md` Phase 2 is the emitter side of the contract. Its `summaryStoredEvent` payload `{ sessionId: string; messageId: number }` is consumed verbatim here. If Phase 2 changes the event name or shape, this phase's route handler changes with it. The event bus implementation (`EventEmitter` vs dedicated `src/services/infrastructure/eventBus.ts`) is per `03-ingestion-path.md` §Known gaps #3.

**Cross-reference (validation)**: `06-api-surface.md` Phase 2 defines `validateBody`. The `sessionEndSchema` Zod schema is declared at the top of `SessionRoutes.ts` per `06-api-surface.md` Phase 3.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/summarize.ts:117-150`; `_reference.md` Part 2 row 7 (hook exit-code contract — a 504 returned to the hook flows through `executeWithWorkerFallback` and triggers the fail-loud counter like any other failure).

---

## Phase 4 — Cache settings once per hook process

**Purpose**: Each hook process is short-lived and reads `USER_SETTINGS_PATH` independently. Four handlers currently call `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` on every handler entry; since settings do not mutate during a single hook execution, module-scope caching eliminates three redundant disk reads per invocation across the eight handlers.

**Anchors**:

- `src/cli/handlers/context.ts:36` — per-handler `loadFromFile` call
- `src/cli/handlers/session-init.ts:57` — same
- `src/cli/handlers/observation.ts:58` — same
- `src/cli/handlers/file-context.ts:211` — same
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/session-init.ts:57-60` and `src/cli/handlers/observation.ts:17, 53-54, 58-61`
- `_reference.md` Part 3 row "Settings schema" — `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` pattern

**After**: a module-scope `loadFromFileOnce()` in (e.g.) `src/shared/hook-settings.ts` that memoizes the `SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH)` result for the lifetime of the process. Every handler imports `loadFromFileOnce` instead of calling `loadFromFile` directly.

```ts
// src/shared/hook-settings.ts (after this phase)
let cachedSettings: Settings | null = null;

export function loadFromFileOnce(): Settings {
  if (cachedSettings !== null) return cachedSettings;
  cachedSettings = SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH);
  return cachedSettings;
}
```

**Delete in the same PR**: the per-handler `loadFromFile` calls at `context.ts:36`, `session-init.ts:57`, `observation.ts:58`, `file-context.ts:211`. After this phase, the only `SettingsDefaultsManager.loadFromFile` call in `src/cli/handlers/` is inside `loadFromFileOnce` (verification grep below).

**Reference**: `_reference.md` Part 1 §Hooks/CLI (call sites); Part 3 row "Settings schema" (current pattern).

---

## Phase 5 — `shouldTrackProject(cwd)` helper

**Purpose**: Three handlers duplicate the pattern `isProjectExcluded(cwd, settings.CLAUDE_MEM_EXCLUDED_PROJECTS)` — each one reloads settings (fixed by Phase 4) and calls the same exclusion check. Consolidate to one `shouldTrackProject(cwd)` helper that is the single answer to "does this hook run for this cwd?"

**Anchors**:

- `src/cli/handlers/observation.ts:58-61` — exclusion check call site
- `src/cli/handlers/context.ts` — exclusion check call site
- `src/cli/handlers/file-context.ts:211` region — exclusion check call site
- `src/utils/project-name.ts` — `getProjectContext(cwd)` returning `{ primary, allProjects, excluded }` per `_reference.md` Part 3 row "Project scoping"

**After**:

```ts
// src/shared/should-track-project.ts (after this phase)
export function shouldTrackProject(cwd: string): boolean {
  const settings = loadFromFileOnce(); // Phase 4
  return !isProjectExcluded(cwd, settings.CLAUDE_MEM_EXCLUDED_PROJECTS);
}
```

**Callers**: every handler that currently reads `CLAUDE_MEM_EXCLUDED_PROJECTS` imports and calls `shouldTrackProject(cwd)` at the top of its handler body. No handler references the setting key directly after this phase.

**By principle 6 (one helper, N callers)**: three exclusion-check sites → one helper. The verification grep below asserts that `isProjectExcluded` is referenced exactly once (inside `shouldTrackProject`) and not at all from `src/cli/handlers/`; every caller routes through the helper.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/observation.ts:58-61`; Part 3 row "Project scoping".

---

## Phase 6 — cwd validation at adapter boundary

**Purpose**: cwd validation currently runs twice on some paths — once after the adapter normalizes input and once inside the handler. Move validation into the adapter's `normalizeInput()` function so it runs exactly once, at the boundary.

**Anchors**:

- `src/cli/handlers/file-edit.ts:50-51` — cwd validation after adapter normalization (DELETE; move to adapter)
- `src/cli/handlers/observation.ts:53-54` — same pattern (DELETE; move to adapter)
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/observation.ts:17, 53-54, 58-61`

**Before**:

```ts
// src/cli/handlers/observation.ts:53-54 (current)
const payload = adapter.normalizeInput(raw);
if (!isValidCwd(payload.cwd)) return { continue: true }; // handler-level check
```

**After**:

```ts
// adapter body (conceptual)
normalizeInput(raw) {
  const payload = this.parse(raw);
  if (!isValidCwd(payload.cwd)) throw new AdapterRejectedInput('invalid_cwd');
  return payload;
}

// handler body — no cwd check remains
const payload = adapter.normalizeInput(raw);
```

**Delete in the same PR**: the two handler-level `isValidCwd` checks at `file-edit.ts:50-51` and `observation.ts:53-54`.

**Reference**: `_reference.md` Part 1 §Hooks/CLI anchors above.

---

## Phase 7 — Always-init agent

**Purpose**: `src/cli/handlers/session-init.ts:120-129` wraps agent initialization in `if (!initResult.contextInjected)`. The conditional exists to avoid re-initializing the agent when context was already injected; but agent init is idempotent (second call is a no-op), so the conditional adds branching without reducing work. Delete it.

**Anchors**:

- `src/cli/handlers/session-init.ts:120-129` — conditional guard around agent init
- `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/session-init.ts:57-60, 120-129`

**Before**:

```ts
// src/cli/handlers/session-init.ts:120-129 (current)
if (!initResult.contextInjected) {
  await initAgent(…);
}
```

**After**:

```ts
// src/cli/handlers/session-init.ts (after this phase)
await initAgent(…); // idempotent; safe to always call
```

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/cli/handlers/session-init.ts:120-129`.

---
|
||
|
||
## Phase 8 — Fail-loud after N consecutive failures

**Purpose**: Escalate silent failure to a surfaced failure. When `ensureWorkerRunning()` returns `false`, the hook still exits `0` the first time, to avoid breaking the user's Claude Code session; but the helper increments a counter in a state file, and after N (default 3) consecutive failures the hook exits code 2. Per `_reference.md` Part 2 row 7, exit code 2 is a **blocking error** that Claude Code feeds back to Claude — it is the correct surface for "the worker has been unreachable 3 times in a row; something is actually broken."

**This counter is NOT a retry.** A retry would reinvoke the failed operation inside the hook to try again; this plan forbids that (see Anti-pattern guards below). The counter records how many consecutive hook invocations have seen the worker unreachable and escalates only the Nth invocation to exit 2 — the first N−1 invocations still return the graceful-degradation response. Retry loops drive work forward within one invocation; the fail-loud counter surfaces a persistent outage across invocations. They are disjoint mechanisms.

**Anchors**:
- `src/shared/worker-utils.ts:221-239` — `ensureWorkerRunning` (the call whose `false` return increments the counter)
- `_reference.md` Part 2 row 7 — Claude Code hook exit codes (0 success, 1 non-blocking, 2 blocking)
- `CLAUDE.md` §Exit Code Strategy — claude-mem's philosophy that worker-unreachable alone exits 0 to prevent Windows Terminal tab accumulation, overridden here by the N-th consecutive failure escalating to 2

**Counter location**: the existing claude-mem state directory (the same directory that already holds other per-process state under `~/.claude-mem/`). Place the counter at `~/.claude-mem/state/hook-failures.json`. **Do NOT create a new top-level directory**; use the state directory that already exists. If the implementer discovers at landing time that the state directory does not yet exist, the existing state-directory creation path creates it; this plan does not introduce a new creation path.

**File shape**:

```json
{ "consecutiveFailures": 2, "lastFailureAt": 1713830400000 }
```

**Atomic write**: write to `~/.claude-mem/state/hook-failures.json.tmp`, then `rename` over the destination. POSIX rename is atomic within a filesystem, so there is no partial-write window. No `fs.watch` or lock is needed: each hook invocation reads then writes as a short sequence, and a race between two simultaneous hooks at most over- or under-counts by one — acceptable given the threshold is 3.
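
A sketch of the mechanics, assuming hypothetical helper names `readFailureState` and `writeFailureState` built on Node's `fs` primitives:

```ts
import { readFileSync, writeFileSync, renameSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';

const STATE_FILE = join(homedir(), '.claude-mem', 'state', 'hook-failures.json');

interface FailureState { consecutiveFailures: number; lastFailureAt: number }

function readFailureState(): FailureState {
  try {
    return JSON.parse(readFileSync(STATE_FILE, 'utf8'));
  } catch {
    return { consecutiveFailures: 0, lastFailureAt: 0 }; // missing or corrupt file counts as zero
  }
}

function writeFailureState(state: FailureState): void {
  const tmp = STATE_FILE + '.tmp';
  writeFileSync(tmp, JSON.stringify(state)); // full payload to a sibling temp file
  renameSync(tmp, STATE_FILE);               // atomic within one filesystem: readers see old or new, never partial
}
```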

**Behavior (in `executeWithWorkerFallback`)**:

1. `ensureWorkerRunning()` returns `true` → reset the counter to 0 (atomic write), proceed with the request.
2. `ensureWorkerRunning()` returns `false` → read the counter, increment by 1, atomic write:
   - If the new value < N → exit the hook with code 0 and return `{ continue: true, reason: 'worker_unreachable' }` to the caller.
   - If the new value ≥ N → exit the hook with code **2** so Claude Code surfaces the outage. stderr: "claude-mem worker unreachable for <N> consecutive hooks."
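
Putting the two branches together: a sketch building on the helpers above (the threshold lookup is simplified to its default of 3):

```ts
// Assumes readFailureState/writeFailureState from the sketch above.
function recordWorkerResult(alive: boolean): void {
  if (alive) {
    // Success-path acknowledgment that the outage ended; not a retry.
    writeFailureState({ consecutiveFailures: 0, lastFailureAt: 0 });
    return;
  }
  const prev = readFailureState();
  const next = { consecutiveFailures: prev.consecutiveFailures + 1, lastFailureAt: Date.now() };
  writeFailureState(next);
  const threshold = 3; // CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD, default 3
  if (next.consecutiveFailures >= threshold) {
    console.error(`claude-mem worker unreachable for ${next.consecutiveFailures} consecutive hooks.`);
    process.exit(2); // blocking error: Claude Code feeds this back to Claude
  }
  // Below threshold: the caller returns { continue: true, reason: 'worker_unreachable' } and exits 0.
}
```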

**N (threshold)**: default 3. Settings key `CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD` (integer, optional; defaults to 3 if absent).

**Distinguishing from a retry**: the helper does NOT call `ensureWorkerRunning()` twice, does NOT sleep-and-retry the HTTP request, and does NOT attempt the operation a second time inside the same hook. It runs the primary path once, records the result in the counter, and either returns or escalates. A retry reinvokes work; the counter records work. If an implementer is tempted to add a "just try once more before incrementing" line, refer to the Anti-pattern guards section and stop.

**Reset**: any successful `ensureWorkerRunning()` resets the counter to 0 in the same atomic write. This is not a retry either — it is a success-path acknowledgment that the outage ended.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/shared/worker-utils.ts:221-239`; `_reference.md` Part 2 row 7 (exit-code contract); `CLAUDE.md` §Exit Code Strategy.

---

## Phase 9 — Delete alive-heuristic duplication (cache once)

**Purpose**: multiple handlers re-derive "is the worker alive?" heuristics (port check, recent-success flag, …) on each invocation. Collapse them into one `ensureWorkerAliveOnce()` with module-scope caching, consumed by `executeWithWorkerFallback` from Phase 2.

**Anchors**:
- `src/shared/worker-utils.ts:221-239` — `ensureWorkerRunning` (the underlying port check; `ensureWorkerAliveOnce` wraps it with one per-process memoization)
- handlers that duplicate alive-heuristic checks — covered by the grep "SettingsDefaultsManager.loadFromFile" (Phase 4) and "isProjectExcluded" (Phase 5) verifications plus this phase's consolidation

**After**:

```ts
// src/shared/worker-utils.ts (after this phase)
let alivePromise: Promise<boolean> | null = null;

export async function ensureWorkerAliveOnce(): Promise<boolean> {
  // Memoize the promise rather than the boolean so concurrent callers
  // share a single in-flight probe instead of racing past a null cache.
  if (alivePromise === null) alivePromise = ensureWorkerRunning();
  return alivePromise;
}
```

`executeWithWorkerFallback` (Phase 2) calls `ensureWorkerAliveOnce()` instead of `ensureWorkerRunning()`. Within a single hook process, the first call hits the network; subsequent calls return the memoized value. This matters because a single hook invocation may issue multiple requests (e.g., session-init issues several), and the alive-state cannot change mid-invocation without the process exiting.

**By principle 6 (one helper, N callers)**: the memoization lives in one place; eight handlers call the memoized wrapper transparently.

**Reference**: `_reference.md` Part 1 §Hooks/CLI `src/shared/worker-utils.ts:221-239`.

---

## `executeWithWorkerFallback` signature (verbatim contract)

Phase 2 establishes the single helper consumed by all eight handlers. The discriminated return type makes the degrade-gracefully branch an explicit caller concern rather than an ad-hoc `{ continue: true }` literal scattered across handlers.

```ts
type WorkerFallback = { continue: true } | { continue: true; reason: string };

async function executeWithWorkerFallback<T>(
  url: string,
  method: 'GET' | 'POST' | 'PUT' | 'DELETE',
  body?: unknown,
): Promise<T | WorkerFallback>;
```
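
A hedged caller sketch; the endpoint path, the response type, and the `'continue' in result` narrowing are illustrative rather than fixed by this contract:

```ts
interface ObservationResponse { id: string } // hypothetical worker response shape

async function handleObservation(cwd: string, toolName: string) {
  const result = await executeWithWorkerFallback<ObservationResponse>(
    'http://127.0.0.1:37777/api/observation', // port per the integration tests below
    'POST',
    { cwd, toolName },
  );
  if ('continue' in result) {
    return result; // degrade-gracefully branch: worker unreachable, hook exits 0
  }
  return result.id; // success branch: result is the typed worker response
}
```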

---

## Fail-loud counter location callout

The fail-loud counter (Phase 8) lives at `~/.claude-mem/state/hook-failures.json` — inside the **existing** state directory under `~/.claude-mem/`. This plan does not create a new directory; it writes to the directory that already holds claude-mem's per-process state. Atomic write via the temp-file + rename pattern (`write hook-failures.json.tmp → rename hook-failures.json.tmp hook-failures.json`). POSIX rename within one filesystem is atomic; no partial-file window.

Reminder: this counter is **not** a retry. See Phase 8's "Distinguishing from a retry" subsection and the Anti-pattern guards below.

---

## Verification grep targets

Each command must return the indicated count after this plan lands.

```
grep -rn "for i in 1 2 3 4 5 6 7" plugin/hooks/hooks.json → 0
grep -rn "SettingsDefaultsManager.loadFromFile" src/cli/handlers/ → 1  # cached location only (loadFromFileOnce)
grep -rn "isProjectExcluded" src/cli/handlers/ → 0  # only call site is inside shouldTrackProject in src/shared/
grep -rn "MAX_WAIT_FOR_SUMMARY_MS\|POLL_INTERVAL_MS" src/cli/handlers/ → 0
```

**Integration test 1** (fail-loud counter): block the worker port (e.g., kill the worker, or add an `iptables`/`pfctl` reject rule on 37777). Invoke any hook; assert it exits **0** and writes `{ "consecutiveFailures": 1 }` to `~/.claude-mem/state/hook-failures.json`. Invoke again; assert exit 0 and a counter of 2. Invoke a third time; assert exit **2** with stderr naming the outage. Unblock the port and invoke once more; assert exit 0 and the counter reset to 0.
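
The same sequence as a harness sketch; the hook invocation command and its stdin payload are hypothetical stand-ins for whichever hook the harness drives:

```ts
import { spawnSync } from 'node:child_process';
import { readFileSync } from 'node:fs';
import { join } from 'node:path';
import { homedir } from 'node:os';
import { strict as assert } from 'node:assert';

const STATE_FILE = join(homedir(), '.claude-mem', 'state', 'hook-failures.json');

// Hypothetical invocation; substitute the real hook entry point.
function invokeHook(): number {
  const r = spawnSync('claude-mem', ['hook', 'observation'], { input: '{}' });
  return r.status ?? -1;
}

function counter(): number {
  return JSON.parse(readFileSync(STATE_FILE, 'utf8')).consecutiveFailures;
}

// With the worker port blocked:
assert.equal(invokeHook(), 0); assert.equal(counter(), 1);
assert.equal(invokeHook(), 0); assert.equal(counter(), 2);
assert.equal(invokeHook(), 2); // third consecutive failure escalates
// After unblocking the port:
assert.equal(invokeHook(), 0); assert.equal(counter(), 0);
```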

**Integration test 2** (session end blocks without polling): start a session-end hook while a session is in flight. Assert that a single POST to `/api/session/end` is issued from the hook (tcpdump/strace count, or an application-level log asserting request count == 1). The request hangs until the worker stores the summary (triggering `summaryStoredEvent`), then returns 200 in one response. No tick-loop, no repeated requests.

**Six verification targets total**: four greps + two integration tests.

---

## Anti-pattern guards

Reproduced verbatim from `_rewrite-plan.md` §4A:
- Do NOT add a retry loop inside the hook (any kind).
- Do NOT add a timeout-and-exit-0 pattern.
- Do NOT keep the shell retry loops behind a feature flag.

Additional hard rules enforced by this plan:
- Do NOT add polling anywhere in the hook. The session-end summary wait is server-side, single POST, single response.
- Do NOT add a shell-level retry loop in `plugin/hooks/hooks.json`. Phase 1 deletes the existing ones; none may be reintroduced.
- Do NOT treat the fail-loud counter as a retry. It does not reinvoke work; it records work. If tempted to add "one more attempt before incrementing," see Phase 8's distinguishing subsection and stop.
- Do NOT migrate the fail-loud counter to a new directory. It lives at `~/.claude-mem/state/hook-failures.json` inside the existing state directory.
- Do NOT introduce a second `ensureWorkerRunning`-like helper; consumers go through `executeWithWorkerFallback` (Phase 2) or `ensureWorkerAliveOnce` (Phase 9). Both wrap the single primitive from `02-process-lifecycle.md` Phase 8.

---

## Known gaps / deferrals

1. **Event-bus choice.** Phase 3's `/api/session/end` endpoint listens for `summaryStoredEvent` from `03-ingestion-path.md` Phase 2. The event-bus implementation (`node:events` `EventEmitter` vs a dedicated `src/services/infrastructure/eventBus.ts` module) is left to the implementer per `03-ingestion-path.md` §Known gaps #3. This plan specifies only the consumer contract, sketched after this list.
2. **Server-side timeout default.** `SERVER_SIDE_SUMMARY_TIMEOUT_MS` for the blocking endpoint is not fixed by this plan; the implementer picks a value bounded by the SDK's worst-case summary latency. A 30-s default is a reasonable starting point; revisit once Phase 2 (ingestion) is in place and we have a measured latency distribution.
3. **Windows counter path.** `~/.claude-mem/state/hook-failures.json` resolves via the existing `~/.claude-mem/` base path logic. On Windows under WSL the path is Unix-shaped; native-Windows behavior inherits the platform caveat from `02-process-lifecycle.md` §Platform caveat — Windows.
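
A minimal sketch of that consumer contract, assuming `node:events` and a per-session event name; both are implementer choices, not fixed by this plan:

```ts
import { EventEmitter, once } from 'node:events';

// Hypothetical bus module; 03-ingestion-path.md leaves the implementation open.
export const bus = new EventEmitter();

const SERVER_SIDE_SUMMARY_TIMEOUT_MS = 30_000; // starting point per gap #2 above

// POST /api/session/end handler body: one request, one response, no polling.
export async function waitForSummaryStored(sessionId: string): Promise<boolean> {
  const ac = new AbortController();
  const timer = setTimeout(() => ac.abort(), SERVER_SIDE_SUMMARY_TIMEOUT_MS);
  try {
    // Resolves when the ingestion path emits summaryStoredEvent for this session.
    await once(bus, `summaryStored:${sessionId}`, { signal: ac.signal });
    return true; // respond 200: summary stored
  } catch {
    return false; // timed out: respond without the summary rather than hang the hook
  } finally {
    clearTimeout(timer);
  }
}
```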