perf: streamline worker startup and consolidate database connections (#2122)
* docs: pathfinder refactor corpus + Node 20 preflight
Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 01 — data integrity
Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.
- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
UNIQUE(memory_session_id, content_hash) on observations; dedup
duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
and the 60-s stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
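The Phase 3 self-healing claim can be sketched in pure data terms. This is an illustrative reduction, not the actual code: the real claim is a single SQL query, and the row shape, function signature, and field names below are assumptions.

```typescript
interface PendingRow {
  id: number;
  status: "pending" | "processing";
  workerPid: number | null;
}

// A row is claimable when it is pending, or when it is "processing" but the
// worker PID that claimed it is no longer alive — no stale-timestamp
// threshold needed, which is why STALE_PROCESSING_THRESHOLD_MS could go.
function claimNextMessage(
  rows: PendingRow[],
  livePids: Set<number>,
  myPid: number,
): PendingRow | undefined {
  const row = rows.find(
    (r) =>
      r.status === "pending" ||
      (r.status === "processing" &&
        r.workerPid !== null &&
        !livePids.has(r.workerPid)),
  );
  if (row) {
    row.status = "processing";
    row.workerPid = myPid;
  }
  return row;
}
```

A row orphaned by a dead worker is reclaimed on the next claim attempt, which is what makes the separate 60-second stale-reset block redundant.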
Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/01-data-integrity.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 02 — process lifecycle
OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).
- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
canonical registry at src/supervisor/process-registry.ts is the
sole survivor; SDK spawn site consolidated into it via new
createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
process.kill(-pgid, signal) on Unix when pgid is recorded;
Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
staleSessionReaperInterval setInterval (including the co-located
WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
WAL growth without an app-level timer), killIdleDaemonChildren,
killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
via generatorPromise.finally() already lives in worker-service
startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
lazy-spawn — consults isWorkerPortAlive (which gates
captureProcessStartToken for PID-reuse safety via commit
99060bac), then spawns detached with unref(), then
waitForWorkerPort({ attempts: 3, backoffMs: 250 }) with hand-rolled
exponential backoff (250 → 500 → 1000 ms). No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
only on external SIGTERM via supervisor signal handlers.
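The Phase 2/3 teardown decision can be sketched as follows. The helper name and signature are assumptions, not the actual shutdown.ts API; the mechanism is the source's: `detached: true` makes the child lead its own process group, so signalling `-pgid` reaches grandchildren too.

```typescript
// Decide what to pass to process.kill(): the negative pgid signals the
// whole group on Unix; Windows (tree-kill/taskkill) and pgid-less children
// fall back to the single pid.
function killTarget(
  pid: number,
  pgid: number | undefined,
  platform: string = process.platform,
): number {
  if (platform !== "win32" && pgid !== undefined) return -pgid;
  return pid;
}
```

With the OS doing the group teardown, none of the deleted reaper intervals have a job left to do.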
Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.
All 10 verification greps return 0. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast
Phases 3, 5, and 6 only. Plan-doc inaccuracies in phases 1/2/4/7/8/9
are deferred pending plan reconciliation:
- Phase 1/2: ObservationRow type doesn't exist; the four
"formatters" operate on three incompatible types.
- Phase 4: RECENCY_WINDOW_MS already imported from
SEARCH_CONSTANTS at every call site.
- Phase 7: getExistingChromaIds is NOT @deprecated and has an
active caller in ChromaSync.backfillMissingSyncs.
- Phase 8: estimateTokens already consolidated.
- Phase 9: knowledge-corpus rewrite blocked on PG-3
prompt-caching cost smoke test.
Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.
Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicit-
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).
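The Phase 5 fail-fast error can be sketched like this. The AppError base shape is an assumption inferred from `AppError(503, 'CHROMA_UNAVAILABLE')` in the text; the real class lives in src/services/worker/search/errors.ts.

```typescript
// Assumed base: carries an HTTP status and a machine-readable code so the
// route layer can map thrown errors straight to responses.
class AppError extends Error {
  constructor(
    readonly status: number,
    readonly code: string,
    message?: string,
  ) {
    super(message ?? code);
    this.name = new.target.name;
  }
}

// Thrown instead of silently falling back to SQLite when Chroma fails at
// runtime — callers see a 503 rather than degraded results.
class ChromaUnavailableError extends AppError {
  constructor(message = "Chroma vector store is unreachable") {
    super(503, "CHROMA_UNAVAILABLE", message);
  }
}
```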
Tests updated (Principle 7 — delete in same PR):
- search-orchestrator.test.ts: "fall back to SQLite" rewritten
as "throw ChromaUnavailableError (HTTP 503)".
- chroma/hybrid/sqlite-search-strategy tests: rewritten to
rejects.toThrow; removed fellBack assertions.
Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 03 — ingestion path
Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.
- Phase 0: Created src/services/worker/http/shared.ts exporting
ingestObservation/ingestPrompt/ingestSummary as direct
in-process functions plus ingestEventBus (Node EventEmitter,
reusing existing pattern — no third event bus introduced).
setIngestContext wires the SessionManager dependency from
worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
returning { valid:true; kind: 'observation'|'summary'; data }
| { valid:false; reason: string }. Inspects root element;
<skip_summary reason="…"/> is a first-class summary case
with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
branches on the discriminated union. On invalid → markFailed
+ logger.warn(reason). On observation → ingestObservation.
On summary → ingestSummary then emit summaryStoredEvent
{ sessionId, messageId } (consumed by Plan 05's blocking
/api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
(ResponseProcessor + SessionManager + worker-types) and
MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
replaced with fs.watch(transcriptsRoot, { recursive: true,
persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
Map deleted. tool_use rows insert with INSERT OR IGNORE on
UNIQUE(session_id, tool_use_id) (added by Plan 01). New
pairToolUsesByJoin query in PendingMessageStore for read-time
pairing (UNIQUE INDEX provides idempotency; explicit consumer
not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
direct ingestObservation call. maybeParseJson silent-passthrough
rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
class) deleted. The active extractLastMessage at
src/shared/transcript-parser.ts:41-144 is the sole survivor.
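The Phase 1 discriminated union can be sketched as below. The union shape follows the commit text; the regex-based root-element inspection is a deliberate simplification of the real parser, not its implementation.

```typescript
type ParseResult =
  | { valid: true; kind: "observation" | "summary"; data: Record<string, unknown> }
  | { valid: false; reason: string };

// Inspect the root element and branch once. Never returns undefined,
// never coerces one kind into another.
function parseAgentXml(xml: string): ParseResult {
  const root = /^<\s*([a-z_]+)/i.exec(xml.trim())?.[1]?.toLowerCase();
  if (root === "observation") return { valid: true, kind: "observation", data: {} };
  if (root === "summary") return { valid: true, kind: "summary", data: {} };
  if (root === "skip_summary") {
    // <skip_summary reason="…"/> is a first-class summary with skipped:true.
    const reason = /reason="([^"]*)"/.exec(xml)?.[1] ?? "";
    return { valid: true, kind: "summary", data: { skipped: true, reason } };
  }
  return { valid: false, reason: `unexpected root element: ${root ?? "none"}` };
}
```

Callers like ResponseProcessor branch on `valid` exactly once; the invalid branch is where markFailed + logger.warn happen.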
Tests updated (Principle 7 — same-PR delete):
- tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
to assert discriminated-union shape; coercion-specific
scenarios collapse into { valid:false } assertions.
- tests/worker/agents/response-processor.test.ts: circuit-breaker
describe block skipped; non-XML/empty-response tests assert
fail-fast markFailed behavior.
Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.
Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.
Plan: PATHFINDER-2026-04-22/03-ingestion-path.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 05 — hook surface
Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.
- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
1..20; do curl -sf .../health && break; sleep 0.1; done` shell
retry wrappers deleted. Hook commands invoke their bun entry
point directly.
- Phase 2: src/shared/worker-utils.ts — added
executeWithWorkerFallback<T>(url, method, body) returning
T | { continue: true; reason?: string }. All 8 hook handlers
(observation, session-init, context, file-context, file-edit,
summarize, session-complete, user-message) rewritten to use
it instead of duplicating the ensureWorkerRunning →
workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
using validateBody + sessionEndSchema (z.object({sessionId})).
One-shot ingestEventBus.on('summaryStoredEvent') listener,
30 s timer, req.aborted handler — all share one cleanup so
the listener cannot leak. summarize.ts polling loop, plus
MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
memoizes SettingsDefaultsManager.loadFromFile per process.
Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
check entry; isProjectExcluded no longer referenced from
src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
(all 6 adapters, including claude-code, cursor, raw, gemini-cli,
and windsurf). New AdapterRejectedInput error in
src/cli/adapters/errors.ts. Handler-level isValidCwd checks
deleted from file-edit.ts and observation.ts. hook-command.ts
catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
initAgent is idempotent. tests/hooks/context-reinjection-guard
test (validated the deleted conditional) deleted in same PR
per Principle 7.
- Phase 8: fail-loud counter at
~/.claude-mem/state/hook-failures.json. Atomic write via .tmp +
rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD setting (default 3).
On consecutive worker-unreachable ≥ N: process.exit(2). On
success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
wrapping ensureWorkerRunning. executeWithWorkerFallback calls
the memoized version.
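The Phase 3 "one cleanup" pattern can be sketched as below. Names are assumptions taken from the commit text; the point is that the one-shot listener, the timer, and the abort path all funnel through a single cleanup so the listener cannot leak or resolve twice.

```typescript
import { EventEmitter } from "node:events";

function waitForSummary(
  bus: EventEmitter,
  sessionId: string,
  timeoutMs: number,
): Promise<{ ok: boolean; reason?: string }> {
  return new Promise((resolve) => {
    let done = false;
    let timer: ReturnType<typeof setTimeout> | undefined;
    // Every exit path goes through here exactly once.
    const cleanup = (result: { ok: boolean; reason?: string }) => {
      if (done) return;
      done = true;
      bus.removeListener("summaryStoredEvent", onStored);
      clearTimeout(timer);
      resolve(result);
    };
    const onStored = (evt: { sessionId: string }) => {
      if (evt.sessionId === sessionId) cleanup({ ok: true });
    };
    bus.on("summaryStoredEvent", onStored);
    timer = setTimeout(() => cleanup({ ok: false, reason: "timeout" }), timeoutMs);
    // A req.aborted handler would call cleanup(...) the same way.
  });
}
```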
Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.
Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.
Plan: PATHFINDER-2026-04-22/05-hook-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 06 — API surface
One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted.
Failure-marking consolidated to one helper.
- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
in src/services/worker/http/middleware/validateBody.ts —
safeParse → 400 { error: 'ValidationError', issues: [...] }
on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
route file. 24 POST endpoints across SessionRoutes,
CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
LogsRoutes, SettingsRoutes now wrap with validateBody().
/api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
along with every call site. Inline coercion helpers
(coerceStringArray, coercePositiveInteger) and inline
if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
from src/services/worker/http/middleware.ts. Worker binds
127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
via fs.readFileSync; served as Buffer with text/html content
type. SKILL.md + per-operation .md files cached in
Server.ts as Map<string, string>; loadInstructionContent
helper deleted. NO fs.watch, NO TTL — process restart is the
cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
— /api/pending-queue (GET), /api/pending-queue/process (POST),
/api/pending-queue/failed (DELETE), /api/pending-queue/all
(DELETE). Helper methods that ONLY served them
(getQueueMessages, getStuckCount, getRecentlyProcessed,
clearFailed, clearAll) deleted from PendingMessageStore.
KEPT: /api/processing-status (observability), /health
(used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
GracefulShutdown now calls getSupervisor().stop() directly.
Two functions retained with clear roles:
- performGracefulShutdown — worker-side 6-step shutdown
- runShutdownCascade — supervisor-side child teardown
(process.kill(-pgid), Windows tree-kill, PID-file cleanup)
Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
failure-marking path on PendingMessageStore. Old methods
markSessionMessagesFailed and markAllSessionMessagesAbandoned
deleted along with all callers (worker-service,
SessionCompletionHandler, tests/zombie-prevention).
Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.
Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.
Plan: PATHFINDER-2026-04-22/06-api-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 07 — dead code sweep
ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.
Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments
Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
builders, ParsedObservation, ParsedSummary, ParseResult,
SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
via dynamic await import('../../../context-generator.js') in
worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
— used via dynamic await import in npx-cli/install.ts +
uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
orphan-recovery caller in worker-service.ts plus
zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
in same file.
- All Database.ts barrel re-exports — used downstream.
Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
the methods are not thin wrappers but ~900 LoC of bodies, and
two methods are documented as intentional mirrors so the
context-generator.cjs bundle stays schema-consistent without
pulling MigrationRunner. Deserves its own plan, not a sweep.
Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.
Plan: PATHFINDER-2026-04-22/07-dead-code.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove residual ProcessRegistry comment reference
Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review (P1 + 2× P2)
P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
- Added optional timeoutMs to executeWithWorkerFallback,
forwarded to workerHttpRequest.
- summarize.ts call site now passes 35_000 (5 s above server
hold window).
P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
- ResponseProcessor now calls ingestSummary({ kind: 'parsed',
sessionDbId, messageId, contentSessionId, parsed }) so the
event-emission path is single-sourced.
- ingestSummary's requireContext() resolution moved inside the
'queue' branch (the only branch that needs sessionManager /
dbManager). 'parsed' is a pure event-bus emission and
doesn't need worker-internal context — fixes mocked
ResponseProcessor unit tests that don't call
setIngestContext.
P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
- Added a Symbol.for('claude-mem/worker-fallback') brand to
WorkerFallback. isWorkerFallback now checks the brand, not
a duck-typed property name.
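The brand fix can be sketched like this. The `Symbol.for` key comes from the commit text; the surrounding factory and type names are assumptions.

```typescript
const WORKER_FALLBACK_BRAND = Symbol.for("claude-mem/worker-fallback");

interface WorkerFallback {
  [WORKER_FALLBACK_BRAND]: true;
  continue: true;
  reason?: string;
}

function makeWorkerFallback(reason?: string): WorkerFallback {
  return { [WORKER_FALLBACK_BRAND]: true, continue: true, reason };
}

// Checks the brand, not a duck-typed property name, so an API response that
// legitimately contains { continue: true, ... } can no longer false-positive.
function isWorkerFallback(value: unknown): value is WorkerFallback {
  return (
    typeof value === "object" &&
    value !== null &&
    (value as Record<PropertyKey, unknown>)[WORKER_FALLBACK_BRAND] === true
  );
}
```

`Symbol.for` uses the global symbol registry, so the brand survives module duplication (e.g. two bundled copies of worker-utils) where a plain `Symbol()` would not.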
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 2 (P1 + P2)
P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.
- Gate ingestSummary call on (parsed.data.skipped ||
session.lastSummaryStored). Skipped summaries are an explicit
no-op bypass and still confirm; real summaries only confirm
when storage actually wrote a row.
- Non-skipped + summaryId === null path logs a warn and lets
the server-side timeout (504) surface to the hook instead of
a false ok:true.
P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 1). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.
- Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
log instead of the misleading ENQUEUED line. No behavior
change — the duplicate is still correctly suppressed by the
DB (Principle 3); only the log surface is corrected.
- confirmProcessed is never called with the enqueue() return
value (it operates on session.processingMessageIds[] from
claimNextMessage), so no caller is broken; the visibility
fix prevents future misuse.
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 3 (P1 + 2× P2)
- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
context after SessionRoutes is constructed. setIngestContext runs
before routes exist, so transcript-watcher observations queued via
ingestObservation() had no way to auto-start the SDK generator.
Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
/api/session/end calls register one listener each and clean up on
completion, so the default-10 warning fires spuriously under normal
load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
ingestObservation() instead of duplicating skip-tool / meta /
privacy / queue logic. Single helper, matching the Plan 03 goal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)
- processor.handleToolResult: restore in-memory tool-use→tool-result
pairing via session.pendingTools for schemas (e.g. Codex) whose
tool_result events carry only tool_use_id + output. Without this,
neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
of throwing. Previously a single malformed JSON-shaped field caused
handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
for purely-glob inputs so the caller skips the watch instead of
anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
log on the returned id; the SessionManager branches on id === 0.
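The softened maybeParseJson can be sketched as below. The function name is from the commit text; the JSON-shape heuristic is an assumption. The behavioral point is the source's: a malformed JSON-shaped field falls back to the raw string instead of throwing, so one bad field can no longer make handleLine's outer catch discard the entire transcript line.

```typescript
function maybeParseJson(raw: string): unknown {
  const trimmed = raw.trim();
  // Only strings that look like JSON are candidates for parsing.
  if (!trimmed.startsWith("{") && !trimmed.startsWith("[")) return raw;
  try {
    return JSON.parse(trimmed);
  } catch {
    return raw; // parse failure: surface the raw string, keep the line alive
  }
}
```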
* fix: forward tool_use_id through ingestObservation (Greptile iter 5)
P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.
- shared.ingestObservation: forward payload.toolUseId to
queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
tool_use_id (HTTP convention) and toolUseId (JS convention) from
req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
validator doesn't rely on .passthrough() alone.
* fix: drop dead pairToolUsesByJoin, close session-end listener race
- PendingMessageStore: delete pairToolUsesByJoin. The method was never
called and its self-join semantics are structurally incompatible
with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
collapses any second row with the same pair, so a self-join can
only ever match a row to itself. In-memory pendingTools in
processor.ts remains the pairing path for split-event schemas.
- IngestEventBus: retain a short-lived (60s) recentStored map keyed
by sessionId. Populated on summaryStoredEvent emit, evicted on
consume or TTL.
- handleSessionEnd: drain the recent-events buffer before attaching
the listener. Closes the register-after-emit race where the summary
can persist between the hook's summarize POST and its session/end
POST — previously that window returned 504 after the 30s timeout.
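The recent-events buffer can be sketched as below. The class and method names are assumptions; the semantics match this commit: populated on summaryStoredEvent emit, evicted on consume or TTL, so a session/end that arrives after the summary persisted confirms immediately instead of waiting out the 30 s timer.

```typescript
class RecentSummaryBuffer {
  private readonly recent = new Map<string, number>(); // sessionId → stored-at ms

  constructor(
    private readonly ttlMs = 60_000,
    private readonly now: () => number = Date.now, // injectable clock for tests
  ) {}

  // Called on summaryStoredEvent emit.
  record(sessionId: string): void {
    this.recent.set(sessionId, this.now());
  }

  // Called by handleSessionEnd before attaching its listener: a hit means
  // the summary already landed and the endpoint can return without waiting.
  take(sessionId: string): boolean {
    const storedAt = this.recent.get(sessionId);
    if (storedAt === undefined) return false;
    this.recent.delete(sessionId); // evicted on consume
    return this.now() - storedAt <= this.ttlMs; // TTL eviction otherwise
  }
}
```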
* chore: merge origin/main into vivacious-teeth
Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).
Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
summaryStoredEvent supersedes main's SessionCompletionHandler DI
refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
reason; generator .finally() Stop-hook self-clean is a guard for a
path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
#2084) while preserving our Zod validateBody schema.
Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings
1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
in wrapHandler — synchronous exceptions would hang the client rather
than surfacing as 500s. Wrap it like every other handler.
2) processor.handleToolResult only consumed the session.pendingTools
entry when the tool_result arrived without a toolName. In the
split-schema path where tool_result carries both toolName and toolId,
the entry was never deleted and the map grew for the life of the
session. Consume the entry whenever toolId is present.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: typing cleanup and viewer tsconfig split for PR feedback
- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings (iter 2)
- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
the unscoped-drain branch that would nuke every pending/processing
row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
cached event until TTL eviction so a retried Stop hook's second
/api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
already tailed (JSONL appends fire on every line; only unknown
paths warrant a rescan).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: call finalizeSession in terminal session paths (Greptile iter 3)
terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.
Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: GC failed pending_messages rows at startup (Greptile iter 4)
Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.
Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)
1. startSessionProcessor success branch now calls completionHandler.
finalizeSession before removeSessionImmediate. Hooks-disabled installs
(and any Stop hook that fails before POST /api/sessions/complete) no
longer leave sdk_sessions rows as status='active' forever. Idempotent
— a subsequent /api/sessions/complete is a no-op.
2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
closures that reference it (TDZ safety; safe at runtime today but
fragile if timeout ever shrinks).
3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
instead of constructing its own — prevents silent divergence if the
handler ever becomes stateful.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: stop runaway crash-recovery loop on dead sessions
Two distinct bugs combined to keep a dead session restarting forever:
Bug 1 (uncaught "The operation was aborted."):
child_process.spawn emits 'error' asynchronously for ENOENT/EACCES/abort
signal aborts. spawnSdkProcess() never attached an 'error' listener, so
any async spawn failure became uncaughtException and escaped to the
daemon-level handler. Attach an 'error' listener immediately after spawn,
before the !child.pid early-return, so async spawn errors are logged
(with errno code) and swallowed locally.
Bug 2 (sliding-window limiter never trips on slow restart cadence):
RestartGuard tripped only when restartTimestamps.length exceeded
MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
session cycling fail → restart on 8s intervals would loop forever
(consecutiveRestarts climbing past 30+ in observed logs). Add a
consecutiveFailures counter that increments on every restart and resets
only on recordSuccess(). Trip when consecutive failures exceed
MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
processing in between proves the session is dead. Both guards now run in
parallel: tight loops still trip the windowed cap; slow loops trip the
consecutive-failure cap.
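The dual-guard logic amounts to something like this. Constants mirror the commit; the class shape is a sketch, not the real RestartGuard.

```typescript
const RESTART_WINDOW_MS = 60_000;
const MAX_WINDOWED_RESTARTS = 10;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuard {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  recordRestart(now = Date.now()): void {
    this.restartTimestamps.push(now);
    // Keep only restarts inside the sliding window.
    this.restartTimestamps = this.restartTimestamps.filter(
      t => now - t <= RESTART_WINDOW_MS
    );
    this.consecutiveFailures++; // resets only on real success
  }

  recordSuccess(): void {
    this.consecutiveFailures = 0;
  }

  tripped(): boolean {
    // Both guards run in parallel: tight loops hit the windowed cap,
    // slow 8s-backoff loops hit the consecutive-failure cap.
    return (
      this.restartTimestamps.length > MAX_WINDOWED_RESTARTS ||
      this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES
    );
  }
}
```

On an 8s cadence the window never holds more than ~8 timestamps, so only the consecutive-failure counter can trip, which is exactly the gap being closed.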
Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* perf: streamline worker startup and consolidate database connections
1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
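The pooling change in point 1 boils down to a shared-connection accessor. This sketch uses a stand-in Database type rather than the real bun:sqlite handle; the function name is hypothetical.

```typescript
// Stand-in for the bun:sqlite Database handle.
type Database = { file: string };

let shared: Database | null = null;

// Every store (DatabaseManager, SessionStore, SessionSearch) asks for the
// connection here instead of opening its own — one file descriptor total.
function getSharedDb(file: string): Database {
  if (!shared) shared = { file };
  return shared;
}
```

All callers receive the same object, so WAL state, pragmas, and the descriptor are shared.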
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)
* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations
Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.
- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
when shouldTrackProject(cwd) is false, so the observer's own hooks
cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
on observations) inline so bundled artifacts (worker-service.cjs,
context-generator.cjs) stay schema-consistent — without it, the
ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
supervisor can actually feed the observer's stdin.
Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.
* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)
Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
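The walk-back can be sketched like so. MAX_USER_PROMPT_BYTES matches the commit; truncateUtf8 is a hypothetical name for the helper.

```typescript
const MAX_USER_PROMPT_BYTES = 256 * 1024;

// Truncate a UTF-8 buffer without splitting a codepoint: if the cut lands
// on a continuation byte (0b10xxxxxx), step back until it lands on a lead
// byte (or ASCII), so the decoded result never ends in U+FFFD.
function truncateUtf8(buf: Buffer, maxBytes: number = MAX_USER_PROMPT_BYTES): string {
  if (buf.length <= maxBytes) return buf.toString('utf8');
  let end = maxBytes;
  while (end > 0 && (buf[end] & 0b1100_0000) === 0b1000_0000) end--;
  return buf.subarray(0, end).toString('utf8');
}
```

A partial trailing character is dropped entirely rather than decoded to a replacement character.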
* fix: cross-platform observer-dir containment; clarify SDK stdin pipe
claude-review feedback on PR #2124.
- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
hard-coded a POSIX separator and missed Windows backslash paths plus any
trailing-slash variance. Switched to a path.relative-based isWithin()
helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
consumes that pipe; 'ignore' would null it and the null-check below
would tear the child down on every spawn.
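A path.relative-based containment check looks roughly like this. The isWithin name follows the commit; treating the directory itself as "within" is an assumption here, matching the stated goal of excluding the dir and its subtree.

```typescript
import path from 'node:path';

// Cross-platform containment: path.relative handles separators and
// trailing slashes; an absolute result covers cross-drive paths on Windows.
function isWithin(parent: string, child: string): boolean {
  const rel = path.relative(parent, child);
  if (rel === '') return true;               // child === parent
  if (path.isAbsolute(rel)) return false;    // e.g. different Windows drive
  return rel !== '..' && !rel.startsWith('..' + path.sep);
}
```

The separator-aware `'..' + path.sep` check avoids false rejection of siblings whose names merely start with two dots.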
* fix: make Stop hook fire-and-forget; remove dead /api/session/end
The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed). Followed by another await on
/api/sessions/complete. Three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.
The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.
- summarize.ts: drop the /api/session/end long-poll and the trailing
/api/sessions/complete await; ~40 lines removed; unused
SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
route registration. Drop the now-unused ingestEventBus and
SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
comments that referenced the dead endpoint. The IngestEventBus is
left in place dormant (no listeners) for follow-up cleanup so this
PR stays focused on the blocker.
Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.
Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* deps: bump all dependencies to latest including majors
Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.
Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
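The refactored shape, roughly (listenSafely is a hypothetical wrapper name, not the actual Server method):

```typescript
import http from 'node:http';

// Create the server, wire BOTH outcomes, then listen — so a port conflict
// rejects the promise instead of racing the handler attachment.
function listenSafely(port: number): Promise<http.Server> {
  return new Promise((resolve, reject) => {
    const server = http.createServer();
    server.on('error', reject);                  // e.g. EADDRINUSE
    server.on('listening', () => resolve(server));
    server.listen(port, '127.0.0.1');
  });
}
```

Because both listeners exist before listen() is invoked, an immediate bind error can never fire into the void.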
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: surface real chroma errors and add deep status probe
Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The old message blamed `uv`
for every failure, even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.
Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.
Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: rebuild worker-service bundle to match merged src
Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: address coderabbit feedback on PLAN-fix-mcp-search.md
- replace machine-specific /Users/alexnewman absolute paths with portable
<repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -1,315 +0,0 @@
/**
 * Tests for Context Re-Injection Guard (#1079)
 *
 * Validates:
 * - session-init handler skips SDK agent init when contextInjected=true
 * - session-init handler proceeds with SDK agent init when contextInjected=false
 * - SessionManager.getSession returns undefined for uninitialized sessions
 * - SessionManager.getSession returns session after initialization
 */
import { describe, it, expect, beforeEach, afterEach, spyOn, mock } from 'bun:test';
import { homedir } from 'os';
import { join } from 'path';

// Mock modules that cause import chain issues - MUST be before handler imports
// paths.ts calls SettingsDefaultsManager.get() at module load time
mock.module('../../src/shared/SettingsDefaultsManager.js', () => ({
  SettingsDefaultsManager: {
    get: (key: string) => {
      if (key === 'CLAUDE_MEM_DATA_DIR') return join(homedir(), '.claude-mem');
      return '';
    },
    getInt: () => 0,
    loadFromFile: () => ({ CLAUDE_MEM_EXCLUDED_PROJECTS: [] }),
  },
}));

mock.module('../../src/shared/worker-utils.js', () => ({
  ensureWorkerRunning: () => Promise.resolve(true),
  getWorkerPort: () => 37777,
  workerHttpRequest: (apiPath: string, options?: any) => {
    // Delegate to global fetch so tests can mock fetch behavior
    const url = `http://127.0.0.1:37777${apiPath}`;
    return globalThis.fetch(url, {
      method: options?.method ?? 'GET',
      headers: options?.headers,
      body: options?.body,
    });
  },
}));

mock.module('../../src/utils/project-filter.js', () => ({
  isProjectExcluded: () => false,
}));

// Now import after mocks
import { logger } from '../../src/utils/logger.js';

// Suppress logger output during tests
let loggerSpies: ReturnType<typeof spyOn>[] = [];

beforeEach(() => {
  loggerSpies = [
    spyOn(logger, 'info').mockImplementation(() => {}),
    spyOn(logger, 'debug').mockImplementation(() => {}),
    spyOn(logger, 'warn').mockImplementation(() => {}),
    spyOn(logger, 'error').mockImplementation(() => {}),
    spyOn(logger, 'failure').mockImplementation(() => {}),
  ];
});

afterEach(() => {
  loggerSpies.forEach(spy => spy.mockRestore());
});

describe('Context Re-Injection Guard (#1079)', () => {
  describe('session-init handler - contextInjected flag behavior', () => {
    it('should skip SDK agent init when contextInjected is true', async () => {
      const fetchedUrls: string[] = [];

      const mockFetch = mock((url: string | URL | Request) => {
        const urlStr = typeof url === 'string' ? url : url.toString();
        fetchedUrls.push(urlStr);

        if (urlStr.includes('/api/sessions/init')) {
          return Promise.resolve({
            ok: true,
            json: () => Promise.resolve({
              sessionDbId: 42,
              promptNumber: 2,
              skipped: false,
              contextInjected: true // SDK agent already running
            })
          });
        }

        // The /sessions/42/init call — should NOT be reached
        return Promise.resolve({
          ok: true,
          json: () => Promise.resolve({ status: 'initialized' })
        });
      });

      const originalFetch = globalThis.fetch;
      globalThis.fetch = mockFetch as any;

      try {
        const { sessionInitHandler } = await import('../../src/cli/handlers/session-init.js');

        const result = await sessionInitHandler.execute({
          sessionId: 'test-session-123',
          cwd: '/test/project',
          prompt: 'second prompt in this session',
          platform: 'claude-code',
        });

        // Should return success without making the second /sessions/42/init call
        expect(result.continue).toBe(true);
        expect(result.suppressOutput).toBe(true);

        // Only the /api/sessions/init call should have been made
        const apiInitCalls = fetchedUrls.filter(u => u.includes('/api/sessions/init'));
        const sdkInitCalls = fetchedUrls.filter(u => u.includes('/sessions/42/init'));

        expect(apiInitCalls.length).toBe(1);
        expect(sdkInitCalls.length).toBe(0);
      } finally {
        globalThis.fetch = originalFetch;
      }
    });

    it('should proceed with SDK agent init when contextInjected is false', async () => {
      const fetchedUrls: string[] = [];

      const mockFetch = mock((url: string | URL | Request) => {
        const urlStr = typeof url === 'string' ? url : url.toString();
        fetchedUrls.push(urlStr);

        if (urlStr.includes('/api/sessions/init')) {
          return Promise.resolve({
            ok: true,
            json: () => Promise.resolve({
              sessionDbId: 42,
              promptNumber: 1,
              skipped: false,
              contextInjected: false // First prompt — SDK agent not yet started
            })
          });
        }

        // The /sessions/42/init call — SHOULD be reached
        return Promise.resolve({
          ok: true,
          json: () => Promise.resolve({ status: 'initialized' })
        });
      });

      const originalFetch = globalThis.fetch;
      globalThis.fetch = mockFetch as any;

      try {
        const { sessionInitHandler } = await import('../../src/cli/handlers/session-init.js');

        const result = await sessionInitHandler.execute({
          sessionId: 'test-session-456',
          cwd: '/test/project',
          prompt: 'first prompt in session',
          platform: 'claude-code',
        });

        expect(result.continue).toBe(true);
        expect(result.suppressOutput).toBe(true);

        // Both calls should have been made
        const apiInitCalls = fetchedUrls.filter(u => u.includes('/api/sessions/init'));
        const sdkInitCalls = fetchedUrls.filter(u => u.includes('/sessions/42/init'));

        expect(apiInitCalls.length).toBe(1);
        expect(sdkInitCalls.length).toBe(1);
      } finally {
        globalThis.fetch = originalFetch;
      }
    });

    it('should proceed with SDK agent init when contextInjected is undefined (backward compat)', async () => {
      const fetchedUrls: string[] = [];

      const mockFetch = mock((url: string | URL | Request) => {
        const urlStr = typeof url === 'string' ? url : url.toString();
        fetchedUrls.push(urlStr);

        if (urlStr.includes('/api/sessions/init')) {
          return Promise.resolve({
            ok: true,
            json: () => Promise.resolve({
              sessionDbId: 42,
              promptNumber: 1,
              skipped: false
              // contextInjected not present (older worker version)
            })
          });
        }

        return Promise.resolve({
          ok: true,
          json: () => Promise.resolve({ status: 'initialized' })
        });
      });

      const originalFetch = globalThis.fetch;
      globalThis.fetch = mockFetch as any;

      try {
        const { sessionInitHandler } = await import('../../src/cli/handlers/session-init.js');

        const result = await sessionInitHandler.execute({
          sessionId: 'test-session-789',
          cwd: '/test/project',
          prompt: 'test prompt',
          platform: 'claude-code',
        });

        expect(result.continue).toBe(true);

        // When contextInjected is undefined/missing, should still make the SDK init call
        const sdkInitCalls = fetchedUrls.filter(u => u.includes('/sessions/42/init'));
        expect(sdkInitCalls.length).toBe(1);
      } finally {
        globalThis.fetch = originalFetch;
      }
    });
  });

  describe('SessionManager contextInjected logic', () => {
    it('should return undefined for getSession when no active session exists', async () => {
      const { SessionManager } = await import('../../src/services/worker/SessionManager.js');

      const mockDbManager = {
        getSessionById: () => ({
          id: 1,
          content_session_id: 'test-session',
          project: 'test',
          user_prompt: 'test prompt',
          memory_session_id: null,
          status: 'active',
          started_at: new Date().toISOString(),
          completed_at: null,
        }),
        getSessionStore: () => ({ db: {} }),
      } as any;

      const sessionManager = new SessionManager(mockDbManager);

      // Session 42 has not been initialized in memory
      const session = sessionManager.getSession(42);
      expect(session).toBeUndefined();
    });

    it('should return active session after initializeSession is called', async () => {
      const { SessionManager } = await import('../../src/services/worker/SessionManager.js');

      const mockDbManager = {
        getSessionById: () => ({
          id: 42,
          content_session_id: 'test-session',
          project: 'test',
          user_prompt: 'test prompt',
          memory_session_id: null,
          status: 'active',
          started_at: new Date().toISOString(),
          completed_at: null,
        }),
        getSessionStore: () => ({
          db: {},
          clearMemorySessionId: () => {},
        }),
      } as any;

      const sessionManager = new SessionManager(mockDbManager);

      // Initialize session (simulates first SDK agent init)
      sessionManager.initializeSession(42, 'first prompt', 1);

      // Now getSession should return the active session
      const session = sessionManager.getSession(42);
      expect(session).toBeDefined();
      expect(session!.contentSessionId).toBe('test-session');
    });

    it('should return contextInjected=true pattern for subsequent prompts', async () => {
      const { SessionManager } = await import('../../src/services/worker/SessionManager.js');

      const mockDbManager = {
        getSessionById: () => ({
          id: 42,
          content_session_id: 'test-session',
          project: 'test',
          user_prompt: 'test prompt',
          memory_session_id: 'sdk-session-abc',
          status: 'active',
          started_at: new Date().toISOString(),
          completed_at: null,
        }),
        getSessionStore: () => ({
          db: {},
          clearMemorySessionId: () => {},
        }),
      } as any;

      const sessionManager = new SessionManager(mockDbManager);

      // Before initialization: contextInjected would be false
      expect(sessionManager.getSession(42)).toBeUndefined();

      // After initialization: contextInjected would be true
      sessionManager.initializeSession(42, 'first prompt', 1);
      expect(sessionManager.getSession(42)).toBeDefined();

      // Second call to initializeSession returns existing session (idempotent)
      const session2 = sessionManager.initializeSession(42, 'second prompt', 2);
      expect(session2.contentSessionId).toBe('test-session');
      expect(session2.userPrompt).toBe('second prompt');
      expect(session2.lastPromptNumber).toBe(2);
    });
  });
});
+76 -116
@@ -1,37 +1,56 @@
/**
 * Tests for parseSummary (fix for #1360)
 * Tests for parseAgentXml summary path (PATHFINDER plan 03 phase 1).
 *
 * Validates that false-positive summary matches (no sub-tags) are rejected
 * while real summaries — even with some missing fields — are still saved.
 * Validates that the discriminated-union parser:
 * - rejects responses with no recognised root element (`{ valid: false }`),
 * - rejects empty / no-sub-tag <summary> blocks (former #1360 false-positive),
 * - returns a populated summary when at least one sub-tag is present,
 * - treats <skip_summary reason="…"/> as a first-class summary case,
 * - DOES NOT coerce <observation> blocks into summary fields (former
 *   #1633 fallback path is deleted; the caller must mark the message failed
 *   and let the retry ladder do its job — principle 1 + principle 2).
 */
import { describe, it, expect } from 'bun:test';
import { parseSummary } from '../../src/sdk/parser.js';
import { describe, it, expect, mock } from 'bun:test';

describe('parseSummary', () => {
  it('returns null when no <summary> tag present and coercion disabled', () => {
    expect(parseSummary('<observation><title>foo</title></observation>')).toBeNull();
mock.module('../../src/services/domain/ModeManager.js', () => ({
  ModeManager: {
    getInstance: () => ({
      getActiveMode: () => ({
        observation_types: [{ id: 'bugfix' }, { id: 'discovery' }, { id: 'refactor' }],
      }),
    }),
  },
}));

import { parseAgentXml } from '../../src/sdk/parser.js';

describe('parseAgentXml — summaries', () => {
  it('returns invalid when response is plain text (no XML)', () => {
    const result = parseAgentXml('Some plain text response without any XML tags');
    expect(result.valid).toBe(false);
  });

  it('returns null when no <summary> or <observation> tags present', () => {
    expect(parseSummary('Some plain text response without any XML tags')).toBeNull();
  it('returns invalid when <summary> has no sub-tags (false positive — was #1360)', () => {
    // observation response that accidentally contains <summary>some text</summary>
    const result = parseAgentXml('<observation>done <summary>some content here</summary></observation>');
    // The first root is <observation>, which has no parseable content; result must be invalid.
    expect(result.valid).toBe(false);
  });

  it('returns null when <summary> has no sub-tags (false positive — fix for #1360)', () => {
    // This is the bug: observation response accidentally contains <summary>some text</summary>
    expect(parseSummary('<observation>done <summary>some content here</summary></observation>')).toBeNull();
  it('returns invalid for bare <summary> with only plain text, no sub-tags', () => {
    const result = parseAgentXml('<summary>This session was productive.</summary>');
    expect(result.valid).toBe(false);
  });

  it('returns null for bare <summary> with only plain text, no sub-tags', () => {
    expect(parseSummary('<summary>This session was productive.</summary>')).toBeNull();
  });

  it('returns summary when at least one sub-tag is present (respects maintainer note)', () => {
  it('returns valid summary when at least one sub-tag is present', () => {
    const text = `<summary><request>Fix the bug</request></summary>`;
    const result = parseSummary(text);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('Fix the bug');
    expect(result?.investigated).toBeNull();
    expect(result?.learned).toBeNull();
    const result = parseAgentXml(text);
    expect(result.valid).toBe(true);
    if (result.valid && result.kind === 'summary') {
      expect(result.data.request).toBe('Fix the bug');
      expect(result.data.investigated).toBeNull();
      expect(result.data.learned).toBeNull();
    }
  });

  it('returns full summary when all fields are present', () => {
@@ -42,108 +61,49 @@ describe('parseSummary', () => {
    <completed>Extended token TTL to 24h</completed>
    <next_steps>Monitor error rates</next_steps>
    </summary>`;
    const result = parseSummary(text);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('Fix login bug');
    expect(result?.investigated).toBe('Auth flow and JWT expiry');
    expect(result?.learned).toBe('Token was expiring too soon');
    expect(result?.completed).toBe('Extended token TTL to 24h');
    expect(result?.next_steps).toBe('Monitor error rates');
    const result = parseAgentXml(text);
    expect(result.valid).toBe(true);
    if (result.valid && result.kind === 'summary') {
      expect(result.data.request).toBe('Fix login bug');
      expect(result.data.investigated).toBe('Auth flow and JWT expiry');
      expect(result.data.learned).toBe('Token was expiring too soon');
      expect(result.data.completed).toBe('Extended token TTL to 24h');
      expect(result.data.next_steps).toBe('Monitor error rates');
    }
  });

  it('returns null when skip_summary tag is present', () => {
    expect(parseSummary('<skip_summary reason="no work done"/>')).toBeNull();
  it('treats <skip_summary reason="…"/> as a first-class summary with skipped:true', () => {
    const result = parseAgentXml('<skip_summary reason="no work done"/>');
    expect(result.valid).toBe(true);
    if (result.valid && result.kind === 'summary') {
      expect(result.data.skipped).toBe(true);
      expect(result.data.skip_reason).toBe('no work done');
    }
  });

  // Observation-to-summary coercion tests (#1633)
  it('coerces <observation> with content into a summary when coerceFromObservation=true (#1633)', () => {
    const result = parseSummary('<observation><title>foo</title></observation>', undefined, true);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('foo');
    expect(result?.completed).toBe('foo');
  it('does NOT coerce <observation> into a summary (former #1633 path deleted)', () => {
    const result = parseAgentXml('<observation><title>foo</title></observation>');
    expect(result.valid).toBe(true);
    if (result.valid) {
      expect(result.kind).toBe('observation');
    }
  });

  it('coerces observation with narrative into summary with investigated field (#1633)', () => {
    const text = `<observation>
    <type>refactor</type>
    <title>UObjectArray refactored</title>
    <narrative>Removed local XXXX and migrated to new pattern</narrative>
    </observation>`;
    const result = parseSummary(text, undefined, true);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('UObjectArray refactored');
    expect(result?.investigated).toBe('Removed local XXXX and migrated to new pattern');
  });

  it('coerces observation with facts into summary with learned field (#1633)', () => {
    const text = `<observation>
    <type>discovery</type>
    <title>JWT token handling</title>
    <facts>
    <fact>Tokens expire after 1 hour</fact>
    <fact>Refresh flow uses rotating keys</fact>
    </facts>
    </observation>`;
    const result = parseSummary(text, undefined, true);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('JWT token handling');
    expect(result?.learned).toBe('Tokens expire after 1 hour; Refresh flow uses rotating keys');
  });

  it('coerces observation with subtitle into completed field (#1633)', () => {
    const text = `<observation>
    <type>config</type>
    <title>Database migration</title>
    <subtitle>Added new index for performance</subtitle>
    </observation>`;
    const result = parseSummary(text, undefined, true);
    expect(result).not.toBeNull();
    expect(result?.completed).toBe('Database migration — Added new index for performance');
  });

  it('returns null for empty observation even with coercion enabled (#1633)', () => {
    const text = `<observation><type>config</type></observation>`;
    expect(parseSummary(text, undefined, true)).toBeNull();
  });

  it('prefers <summary> tags over observation coercion when both present (#1633)', () => {
  it('prefers <summary> over <observation> when both present', () => {
    const text = `<observation><title>obs title</title></observation>
    <summary><request>summary request</request></summary>`;
    const result = parseSummary(text, undefined, true);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('summary request');
    const result = parseAgentXml(text);
    // First root by position is observation → that wins. Caller must pick the
    // right turn (summary vs observation) by sending only summary prompts on
    // summary turns. This is the contract; it is not coercion.
    expect(result.valid).toBe(true);
    if (result.valid) {
      expect(result.kind).toBe('observation');
    }
  });

  it('falls back to observation coercion when <summary> matches but has empty sub-tags (#1633)', () => {
    // LLM wraps an empty summary around real observation content — without the
    // fallback, the empty-subtag guard (#1360) rejects the summary and we lose
    // the observation content, resurrecting the retry loop.
    const text = `<summary></summary>
    <observation>
    <title>the real work</title>
    <narrative>what actually happened</narrative>
    </observation>`;
    const result = parseSummary(text, undefined, true);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('the real work');
    expect(result?.investigated).toBe('what actually happened');
  });

  it('empty <summary> with no observation content still returns null (coercion disabled)', () => {
    const text = '<summary></summary>';
    expect(parseSummary(text, undefined, true)).toBeNull();
  });

  it('skips empty leading observation blocks and coerces from the first populated one (#1633)', () => {
    const text = `<observation><type>discovery</type></observation>
    <observation>
    <type>bugfix</type>
    <title>second block has content</title>
    <narrative>fixed the crash</narrative>
    </observation>`;
    const result = parseSummary(text, undefined, true);
    expect(result).not.toBeNull();
    expect(result?.request).toBe('second block has content');
    expect(result?.investigated).toBe('fixed the crash');
  it('returns invalid for empty input', () => {
    expect(parseAgentXml('').valid).toBe(false);
    expect(parseAgentXml(' \n ').valid).toBe(false);
  });
});
+28 -21
@@ -11,9 +11,16 @@ mock.module('../../src/services/domain/ModeManager.js', () => ({
  },
}));

import { parseObservations } from '../../src/sdk/parser.js';
import { parseAgentXml } from '../../src/sdk/parser.js';

describe('parseObservations', () => {
function expectObservation(raw: string) {
  const result = parseAgentXml(raw);
  if (!result.valid) throw new Error(`expected valid observation, got reason: ${result.reason}`);
  if (result.kind !== 'observation') throw new Error(`expected observation, got ${result.kind}`);
  return result.data;
}

describe('parseAgentXml — observations', () => {
  it('returns a populated observation when title is present', () => {
    const xml = `<observation>
    <type>discovery</type>
@@ -21,7 +28,7 @@ describe('parseObservations', () => {
    <narrative>The token refresh logic skips expired tokens.</narrative>
    </observation>`;

    const result = parseObservations(xml);
    const result = expectObservation(xml);

    expect(result).toHaveLength(1);
    expect(result[0].title).toBe('Found a bug in auth module');
@@ -35,7 +42,7 @@ describe('parseObservations', () => {
    <narrative>Patched the null pointer dereference in session handler.</narrative>
    </observation>`;

    const result = parseObservations(xml);
    const result = expectObservation(xml);

    expect(result).toHaveLength(1);
    expect(result[0].title).toBeNull();
@@ -48,7 +55,7 @@ describe('parseObservations', () => {
    <facts><fact>File limit is hardcoded to 5</fact></facts>
    </observation>`;

    const result = parseObservations(xml);
    const result = expectObservation(xml);

    expect(result).toHaveLength(1);
    expect(result[0].facts).toEqual(['File limit is hardcoded to 5']);
@@ -60,7 +67,7 @@ describe('parseObservations', () => {
    <concepts><concept>dependency-injection</concept></concepts>
    </observation>`;

    const result = parseObservations(xml);
    const result = expectObservation(xml);

    expect(result).toHaveLength(1);
    expect(result[0].concepts).toEqual(['dependency-injection']);
@@ -73,9 +80,8 @@ describe('parseObservations', () => {
    <type>bugfix</type>
    </observation>`;

    const result = parseObservations(xml);

    expect(result).toHaveLength(0);
    const result = parseAgentXml(xml);
    expect(result.valid).toBe(false);
  });

  it('filters out ghost observation with empty tags but no text content (#1625)', () => {
@@ -87,9 +93,8 @@ describe('parseObservations', () => {
    <concepts></concepts>
    </observation>`;

    const result = parseObservations(xml);

    expect(result).toHaveLength(0);
    const result = parseAgentXml(xml);
    expect(result.valid).toBe(false);
  });

  it('filters out multiple ghost observations while keeping valid ones (#1625)', () => {
@@ -102,7 +107,7 @@ describe('parseObservations', () => {
    <observation><type>refactor</type><title></title><narrative> </narrative></observation>
    `;

    const result = parseObservations(xml);
    const result = expectObservation(xml);

    expect(result).toHaveLength(1);
    expect(result[0].title).toBe('Real observation');
@@ -116,9 +121,8 @@ describe('parseObservations', () => {
    <subtitle>Only a subtitle, no real content</subtitle>
    </observation>`;

    const result = parseObservations(xml);

    expect(result).toHaveLength(0);
    const result = parseAgentXml(xml);
    expect(result.valid).toBe(false);
  });

  it('uses first mode type as fallback when type is missing', () => {
@@ -126,16 +130,19 @@ describe('parseObservations', () => {
    <title>Missing type field</title>
    </observation>`;
||||
const result = parseObservations(xml);
|
||||
const result = expectObservation(xml);
|
||||
|
||||
expect(result).toHaveLength(1);
|
||||
// First type in mocked mode is 'bugfix'
|
||||
expect(result[0].type).toBe('bugfix');
|
||||
});
|
||||
|
||||
it('returns empty array when no observation blocks are present', () => {
|
||||
const result = parseObservations('Some text without any observations.');
|
||||
expect(result).toHaveLength(0);
|
||||
it('returns a fail-fast result when no observation/summary blocks are present', () => {
|
||||
const result = parseAgentXml('Some text without any observations.');
|
||||
expect(result.valid).toBe(false);
|
||||
if (!result.valid) {
|
||||
expect(result.reason).toMatch(/unknown root|empty/);
|
||||
}
|
||||
});
|
||||
|
||||
it('parses files_read and files_modified arrays correctly', () => {
|
||||
@@ -146,7 +153,7 @@ describe('parseObservations', () => {
|
||||
<files_modified><file>src/utils.ts</file></files_modified>
|
||||
</observation>`;
|
||||
|
||||
const result = parseObservations(xml);
|
||||
const result = expectObservation(xml);
|
||||
|
||||
expect(result).toHaveLength(1);
|
||||
expect(result[0].files_read).toEqual(['src/utils.ts', 'src/parser.ts']);
|
||||
|
||||
@@ -1,291 +0,0 @@
/**
* Tests for Issue #1652: Stuck generator (zombie subprocess) detection in reapStaleSessions()
*
* Root cause: reapStaleSessions() unconditionally skipped sessions where
* `session.generatorPromise` was non-null, meaning generators stuck inside
* `for await (const msg of queryResult)` (blocked on a hung subprocess) were
* never cleaned up — even after the session's Stop hook completed.
*
* Fix: Check `session.lastGeneratorActivity`. If it hasn't updated in
* MAX_GENERATOR_IDLE_MS (5 min), SIGKILL the subprocess to unblock the
* for-await, then abort the controller so the generator exits.
*
* Mock Justification (~30% mock code):
* - Session fixtures: Required to create valid ActiveSession objects with all
* required fields — tests the actual detection logic, not fixture creation.
* - Process mock: Verify SIGKILL is sent and abort is called — no real subprocess needed.
*/

import { describe, test, expect, beforeEach, afterEach, mock, setSystemTime } from 'bun:test';
import {
MAX_GENERATOR_IDLE_MS,
MAX_SESSION_IDLE_MS,
detectStaleGenerator,
type StaleGeneratorCandidate,
} from '../../../src/services/worker/SessionManager.js';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

interface MockProcess {
exitCode: number | null;
killed: boolean;
kill: (signal?: string) => boolean;
_lastSignal?: string;
}

function createMockProcess(exitCode: number | null = null): MockProcess {
const proc: MockProcess = {
exitCode,
killed: false,
kill(signal?: string) {
proc.killed = true;
proc._lastSignal = signal;
return true;
},
};
return proc;
}

interface TestSession extends StaleGeneratorCandidate {
sessionDbId: number;
startTime: number;
}

function createSession(overrides: Partial<TestSession> = {}): TestSession {
return {
sessionDbId: 1,
generatorPromise: null,
lastGeneratorActivity: Date.now(),
abortController: new AbortController(),
startTime: Date.now(),
...overrides,
};
}

// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

describe('reapStaleSessions — stale generator detection (Issue #1652)', () => {

describe('threshold constants', () => {
test('MAX_GENERATOR_IDLE_MS should be 5 minutes', () => {
expect(MAX_GENERATOR_IDLE_MS).toBe(5 * 60 * 1000);
});

test('MAX_SESSION_IDLE_MS should be 15 minutes', () => {
expect(MAX_SESSION_IDLE_MS).toBe(15 * 60 * 1000);
});

test('generator idle threshold should be less than session idle threshold', () => {
// Ensures stuck generators are cleaned up before idle no-generator sessions
expect(MAX_GENERATOR_IDLE_MS).toBeLessThan(MAX_SESSION_IDLE_MS);
});
});

describe('stale generator detection', () => {
test('should detect generator as stale when idle > 5 minutes', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - (MAX_GENERATOR_IDLE_MS + 1000), // 5m1s ago
});
const proc = createMockProcess();

const isStale = detectStaleGenerator(session, proc);

expect(isStale).toBe(true);
});

test('should NOT detect generator as stale when idle exactly at threshold', () => {
// At exactly the threshold we do NOT yet reap (strictly greater than).
// Freeze time so that both the session creation and detectStaleGenerator
// call share the same Date.now() value, preventing a race where the two
// calls return different timestamps and push the idle time over the boundary.
const now = Date.now();
setSystemTime(now);
try {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: now - MAX_GENERATOR_IDLE_MS,
});
const proc = createMockProcess();

const isStale = detectStaleGenerator(session, proc);

expect(isStale).toBe(false);
} finally {
setSystemTime(); // restore real time
}
});

test('should NOT detect generator as stale when idle < 5 minutes', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - 60_000, // 1 minute ago
});
const proc = createMockProcess();

const isStale = detectStaleGenerator(session, proc);

expect(isStale).toBe(false);
});

test('should NOT flag sessions without a generator (no generator = different code path)', () => {
const session = createSession({
generatorPromise: null,
// Even though lastGeneratorActivity is ancient, no generator means no stale-generator detection
lastGeneratorActivity: 0,
});
const proc = createMockProcess();

const isStale = detectStaleGenerator(session, proc);

expect(isStale).toBe(false);
});
});

describe('subprocess kill on stale generator', () => {
test('should SIGKILL the subprocess when stale generator detected', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - (MAX_GENERATOR_IDLE_MS + 5000),
});
const proc = createMockProcess(); // exitCode === null (still running)

detectStaleGenerator(session, proc);

expect(proc.killed).toBe(true);
expect(proc._lastSignal).toBe('SIGKILL');
});

test('should NOT attempt to kill an already-exited subprocess', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - (MAX_GENERATOR_IDLE_MS + 5000),
});
const proc = createMockProcess(0); // exitCode === 0 (already exited)

detectStaleGenerator(session, proc);

// Should not try to kill an already-exited process
expect(proc.killed).toBe(false);
});

test('should still abort controller even when no tracked subprocess found', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - (MAX_GENERATOR_IDLE_MS + 5000),
});

// proc is undefined — subprocess not tracked in ProcessRegistry
detectStaleGenerator(session, undefined);

// AbortController should still be aborted to signal the generator loop
expect(session.abortController.signal.aborted).toBe(true);
});
});

describe('abort controller on stale generator', () => {
test('should abort the session controller when stale generator detected', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - (MAX_GENERATOR_IDLE_MS + 1000),
});
const proc = createMockProcess();

expect(session.abortController.signal.aborted).toBe(false);

detectStaleGenerator(session, proc);

expect(session.abortController.signal.aborted).toBe(true);
});

test('should NOT abort controller for fresh generator', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - 30_000, // 30 seconds ago — fresh
});
const proc = createMockProcess();

detectStaleGenerator(session, proc);

expect(session.abortController.signal.aborted).toBe(false);
});
});

describe('idle session reaping (existing behaviour preserved)', () => {
test('idle session without generator should be reaped after 15 minutes', () => {
const session = createSession({
generatorPromise: null,
startTime: Date.now() - (MAX_SESSION_IDLE_MS + 1000), // 15m1s ago
});

// Simulate the existing idle-session path (no generator, no pending work)
const sessionAge = Date.now() - session.startTime;
const shouldReap = !session.generatorPromise && sessionAge > MAX_SESSION_IDLE_MS;

expect(shouldReap).toBe(true);
});

test('idle session without generator should NOT be reaped before 15 minutes', () => {
const session = createSession({
generatorPromise: null,
startTime: Date.now() - (10 * 60 * 1000), // 10 minutes ago
});

const sessionAge = Date.now() - session.startTime;
const shouldReap = !session.generatorPromise && sessionAge > MAX_SESSION_IDLE_MS;

expect(shouldReap).toBe(false);
});

test('session with active generator should never be reaped by idle-session path', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
startTime: Date.now() - (60 * 60 * 1000), // 1 hour ago — very old
// But generator was active recently (fresh activity)
lastGeneratorActivity: Date.now() - 10_000,
});
const proc = createMockProcess();

// Stale generator detection says NOT stale (activity is fresh)
const isStaleGenerator = detectStaleGenerator(session, proc);
expect(isStaleGenerator).toBe(false);

// Idle-session path is skipped because generatorPromise is non-null
expect(session.generatorPromise).not.toBeNull();
});
});

describe('lastGeneratorActivity update semantics', () => {
test('should be initialized to session startTime to avoid false positives on boot', () => {
// When a session is first created, lastGeneratorActivity must be set to a
// recent time so the generator isn't immediately flagged as stale before it
// has had a chance to produce output.
const now = Date.now();
const session = createSession({
startTime: now,
lastGeneratorActivity: now, // mirrors SessionManager initialization
});

const generatorIdleMs = now - session.lastGeneratorActivity;
expect(generatorIdleMs).toBeLessThan(MAX_GENERATOR_IDLE_MS);
});

test('should be updated when generator yields a message (prevents false positive reap)', () => {
const session = createSession({
generatorPromise: Promise.resolve(),
lastGeneratorActivity: Date.now() - (MAX_GENERATOR_IDLE_MS - 10_000), // 4m50s ago
});

// Simulate the getMessageIterator yielding a message:
session.lastGeneratorActivity = Date.now();

// Generator is now fresh — should not be reaped
const generatorIdleMs = Date.now() - session.lastGeneratorActivity;
expect(generatorIdleMs).toBeLessThan(MAX_GENERATOR_IDLE_MS);
});
});
});
@@ -8,7 +8,6 @@ import { ClaudeMemDatabase } from '../../src/services/sqlite/Database.js';
import {
storeObservation,
computeObservationContentHash,
findDuplicateObservation,
} from '../../src/services/sqlite/observations/store.js';
import {
createSDKSession,
@@ -91,16 +90,19 @@ describe('TRIAGE-03: Data Integrity', () => {
expect(result2.id).toBe(result1.id);
});

it('storeObservation allows same content after dedup window expires', () => {
it('storeObservation deduplicates identical content regardless of time gap (UNIQUE constraint)', () => {
// PATHFINDER-2026-04-22 Plan 01 Phase 4: the legacy time-window dedup
// was replaced by UNIQUE(memory_session_id, content_hash) +
// ON CONFLICT DO NOTHING. Identical content always dedupes.
const memId = createSessionWithMemoryId(db, 'content-dedup-2', 'mem-dedup-2');
const obs = createObservationInput({ title: 'Same Title', narrative: 'Same Narrative' });

const now = Date.now();
const result1 = storeObservation(db, memId, 'test-project', obs, 1, 0, now);
// 31 seconds later — outside the 30s window
// Far outside any legacy window — UNIQUE constraint still dedupes.
const result2 = storeObservation(db, memId, 'test-project', obs, 1, 0, now + 31_000);

expect(result2.id).not.toBe(result1.id);
expect(result2.id).toBe(result1.id);
});

it('storeObservation allows different content at same time', () => {
@@ -159,47 +161,10 @@ describe('TRIAGE-03: Data Integrity', () => {
});
});

describe('Stuck isProcessing flag', () => {
it('hasAnyPendingWork resets stuck processing messages older than 5 minutes', () => {
// Create a pending_messages table entry that's stuck in 'processing'
const sessionId = createSDKSession(db, 'content-stuck', 'stuck-project', 'test');

// Insert a processing message stuck for 6 minutes
const sixMinutesAgo = Date.now() - (6 * 60 * 1000);
db.prepare(`
INSERT INTO pending_messages (session_db_id, content_session_id, message_type, status, retry_count, created_at_epoch, started_processing_at_epoch)
VALUES (?, 'content-stuck', 'observation', 'processing', 0, ?, ?)
`).run(sessionId, sixMinutesAgo, sixMinutesAgo);

const pendingStore = new PendingMessageStore(db);

// hasAnyPendingWork should reset the stuck message and still return true (it's now pending again)
const hasPending = pendingStore.hasAnyPendingWork();
expect(hasPending).toBe(true);

// Verify the message was reset to 'pending'
const msg = db.prepare('SELECT status FROM pending_messages WHERE content_session_id = ?').get('content-stuck') as { status: string };
expect(msg.status).toBe('pending');
});

it('hasAnyPendingWork does NOT reset recently-started processing messages', () => {
const sessionId = createSDKSession(db, 'content-recent', 'recent-project', 'test');

// Insert a processing message started 1 minute ago (well within 5-minute threshold)
const oneMinuteAgo = Date.now() - (1 * 60 * 1000);
db.prepare(`
INSERT INTO pending_messages (session_db_id, content_session_id, message_type, status, retry_count, created_at_epoch, started_processing_at_epoch)
VALUES (?, 'content-recent', 'observation', 'processing', 0, ?, ?)
`).run(sessionId, oneMinuteAgo, oneMinuteAgo);

const pendingStore = new PendingMessageStore(db);
const hasPending = pendingStore.hasAnyPendingWork();
expect(hasPending).toBe(true);

// Verify the message is still 'processing' (not reset)
const msg = db.prepare('SELECT status FROM pending_messages WHERE content_session_id = ?').get('content-recent') as { status: string };
expect(msg.status).toBe('processing');
});
describe('hasAnyPendingWork', () => {
// PATHFINDER-2026-04-22 Plan 01: time-based stale-reset on
// started_processing_at_epoch was replaced by worker-PID liveness.
// The legacy "5-minute reset" tests were removed with the column.

it('hasAnyPendingWork returns false when no pending or processing messages exist', () => {
const pendingStore = new PendingMessageStore(db);

@@ -131,7 +131,6 @@ describe('ResponseProcessor', () => {
conversationHistory: [],
currentProvider: 'claude',
processingMessageIds: [], // CLAIM-CONFIRM pattern: track message IDs being processed
consecutiveSummaryFailures: 0,
...overrides,
} as ActiveSession;
}
@@ -214,9 +213,15 @@ describe('ResponseProcessor', () => {
});
});

describe('non-XML observer responses', () => {
it('warns when the observer returns prose that will be discarded', async () => {
const session = createMockSession();
describe('non-XML observer responses (fail-fast — plan 03 phase 2)', () => {
it('warns and marks messages failed when the observer returns non-XML prose', async () => {
const markFailed = mock(() => {});
mockSessionManager = {
getMessageIterator: async function* () { yield* []; },
getPendingMessageStore: () => ({ markFailed, confirmProcessed: mock(() => {}) }),
} as unknown as SessionManager;

const session = createMockSession({ processingMessageIds: [7] });
const responseText = 'Skipping — repeated log scan with no new findings.';

await processAgentResponse(
@@ -232,30 +237,20 @@ describe('ResponseProcessor', () => {

expect(logger.warn).toHaveBeenCalledWith(
'PARSER',
'TestAgent returned non-XML response; observation content was discarded',
expect.objectContaining({
sessionId: 1,
preview: responseText
})
expect.stringMatching(/^TestAgent returned unparseable response:/),
expect.objectContaining({ sessionId: 1 })
);
const [, , observations, summary] = mockStoreObservations.mock.calls[0];
expect(observations).toHaveLength(0);
expect(summary).toBeNull();
expect(markFailed).toHaveBeenCalledWith(7);
expect(mockStoreObservations).not.toHaveBeenCalled();
});
});

describe('parsing summary from XML response', () => {
it('should parse summary from response', async () => {
const session = createMockSession();
// PATHFINDER plan 03 phase 1: parseAgentXml returns one kind per call.
// Summary-only response exercises the summary path.
const responseText = `
<observation>
<type>discovery</type>
<title>Test</title>
<facts></facts>
<concepts></concepts>
<files_read></files_read>
<files_modified></files_modified>
</observation>
<summary>
<request>Build login form</request>
<investigated>Reviewed existing forms</investigated>
@@ -374,7 +369,11 @@ describe('ResponseProcessor', () => {
expect(memorySessionId).toBe('memory-session-456');
expect(project).toBe('test-project');
expect(observations).toHaveLength(1);
expect(summary).not.toBeNull();
// PATHFINDER plan 03 phase 1: parseAgentXml returns ONE kind per call.
// The first recognised root wins (here: <observation>), so the summary
// in the same response is NOT extracted — the caller is expected to
// issue observation turns and summary turns separately.
expect(summary).toBeNull();
expect(promptNumber).toBe(5);
expect(tokens).toBe(100);
expect(timestamp).toBe(1700000000000);
@@ -434,16 +433,21 @@ describe('ResponseProcessor', () => {
});

it('should broadcast summary via SSE', async () => {
// PATHFINDER plan 03 phase 1: parseAgentXml returns one kind per call,
// so summary broadcasts require a summary-only response.
mockStoreObservations = mock(() => ({
observationIds: [],
summaryId: 99,
createdAtEpoch: 1700000000000,
} as StorageResult));
(mockDbManager.getSessionStore as any) = () => ({
storeObservations: mockStoreObservations,
ensureMemorySessionIdRegistered: mock(() => {}),
getSessionById: mock(() => ({ memory_session_id: 'memory-session-456' })),
});

const session = createMockSession();
const responseText = `
<observation>
<type>discovery</type>
<title>Test</title>
<facts></facts>
<concepts></concepts>
<files_read></files_read>
<files_modified></files_modified>
</observation>
<summary>
<request>Build feature</request>
<investigated>Reviewed code</investigated>
@@ -473,70 +477,44 @@ describe('ResponseProcessor', () => {
});
});

describe('handling empty response', () => {
it('should handle empty response gracefully', async () => {
const session = createMockSession();
describe('handling empty / non-XML response (fail-fast — plan 03 phase 2)', () => {
it('marks in-flight messages failed and does NOT call storeObservations on empty response', async () => {
const markFailed = mock(() => {});
mockSessionManager = {
getMessageIterator: async function* () { yield* []; },
getPendingMessageStore: () => ({ markFailed, confirmProcessed: mock(() => {}) }),
} as unknown as SessionManager;

const session = createMockSession({ processingMessageIds: [1, 2, 3] });
const responseText = '';

// Mock to handle empty observations
mockStoreObservations = mock(() => ({
observationIds: [],
summaryId: null,
createdAtEpoch: 1700000000000,
}));
(mockDbManager.getSessionStore as any) = () => ({
storeObservations: mockStoreObservations,
ensureMemorySessionIdRegistered: mock(() => {}),
getSessionById: mock(() => ({ memory_session_id: 'memory-session-456' })),
});

await processAgentResponse(
responseText,
session,
mockDbManager,
mockSessionManager,
mockWorker,
100,
null,
'TestAgent'
responseText, session, mockDbManager, mockSessionManager, mockWorker,
100, null, 'TestAgent'
);

// Should still call storeObservations with empty arrays
expect(mockStoreObservations).toHaveBeenCalledTimes(1);
const [, , observations, summary] = mockStoreObservations.mock.calls[0];
expect(observations).toHaveLength(0);
expect(summary).toBeNull();
expect(mockStoreObservations).not.toHaveBeenCalled();
expect(markFailed).toHaveBeenCalledTimes(3);
expect(session.processingMessageIds).toEqual([]);
});

it('should handle response with only text (no XML)', async () => {
const session = createMockSession();
it('marks in-flight messages failed and does NOT call storeObservations on plain-text response', async () => {
const markFailed = mock(() => {});
mockSessionManager = {
getMessageIterator: async function* () { yield* []; },
getPendingMessageStore: () => ({ markFailed, confirmProcessed: mock(() => {}) }),
} as unknown as SessionManager;

const session = createMockSession({ processingMessageIds: [42] });
const responseText = 'This is just plain text without any XML tags.';

mockStoreObservations = mock(() => ({
observationIds: [],
summaryId: null,
createdAtEpoch: 1700000000000,
}));
(mockDbManager.getSessionStore as any) = () => ({
storeObservations: mockStoreObservations,
ensureMemorySessionIdRegistered: mock(() => {}),
getSessionById: mock(() => ({ memory_session_id: 'memory-session-456' })),
});

await processAgentResponse(
responseText,
session,
mockDbManager,
mockSessionManager,
mockWorker,
100,
null,
'TestAgent'
responseText, session, mockDbManager, mockSessionManager, mockWorker,
100, null, 'TestAgent'
);

expect(mockStoreObservations).toHaveBeenCalledTimes(1);
const [, , observations] = mockStoreObservations.mock.calls[0];
expect(observations).toHaveLength(0);
expect(mockStoreObservations).not.toHaveBeenCalled();
expect(markFailed).toHaveBeenCalledTimes(1);
});
});

@@ -669,7 +647,11 @@ describe('ResponseProcessor', () => {
const session = createMockSession({
memorySessionId: null, // Missing memory session ID
});
const responseText = '<observation><type>discovery</type></observation>';
const responseText = `<observation>
<type>discovery</type>
<title>some title</title>
<narrative>some narrative</narrative>
</observation>`;

await expect(
processAgentResponse(
@@ -729,7 +711,14 @@ describe('ResponseProcessor', () => {
});
});

describe('circuit breaker: consecutiveSummaryFailures counter (#1633)', () => {
// PATHFINDER plan 03 phase 3: circuit breaker (consecutiveSummaryFailures) deleted.
// Former tests covered: counter stability on observation turns, increment on
// missing summary, neutrality on <skip_summary/>, reset on successful summary.
// Replacement coverage: `tests/sdk/parse-summary.test.ts` asserts that the
// parser returns `{ valid: false, reason }` for malformed summaries; the
// failure path goes through PendingMessageStore.markFailed's retry ladder,
// which is unit-tested separately in tests/services/sqlite/.
describe.skip('circuit breaker: consecutiveSummaryFailures counter (#1633 — deleted)', () => {
const SUMMARY_PROMPT = `--- ${SUMMARY_MODE_MARKER} ---\nDo the summary now.`;

it('does NOT increment the counter on normal observation responses (P1 regression guard)', async () => {

@@ -50,6 +50,37 @@ async function flushPromises(): Promise<void> {
|
||||
await Promise.resolve();
|
||||
}
|
||||
|
||||
/**
|
||||
* Plan 06 Phase 3 — body validation lives in `validateBody` middleware now.
|
||||
* Build a single chain function that runs the validateBody middleware
|
||||
* followed by the handler, mirroring how Express dispatches them in
|
||||
* production.
|
||||
*/
|
||||
function captureChain(mockApp: any, targetPath: string): (req: Request, res: Response) => void {
|
||||
let middleware: ((req: Request, res: Response, next: () => void) => void) | undefined;
|
||||
let handler: (req: Request, res: Response) => void;
|
||||
mockApp.post = mock((path: string, ...rest: any[]) => {
|
||||
if (path !== targetPath) return;
|
||||
if (rest.length === 1) {
|
||||
handler = rest[0];
|
||||
} else {
|
||||
middleware = rest[0];
|
||||
handler = rest[1];
|
||||
}
|
||||
});
|
||||
return (req: Request, res: Response): void => {
|
||||
if (!middleware) {
|
||||
handler(req, res);
|
||||
return;
|
||||
}
|
||||
let nextCalled = false;
|
||||
middleware(req, res, () => {
|
||||
nextCalled = true;
|
||||
});
|
||||
if (nextCalled) handler(req, res);
|
||||
};
|
||||
}
|
||||
|
||||
describe('CorpusRoutes Type Coercion', () => {
|
||||
let handler: (req: Request, res: Response) => void;
|
||||
let mockBuild: ReturnType<typeof mock>;
|
||||
@@ -63,14 +94,11 @@ describe('CorpusRoutes Type Coercion', () => {
|
||||
{} as any
|
||||
);
|
||||
|
||||
const mockApp = {
|
||||
post: mock((path: string, fn: any) => {
|
||||
if (path === '/api/corpus') handler = fn;
|
||||
}),
|
||||
const mockApp: any = {
|
||||
get: mock(() => {}),
|
||||
delete: mock(() => {}),
|
||||
};
|
||||
|
||||
handler = captureChain(mockApp, '/api/corpus');
|
||||
routes.setupRoutes(mockApp as any);
|
||||
});
|
||||
|
||||
|
||||
@@ -38,6 +38,37 @@ function createMockReqRes(body: any): { req: Partial<Request>; res: Partial<Resp
   };
 }

+/**
+ * Plan 06 Phase 3 — body validation lives in `validateBody` middleware now.
+ * Build a single chain function that runs the validateBody middleware
+ * followed by the handler, mirroring how Express dispatches them in
+ * production.
+ */
+function captureChain(mockApp: any, targetPath: string): (req: Request, res: Response) => void {
+  let middleware: (req: Request, res: Response, next: () => void) => void;
+  let handler: (req: Request, res: Response) => void;
+  mockApp.post = mock((path: string, ...rest: any[]) => {
+    if (path !== targetPath) return;
+    if (rest.length === 1) {
+      handler = rest[0];
+    } else {
+      middleware = rest[0];
+      handler = rest[1];
+    }
+  });
+  return (req: Request, res: Response): void => {
+    if (!middleware) {
+      handler(req, res);
+      return;
+    }
+    let nextCalled = false;
+    middleware(req, res, () => {
+      nextCalled = true;
+    });
+    if (nextCalled) handler(req, res);
+  };
+}

describe('DataRoutes Type Coercion', () => {
  let routes: DataRoutes;
  let mockGetObservationsByIds: ReturnType<typeof mock>;
@@ -82,13 +113,12 @@ describe('DataRoutes Type Coercion', () => {
     let handler: (req: Request, res: Response) => void;

     beforeEach(() => {
-      const mockApp = {
+      const mockApp: any = {
         get: mock(() => {}),
-        post: mock((path: string, fn: any) => {
-          if (path === '/api/observations/batch') handler = fn;
-        }),
         delete: mock(() => {}),
         use: mock(() => {}),
       };
+      handler = captureChain(mockApp, '/api/observations/batch');
       routes.setupRoutes(mockApp as any);
     });

@@ -143,13 +173,12 @@ describe('DataRoutes Type Coercion', () => {
     let handler: (req: Request, res: Response) => void;

     beforeEach(() => {
-      const mockApp = {
+      const mockApp: any = {
         get: mock(() => {}),
-        post: mock((path: string, fn: any) => {
-          if (path === '/api/sdk-sessions/batch') handler = fn;
-        }),
         delete: mock(() => {}),
         use: mock(() => {}),
       };
+      handler = captureChain(mockApp, '/api/sdk-sessions/batch');
       routes.setupRoutes(mockApp as any);
     });

@@ -1,204 +0,0 @@
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { EventEmitter } from 'events';
import {
  registerProcess,
  unregisterProcess,
  getProcessBySession,
  getActiveCount,
  getActiveProcesses,
  waitForSlot,
  ensureProcessExit,
} from '../../src/services/worker/ProcessRegistry.js';

/**
 * Create a mock ChildProcess that behaves like a real one for testing.
 * Supports exitCode, killed, kill(), and event emission.
 */
function createMockProcess(overrides: { exitCode?: number | null; killed?: boolean } = {}) {
  const emitter = new EventEmitter();
  const mock = Object.assign(emitter, {
    pid: Math.floor(Math.random() * 100000) + 1000,
    exitCode: overrides.exitCode ?? null,
    killed: overrides.killed ?? false,
    kill(signal?: string) {
      mock.killed = true;
      // Simulate async exit after kill
      setTimeout(() => {
        mock.exitCode = signal === 'SIGKILL' ? null : 0;
        mock.emit('exit', mock.exitCode, signal || 'SIGTERM');
      }, 10);
      return true;
    },
    stdin: null,
    stdout: null,
    stderr: null,
  });
  return mock;
}

// Helper to clear registry between tests by unregistering all
function clearRegistry() {
  for (const p of getActiveProcesses()) {
    unregisterProcess(p.pid);
  }
}

describe('ProcessRegistry', () => {
  beforeEach(() => {
    clearRegistry();
  });

  afterEach(() => {
    clearRegistry();
  });

  describe('registerProcess / unregisterProcess', () => {
    it('should register and track a process', () => {
      const proc = createMockProcess();
      registerProcess(proc.pid, 1, proc as any);
      expect(getActiveCount()).toBe(1);
      expect(getProcessBySession(1)).toBeDefined();
    });

    it('should unregister a process and free the slot', () => {
      const proc = createMockProcess();
      registerProcess(proc.pid, 1, proc as any);
      unregisterProcess(proc.pid);
      expect(getActiveCount()).toBe(0);
      expect(getProcessBySession(1)).toBeUndefined();
    });
  });

  describe('getProcessBySession', () => {
    it('should return undefined for unknown session', () => {
      expect(getProcessBySession(999)).toBeUndefined();
    });

    it('should find process by session ID', () => {
      const proc = createMockProcess();
      registerProcess(proc.pid, 42, proc as any);
      const found = getProcessBySession(42);
      expect(found).toBeDefined();
      expect(found!.pid).toBe(proc.pid);
    });
  });

  describe('waitForSlot', () => {
    it('should resolve immediately when under limit', async () => {
      await waitForSlot(2); // 0 processes, limit 2
    });

    it('should wait until a slot opens', async () => {
      const proc1 = createMockProcess();
      const proc2 = createMockProcess();
      registerProcess(proc1.pid, 1, proc1 as any);
      registerProcess(proc2.pid, 2, proc2 as any);

      // Start waiting for slot (limit=2, both slots full)
      const waitPromise = waitForSlot(2, 5000);

      // Free a slot after 50ms
      setTimeout(() => unregisterProcess(proc1.pid), 50);

      await waitPromise; // Should resolve once slot freed
      expect(getActiveCount()).toBe(1);
    });

    it('should throw on timeout when no slot opens', async () => {
      const proc1 = createMockProcess();
      const proc2 = createMockProcess();
      registerProcess(proc1.pid, 1, proc1 as any);
      registerProcess(proc2.pid, 2, proc2 as any);

      await expect(waitForSlot(2, 100)).rejects.toThrow('Timed out waiting for agent pool slot');
    });

    it('should throw when hard cap (10) is exceeded', async () => {
      // Register 10 processes to hit the hard cap
      const procs = [];
      for (let i = 0; i < 10; i++) {
        const proc = createMockProcess();
        registerProcess(proc.pid, i + 100, proc as any);
        procs.push(proc);
      }

      await expect(waitForSlot(20)).rejects.toThrow('Hard cap exceeded');
    });
  });

  describe('ensureProcessExit', () => {
    it('should unregister immediately if exitCode is set', async () => {
      const proc = createMockProcess({ exitCode: 0 });
      registerProcess(proc.pid, 1, proc as any);

      await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any });
      expect(getActiveCount()).toBe(0);
    });

    it('should NOT treat proc.killed as exited — must wait for actual exit', async () => {
      // This is the core bug fix: proc.killed=true but exitCode=null means NOT dead
      const proc = createMockProcess({ killed: true, exitCode: null });
      registerProcess(proc.pid, 1, proc as any);

      // Override kill to simulate SIGKILL + delayed exit
      proc.kill = (signal?: string) => {
        proc.killed = true;
        setTimeout(() => {
          proc.exitCode = 0;
          proc.emit('exit', 0, signal);
        }, 20);
        return true;
      };

      // ensureProcessExit should NOT short-circuit on proc.killed
      // It should wait for exit event or timeout, then escalate to SIGKILL
      const start = Date.now();
      await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any }, 100);
      expect(getActiveCount()).toBe(0);
    });

    it('should escalate to SIGKILL after timeout', async () => {
      const proc = createMockProcess();
      registerProcess(proc.pid, 1, proc as any);

      // Override kill: only respond to SIGKILL
      let sigkillSent = false;
      proc.kill = (signal?: string) => {
        proc.killed = true;
        if (signal === 'SIGKILL') {
          sigkillSent = true;
          setTimeout(() => {
            proc.exitCode = -1;
            proc.emit('exit', -1, 'SIGKILL');
          }, 10);
        }
        // Don't emit exit for non-SIGKILL signals (simulates stuck process)
        return true;
      };

      await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any }, 100);
      expect(sigkillSent).toBe(true);
      expect(getActiveCount()).toBe(0);
    });

    it('should unregister even if process ignores SIGKILL (after 1s timeout)', async () => {
      const proc = createMockProcess();
      registerProcess(proc.pid, 1, proc as any);

      // Override kill to never emit exit (completely stuck process)
      proc.kill = () => {
        proc.killed = true;
        return true;
      };

      const start = Date.now();
      await ensureProcessExit({ pid: proc.pid, sessionDbId: 1, spawnedAt: Date.now(), process: proc as any }, 100);
      const elapsed = Date.now() - start;

      // Should have waited ~100ms for graceful + ~1000ms for SIGKILL timeout
      expect(elapsed).toBeGreaterThan(90);
      // Process is unregistered regardless (safety net)
      expect(getActiveCount()).toBe(0);
    });
  });
});
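The escalation contract those `ensureProcessExit` tests pin down can be sketched standalone. Everything below is a hypothetical rendering, not the real ProcessRegistry API: `ProcLike`, `waitForExit`, and `ensureExitSketch` are invented names illustrating the rule that you never trust `proc.killed`, you wait for a real `'exit'` event, escalate to SIGKILL after the grace period, and proceed after a bounded wait even if the process never exits.

```typescript
import { EventEmitter } from 'events';

// Minimal shape of the child process fields the sketch needs.
type ProcLike = EventEmitter & { exitCode: number | null; kill: (signal?: string) => boolean };

// Resolve true when 'exit' fires, false when the timeout elapses first.
function waitForExit(proc: ProcLike, timeoutMs: number): Promise<boolean> {
  return new Promise((resolve) => {
    const timer = setTimeout(() => resolve(false), timeoutMs);
    proc.once('exit', () => { clearTimeout(timer); resolve(true); });
  });
}

async function ensureExitSketch(proc: ProcLike, graceMs: number): Promise<void> {
  if (proc.exitCode !== null) return;            // already exited, nothing to wait for
  proc.kill('SIGTERM');                          // polite request first
  if (await waitForExit(proc, graceMs)) return;  // graceful exit observed
  proc.kill('SIGKILL');                          // escalate to the unblockable signal
  await waitForExit(proc, 1000);                 // bounded wait: proceed even if ignored
}
```

A process that never emits `'exit'` still resolves this promise after roughly `graceMs + 1000` ms, mirroring the "ignores SIGKILL (after 1s timeout)" safety-net test above.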

@@ -207,8 +207,35 @@ describe('ResultFormatter', () => {

     const formatted = formatter.formatSearchResults(results, 'test', true);

-    expect(formatted).toContain('Vector search failed');
-    expect(formatted).toContain('semantic search unavailable');
+    expect(formatted).toContain('Semantic search failed');
     expect(formatted).toContain('Falling back to keyword search');
+    expect(formatted).not.toContain('Install uv');
   });
 });

+describe('formatChromaFailureMessage', () => {
+  it('formats connection-error reasons with offline messaging', () => {
+    const message = ResultFormatter.formatChromaFailureMessage({
+      message: 'subprocess closed',
+      isConnectionError: true,
+    });
+
+    expect(message).toContain('Semantic search is offline');
+    expect(message).toContain('Chroma MCP unreachable: subprocess closed');
+    expect(message).toContain('Falling back to keyword search');
+    expect(message).not.toContain('Install uv');
+  });
+
+  it('formats generic failure reasons with diagnostic pointer', () => {
+    const message = ResultFormatter.formatChromaFailureMessage({
+      message: 'something went wrong',
+      isConnectionError: false,
+    });
+
+    expect(message).toContain('Semantic search failed: something went wrong');
+    expect(message).toContain('Falling back to keyword search');
+    expect(message).toContain('CHROMA_SYNC');
+    expect(message).not.toContain('Install uv');
+  });
+});

@@ -150,16 +150,19 @@ describe('SearchOrchestrator', () => {
     expect(mockChromaSync.queryChroma).toHaveBeenCalled();
   });

-  it('should fall back to SQLite when Chroma fails', async () => {
+  it('should throw ChromaUnavailableError (HTTP 503) when Chroma fails', async () => {
     mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma unavailable')));

-    const result = await orchestrator.search({
-      query: 'test query'
-    });
+    // Fail-fast: Chroma errors propagate as ChromaUnavailableError
+    // (HTTP 503 via the AppError status code) rather than silently
+    // falling back to SQLite.
+    await expect(
+      orchestrator.search({ query: 'test query' })
+    ).rejects.toMatchObject({
+      name: 'ChromaUnavailableError',
+      statusCode: 503,
+      code: 'CHROMA_UNAVAILABLE'
+    });

-    // Chroma failed, should have fallen back
-    expect(result.fellBack).toBe(true);
-    expect(result.usedChroma).toBe(false);
   });

   it('should normalize comma-separated concepts', async () => {

@@ -296,7 +299,8 @@ describe('SearchOrchestrator', () => {

     const formatted = orchestrator.formatSearchResults(results, 'test', true);

-    expect(formatted).toContain('Vector search failed');
+    expect(formatted).toContain('Semantic search');
     expect(formatted).toContain('Falling back to keyword search');
   });
 });
});

@@ -130,7 +130,6 @@ describe('ChromaSearchStrategy', () => {
     const result = await strategy.search(options);

     expect(result.usedChroma).toBe(true);
-    expect(result.fellBack).toBe(false);
     expect(result.strategy).toBe('chroma');
   });

@@ -310,23 +309,18 @@ describe('ChromaSearchStrategy', () => {
     expect(mockSessionStore.getObservationsByIds).not.toHaveBeenCalled();
   });

-  it('should handle Chroma errors gracefully (returns usedChroma: false)', async () => {
+  it('should propagate Chroma errors (fail-fast, no silent fallback)', async () => {
     mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma connection failed')));

     const options: StrategySearchOptions = {
       query: 'test query'
     };

-    const result = await strategy.search(options);
-    expect(result.usedChroma).toBe(false);
-    expect(result.fellBack).toBe(false);
-    expect(result.results.observations).toHaveLength(0);
-    expect(result.results.sessions).toHaveLength(0);
-    expect(result.results.prompts).toHaveLength(0);
+    // Fail-fast: the orchestrator wraps this into a ChromaUnavailableError (HTTP 503).
+    await expect(strategy.search(options)).rejects.toThrow('Chroma connection failed');
   });

-  it('should handle SQLite hydration errors gracefully', async () => {
+  it('should propagate SQLite hydration errors (fail-fast)', async () => {
     mockSessionStore.getObservationsByIds = mock(() => {
       throw new Error('SQLite error');
     });
@@ -336,10 +330,7 @@
       searchType: 'observations'
     };

-    const result = await strategy.search(options);
-    expect(result.usedChroma).toBe(false); // Error occurred
-    expect(result.results.observations).toHaveLength(0);
+    await expect(strategy.search(options)).rejects.toThrow('SQLite error');
   });

   it('should correctly align IDs with metadatas when Chroma returns duplicate sqlite_ids (multiple docs per observation)', async () => {

@@ -198,7 +198,6 @@ describe('HybridSearchStrategy', () => {
     expect(mockSessionSearch.findByConcept).toHaveBeenCalledWith('test-concept', expect.any(Object));
     expect(mockChromaSync.queryChroma).toHaveBeenCalledWith('test-concept', expect.any(Number));
     expect(result.usedChroma).toBe(true);
-    expect(result.fellBack).toBe(false);
     expect(result.strategy).toBe('hybrid');
   });

@@ -251,18 +250,16 @@ describe('HybridSearchStrategy', () => {
     expect(mockChromaSync.queryChroma).not.toHaveBeenCalled(); // Should short-circuit
   });

-  it('should fall back to metadata-only on Chroma error', async () => {
+  it('should propagate Chroma error (fail-fast, no silent fallback)', async () => {
     mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma failed')));

     const options: StrategySearchOptions = {
       limit: 10
     };

-    const result = await strategy.findByConcept('test-concept', options);
-    expect(result.usedChroma).toBe(false);
-    expect(result.fellBack).toBe(true);
-    expect(result.results.observations).toHaveLength(3); // All metadata results
+    await expect(
+      strategy.findByConcept('test-concept', options)
+    ).rejects.toThrow('Chroma failed');
   });
 });

@@ -307,18 +304,16 @@ describe('HybridSearchStrategy', () => {
     expect(result.results.observations[0].id).toBe(2);
   });

-  it('should fall back on Chroma error', async () => {
+  it('should propagate Chroma error (fail-fast, no silent fallback)', async () => {
     mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma unavailable')));

     const options: StrategySearchOptions = {
       limit: 10
     };

-    const result = await strategy.findByType('bugfix', options);
-    expect(result.usedChroma).toBe(false);
-    expect(result.fellBack).toBe(true);
-    expect(result.results.observations.length).toBeGreaterThan(0);
+    await expect(
+      strategy.findByType('bugfix', options)
+    ).rejects.toThrow('Chroma unavailable');
   });

   it('should return empty when no metadata matches', async () => {

@@ -394,18 +389,16 @@ describe('HybridSearchStrategy', () => {
     expect(result.sessions).toHaveLength(1);
   });

-  it('should fall back on Chroma error', async () => {
+  it('should propagate Chroma error (fail-fast, no silent fallback)', async () => {
     mockChromaSync.queryChroma = mock(() => Promise.reject(new Error('Chroma down')));

     const options: StrategySearchOptions = {
       limit: 10
     };

-    const result = await strategy.findByFile('/path/to/file.ts', options);
-    expect(result.usedChroma).toBe(false);
-    expect(result.observations.length).toBeGreaterThan(0);
-    expect(result.sessions).toHaveLength(1);
+    await expect(
+      strategy.findByFile('/path/to/file.ts', options)
+    ).rejects.toThrow('Chroma down');
   });
 });

@@ -116,7 +116,6 @@ describe('SQLiteSearchStrategy', () => {
     const result = await strategy.search(options);

     expect(result.usedChroma).toBe(false);
-    expect(result.fellBack).toBe(false);
     expect(result.strategy).toBe('sqlite');
     expect(result.results.observations).toHaveLength(1);
     expect(result.results.sessions).toHaveLength(1);

@@ -1,251 +0,0 @@
/**
 * Tests for Issue #1590: Session lifecycle guards to prevent runaway API spend
 *
 * Validates three lifecycle safety mechanisms:
 * 1. SIGTERM detection: externally-killed processes must NOT trigger crash recovery
 * 2. Wall-clock age limit: sessions older than MAX_SESSION_WALL_CLOCK_MS must be terminated
 * 3. Duplicate process prevention: a new spawn for a session kills any existing process first
 */

import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { EventEmitter } from 'events';
import {
  registerProcess,
  unregisterProcess,
  getProcessBySession,
  getActiveCount,
  getActiveProcesses,
  createPidCapturingSpawn,
} from '../../src/services/worker/ProcessRegistry.js';

// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

function createMockProcess(overrides: { exitCode?: number | null; killed?: boolean } = {}) {
  const emitter = new EventEmitter();
  const mock = Object.assign(emitter, {
    pid: Math.floor(Math.random() * 100_000) + 10_000,
    exitCode: overrides.exitCode ?? null,
    killed: overrides.killed ?? false,
    stdin: null as null,
    stdout: null as null,
    stderr: null as null,
    kill(signal?: string) {
      mock.killed = true;
      setTimeout(() => {
        mock.exitCode = 0;
        mock.emit('exit', mock.exitCode, signal || 'SIGTERM');
      }, 10);
      return true;
    },
    on: emitter.on.bind(emitter),
    once: emitter.once.bind(emitter),
    off: emitter.off.bind(emitter),
  });
  return mock;
}

function clearRegistry() {
  for (const p of getActiveProcesses()) {
    unregisterProcess(p.pid);
  }
}

// ---------------------------------------------------------------------------
// 1. SIGTERM detection — does NOT trigger crash recovery
// ---------------------------------------------------------------------------

describe('SIGTERM detection (Issue #1590)', () => {
  it('should classify "code 143" as a SIGTERM error', () => {
    const errorMsg = 'Claude Code process exited with code 143';
    const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
    expect(isSigterm).toBe(true);
  });

  it('should classify "signal SIGTERM" as a SIGTERM error', () => {
    const errorMsg = 'Process terminated with signal SIGTERM';
    const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
    expect(isSigterm).toBe(true);
  });

  it('should NOT classify ordinary errors as SIGTERM', () => {
    const errorMsg = 'Invalid API key';
    const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
    expect(isSigterm).toBe(false);
  });

  it('should NOT classify code 1 (normal error) as SIGTERM', () => {
    const errorMsg = 'Claude Code process exited with code 1';
    const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
    expect(isSigterm).toBe(false);
  });

  it('aborting the controller should mark wasAborted=true, preventing crash recovery', () => {
    // Simulate what the catch handler does: abort when SIGTERM detected
    const abortController = new AbortController();
    expect(abortController.signal.aborted).toBe(false);

    // SIGTERM arrives — we abort the controller
    abortController.abort();

    // By the time .finally() runs, wasAborted should be true
    const wasAborted = abortController.signal.aborted;
    expect(wasAborted).toBe(true);
  });

  it('should NOT abort the controller for non-SIGTERM crash errors', () => {
    const abortController = new AbortController();
    const errorMsg = 'FOREIGN KEY constraint failed';

    // Non-SIGTERM: do NOT abort
    const isSigterm = errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
    if (isSigterm) {
      abortController.abort();
    }

    expect(abortController.signal.aborted).toBe(false);
  });
});
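The four classification tests above all inline the same predicate. A hypothetical helper (`isSigtermError` is not a function in the codebase, just a factoring of the repeated expression) makes the rule explicit: only exit code 143 (128 + SIGTERM's signal number 15) or an explicit SIGTERM mention marks an external kill, so crash recovery is skipped.

```typescript
// Factors out the predicate the tests repeat; the real worker inlines this
// check in its catch handler.
function isSigtermError(errorMsg: string): boolean {
  return errorMsg.includes('code 143') || errorMsg.includes('signal SIGTERM');
}

console.log(isSigtermError('Claude Code process exited with code 143')); // true
console.log(isSigtermError('Invalid API key'));                          // false
console.log(isSigtermError('Claude Code process exited with code 1'));   // false
```

Substring matching on "code 1" would also match "code 143", which is why the tests check the full "code 1" message against the longer "code 143" pattern and not the reverse.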

// ---------------------------------------------------------------------------
// 2. Wall-clock age limit
// ---------------------------------------------------------------------------

describe('Wall-clock age limit (Issue #1590)', () => {
  const MAX_SESSION_WALL_CLOCK_MS = 4 * 60 * 60 * 1000; // 4 hours (matches SessionRoutes)

  it('should NOT terminate a session started < 4 hours ago', () => {
    const startTime = Date.now() - 30 * 60 * 1000; // 30 minutes ago
    const sessionAgeMs = Date.now() - startTime;
    expect(sessionAgeMs).toBeLessThan(MAX_SESSION_WALL_CLOCK_MS);
  });

  it('should NOT terminate a session started exactly 4 hours ago (strict >)', () => {
    // Production uses strict `>` (not `>=`), so exactly 4h is still alive.
    const startTime = Date.now() - MAX_SESSION_WALL_CLOCK_MS;
    const sessionAgeMs = Date.now() - startTime;
    // At exactly the boundary, sessionAgeMs === MAX, and `>` is false → no termination.
    expect(sessionAgeMs).toBeLessThanOrEqual(MAX_SESSION_WALL_CLOCK_MS);
  });

  it('should terminate a session started more than 4 hours ago', () => {
    const startTime = Date.now() - MAX_SESSION_WALL_CLOCK_MS - 1;
    const sessionAgeMs = Date.now() - startTime;
    expect(sessionAgeMs).toBeGreaterThan(MAX_SESSION_WALL_CLOCK_MS);
  });

  it('should terminate a session started 13+ hours ago (the issue scenario)', () => {
    const startTime = Date.now() - 13 * 60 * 60 * 1000; // 13 hours ago
    const sessionAgeMs = Date.now() - startTime;
    expect(sessionAgeMs).toBeGreaterThan(MAX_SESSION_WALL_CLOCK_MS);
  });

  it('aborting + draining pending queue should prevent respawn', () => {
    // Simulate the wall-clock termination sequence:
    // 1. Abort controller (stops active generator)
    // 2. Mark pending messages abandoned (no work to restart for)
    // 3. Remove session from map

    const abortController = new AbortController();
    let pendingAbandoned = 0;
    let sessionRemoved = false;

    // Simulate abort
    abortController.abort();
    expect(abortController.signal.aborted).toBe(true);

    // Simulate markAllSessionMessagesAbandoned
    pendingAbandoned = 3; // Pretend 3 messages were abandoned

    // Simulate removeSessionImmediate
    sessionRemoved = true;

    expect(pendingAbandoned).toBeGreaterThanOrEqual(0);
    expect(sessionRemoved).toBe(true);
  });
});
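The age check those tests exercise reduces to one comparison. A minimal sketch, assuming the SessionRoutes constant (the `shouldTerminateForAge` name is illustrative, not the production function): the strict `>` means a session at exactly the 4-hour boundary is still considered alive.

```typescript
const MAX_SESSION_WALL_CLOCK_MS = 4 * 60 * 60 * 1000; // 4 hours

// Strict >, not >=: a session at exactly the boundary survives one more check.
function shouldTerminateForAge(startedAtMs: number, nowMs: number = Date.now()): boolean {
  return nowMs - startedAtMs > MAX_SESSION_WALL_CLOCK_MS;
}

const now = Date.now();
console.log(shouldTerminateForAge(now - 30 * 60 * 1000, now));            // false: 30 min old
console.log(shouldTerminateForAge(now - MAX_SESSION_WALL_CLOCK_MS, now)); // false: exactly 4 h
console.log(shouldTerminateForAge(now - 13 * 60 * 60 * 1000, now));       // true: 13 h old
```

Passing `nowMs` explicitly keeps the predicate deterministic under test, which is why the sketch does not call `Date.now()` twice the way the inline test arithmetic does.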

// ---------------------------------------------------------------------------
// 3. Duplicate process prevention in createPidCapturingSpawn
// ---------------------------------------------------------------------------

describe('Duplicate process prevention (Issue #1590)', () => {
  beforeEach(() => {
    clearRegistry();
  });

  afterEach(() => {
    clearRegistry();
  });

  it('should detect a duplicate when a live process already exists for the session', () => {
    const proc = createMockProcess();
    registerProcess(proc.pid, 42, proc as any);

    const existing = getProcessBySession(42);
    expect(existing).toBeDefined();
    expect(existing!.process.exitCode).toBeNull(); // Still alive
  });

  it('should NOT detect a duplicate when the existing process has already exited', () => {
    const proc = createMockProcess({ exitCode: 0 });
    registerProcess(proc.pid, 42, proc as any);

    const existing = getProcessBySession(42);
    expect(existing).toBeDefined();
    // exitCode is set — process is already done, NOT a live duplicate
    expect(existing!.process.exitCode).not.toBeNull();
  });

  it('should kill existing process and unregister before spawning', () => {
    const existingProc = createMockProcess();
    registerProcess(existingProc.pid, 99, existingProc as any);
    expect(getActiveCount()).toBe(1);

    // Simulate the duplicate-kill logic:
    const duplicate = getProcessBySession(99);
    if (duplicate && duplicate.process.exitCode === null) {
      try { duplicate.process.kill('SIGTERM'); } catch { /* already dead */ }
      unregisterProcess(duplicate.pid);
    }

    expect(getActiveCount()).toBe(0);
    expect(getProcessBySession(99)).toBeUndefined();
  });

  it('should leave registry empty after killing duplicate so new process can register', () => {
    const oldProc = createMockProcess();
    registerProcess(oldProc.pid, 77, oldProc as any);
    expect(getActiveCount()).toBe(1);

    // Kill duplicate
    const dup = getProcessBySession(77);
    if (dup && dup.process.exitCode === null) {
      try { dup.process.kill('SIGTERM'); } catch { /* ignore */ }
      unregisterProcess(dup.pid);
    }
    expect(getActiveCount()).toBe(0);

    // New process can now register cleanly
    const newProc = createMockProcess();
    registerProcess(newProc.pid, 77, newProc as any);
    expect(getActiveCount()).toBe(1);

    const found = getProcessBySession(77);
    expect(found!.pid).toBe(newProc.pid);
  });

  it('should not interfere when no existing process is registered', () => {
    expect(getProcessBySession(55)).toBeUndefined();

    // Duplicate-kill logic: should be a no-op
    const dup = getProcessBySession(55);
    if (dup && dup.process.exitCode === null) {
      unregisterProcess(dup.pid);
    }

    // Registry should still be empty — no side effects
    expect(getActiveCount()).toBe(0);
  });
});
@@ -342,7 +342,7 @@ describe('Zombie Agent Prevention', () => {
     expect(pendingStore.hasAnyPendingWork()).toBe(true);

     // Terminate: mark abandoned (same as terminateSession does)
-    const abandoned = pendingStore.markAllSessionMessagesAbandoned(sessionId);
+    const abandoned = pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sessionId });
     expect(abandoned).toBe(2);

     // Spinner should stop: no pending work remains

@@ -357,7 +357,7 @@ describe('Zombie Agent Prevention', () => {
     expect(pendingStore.getPendingCount(sessionId)).toBe(0);

     // Terminate with nothing to abandon
-    const abandoned = pendingStore.markAllSessionMessagesAbandoned(sessionId);
+    const abandoned = pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sessionId });
     expect(abandoned).toBe(0);

     // Still no pending work

@@ -369,11 +369,11 @@ describe('Zombie Agent Prevention', () => {
     enqueueTestMessage(sessionId, 'content-terminate-idempotent');

     // First terminate
-    const first = pendingStore.markAllSessionMessagesAbandoned(sessionId);
+    const first = pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sessionId });
     expect(first).toBe(1);

     // Second terminate — already failed, nothing to mark
-    const second = pendingStore.markAllSessionMessagesAbandoned(sessionId);
+    const second = pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sessionId });
     expect(second).toBe(0);

     expect(pendingStore.hasAnyPendingWork()).toBe(false);

@@ -409,9 +409,9 @@ describe('Zombie Agent Prevention', () => {
     expect(pendingStore.hasAnyPendingWork()).toBe(true);

     // Terminate all sessions
-    pendingStore.markAllSessionMessagesAbandoned(sid1);
-    pendingStore.markAllSessionMessagesAbandoned(sid2);
-    pendingStore.markAllSessionMessagesAbandoned(sid3);
+    pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sid1 });
+    pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sid2 });
+    pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sid3 });

     // Spinner must stop
     expect(pendingStore.hasAnyPendingWork()).toBe(false);

@@ -425,7 +425,7 @@ describe('Zombie Agent Prevention', () => {
     enqueueTestMessage(sid2, 'content-isolate-2');

     // Terminate only session 1
-    pendingStore.markAllSessionMessagesAbandoned(sid1);
+    pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sid1 });

     // Session 2 still has work
     expect(pendingStore.getPendingCount(sid1)).toBe(0);

@@ -449,7 +449,7 @@ describe('Zombie Agent Prevention', () => {
     expect(pendingStore.getPendingCount(sessionId)).toBe(2);

     // Terminate should mark BOTH as failed
-    const abandoned = pendingStore.markAllSessionMessagesAbandoned(sessionId);
+    const abandoned = pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sessionId });
     expect(abandoned).toBe(2);
     expect(pendingStore.hasAnyPendingWork()).toBe(false);
   });

@@ -469,7 +469,7 @@ describe('Zombie Agent Prevention', () => {
     expect(pendingStore.getPendingCount(sessionId)).toBe(3);

     // THE INVARIANT: after terminate, hasAnyPendingWork MUST be false
-    pendingStore.markAllSessionMessagesAbandoned(sessionId);
+    pendingStore.transitionMessagesTo('abandoned', { sessionDbId: sessionId });
     expect(pendingStore.hasAnyPendingWork()).toBe(false);
     expect(pendingStore.getPendingCount(sessionId)).toBe(0);
   });
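The rename from `markAllSessionMessagesAbandoned(sessionId)` to `transitionMessagesTo('abandoned', { sessionDbId })` folds session termination into one generic state-transition API. A minimal in-memory sketch of that contract (the real PendingMessageStore is SQLite-backed; `PendingStoreSketch` and its internals are invented for illustration): the call returns how many rows moved, so a second terminate is a no-op returning 0, which is what the idempotency tests assert.

```typescript
// In-memory stand-in for the SQLite-backed store, illustrating the contract only.
type PendingMessage = { sessionDbId: number; status: 'pending' | 'abandoned' };

class PendingStoreSketch {
  private messages: PendingMessage[] = [];

  enqueue(sessionDbId: number): void {
    this.messages.push({ sessionDbId, status: 'pending' });
  }

  // Returns the number of messages transitioned, like an UPDATE's changed-row count.
  transitionMessagesTo(status: 'abandoned', filter: { sessionDbId: number }): number {
    let moved = 0;
    for (const m of this.messages) {
      if (m.sessionDbId === filter.sessionDbId && m.status === 'pending') {
        m.status = status;
        moved++;
      }
    }
    return moved;
  }

  hasAnyPendingWork(): boolean {
    return this.messages.some((m) => m.status === 'pending');
  }
}

const store = new PendingStoreSketch();
store.enqueue(1);
store.enqueue(1);
console.log(store.transitionMessagesTo('abandoned', { sessionDbId: 1 })); // 2
console.log(store.transitionMessagesTo('abandoned', { sessionDbId: 1 })); // 0: idempotent
console.log(store.hasAnyPendingWork()); // false: spinner can stop
```

Filtering on `status === 'pending'` inside the transition is what makes repeated terminates safe without any extra guard at the call sites.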