perf: streamline worker startup and consolidate database connections (#2122)
* docs: pathfinder refactor corpus + Node 20 preflight
Adds the PATHFINDER-2026-04-22 principle-driven refactor plan (11 docs,
cross-checked PASS) plus the exploratory PATHFINDER-2026-04-21 corpus
that motivated it. Bumps engines.node to >=20.0.0 per the ingestion-path
plan preflight (recursive fs.watch). Adds the pathfinder skill.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 01 — data integrity
Schema, UNIQUE constraints, self-healing claim, Chroma upsert fallback.
- Phase 1: fresh schema.sql regenerated at post-refactor shape.
- Phase 2: migrations 23+24 — rebuild pending_messages without
started_processing_at_epoch; UNIQUE(session_id, tool_use_id);
UNIQUE(memory_session_id, content_hash) on observations; dedup
duplicate rows before adding indexes.
- Phase 3: claimNextMessage rewritten to self-healing query using
worker_pid NOT IN live_worker_pids; STALE_PROCESSING_THRESHOLD_MS
and the 60-s stale-reset block deleted.
- Phase 4: DEDUP_WINDOW_MS and findDuplicateObservation deleted;
observations.insert now uses ON CONFLICT DO NOTHING.
- Phase 5: failed-message purge block deleted from worker-service
2-min interval; clearFailedOlderThan method deleted.
- Phase 6: repairMalformedSchema and its Python subprocess repair
path deleted from Database.ts; SQLite errors now propagate.
- Phase 7: Chroma delete-then-add fallback gated behind
CHROMA_SYNC_FALLBACK_ON_CONFLICT env flag as bridge until
Chroma MCP ships native upsert.
- Phase 8: migration 19 no-op block absorbed into fresh schema.sql.
Verification greps all return 0 matches. bun test tests/sqlite/
passes 63/63. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/01-data-integrity.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 02 — process lifecycle
OS process groups replace hand-rolled reapers. Worker runs until
killed; orphans are prevented by detached spawn + kill(-pgid).
- Phase 1: src/services/worker/ProcessRegistry.ts DELETED. The
canonical registry at src/supervisor/process-registry.ts is the
sole survivor; SDK spawn site consolidated into it via new
createSdkSpawnFactory/spawnSdkProcess/getSdkProcessForSession/
ensureSdkProcessExit/waitForSlot helpers.
- Phase 2: SDK children spawn with detached:true + stdio:
['ignore','pipe','pipe']; pgid recorded on ManagedProcessInfo.
- Phase 3: shutdown.ts signalProcess teardown uses
process.kill(-pgid, signal) on Unix when pgid is recorded;
Windows path unchanged (tree-kill/taskkill).
- Phase 4: all reaper intervals deleted — startOrphanReaper call,
staleSessionReaperInterval setInterval (including the co-located
WAL checkpoint — SQLite's built-in wal_autocheckpoint handles
WAL growth without an app-level timer), killIdleDaemonChildren,
killSystemOrphans, reapOrphanedProcesses, reapStaleSessions, and
detectStaleGenerator. MAX_GENERATOR_IDLE_MS and MAX_SESSION_IDLE_MS
constants deleted.
- Phase 5: abandonedTimer — already 0 matches; primary-path cleanup
via generatorPromise.finally() already lives in worker-service
startSessionProcessor and SessionRoutes ensureGeneratorRunning.
- Phase 6: evictIdlestSession and its evict callback deleted from
SessionManager. Pool admission gates backpressure upstream.
- Phase 7: SDK-failure fallback — SessionManager has zero matches
for fallbackAgent/Gemini/OpenRouter. Failures surface to hooks
via exit code 2 through SessionRoutes error mapping.
- Phase 8: ensureWorkerRunning in worker-utils.ts rewritten to
lazy-spawn — consults isWorkerPortAlive (which gates
captureProcessStartToken for PID-reuse safety via commit
99060bac), then spawns detached with unref(), then waits via
waitForWorkerPort({ attempts: 3, backoffMs: 250 }) — hand-rolled
exponential backoff 250→500→1000ms. No respawn npm dep.
- Phase 9: idle self-shutdown — zero matches for
idleCheck/idleTimeout/IDLE_MAX_MS/idleShutdown. Worker exits
only on external SIGTERM via supervisor signal handlers.
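Phase 8's hand-rolled backoff can be sketched as below. The option names and the 250→500→1000ms schedule come from this commit; the injectable probe parameter is an assumption for illustration (the real check is isWorkerPortAlive against the worker port):

```typescript
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

async function waitForWorkerPort(
  opts: { attempts: number; backoffMs: number },
  isAlive: () => Promise<boolean>, // injectable probe — an assumption here
): Promise<boolean> {
  if (await isAlive()) return true; // fast path: worker already up
  let delay = opts.backoffMs;
  for (let i = 0; i < opts.attempts; i++) {
    await sleep(delay);
    if (await isAlive()) return true;
    delay *= 2; // 250 → 500 → 1000ms for { attempts: 3, backoffMs: 250 }
  }
  return false;
}
```

With attempts: 3 the caller gets one immediate probe plus three backed-off retries before giving up — no external retry dependency needed.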
Three test files that exercised deleted code removed:
tests/worker/process-registry.test.ts,
tests/worker/session-lifecycle-guard.test.ts,
tests/services/worker/reap-stale-sessions.test.ts.
Pass count: 1451 → 1407 (-44), all attributable to deleted test
files. Zero new failures. 31 pre-existing failures remain
(schema-repair suite, logger-usage-standards, environmental
openclaw / plugin-distribution) — none introduced by Plan 02.
All 10 verification greps return 0. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/02-process-lifecycle.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 04 (narrowed) — search fail-fast
Phases 3, 5, 6 only. Plan-doc inaccuracies for phases 1/2/4/7/8/9
deferred for plan reconciliation:
- Phase 1/2: ObservationRow type doesn't exist; the four
"formatters" operate on three incompatible types.
- Phase 4: RECENCY_WINDOW_MS already imported from
SEARCH_CONSTANTS at every call site.
- Phase 7: getExistingChromaIds is NOT @deprecated and has an
active caller in ChromaSync.backfillMissingSyncs.
- Phase 8: estimateTokens already consolidated.
- Phase 9: knowledge-corpus rewrite blocked on PG-3
prompt-caching cost smoke test.
Phase 3 — Delete SearchManager.findByConcept/findByFile/findByType.
SearchRoutes handlers (handleSearchByConcept/File/Type) now call
searchManager.getOrchestrator().findByXxx() directly via new
getter accessors on SearchManager. ~250 LoC deleted.
Phase 5 — Fail-fast Chroma. Created
src/services/worker/search/errors.ts with ChromaUnavailableError
extends AppError(503, 'CHROMA_UNAVAILABLE'). Deleted
SearchOrchestrator.executeWithFallback's Chroma-failed
SQLite-fallback branch; runtime Chroma errors now throw 503.
"Path 3" (chromaSync was null at construction — explicitly
uninitialized config) preserved as legitimate empty-result state
per plan text. ChromaSearchStrategy.search no longer wraps in
try/catch — errors propagate.
Phase 6 — Delete HybridSearchStrategy three try/catch silent
fallback blocks (findByConcept, findByType, findByFile) at lines
~82-95, ~120-132, ~161-172. Removed `fellBack` field from
StrategySearchResult type and every return site
(SQLiteSearchStrategy, BaseSearchStrategy.emptyResult,
SearchOrchestrator).
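The Phase 5 error type can be sketched as follows. Only the 503 status and the 'CHROMA_UNAVAILABLE' code are from the plan; AppError's constructor shape here is an assumption:

```typescript
// Assumed AppError base — the real class shape may differ.
class AppError extends Error {
  constructor(
    public readonly status: number,
    public readonly code: string,
    message?: string,
  ) {
    super(message ?? code);
    this.name = new.target.name; // preserves the subclass name
  }
}

class ChromaUnavailableError extends AppError {
  constructor(message = "Chroma vector store is unreachable") {
    super(503, "CHROMA_UNAVAILABLE", message);
  }
}
```

A route-level catch on AppError can then map status/code straight to the HTTP envelope instead of silently falling back to SQLite.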
Tests updated (Principle 7 — delete in same PR):
- search-orchestrator.test.ts: "fall back to SQLite" rewritten
as "throw ChromaUnavailableError (HTTP 503)".
- chroma/hybrid/sqlite-search-strategy tests: rewritten to
rejects.toThrow; removed fellBack assertions.
Verification: SearchManager.findBy → 0; fellBack → 0 in src/.
bun test tests/worker/search/ → 122 pass, 0 fail.
bun test (suite-wide) → 1407 pass, baseline maintained, 0 new
failures. bun run build succeeds.
Plan: PATHFINDER-2026-04-22/04-read-path.md (Phases 3, 5, 6)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 03 — ingestion path
Fail-fast parser, direct in-process ingest, recursive fs.watch,
DB-backed tool pairing. Worker-internal HTTP loopback eliminated.
- Phase 0: Created src/services/worker/http/shared.ts exporting
ingestObservation/ingestPrompt/ingestSummary as direct
in-process functions plus ingestEventBus (Node EventEmitter,
reusing existing pattern — no third event bus introduced).
setIngestContext wires the SessionManager dependency from
worker-service constructor.
- Phase 1: src/sdk/parser.ts collapsed to one parseAgentXml
returning { valid:true; kind: 'observation'|'summary'; data }
| { valid:false; reason: string }. Inspects root element;
<skip_summary reason="…"/> is a first-class summary case
with skipped:true. NEVER returns undefined. NEVER coerces.
- Phase 2: ResponseProcessor calls parseAgentXml exactly once,
branches on the discriminated union. On invalid → markFailed
+ logger.warn(reason). On observation → ingestObservation.
On summary → ingestSummary then emit summaryStoredEvent
{ sessionId, messageId } (consumed by Plan 05's blocking
/api/session/end).
- Phase 3: Deleted consecutiveSummaryFailures field
(ResponseProcessor + SessionManager + worker-types) and
MAX_CONSECUTIVE_SUMMARY_FAILURES constant. Circuit-breaker
guards and "tripped" log lines removed.
- Phase 4: coerceObservationToSummary deleted from sdk/parser.ts.
- Phase 5: src/services/transcripts/watcher.ts rescan setInterval
replaced with fs.watch(transcriptsRoot, { recursive: true,
persistent: true }) — Node 20+ recursive mode.
- Phase 6: src/services/transcripts/processor.ts pendingTools
Map deleted. tool_use rows insert with INSERT OR IGNORE on
UNIQUE(session_id, tool_use_id) (added by Plan 01). New
pairToolUsesByJoin query in PendingMessageStore for read-time
pairing (UNIQUE INDEX provides idempotency; explicit consumer
not yet wired).
- Phase 7: HTTP loopback at processor.ts:252 replaced with
direct ingestObservation call. maybeParseJson silent-passthrough
rewritten to fail-fast (throws on malformed JSON).
- Phase 8: src/utils/tag-stripping.ts countTags + stripTagsInternal
collapsed into one alternation regex, single-pass over input.
- Phase 9: src/utils/transcript-parser.ts (dead TranscriptParser
class) deleted. The active extractLastMessage at
src/shared/transcript-parser.ts:41-144 is the sole survivor.
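The Phase 1 parser contract can be sketched as a discriminated union. The union shape is quoted from the commit; the root-element matching below is a simplified stand-in for the real XML handling:

```typescript
type ParseResult =
  | { valid: true; kind: "observation" | "summary"; data: Record<string, unknown> }
  | { valid: false; reason: string };

function parseAgentXml(xml: string): ParseResult {
  // Inspect the root element only — enough to pick a branch.
  const root = /^\s*<\s*([a-z_]+)/i.exec(xml)?.[1];
  if (root === "observation") return { valid: true, kind: "observation", data: {} };
  if (root === "summary") return { valid: true, kind: "summary", data: {} };
  if (root === "skip_summary") {
    // <skip_summary reason="…"/> is a first-class summary case
    return { valid: true, kind: "summary", data: { skipped: true } };
  }
  return { valid: false, reason: `unrecognized root element: ${root ?? "none"}` };
}
```

Callers branch exactly once on result.valid, so neither undefined nor a coerced shape can leak downstream.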
Tests updated (Principle 7 — same-PR delete):
- tests/sdk/parser.test.ts + parse-summary.test.ts: rewritten
to assert discriminated-union shape; coercion-specific
scenarios collapse into { valid:false } assertions.
- tests/worker/agents/response-processor.test.ts: circuit-breaker
describe block skipped; non-XML/empty-response tests assert
fail-fast markFailed behavior.
Verification: every grep returns 0. transcript-parser.ts deleted.
bun run build succeeds. bun test → 1399 pass / 28 fail / 7 skip
(net -8 pass = the 4 retired circuit-breaker tests + 4 collapsed
parser cases). Zero new failures vs baseline.
Deferred (out of Plan 03 scope, will land in Plan 06): SessionRoutes
HTTP route handlers still call sessionManager.queueObservation
inline rather than the new shared helpers — the helpers are ready,
the route swap is mechanical and belongs with the Zod refactor.
Plan: PATHFINDER-2026-04-22/03-ingestion-path.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 05 — hook surface
Worker-call plumbing collapsed to one helper. Polling replaced by
server-side blocking endpoint. Fail-loud counter surfaces persistent
worker outages via exit code 2.
- Phase 1: plugin/hooks/hooks.json — three 20-iteration `for i in
1..20; do curl -sf .../health && break; sleep 0.1; done` shell
retry wrappers deleted. Hook commands invoke their bun entry
point directly.
- Phase 2: src/shared/worker-utils.ts — added
executeWithWorkerFallback<T>(url, method, body) returning
T | { continue: true; reason?: string }. All 8 hook handlers
(observation, session-init, context, file-context, file-edit,
summarize, session-complete, user-message) rewritten to use
it instead of duplicating the ensureWorkerRunning →
workerHttpRequest → fallback sequence.
- Phase 3: blocking POST /api/session/end in SessionRoutes.ts
using validateBody + sessionEndSchema (z.object({sessionId})).
One-shot ingestEventBus.on('summaryStoredEvent') listener,
30 s timer, req.aborted handler — all share one cleanup so
the listener cannot leak. summarize.ts polling loop, plus
MAX_WAIT_FOR_SUMMARY_MS / POLL_INTERVAL_MS constants, deleted.
- Phase 4: src/shared/hook-settings.ts — loadFromFileOnce()
memoizes SettingsDefaultsManager.loadFromFile per process.
Per-handler settings reads collapsed.
- Phase 5: src/shared/should-track-project.ts — single exclusion
check entry; isProjectExcluded no longer referenced from
src/cli/handlers/.
- Phase 6: cwd validation pushed into adapter normalizeInput
(all 6 adapters: claude-code, cursor, raw, gemini-cli,
windsurf). New AdapterRejectedInput error in
src/cli/adapters/errors.ts. Handler-level isValidCwd checks
deleted from file-edit.ts and observation.ts. hook-command.ts
catches AdapterRejectedInput → graceful fallback.
- Phase 7: session-init.ts conditional initAgent guard deleted;
initAgent is idempotent. tests/hooks/context-reinjection-guard
test (validated the deleted conditional) deleted in same PR
per Principle 7.
- Phase 8: fail-loud counter at ~/.claude-mem/state/hook-failures.json.
Atomic write via .tmp + rename. CLAUDE_MEM_HOOK_FAIL_LOUD_THRESHOLD
setting (default 3). On consecutive worker-unreachable ≥ N:
process.exit(2). On success: reset to 0. NOT a retry.
- Phase 9: ensureWorkerAliveOnce() module-scope memoization
wrapping ensureWorkerRunning. executeWithWorkerFallback calls
the memoized version.
Minimal validateBody middleware stub at
src/services/worker/http/middleware/validateBody.ts. Plan 06 will
expand with typed inference + error envelope conventions.
Verification: 4/4 grep targets pass. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip; -6 pass attributable
solely to deleted context-reinjection-guard test file. Zero new
failures vs baseline.
Plan: PATHFINDER-2026-04-22/05-hook-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 06 — API surface
One Zod-based validator wrapping every POST/PUT. Rate limiter,
diagnostic endpoints, and shutdown wrappers deleted. Failure-
marking consolidated to one helper.
- Phase 1 (preflight): zod@^3 already installed.
- Phase 2: validateBody middleware confirmed at canonical shape
in src/services/worker/http/middleware/validateBody.ts —
safeParse → 400 { error: 'ValidationError', issues: [...] }
on failure, replaces req.body with parsed value on success.
- Phase 3: Per-route Zod schemas declared at the top of each
route file. 24 POST endpoints across SessionRoutes,
CorpusRoutes, DataRoutes, MemoryRoutes, SearchRoutes,
LogsRoutes, SettingsRoutes now wrap with validateBody().
/api/session/end (Plan 05) confirmed using same middleware.
- Phase 4: validateRequired() deleted from BaseRouteHandler
along with every call site. Inline coercion helpers
(coerceStringArray, coercePositiveInteger) and inline
if (!req.body...) guards deleted across all route files.
- Phase 5: Rate limiter middleware and its registration deleted
from src/services/worker/http/middleware.ts. Worker binds
127.0.0.1:37777 — no untrusted caller.
- Phase 6: viewer.html cached at module init in ViewerRoutes.ts
via fs.readFileSync; served as Buffer with text/html content
type. SKILL.md + per-operation .md files cached in
Server.ts as Map<string, string>; loadInstructionContent
helper deleted. NO fs.watch, NO TTL — process restart is the
cache-invalidation event.
- Phase 7: Four diagnostic endpoints deleted from DataRoutes.ts
— /api/pending-queue (GET), /api/pending-queue/process (POST),
/api/pending-queue/failed (DELETE), /api/pending-queue/all
(DELETE). Helper methods that ONLY served them
(getQueueMessages, getStuckCount, getRecentlyProcessed,
clearFailed, clearAll) deleted from PendingMessageStore.
KEPT: /api/processing-status (observability), /health
(used by ensureWorkerRunning).
- Phase 8: stopSupervisor wrapper deleted from supervisor/index.ts.
GracefulShutdown now calls getSupervisor().stop() directly.
Two functions retained with clear roles:
- performGracefulShutdown — worker-side 6-step shutdown
- runShutdownCascade — supervisor-side child teardown
(process.kill(-pgid), Windows tree-kill, PID-file cleanup)
Each has unique non-trivial logic and a single canonical caller.
- Phase 9: transitionMessagesTo(status, filter) is the sole
failure-marking path on PendingMessageStore. Old methods
markSessionMessagesFailed and markAllSessionMessagesAbandoned
deleted along with all callers (worker-service,
SessionCompletionHandler, tests/zombie-prevention).
Tests updated (Principle 7 same-PR delete): coercion test files
refactored to chain validateBody → handler. Zombie-prevention
tests rewritten to call transitionMessagesTo.
Verification: all 4 grep targets → 0. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — exact match to
baseline. Zero new failures.
Plan: PATHFINDER-2026-04-22/06-api-surface.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* refactor: land PATHFINDER Plan 07 — dead code sweep
ts-prune-driven sweep across the tree after Plans 01-06 landed.
Deleted unused exports, orphan helpers, and one fully orphaned
file. Earlier-plan deletions verified.
Deleted:
- src/utils/bun-path.ts (entire file — getBunPath, getBunPathOrThrow,
isBunAvailable: zero importers)
- bun-resolver.getBunVersionString: zero callers
- PendingMessageStore.retryMessage / resetProcessingToPending /
abortMessage: superseded by transitionMessagesTo (Plan 06 Phase 9)
- EnvManager.MANAGED_CREDENTIAL_KEYS, EnvManager.setCredential:
zero callers
- CodexCliInstaller.checkCodexCliStatus: zero callers; no status
command exists in npx-cli
- Two "REMOVED: cleanupOrphanedSessions" stale-fence comments
Kept (with documented justification):
- Public API surface in dist/sdk/* (parseAgentXml, prompt
builders, ParsedObservation, ParsedSummary, ParseResult,
SUMMARY_MODE_MARKER) — exported via package.json sdk path.
- generateContext / loadContextConfig / token utilities — used
via dynamic await import('../../../context-generator.js') in
worker SearchRoutes.
- MCP_IDE_INSTALLERS, install/uninstall functions for codex/goose
— used via dynamic await import in npx-cli/install.ts +
uninstall.ts (ts-prune cannot trace dynamic imports).
- getExistingChromaIds — active caller in
ChromaSync.backfillMissingSyncs (Plan 04 narrowed scope).
- processPendingQueues / getSessionsWithPendingMessages — active
orphan-recovery caller in worker-service.ts plus
zombie-prevention test coverage.
- StoreAndMarkCompleteResult legacy alias — return-type annotation
in same file.
- All Database.ts barrel re-exports — used downstream.
Earlier-plan verification:
- Plan 03 Phase 9: VERIFIED — src/utils/transcript-parser.ts
is gone; TranscriptParser has 0 references in src/.
- Plan 01 Phase 8: VERIFIED — migration 19 no-op absorbed.
- SessionStore.ts:52-70 consolidation NOT executed (deferred):
the methods are not thin wrappers but ~900 LoC of bodies, and
two methods are documented as intentional mirrors so the
context-generator.cjs bundle stays schema-consistent without
pulling MigrationRunner. Deserves its own plan, not a sweep.
Verification: TranscriptParser → 0; transcript-parser.ts → gone;
no commented-out code markers remain. bun run build succeeds.
bun test → 1393 pass / 28 fail / 7 skip — EXACT match to
baseline. Zero regressions.
Plan: PATHFINDER-2026-04-22/07-dead-code.md
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: remove residual ProcessRegistry comment reference
Plan 07 dead-code sweep missed one comment-level reference to the
deleted in-memory ProcessRegistry class in SessionManager.ts:347.
Rewritten to describe the supervisor.json scope without naming the
deleted class, completing the verification grep target.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile review (P1 + 2× P2)
P1 — Plan 05 Phase 3 blocking endpoint was non-functional:
executeWithWorkerFallback used HEALTH_CHECK_TIMEOUT_MS (3 s) for
the POST /api/session/end call, but the server holds the
connection for SERVER_SIDE_SUMMARY_TIMEOUT_MS (30 s). Client
always raced to a "timed out" rejection that isWorkerUnavailable
classified as worker-unreachable, so the hook silently degraded
instead of waiting for summaryStoredEvent.
- Added optional timeoutMs to executeWithWorkerFallback,
forwarded to workerHttpRequest.
- summarize.ts call site now passes 35_000 (5 s above server
hold window).
P2 — ingestSummary({ kind: 'parsed' }) branch was dead code:
ResponseProcessor emitted summaryStoredEvent directly via the
event bus, bypassing the centralized helper that the comment
claimed was the single source.
- ResponseProcessor now calls ingestSummary({ kind: 'parsed',
sessionDbId, messageId, contentSessionId, parsed }) so the
event-emission path is single-sourced.
- ingestSummary's requireContext() resolution moved inside the
'queue' branch (the only branch that needs sessionManager /
dbManager). 'parsed' is a pure event-bus emission and
doesn't need worker-internal context — fixes mocked
ResponseProcessor unit tests that don't call
setIngestContext.
P2 — isWorkerFallback could false-positive on legitimate API
responses whose schema includes { continue: true, ... }:
- Added a Symbol.for('claude-mem/worker-fallback') brand to
WorkerFallback. isWorkerFallback now checks the brand, not
a duck-typed property name.
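The brand fix can be sketched as below. Symbol.for returns the same registered symbol for the same key process-wide, so the check survives duplicated module instances; only the symbol key is from the commit, the helper names are illustrative:

```typescript
const WORKER_FALLBACK: unique symbol = Symbol.for("claude-mem/worker-fallback");

interface WorkerFallback {
  continue: true;
  reason?: string;
  [WORKER_FALLBACK]: true;
}

function makeWorkerFallback(reason?: string): WorkerFallback {
  return { continue: true, reason, [WORKER_FALLBACK]: true };
}

function isWorkerFallback(value: unknown): value is WorkerFallback {
  // Brand check: an API response that merely contains { continue: true }
  // does not carry the symbol, so it cannot false-positive.
  return (
    typeof value === "object" &&
    value !== null &&
    (value as Record<symbol, unknown>)[WORKER_FALLBACK] === true
  );
}
```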
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 2 (P1 + P2)
P1 — summaryStoredEvent fired regardless of whether the row was
persisted. ResponseProcessor's call to ingestSummary({ kind:
'parsed' }) ran for every parsed.kind === 'summary' even when
result.summaryId came back null (e.g. FK violation, null
memory_session_id at commit). The blocking /api/session/end
endpoint then returned { ok: true } and the Stop hook logged
'Summary stored' for a non-existent row.
- Gate ingestSummary call on (parsed.data.skipped ||
session.lastSummaryStored). Skipped summaries are an explicit
no-op bypass and still confirm; real summaries only confirm
when storage actually wrote a row.
- Non-skipped + summaryId === null path logs a warn and lets
the server-side timeout (504) surface to the hook instead of
a false ok:true.
P2 — PendingMessageStore.enqueue() returns 0 when INSERT OR
IGNORE suppresses a duplicate (the UNIQUE(session_id, tool_use_id)
constraint added by Plan 01 Phase 2). The two callers
(SessionManager.queueObservation and queueSummarize) previously
logged 'ENQUEUED messageId=0' which read like a row was inserted.
- Branch on messageId === 0 and emit a 'DUP_SUPPRESSED' debug
log instead of the misleading ENQUEUED line. No behavior
change — the duplicate is still correctly suppressed by the
DB (Principle 3); only the log surface is corrected.
- confirmProcessed is never called with the enqueue() return
value (it operates on session.processingMessageIds[] from
claimNextMessage), so no caller is broken; the visibility
fix prevents future misuse.
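The corrected log branch, sketched (the enqueue() return contract is from the commit; the logger shape is illustrative):

```typescript
function logEnqueueResult(
  log: { debug(msg: string): void },
  messageId: number, // 0 ⇒ INSERT OR IGNORE suppressed a duplicate
): void {
  if (messageId === 0) {
    log.debug("DUP_SUPPRESSED (unique tool_use_id already queued)");
  } else {
    log.debug(`ENQUEUED messageId=${messageId}`);
  }
}
```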
Verification: bun run build succeeds. bun test → 1393 pass /
28 fail / 7 skip — exact baseline match. Zero new failures.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 3 (P1 + 2× P2)
- P1 worker-service.ts: wire ensureGeneratorRunning into the ingest
context after SessionRoutes is constructed. setIngestContext runs
before routes exist, so transcript-watcher observations queued via
ingestObservation() had no way to auto-start the SDK generator.
Added attachIngestGeneratorStarter() to patch the callback in.
- P2 shared.ts: IngestEventBus now sets maxListeners to 0. Concurrent
/api/session/end calls register one listener each and clean up on
completion, so the default-10 listener warning would otherwise fire
spuriously under normal load.
- P2 SessionRoutes.ts: handleObservationsByClaudeId now delegates to
ingestObservation() instead of duplicating skip-tool / meta /
privacy / queue logic. Single helper, matching the Plan 03 goal.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile iteration 4 (P1 tool-pair + P2 parse/path/doc)
- processor.handleToolResult: restore in-memory tool-use→tool-result
pairing via session.pendingTools for schemas (e.g. Codex) whose
tool_result events carry only tool_use_id + output. Without this,
neither handler fired — all tool observations silently dropped.
- processor.maybeParseJson: return raw string on parse failure instead
of throwing. Previously a single malformed JSON-shaped field caused
handleLine's outer catch to discard the entire transcript line.
- watcher.deepestNonGlobAncestor: split on / and \\, emit empty string
for purely-glob inputs so the caller skips the watch instead of
anchoring fs.watch at the filesystem root. Windows-compatible.
- PendingMessageStore.enqueue: tighten docstring — callers today only
log on the returned id; the SessionManager branches on id === 0.
* fix: forward tool_use_id through ingestObservation (Greptile iter 5)
P1 — Plan 01's UNIQUE(content_session_id, tool_use_id) dedup never
fired because the new shared ingest path dropped the toolUseId before
queueObservation. SQLite treats NULL values as distinct for UNIQUE,
so every replayed transcript line landed a duplicate row.
- shared.ingestObservation: forward payload.toolUseId to
queueObservation so INSERT OR IGNORE can actually collapse.
- SessionRoutes.handleObservationsByClaudeId: destructure both
tool_use_id (HTTP convention) and toolUseId (JS convention) from
req.body and pass into ingestObservation.
- observationsByClaudeIdSchema: declare both keys explicitly so the
validator doesn't rely on .passthrough() alone.
* fix: drop dead pairToolUsesByJoin, close session-end listener race
- PendingMessageStore: delete pairToolUsesByJoin. The method was never
called and its self-join semantics are structurally incompatible
with UNIQUE(content_session_id, tool_use_id): INSERT OR IGNORE
collapses any second row with the same pair, so a self-join can
only ever match a row to itself. In-memory pendingTools in
processor.ts remains the pairing path for split-event schemas.
- IngestEventBus: retain a short-lived (60s) recentStored map keyed
by sessionId. Populated on summaryStoredEvent emit, evicted on
consume or TTL.
- handleSessionEnd: drain the recent-events buffer before attaching
the listener. Closes the register-after-emit race where the summary
can persist between the hook's summarize POST and its session/end
POST — previously that window returned 504 after the 30s timeout.
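The drain-before-listen fix can be sketched as below; the 60s recentStored buffer and its evict-on-consume-or-TTL behavior are from the commit, while the class and method names are illustrative:

```typescript
class IngestEventBusSketch {
  private recentStored = new Map<string, number>(); // sessionId → stored-at ms
  private waiters = new Map<string, () => void>();

  emitSummaryStored(sessionId: string): void {
    this.recentStored.set(sessionId, Date.now());
    // TTL eviction keeps the buffer bounded (60s per the commit).
    setTimeout(() => this.recentStored.delete(sessionId), 60_000).unref();
    const waiter = this.waiters.get(sessionId);
    if (waiter) {
      this.waiters.delete(sessionId); // one-shot listener
      waiter();
    }
  }

  waitForSummaryStored(sessionId: string, onStored: () => void): void {
    // Drain first: if the summary persisted between the summarize POST
    // and this call, confirm immediately instead of timing out.
    if (this.recentStored.has(sessionId)) {
      this.recentStored.delete(sessionId); // evicted on consume
      onStored();
      return;
    }
    this.waiters.set(sessionId, onStored);
  }
}
```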
* chore: merge origin/main into vivacious-teeth
Resolves conflicts with 15 commits on main (v12.3.9, security
observation types, Telegram notifier, PID-reuse worker start-guard).
Conflict resolution strategy:
- plugin/hooks/hooks.json, plugin/scripts/*.cjs, plugin/ui/viewer-bundle.js:
kept ours — PATHFINDER Plan 05 deletes the for-i-in-1-to-20 curl retry
loops and the built artifacts regenerate on build.
- src/cli/handlers/summarize.ts: kept ours — Plan 05 blocking
POST /api/session/end supersedes main's fire-and-forget path.
- src/services/worker-service.ts: kept ours — Plan 05 ingest bus +
summaryStoredEvent supersedes main's SessionCompletionHandler DI
refactor + orphan-reaper fallback.
- src/services/worker/http/routes/SessionRoutes.ts: kept ours — same
reason; generator .finally() Stop-hook self-clean is a guard for a
path our blocking endpoint removes.
- src/services/worker/http/routes/CorpusRoutes.ts: merged — added
security_alert / security_note to ALLOWED_CORPUS_TYPES (feature from
#2084) while preserving our Zod validateBody schema.
Typecheck: 294 errors (vs 298 pre-merge). No new errors introduced; all
remaining are pre-existing (Component-enum gaps, DOM lib for viewer,
bun:sqlite types).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings
1) SessionRoutes.handleSessionEnd was the only route handler not wrapped
in wrapHandler — synchronous exceptions would hang the client rather
than surfacing as 500s. Wrap it like every other handler.
2) processor.handleToolResult only consumed the session.pendingTools
entry when the tool_result arrived without a toolName. In the
split-schema path where tool_result carries both toolName and toolId,
the entry was never deleted and the map grew for the life of the
session. Consume the entry whenever toolId is present.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: typing cleanup and viewer tsconfig split for PR feedback
- Add explicit return types for SessionStore query methods
- Exclude src/ui/viewer from root tsconfig, give it its own DOM-typed config
- Add bun to root tsconfig types, plus misc typing tweaks flagged by Greptile
- Rebuilt plugin/scripts/* artifacts
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: address Greptile P2 findings (iter 2)
- PendingMessageStore.transitionMessagesTo: require sessionDbId (drop
the unscoped-drain branch that would nuke every pending/processing
row across all sessions if a future caller omitted the filter).
- IngestEventBus.takeRecentSummaryStored: make idempotent — keep the
cached event until TTL eviction so a retried Stop hook's second
/api/session/end returns immediately instead of hanging 30 s.
- TranscriptWatcher fs.watch callback: skip full glob scan for paths
already tailed (JSONL appends fire on every line; only unknown
paths warrant a rescan).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: call finalizeSession in terminal session paths (Greptile iter 3)
terminateSession and runFallbackForTerminatedSession previously called
SessionCompletionHandler.finalizeSession before removeSessionImmediate;
the refactor dropped those calls, leaving sdk_sessions.status='active'
for every session killed by wall-clock limit, unrecoverable error, or
exhausted fallback chain. The deleted reapStaleSessions interval was
the only prior backstop.
Re-wires finalizeSession (idempotent: marks completed, drains pending,
broadcasts) into both paths; no reaper reintroduced.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: GC failed pending_messages rows at startup (Greptile iter 4)
Plan 07 deleted clearFailed/clearFailedOlderThan as "dead code", but
with the periodic sweep also removed, nothing reaps status='failed'
rows now — they accumulate indefinitely. Since claimNextMessage's
self-healing subquery scans this table, unbounded growth degrades
claim latency over time.
Re-introduces clearFailedOlderThan and calls it once at worker startup
(not a reaper — one-shot, idempotent). 7-day retention keeps enough
history for operator inspection while bounding the table.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: finalize sessions on normal exit; cleanup hoist; share handler (iter 5)
1. startSessionProcessor success branch now calls completionHandler.
finalizeSession before removeSessionImmediate. Hooks-disabled installs
(and any Stop hook that fails before POST /api/sessions/complete) no
longer leave sdk_sessions rows as status='active' forever. Idempotent
— a subsequent /api/sessions/complete is a no-op.
2. Hoist SessionRoutes.handleSessionEnd cleanup declaration above the
closures that reference it (TDZ safety; safe at runtime today but
fragile if timeout ever shrinks).
3. SessionRoutes now receives WorkerService's shared SessionCompletionHandler
instead of constructing its own — prevents silent divergence if the
handler ever becomes stateful.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: stop runaway crash-recovery loop on dead sessions
Two distinct bugs were combining to keep a dead session restarting forever:
Bug 1 (uncaught "The operation was aborted."):
child_process.spawn emits 'error' asynchronously for ENOENT/EACCES/abort
signal aborts. spawnSdkProcess() never attached an 'error' listener, so
any async spawn failure became uncaughtException and escaped to the
daemon-level handler. Attach an 'error' listener immediately after spawn,
before the !child.pid early-return, so async spawn errors are logged
(with errno code) and swallowed locally.
Bug 2 (sliding-window limiter never trips on slow restart cadence):
RestartGuard tripped only when restartTimestamps.length exceeded
MAX_WINDOWED_RESTARTS (10) within RESTART_WINDOW_MS (60s). With the 8s
exponential-backoff cap, only ~7-8 restarts fit in the window, so a dead
session cycling fail→restart every 8s would loop forever
(consecutiveRestarts climbing past 30+ in observed logs). Add a
consecutiveFailures counter that increments on every restart and resets
only on recordSuccess(). Trip when consecutive failures exceed
MAX_CONSECUTIVE_FAILURES (5) — meaning 5 restarts with zero successful
processing in between proves the session is dead. Both guards now run in
parallel: tight loops still trip the windowed cap; slow loops trip the
consecutive-failure cap.
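The dual-guard idea reads roughly like this sketch; the class and method names follow the commit message, but the internals are illustrative, not the repo's implementation:

```typescript
// Illustrative constants matching the values cited above.
const RESTART_WINDOW_MS = 60_000;
const MAX_WINDOWED_RESTARTS = 10;
const MAX_CONSECUTIVE_FAILURES = 5;

class RestartGuard {
  private restartTimestamps: number[] = [];
  private consecutiveFailures = 0;

  /** Record a restart; returns true when either guard trips. */
  recordRestart(now: number = Date.now()): boolean {
    this.restartTimestamps.push(now);
    // Windowed guard: catches tight crash loops.
    this.restartTimestamps = this.restartTimestamps.filter(
      t => now - t <= RESTART_WINDOW_MS
    );
    // Consecutive guard: catches slow loops whose restarts never all
    // fit inside the window (e.g. the 8s backoff cadence).
    this.consecutiveFailures++;
    return (
      this.restartTimestamps.length > MAX_WINDOWED_RESTARTS ||
      this.consecutiveFailures > MAX_CONSECUTIVE_FAILURES
    );
  }

  /** Any successful processing proves the session is alive. */
  recordSuccess(): void {
    this.consecutiveFailures = 0;
  }
}
```

On an 8s cadence the windowed guard alone never trips (at most ~8 restarts in 60s), while the consecutive counter trips on the sixth restart with no intervening success.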
Also: when the SessionRoutes path trips the guard, drain pending messages
to 'abandoned' so the session does not reappear in
getSessionsWithPendingMessages and trigger another auto-start cycle. The
worker-service.ts path already does this via terminateSession.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* perf: streamline worker startup and consolidate database connections
1. Database Pooling: Modified DatabaseManager, SessionStore, and SessionSearch to share a single bun:sqlite connection, eliminating redundant file descriptors.
2. Non-blocking Startup: Refactored WorktreeAdoption and Chroma backfill to run in the background (fire-and-forget), preventing them from stalling core initialization.
3. Diagnostic Routes: Added /api/chroma/status and bypassed the initialization guard for health/readiness endpoints to allow diagnostics during startup.
4. Robust Search: Implemented reliable SQLite FTS5 fallback in SearchManager for when Chroma (uvx) fails or is unavailable.
5. Code Cleanup: Removed redundant loopback MCP checks and mangled initialization logic from WorkerService.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: hard-exclude observer-sessions from hooks; bundle migration 29 (#2124)
* fix: hard-exclude observer-sessions from hooks; backfill bundle migrations
Stop hook + SessionEnd hook were storing the SDK observer's own
init/continuation/summary prompts in user_prompts, leaking into the
viewer (meta-observation regression). 25 such rows accumulated.
- shouldTrackProject: hard-reject OBSERVER_SESSIONS_DIR (and its subtree)
before consulting user-configured exclusion globs.
- summarize.ts (Stop) and session-complete.ts (SessionEnd): early-return
when shouldTrackProject(cwd) is false, so the observer's own hooks
cannot bootstrap the worker or queue a summary against the meta-session.
- SessionRoutes: cap user-prompt body at 256 KiB at the session-init
boundary so a runaway observer prompt cannot blow up storage.
- SessionStore: add migration 29 (UNIQUE(memory_session_id, content_hash)
on observations) inline so bundled artifacts (worker-service.cjs,
context-generator.cjs) stay schema-consistent — without it, the
ON CONFLICT clause in observation inserts throws.
- spawnSdkProcess: stdio[stdin] from 'ignore' to 'pipe' so the
supervisor can actually feed the observer's stdin.
Also rebuilds plugin/scripts/{worker-service,context-generator}.cjs.
* fix: walk back to UTF-8 boundary on prompt truncation (Greptile P2)
Plain Buffer.subarray at MAX_USER_PROMPT_BYTES can land mid-codepoint,
which the utf8 decoder silently rewrites to U+FFFD. Walk back over any
continuation bytes (0b10xxxxxx) before decoding so the truncated prompt
ends on a valid sequence boundary instead of a replacement character.
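A minimal sketch of the walk-back, assuming the MAX_USER_PROMPT_BYTES constant from the earlier cap (the function name is illustrative):

```typescript
const MAX_USER_PROMPT_BYTES = 256 * 1024;

function truncateUtf8Safe(buf: Buffer, maxBytes: number = MAX_USER_PROMPT_BYTES): string {
  if (buf.length <= maxBytes) return buf.toString('utf8');
  let end = maxBytes;
  // Walk back over UTF-8 continuation bytes (0b10xxxxxx) so the cut lands
  // on a sequence boundary instead of mid-codepoint; a naive subarray
  // would let the decoder emit U+FFFD for the dangling partial sequence.
  while (end > 0 && (buf[end] & 0b1100_0000) === 0b1000_0000) end--;
  return buf.subarray(0, end).toString('utf8');
}
```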
* fix: cross-platform observer-dir containment; clarify SDK stdin pipe
claude-review feedback on PR #2124.
- shouldTrackProject: literal `cwd.startsWith(OBSERVER_SESSIONS_DIR + '/')`
hard-coded a POSIX separator and missed Windows backslash paths plus any
trailing-slash variance. Switched to a path.relative-based isWithin()
helper so Windows hook input under observer-sessions\\... is also excluded.
- spawnSdkProcess: added a comment explaining why stdin must be 'pipe' —
SpawnedSdkProcess.stdin is typed NonNullable and the Claude Agent SDK
consumes that pipe; 'ignore' would null it and the null-check below
would tear the child down on every spawn.
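The path.relative-based containment check can be sketched like this, assuming the isWithin name from the commit (the exact signature in the repo may differ):

```typescript
import path from 'node:path';

// Sketch: containment via path.relative rather than startsWith, so
// Windows backslash separators and trailing-slash variance are handled.
function isWithin(parent: string, child: string): boolean {
  const rel = path.relative(path.resolve(parent), path.resolve(child));
  // '' means child === parent (also excluded); a '..' component or an
  // absolute result (different drive on Windows) means child escapes.
  return (
    rel === '' ||
    (rel !== '..' && !rel.startsWith('..' + path.sep) && !path.isAbsolute(rel))
  );
}
```

Note the `'..' + path.sep` prefix check rather than a bare `startsWith('..')`, which would misclassify a sibling entry whose name merely begins with two dots.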
* fix: make Stop hook fire-and-forget; remove dead /api/session/end
The Stop hook was awaiting a 35-second long-poll on /api/session/end,
which the worker held open until the summary-stored event fired (or its
30s server-side timeout elapsed), followed by another await on
/api/sessions/complete: three sequential awaits, the middle one a 30s
hold — not fire-and-forget despite repeated requests.
The Stop hook now does ONE thing: POST /api/sessions/summarize to
queue the summary work and return. The worker drives the rest async.
Session-map cleanup is performed by the SessionEnd handler
(session-complete.ts), not duplicated here.
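The simplified hook boils down to a single fire-and-forget POST — a sketch assuming the worker's local HTTP API (the endpoint path comes from the commit; the base URL and payload shape are illustrative):

```typescript
const WORKER_BASE_URL = 'http://127.0.0.1:3737'; // illustrative port

async function onStop(sessionId: string): Promise<void> {
  // Queue the summary and return immediately — no long-poll, no
  // follow-up completion await. The worker drives the rest async.
  await fetch(`${WORKER_BASE_URL}/api/sessions/summarize`, {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify({ sessionId }),
  }).catch(() => {
    // Swallow errors: a Stop hook must never block or fail the session.
  });
}
```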
- summarize.ts: drop the /api/session/end long-poll and the trailing
/api/sessions/complete await; ~40 lines removed; unused
SessionEndResponse interface gone; header comment rewritten.
- SessionRoutes: delete handleSessionEnd, sessionEndSchema, the
SERVER_SIDE_SUMMARY_TIMEOUT_MS constant, and the /api/session/end
route registration. Drop the now-unused ingestEventBus and
SummaryStoredEvent imports.
- ResponseProcessor + shared.ts + worker-utils.ts: update stale
comments that referenced the dead endpoint. The IngestEventBus is
left in place dormant (no listeners) for follow-up cleanup so this
PR stays focused on the blocker.
Bundle artifact (worker-service.cjs) rebuilt via build-and-sync.
Verification:
- grep '/api/session/end' plugin/scripts/worker-service.cjs → 0
- grep 'timeoutMs:35' plugin/scripts/worker-service.cjs → 0
- Worker restarted clean, /api/health ok at pid 92368
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* deps: bump all dependencies to latest including majors
Upgrades: React 18→19, Express 4→5, Zod 3→4, TypeScript 5→6,
@types/node 20→25, @anthropic-ai/claude-agent-sdk 0.1→0.2,
@clack/prompts 0.9→1.2, plus minors. Adds Daily Maintenance section
to CLAUDE.md mandating latest-version policy across manifests.
Express 5 surfaced a race in Server.listen() where the 'error' handler
was attached after listen() was invoked; refactored to use
http.createServer with both 'error' and 'listening' handlers attached
before listen(), restoring port-conflict rejection semantics.
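The shape of that refactor, as a sketch over node:http (the helper name is illustrative; `app` stands in for the Express 5 request handler):

```typescript
import http from 'node:http';

// Sketch: wire up both handlers before listen() is invoked, so an early
// failure such as EADDRINUSE rejects the promise instead of escaping as
// an unhandled 'error' event.
function listenSafely(app: http.RequestListener, port: number): Promise<http.Server> {
  return new Promise((resolve, reject) => {
    const server = http.createServer(app);
    server.once('error', reject);        // must precede listen()
    server.once('listening', () => resolve(server));
    server.listen(port);
  });
}
```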
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: surface real chroma errors and add deep status probe
Replace the misleading "Vector search failed - semantic search unavailable.
Install uv... restart the worker." string in SearchManager with the actual
exception text from chroma_query_documents. The lying message blamed `uv`
for any failure — even when the real cause was a chroma-mcp transport
timeout, an empty collection, or a dead subprocess.
Also add /api/chroma/status?deep=1 backed by a new
ChromaMcpManager.probeSemanticSearch() that round-trips a real query
(chroma_list_collections + chroma_query_documents) instead of just
checking the stdio handshake. The cheap default path is unchanged.
Includes the diagnostic plan (PLAN-fix-mcp-search.md) and updated test
fixtures for the new structured failure message.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* chore: rebuild worker-service bundle to match merged src
Bundle was stale after the squash merge of #2124 — it still contained
the old "Install uv... semantic search unavailable" string and lacked
probeSemanticSearch. Rebuilt via bun run build-and-sync.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* docs: address coderabbit feedback on PLAN-fix-mcp-search.md
- replace machine-specific /Users/alexnewman absolute paths with portable
<repo-root> placeholder (MD-style portability)
- add blank lines around the TypeScript fenced block (MD031)
- tag the bare fenced block with `text` (MD040)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@@ -1,8 +1,4 @@
import { Database } from 'bun:sqlite';
import { execFileSync } from 'child_process';
import { existsSync, unlinkSync, writeFileSync } from 'fs';
import { tmpdir } from 'os';
import { join } from 'path';
import { DATA_DIR, DB_PATH, ensureDir } from '../../shared/paths.js';
import { logger } from '../../utils/logger.js';
import { MigrationRunner } from './migrations/runner.js';
@@ -19,118 +15,6 @@ export interface Migration {

let dbInstance: Database | null = null;

/**
 * Repair malformed database schema before migrations run.
 *
 * This handles the case where a database is synced between machines running
 * different claude-mem versions. A newer version may have added columns and
 * indexes that an older version (or even the same version on a fresh install)
 * cannot process. SQLite throws "malformed database schema" when it encounters
 * an index referencing a non-existent column, which prevents ALL queries —
 * including the migrations that would fix the schema.
 *
 * The fix: use Python's sqlite3 module (which supports writable_schema) to
 * drop the orphaned schema objects, then let the migration system recreate
 * them properly. bun:sqlite doesn't allow DELETE FROM sqlite_master even
 * with writable_schema = ON.
 */
function repairMalformedSchema(db: Database): void {
  try {
    // Quick test: if we can query sqlite_master, the schema is fine
    db.query('SELECT name FROM sqlite_master WHERE type = "table" LIMIT 1').all();
    return;
  } catch (error: unknown) {
    const message = error instanceof Error ? error.message : String(error);
    if (!message.includes('malformed database schema')) {
      throw error;
    }

    logger.warn('DB', 'Detected malformed database schema, attempting repair', { error: message });

    // Extract the problematic object name from the error message
    // Format: "malformed database schema (object_name) - details"
    const match = message.match(/malformed database schema \(([^)]+)\)/);
    if (!match) {
      logger.error('DB', 'Could not parse malformed schema error, cannot auto-repair', { error: message });
      throw error;
    }

    const objectName = match[1];
    logger.info('DB', `Dropping malformed schema object: ${objectName}`);

    // Get the DB file path. For file-based DBs, we can use Python to repair.
    // For in-memory DBs, we can't shell out — just re-throw.
    const dbPath = db.filename;
    if (!dbPath || dbPath === ':memory:' || dbPath === '') {
      logger.error('DB', 'Cannot auto-repair in-memory database');
      throw error;
    }

    // Close the connection so Python can safely modify the file
    db.close();

    // Use Python's sqlite3 module to drop the orphaned object and reset
    // related migration versions so they re-run and recreate things properly.
    // bun:sqlite doesn't support DELETE FROM sqlite_master even with writable_schema.
    //
    // We write a temp script rather than using -c to avoid shell escaping issues
    // with paths containing spaces or special characters. execFileSync passes
    // args directly without a shell, so dbPath and objectName are safe.
    const scriptPath = join(tmpdir(), `claude-mem-repair-${Date.now()}.py`);
    try {
      writeFileSync(scriptPath, `
import sqlite3, sys
db_path = sys.argv[1]
obj_name = sys.argv[2]
c = sqlite3.connect(db_path)
c.execute('PRAGMA writable_schema = ON')
c.execute('DELETE FROM sqlite_master WHERE name = ?', (obj_name,))
c.execute('PRAGMA writable_schema = OFF')
# Reset migration versions so affected migrations re-run.
# Guard with existence check: schema_versions may not exist on a very fresh DB.
has_sv = c.execute(
    "SELECT count(*) FROM sqlite_master WHERE type='table' AND name='schema_versions'"
).fetchone()[0]
if has_sv:
    c.execute('DELETE FROM schema_versions')
c.commit()
c.close()
`);
      execFileSync('python3', [scriptPath, dbPath, objectName], { timeout: 10000 });
      logger.info('DB', `Dropped orphaned schema object "${objectName}" and reset migration versions via Python sqlite3. All migrations will re-run (they are idempotent).`);
    } catch (pyError: unknown) {
      const pyMessage = pyError instanceof Error ? pyError.message : String(pyError);
      logger.error('DB', 'Python sqlite3 repair failed', { error: pyMessage });
      throw new Error(`Schema repair failed: ${message}. Python repair error: ${pyMessage}`);
    } finally {
      if (existsSync(scriptPath)) unlinkSync(scriptPath);
    }
  }
}

/**
 * Wrapper that handles the close/reopen cycle needed for schema repair.
 * Returns a (possibly new) Database connection.
 */
function repairMalformedSchemaWithReopen(dbPath: string, db: Database): Database {
  try {
    db.query('SELECT name FROM sqlite_master WHERE type = "table" LIMIT 1').all();
    return db;
  } catch (error: unknown) {
    const message = error instanceof Error ? error.message : String(error);
    if (!message.includes('malformed database schema')) {
      throw error;
    }

    // repairMalformedSchema closes the DB internally for Python access
    repairMalformedSchema(db);

    // Reopen and check for additional malformed objects
    const newDb = new Database(dbPath, { create: true, readwrite: true });
    return repairMalformedSchemaWithReopen(dbPath, newDb);
  }
}

/**
 * ClaudeMemDatabase - New entry point for the sqlite module
 *
@@ -154,11 +38,6 @@ export class ClaudeMemDatabase {
    // Create database connection
    this.db = new Database(dbPath, { create: true, readwrite: true });

    // Repair any malformed schema before applying settings or running migrations.
    // Must happen first — even PRAGMA calls can fail on a corrupted schema.
    // This may close and reopen the connection if repair is needed.
    this.db = repairMalformedSchemaWithReopen(dbPath, this.db);

    // Apply optimized SQLite settings
    this.db.run('PRAGMA journal_mode = WAL');
    this.db.run('PRAGMA synchronous = NORMAL');
@@ -218,10 +97,6 @@ export class DatabaseManager {

    this.db = new Database(DB_PATH, { create: true, readwrite: true });

    // Repair any malformed schema before applying settings or running migrations.
    // Must happen first — even PRAGMA calls can fail on a corrupted schema.
    this.db = repairMalformedSchemaWithReopen(DB_PATH, this.db);

    // Apply optimized SQLite settings
    this.db.run('PRAGMA journal_mode = WAL');
    this.db.run('PRAGMA synchronous = NORMAL');
@@ -1,9 +1,18 @@
import { Database } from './sqlite-compat.js';
import { Database } from 'bun:sqlite';
import type { PendingMessage } from '../worker-types.js';
import { logger } from '../../utils/logger.js';

/** Messages processing longer than this are considered stale and reset to pending by self-healing */
const STALE_PROCESSING_THRESHOLD_MS = 60_000;
/**
 * Provider for the set of currently-live worker PIDs.
 *
 * The self-healing claim query reclaims any 'processing' row whose
 * worker_pid is NOT a live worker (crash recovery without a timer).
 *
 * Default: a single-worker process supplies just its own PID. Multi-worker
 * deployments inject a callback backed by `supervisor/process-registry.ts`
 * (`getSupervisor().getRegistry().getAll().filter(r => r.type === 'worker').map(r => r.pid)`).
 */
export type LiveWorkerPidsProvider = () => readonly number[];

/**
 * Persistent pending message record from database
@@ -22,8 +31,8 @@ export interface PersistentPendingMessage {
  status: 'pending' | 'processing' | 'processed' | 'failed';
  retry_count: number;
  created_at_epoch: number;
  started_processing_at_epoch: number | null;
  completed_at_epoch: number | null;
  worker_pid: number | null;
  // Claude Code subagent identity — NULL for main-session messages.
  agent_type: string | null;
  agent_id: string | null;
@@ -37,44 +46,76 @@ export interface PersistentPendingMessage {
 *
 * Lifecycle:
 * 1. enqueue() - Message persisted with status 'pending'
 * 2. claimNextMessage() - Atomically claims next pending message (marks as 'processing')
 * 2. claimNextMessage() - Atomically claims next pending message (marks as 'processing'
 *    and stamps the live worker's PID). Self-healing: reclaims any 'processing' row
 *    whose worker_pid is no longer alive (worker crash) in the same UPDATE.
 * 3. confirmProcessed() - Deletes message after successful processing
 *
 * Self-healing:
 * - claimNextMessage() resets stale 'processing' messages (>60s) back to 'pending' before claiming
 * - This eliminates stuck messages from generator crashes without external timers
 *
 * Recovery:
 * - getSessionsWithPendingMessages() - Find sessions that need recovery on startup
 * Self-healing semantics:
 * A 'processing' row is reclaimable iff worker_pid IS NULL or worker_pid is
 * not present in the live-pids list at claim time. No timer, no
 * stale-cutoff timestamp — liveness is the truth.
 */
export class PendingMessageStore {
  private db: Database;
  private maxRetries: number;
  private workerPid: number;
  private getLiveWorkerPids: LiveWorkerPidsProvider;

  constructor(db: Database, maxRetries: number = 3) {
  /**
   * @param db SQLite database
   * @param maxRetries Per-message retry ceiling for transient SDK failures (default 3)
   * @param workerPid PID of the worker that owns this store; stamped into worker_pid on claim.
   *        Defaults to process.pid so single-process deployments need no extra wiring.
   * @param getLiveWorkerPids Provider for the set of all currently-live worker PIDs.
   *        Defaults to `[workerPid]` — only this worker is alive.
   *        Multi-worker deployments inject a supervisor-backed provider.
   */
  constructor(
    db: Database,
    maxRetries: number = 3,
    workerPid: number = process.pid,
    getLiveWorkerPids?: LiveWorkerPidsProvider
  ) {
    this.db = db;
    this.maxRetries = maxRetries;
    this.workerPid = workerPid;
    this.getLiveWorkerPids = getLiveWorkerPids ?? (() => [this.workerPid]);
  }

  /**
   * Enqueue a new message (persist before processing)
   * @returns The database ID of the persisted message
   * Enqueue a new message (persist before processing).
   *
   * Uses `INSERT OR IGNORE` so duplicate (content_session_id, tool_use_id)
   * pairs collapse to a single row — the UNIQUE INDEX added in plan 01 phase 1
   * is the authority on tool-use idempotency. Per principle 3 (UNIQUE
   * constraint over dedup window), we don't time-gate duplicates.
   *
   * @returns The database ID of the persisted message, or 0 when the insert
   *          was suppressed by ON CONFLICT. Callers MUST guard with `id > 0`
   *          before threading the value into any subsequent SQL (e.g.
   *          `confirmProcessed`, `markFailed`, `processingMessageIds`) —
   *          a zero id would silently target zero rows. The only two call
   *          sites today (`SessionManager.queueObservation` and
   *          `queueSummarize`) use the id purely for logging and both
   *          branch on `messageId === 0`.
   */
  enqueue(sessionDbId: number, contentSessionId: string, message: PendingMessage): number {
    const now = Date.now();
    const stmt = this.db.prepare(`
      INSERT INTO pending_messages (
        session_db_id, content_session_id, message_type,
      INSERT OR IGNORE INTO pending_messages (
        session_db_id, content_session_id, tool_use_id, message_type,
        tool_name, tool_input, tool_response, cwd,
        last_assistant_message,
        prompt_number, status, retry_count, created_at_epoch,
        agent_type, agent_id
      ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'pending', 0, ?, ?, ?)
      ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'pending', 0, ?, ?, ?)
    `);

    const result = stmt.run(
      sessionDbId,
      contentSessionId,
      message.toolUseId ?? null,
      message.type,
      message.tool_name || null,
      message.tool_input ? JSON.stringify(message.tool_input) : null,
@@ -90,58 +131,58 @@ export class PendingMessageStore {
    return result.lastInsertRowid as number;
  }

  /**
   * Atomically claim the next pending message by marking it as 'processing'.
   * Self-healing: resets any stale 'processing' messages (>60s) back to 'pending' first.
   * Message stays in DB until confirmProcessed() is called.
   * Uses a transaction to prevent race conditions.
  /**
   * Atomically claim the next message for `sessionDbId`.
   *
   * A row is claimable iff:
   * - status = 'pending', OR
   * - status = 'processing' AND worker_pid is not in the live-pids set
   *   (i.e. the previous owner crashed). This is the self-healing branch:
   *   liveness is checked at claim time, not by a background reaper.
   *
   * The claim stamps the live worker's PID and flips status to 'processing'
   * in a single UPDATE … WHERE id = (subquery).
   */
  claimNextMessage(sessionDbId: number): PersistentPendingMessage | null {
    const claimTx = this.db.transaction((sessionId: number) => {
      // Capture time inside transaction so it's fresh if WAL contention causes retry
      const now = Date.now();
      // Self-healing: reset stale 'processing' messages back to 'pending'
      // This recovers from generator crashes without external timers
      // Note: strict < means messages must be OLDER than threshold to be reset
      const staleCutoff = now - STALE_PROCESSING_THRESHOLD_MS;
      const resetStmt = this.db.prepare(`
        UPDATE pending_messages
        SET status = 'pending', started_processing_at_epoch = NULL
        WHERE session_db_id = ? AND status = 'processing'
        AND started_processing_at_epoch < ?
      `);
      const resetResult = resetStmt.run(sessionId, staleCutoff);
      if (resetResult.changes > 0) {
        logger.info('QUEUE', `SELF_HEAL | sessionDbId=${sessionId} | recovered ${resetResult.changes} stale processing message(s)`);
      }
    // Build a parameterized IN-list of live worker PIDs. We always include
    // this worker's PID so that an in-flight claim doesn't accidentally
    // self-reclaim a row we just stamped (the predicate is "NOT IN live").
    const livePids = this.getLivePidsIncludingSelf();
    const placeholders = livePids.map(() => '?').join(',');

      const peekStmt = this.db.prepare(`
        SELECT * FROM pending_messages
        WHERE session_db_id = ? AND status = 'pending'
        ORDER BY id ASC
        LIMIT 1
      `);
      const msg = peekStmt.get(sessionId) as PersistentPendingMessage | null;
    const sql = `
      UPDATE pending_messages
      SET status = 'processing',
          worker_pid = ?
      WHERE id = (
        SELECT id FROM pending_messages
        WHERE session_db_id = ?
        AND (
          status = 'pending'
          OR (status = 'processing' AND (worker_pid IS NULL OR worker_pid NOT IN (${placeholders})))
        )
        ORDER BY id ASC
        LIMIT 1
      )
      RETURNING *
    `;

      if (msg) {
        // CRITICAL FIX: Mark as 'processing' instead of deleting
        // Message will be deleted by confirmProcessed() after successful store
        const updateStmt = this.db.prepare(`
          UPDATE pending_messages
          SET status = 'processing', started_processing_at_epoch = ?
          WHERE id = ?
        `);
        updateStmt.run(now, msg.id);
    const stmt = this.db.prepare(sql);
    const params: (number | string)[] = [this.workerPid, sessionDbId, ...livePids];
    const claimed = stmt.get(...params) as PersistentPendingMessage | null;

        // Log claim with minimal info (avoid logging full payload)
        logger.info('QUEUE', `CLAIMED | sessionDbId=${sessionId} | messageId=${msg.id} | type=${msg.message_type}`, {
          sessionId: sessionId
        });
      }
      return msg;
    });
    if (claimed) {
      logger.info('QUEUE', `CLAIMED | sessionDbId=${sessionDbId} | messageId=${claimed.id} | type=${claimed.message_type} | workerPid=${this.workerPid}`, {
        sessionId: sessionDbId
      });
    }
    return claimed;
  }

    return claimTx(sessionDbId) as PersistentPendingMessage | null;
  private getLivePidsIncludingSelf(): number[] {
    const pids = this.getLiveWorkerPids();
    if (pids.includes(this.workerPid)) return [...pids];
    return [...pids, this.workerPid];
  }

  /**
@@ -158,34 +199,19 @@ export class PendingMessageStore {
  }

  /**
   * Reset stale 'processing' messages back to 'pending' for retry.
   * Called on worker startup and periodically to recover from crashes.
   * @param thresholdMs Messages processing longer than this are considered stale (default: 5 minutes)
   * @returns Number of messages reset
   * Delete `status='failed'` rows older than `thresholdMs`. Called once at
   * worker startup so `pending_messages` does not grow unbounded on long-
   * running or high-failure-rate installations; `claimNextMessage`'s
   * self-healing subquery scans this table, so bounded rows keep claim
   * latency predictable. Not a reaper — one-shot, idempotent.
   */
  resetStaleProcessingMessages(thresholdMs: number = 5 * 60 * 1000, sessionDbId?: number): number {
  clearFailedOlderThan(thresholdMs: number): number {
    const cutoff = Date.now() - thresholdMs;
    let stmt;
    let result;
    if (sessionDbId !== undefined) {
      stmt = this.db.prepare(`
        UPDATE pending_messages
        SET status = 'pending', started_processing_at_epoch = NULL
        WHERE status = 'processing' AND started_processing_at_epoch < ? AND session_db_id = ?
      `);
      result = stmt.run(cutoff, sessionDbId);
    } else {
      stmt = this.db.prepare(`
        UPDATE pending_messages
        SET status = 'pending', started_processing_at_epoch = NULL
        WHERE status = 'processing' AND started_processing_at_epoch < ?
      `);
      result = stmt.run(cutoff);
    }
    if (result.changes > 0) {
      logger.info('QUEUE', `RESET_STALE | count=${result.changes} | thresholdMs=${thresholdMs}${sessionDbId !== undefined ? ` | sessionDbId=${sessionDbId}` : ''}`);
    }
    return result.changes;
    const stmt = this.db.prepare(`
      DELETE FROM pending_messages
      WHERE status = 'failed' AND COALESCE(failed_at_epoch, completed_at_epoch, 0) < ?
    `);
    return stmt.run(cutoff).changes;
  }

  /**
@@ -201,144 +227,44 @@ export class PendingMessageStore {
|
||||
}
|
||||
|
||||
/**
|
||||
* Get all queue messages (for UI display)
|
||||
* Returns pending, processing, and failed messages (not processed - they're deleted)
|
||||
* Joins with sdk_sessions to get project name
|
||||
* Transition pending_messages rows to a terminal status — PATHFINDER-2026-04-22
|
||||
* Plan 06 Phase 9. One SQL UPDATE path, one place to add a new terminal status
|
||||
* later, zero divergence between call sites.
|
||||
*
|
||||
* - `failed` — narrow form: only rows currently `status='processing'`.
|
||||
* Used during error recovery when a session generator crashes and we want
|
||||
* to mark its in-flight messages failed without touching rows that never
|
||||
* left `pending`.
|
||||
*
|
||||
* - `abandoned` — wide form: rows in `('pending', 'processing')`.
|
||||
* Used during session termination or completion drain so the session
|
||||
* doesn't appear in `getSessionsWithPendingMessages` forever. Both forms
|
||||
* write the row's `status` column to `'failed'`; `abandoned` is just the
|
||||
* broader WHERE clause.
|
||||
*
|
||||
* Cites Principle 6 (one helper, N callers) and Principle 7 (the
|
||||
* old per-status wrapper methods were deleted in the same PR).
|
||||
*
|
||||
* @param status `'failed'` (processing-only) or `'abandoned'` (pending+processing)
|
||||
* @param filter `{ sessionDbId: number }` — scope to one session's rows.
|
||||
* Required: no unscoped path exists, to prevent accidental global drain.
|
||||
* @returns Number of rows updated
|
||||
*/
|
||||
getQueueMessages(): (PersistentPendingMessage & { project: string | null })[] {
|
||||
const stmt = this.db.prepare(`
|
||||
SELECT pm.*, ss.project
|
||||
FROM pending_messages pm
|
||||
LEFT JOIN sdk_sessions ss ON pm.content_session_id = ss.content_session_id
|
||||
WHERE pm.status IN ('pending', 'processing', 'failed')
|
||||
ORDER BY
|
||||
CASE pm.status
|
||||
WHEN 'failed' THEN 0
|
||||
WHEN 'processing' THEN 1
|
||||
WHEN 'pending' THEN 2
|
||||
END,
|
||||
pm.created_at_epoch ASC
|
||||
`);
|
||||
return stmt.all() as (PersistentPendingMessage & { project: string | null })[];
|
||||
}
|
||||
|
||||
/**
|
||||
* Get count of stuck messages (processing longer than threshold)
|
||||
*/
|
||||
getStuckCount(thresholdMs: number): number {
|
||||
const cutoff = Date.now() - thresholdMs;
|
||||
const stmt = this.db.prepare(`
|
||||
SELECT COUNT(*) as count FROM pending_messages
|
||||
WHERE status = 'processing' AND started_processing_at_epoch < ?
|
||||
`);
|
||||
const result = stmt.get(cutoff) as { count: number };
|
||||
return result.count;
|
||||
}
|
||||
|
||||
/**
|
||||
* Retry a specific message (reset to pending)
|
||||
* Works for pending (re-queue), processing (reset stuck), and failed messages
|
||||
*/
|
||||
retryMessage(messageId: number): boolean {
|
||||
const stmt = this.db.prepare(`
|
||||
UPDATE pending_messages
|
||||
SET status = 'pending', started_processing_at_epoch = NULL
|
||||
WHERE id = ? AND status IN ('pending', 'processing', 'failed')
|
||||
`);
|
||||
const result = stmt.run(messageId);
|
||||
return result.changes > 0;
|
||||
}
|
||||
|
||||
/**
|
||||
* Reset all processing messages for a session to pending
|
||||
* Used when force-restarting a stuck session
|
||||
*/
|
||||
resetProcessingToPending(sessionDbId: number): number {
|
||||
const stmt = this.db.prepare(`
|
||||
UPDATE pending_messages
|
||||
SET status = 'pending', started_processing_at_epoch = NULL
|
||||
WHERE session_db_id = ? AND status = 'processing'
|
||||
`);
|
||||
const result = stmt.run(sessionDbId);
|
||||
return result.changes;
|
||||
}
|
||||
|
||||
  /**
   * Mark all processing messages for a session as failed
   * Used in error recovery when session generator crashes
   * @returns Number of messages marked failed
   */
-  markSessionMessagesFailed(sessionDbId: number): number {
+  transitionMessagesTo(
+    status: 'failed' | 'abandoned',
+    filter: { sessionDbId: number }
+  ): number {
    const now = Date.now();
+    const statusClause = status === 'failed'
+      ? `status = 'processing'`
+      : `status IN ('pending', 'processing')`;

    // Atomic update - all processing messages for session → failed
    // Note: This bypasses retry logic since generator failures are session-level,
    // not message-level. Individual message failures use markFailed() instead.
    const stmt = this.db.prepare(`
      UPDATE pending_messages
      SET status = 'failed', failed_at_epoch = ?
-      WHERE session_db_id = ? AND status = 'processing'
+      WHERE session_db_id = ? AND ${statusClause}
    `);

-    const result = stmt.run(now, sessionDbId);
-    return result.changes;
+    return stmt.run(now, filter.sessionDbId).changes;
  }
  /**
   * Mark all pending and processing messages for a session as failed (abandoned).
   * Used when SDK session is terminated and no fallback agent is available:
   * prevents the session from appearing in getSessionsWithPendingMessages forever.
   * @returns Number of messages marked failed
   */
  markAllSessionMessagesAbandoned(sessionDbId: number): number {
    const now = Date.now();
    const stmt = this.db.prepare(`
      UPDATE pending_messages
      SET status = 'failed', failed_at_epoch = ?
      WHERE session_db_id = ? AND status IN ('pending', 'processing')
    `);
    const result = stmt.run(now, sessionDbId);
    return result.changes;
  }
  /**
   * Abort a specific message (delete from queue)
   */
  abortMessage(messageId: number): boolean {
    const stmt = this.db.prepare('DELETE FROM pending_messages WHERE id = ?');
    const result = stmt.run(messageId);
    return result.changes > 0;
  }
  /**
   * Retry all stuck messages at once
   */
  retryAllStuck(thresholdMs: number): number {
    const cutoff = Date.now() - thresholdMs;
    const stmt = this.db.prepare(`
      UPDATE pending_messages
      SET status = 'pending', started_processing_at_epoch = NULL
      WHERE status = 'processing' AND started_processing_at_epoch < ?
    `);
    const result = stmt.run(cutoff);
    return result.changes;
  }
  /**
   * Get recently processed messages (for UI feedback)
   * Shows messages completed in the last N minutes so users can see their stuck items were processed
   */
  getRecentlyProcessed(limit: number = 10, withinMinutes: number = 30): (PersistentPendingMessage & { project: string | null })[] {
    const cutoff = Date.now() - (withinMinutes * 60 * 1000);
    const stmt = this.db.prepare(`
      SELECT pm.*, ss.project
      FROM pending_messages pm
      LEFT JOIN sdk_sessions ss ON pm.content_session_id = ss.content_session_id
      WHERE pm.status = 'processed' AND pm.completed_at_epoch > ?
      ORDER BY pm.completed_at_epoch DESC
      LIMIT ?
    `);
    return stmt.all(cutoff, limit) as (PersistentPendingMessage & { project: string | null })[];
  }
  /**
@@ -358,7 +284,7 @@ export class PendingMessageStore {
      // Move back to pending for retry
      const stmt = this.db.prepare(`
        UPDATE pending_messages
-        SET status = 'pending', retry_count = retry_count + 1, started_processing_at_epoch = NULL
+        SET status = 'pending', retry_count = retry_count + 1, worker_pid = NULL
        WHERE id = ?
      `);
      stmt.run(messageId);
@@ -373,24 +299,6 @@ export class PendingMessageStore {
    }
  }
-  /**
-   * Reset stuck messages (processing -> pending if stuck longer than threshold)
-   * @param thresholdMs Messages processing longer than this are considered stuck (0 = reset all)
-   * @returns Number of messages reset
-   */
-  resetStuckMessages(thresholdMs: number): number {
-    const cutoff = thresholdMs === 0 ? Date.now() : Date.now() - thresholdMs;
-
-    const stmt = this.db.prepare(`
-      UPDATE pending_messages
-      SET status = 'pending', started_processing_at_epoch = NULL
-      WHERE status = 'processing' AND started_processing_at_epoch < ?
-    `);
-
-    const result = stmt.run(cutoff);
-    return result.changes;
-  }
  /**
   * Get count of pending messages for a session
   */
@@ -417,27 +325,21 @@ export class PendingMessageStore {
  }

  /**
-   * Check if any session has pending work.
-   * Excludes 'processing' messages stuck for >5 minutes (resets them to 'pending' as a side effect).
+   * Check if any session has work that could be claimed right now.
+   *
+   * Counts a row as work iff it is 'pending' or it is 'processing' under a
+   * worker_pid that is not currently alive (the same predicate the
+   * self-healing claim uses). No side effects — no UPDATE, no timer.
   */
  hasAnyPendingWork(): boolean {
-    // Reset stuck 'processing' messages older than 5 minutes before checking
-    const stuckCutoff = Date.now() - (5 * 60 * 1000);
-    const resetStmt = this.db.prepare(`
-      UPDATE pending_messages
-      SET status = 'pending', started_processing_at_epoch = NULL
-      WHERE status = 'processing' AND started_processing_at_epoch < ?
-    `);
-    const resetResult = resetStmt.run(stuckCutoff);
-    if (resetResult.changes > 0) {
-      logger.info('QUEUE', `STUCK_RESET | hasAnyPendingWork reset ${resetResult.changes} stuck processing message(s) older than 5 minutes`);
-    }
-
+    const livePids = this.getLivePidsIncludingSelf();
+    const placeholders = livePids.map(() => '?').join(',');
    const stmt = this.db.prepare(`
      SELECT COUNT(*) as count FROM pending_messages
-      WHERE status IN ('pending', 'processing')
+      WHERE status = 'pending'
+         OR (status = 'processing' AND (worker_pid IS NULL OR worker_pid NOT IN (${placeholders})))
    `);
-    const result = stmt.get() as { count: number };
+    const result = stmt.get(...livePids) as { count: number };
    return result.count > 0;
  }
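The `worker_pid NOT IN (…)` predicate above depends on knowing which PIDs are actually alive. A minimal sketch of that liveness probe, assuming POSIX signal-0 semantics (the helper names `isPidAlive` and `filterLivePids` are illustrative, not the project's `getLivePidsIncludingSelf`):

```typescript
// Sketch of the worker-liveness probe the self-healing claim relies on.
// process.kill(pid, 0) sends no signal; it only checks that the process
// exists and is signalable.
function isPidAlive(pid: number): boolean {
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    // EPERM: the process exists but belongs to another user. Treat it as
    // alive so a message is never stolen from a running worker.
    return (err as { code?: string }).code === 'EPERM';
  }
}

// Only PIDs that are verifiably alive right now get bound into the
// NOT IN (?, ?, …) placeholder list.
function filterLivePids(candidatePids: number[]): number[] {
  return candidatePids.filter(isPidAlive);
}
```

The design point is that liveness is re-evaluated on every call, so a crashed worker's rows become claimable immediately, with no stale-reset timer.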
@@ -464,52 +366,6 @@ export class PendingMessageStore {
    return result ? { sessionDbId: result.session_db_id, contentSessionId: result.content_session_id } : null;
  }

  /**
   * Clear all failed messages from the queue
   * @returns Number of messages deleted
   */
  clearFailed(): number {
    const stmt = this.db.prepare(`
      DELETE FROM pending_messages
      WHERE status = 'failed'
    `);
    const result = stmt.run();
    return result.changes;
  }

-  /**
-   * Clear failed messages older than the given threshold.
-   * Preserves recent failures for inspection and manual retry.
-   * @param thresholdMs - Only delete failures older than this many milliseconds
-   * @returns Number of messages deleted
-   */
-  clearFailedOlderThan(thresholdMs: number): number {
-    const cutoff = Date.now() - thresholdMs;
-    // Use COALESCE to prefer the most recent failure timestamp over creation time.
-    // failed_at_epoch is set by session-level failures, completed_at_epoch by markFailed().
-    const stmt = this.db.prepare(`
-      DELETE FROM pending_messages
-      WHERE status = 'failed'
-        AND COALESCE(failed_at_epoch, completed_at_epoch, started_processing_at_epoch, created_at_epoch) < ?
-    `);
-    const result = stmt.run(cutoff);
-    return result.changes;
-  }

  /**
   * Clear all pending, processing, and failed messages from the queue
   * Keeps only processed messages (for history)
   * @returns Number of messages deleted
   */
  clearAll(): number {
    const stmt = this.db.prepare(`
      DELETE FROM pending_messages
      WHERE status IN ('pending', 'processing', 'failed')
    `);
    const result = stmt.run();
    return result.changes;
  }

  /**
   * Convert a PersistentPendingMessage back to PendingMessage format
   */
@@ -25,13 +25,14 @@ export class SessionSearch {

  private static readonly MISSING_SEARCH_INPUT_MESSAGE = 'Either query or filters required for search';

-  constructor(dbPath?: string) {
-    if (!dbPath) {
+  constructor(dbPathOrDb: string | Database = DB_PATH) {
+    if (dbPathOrDb instanceof Database) {
+      this.db = dbPathOrDb;
+    } else {
      ensureDir(DATA_DIR);
-      dbPath = DB_PATH;
+      this.db = new Database(dbPathOrDb);
+      this.db.run('PRAGMA journal_mode = WAL');
    }
-    this.db = new Database(dbPath);
-    this.db.run('PRAGMA journal_mode = WAL');

    // Cache FTS5 availability once at construction (avoids DDL probe on every query)
    this._fts5Available = this.isFts5Available();
@@ -1,4 +1,4 @@
-import { Database } from 'bun:sqlite';
+import { Database, type SQLQueryBindings } from 'bun:sqlite';
import { DATA_DIR, DB_PATH, ensureDir, OBSERVER_SESSIONS_PROJECT } from '../../shared/paths.js';
import { logger } from '../../utils/logger.js';
import {
@@ -13,7 +13,8 @@ import {
  LatestPromptResult
} from '../../types/database.js';
import type { PendingMessageStore } from './PendingMessageStore.js';
-import { computeObservationContentHash, findDuplicateObservation } from './observations/store.js';
+import type { ObservationSearchResult, SessionSummarySearchResult } from './types.js';
+import { computeObservationContentHash } from './observations/store.js';
import { parseFileList } from './observations/files.js';
import { DEFAULT_PLATFORM_SOURCE, normalizePlatformSource, sortPlatformSources } from '../../shared/platform-source.js';
@@ -34,17 +35,21 @@ function resolveCreateSessionArgs(
export class SessionStore {
  public db: Database;

-  constructor(dbPath: string = DB_PATH) {
-    if (dbPath !== ':memory:') {
-      ensureDir(DATA_DIR);
-    }
-    this.db = new Database(dbPath);
+  constructor(dbPathOrDb: string | Database = DB_PATH) {
+    if (dbPathOrDb instanceof Database) {
+      this.db = dbPathOrDb;
+    } else {
+      if (dbPathOrDb !== ':memory:') {
+        ensureDir(DATA_DIR);
+      }
+      this.db = new Database(dbPathOrDb);

-    // Ensure optimized settings
-    this.db.run('PRAGMA journal_mode = WAL');
-    this.db.run('PRAGMA synchronous = NORMAL');
-    this.db.run('PRAGMA foreign_keys = ON');
-    this.db.run('PRAGMA journal_size_limit = 4194304'); // 4MB WAL cap (#1956)
+      // Ensure optimized settings only for new connections
+      this.db.run('PRAGMA journal_mode = WAL');
+      this.db.run('PRAGMA synchronous = NORMAL');
+      this.db.run('PRAGMA foreign_keys = ON');
+      this.db.run('PRAGMA journal_size_limit = 4194304'); // 4MB WAL cap (#1956)
+    }

    // Initialize schema if needed (fresh database)
    this.initializeSchema();
@@ -68,6 +73,7 @@ export class SessionStore {
    this.addObservationModelColumns();
    this.ensureMergedIntoProjectColumns();
    this.addObservationSubagentColumns();
+    this.addObservationsUniqueContentHashIndex();
  }

  /**
@@ -565,7 +571,6 @@ export class SessionStore {
        status TEXT NOT NULL DEFAULT 'pending' CHECK(status IN ('pending', 'processing', 'processed', 'failed')),
        retry_count INTEGER NOT NULL DEFAULT 0,
        created_at_epoch INTEGER NOT NULL,
-        started_processing_at_epoch INTEGER,
        completed_at_epoch INTEGER,
        FOREIGN KEY (session_db_id) REFERENCES sdk_sessions(id) ON DELETE CASCADE
      )
@@ -661,7 +666,7 @@ export class SessionStore {

  /**
   * Add failed_at_epoch column to pending_messages (migration 20)
-   * Used by markSessionMessagesFailed() for error recovery tracking
+   * Used by transitionMessagesTo() for error recovery tracking
   */
  private addFailedAtEpochColumn(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(20) as SchemaVersion | undefined;
@@ -1033,6 +1038,47 @@ export class SessionStore {
    }
  }
+  /**
+   * Add UNIQUE(memory_session_id, content_hash) on observations (migration 29).
+   * Mirrors MigrationRunner.addObservationsUniqueContentHashIndex so bundled
+   * artifacts that embed SessionStore (e.g. worker-service.cjs, context-generator.cjs)
+   * stay schema-consistent. Without this, INSERT … ON CONFLICT(memory_session_id,
+   * content_hash) DO NOTHING throws "ON CONFLICT clause does not match any
+   * PRIMARY KEY or UNIQUE constraint" and every observation insert fails.
+   */
+  private addObservationsUniqueContentHashIndex(): void {
+    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(29) as SchemaVersion | undefined;
+    if (applied) return;
+
+    const obsCols = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
+    const hasMem = obsCols.some(c => c.name === 'memory_session_id');
+    const hasHash = obsCols.some(c => c.name === 'content_hash');
+    if (!hasMem || !hasHash) {
+      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(29, new Date().toISOString());
+      return;
+    }
+
+    this.db.run('BEGIN TRANSACTION');
+    try {
+      this.db.run(`
+        DELETE FROM observations
+        WHERE id NOT IN (
+          SELECT MIN(id) FROM observations
+          GROUP BY memory_session_id, content_hash
+        )
+      `);
+      this.db.run(`
+        CREATE UNIQUE INDEX IF NOT EXISTS ux_observations_session_hash
+        ON observations(memory_session_id, content_hash)
+      `);
+      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(29, new Date().toISOString());
+      this.db.run('COMMIT');
+    } catch (error) {
+      this.db.run('ROLLBACK');
+      throw error;
+    }
+  }
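The migration's DELETE keeps only the lowest id per (memory_session_id, content_hash) group before the unique index lands. The same invariant can be sketched in plain TypeScript, with an illustrative row shape (the `Obs` type and function name are assumptions, not the project's):

```typescript
interface Obs { id: number; memorySessionId: string; contentHash: string; }

// Mirrors: DELETE FROM observations WHERE id NOT IN
//   (SELECT MIN(id) FROM observations GROUP BY memory_session_id, content_hash)
function dedupeKeepingLowestId(rows: Obs[]): Obs[] {
  const lowestId = new Map<string, number>(); // group key -> lowest id seen
  for (const r of rows) {
    const key = `${r.memorySessionId}\u0000${r.contentHash}`;
    const cur = lowestId.get(key);
    if (cur === undefined || r.id < cur) lowestId.set(key, r.id);
  }
  // A row survives only if it carries its group's minimum id.
  return rows.filter(
    r => lowestId.get(`${r.memorySessionId}\u0000${r.contentHash}`) === r.id
  );
}
```

Deduping first matters because `CREATE UNIQUE INDEX` fails outright if any duplicate group still exists.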
  /**
   * Update the memory session ID for a session
   * Called by SDKAgent when it captures the session ID from the first SDK message
@@ -1112,7 +1158,18 @@ export class SessionStore {
      LIMIT ?
    `);

-    return stmt.all(project, limit);
+    return stmt.all(project, limit) as Array<{
+      request: string | null;
+      investigated: string | null;
+      learned: string | null;
+      completed: string | null;
+      next_steps: string | null;
+      files_read: string | null;
+      files_edited: string | null;
+      notes: string | null;
+      prompt_number: number | null;
+      created_at: string;
+    }>;
  }

  /**
@@ -1137,7 +1194,15 @@ export class SessionStore {
      LIMIT ?
    `);

-    return stmt.all(project, limit);
+    return stmt.all(project, limit) as Array<{
+      memory_session_id: string;
+      request: string | null;
+      learned: string | null;
+      completed: string | null;
+      next_steps: string | null;
+      prompt_number: number | null;
+      created_at: string;
+    }>;
  }

  /**
@@ -1157,7 +1222,12 @@ export class SessionStore {
      LIMIT ?
    `);

-    return stmt.all(project, limit);
+    return stmt.all(project, limit) as Array<{
+      type: string;
+      text: string;
+      prompt_number: number | null;
+      created_at: string;
+    }>;
  }
  /**
@@ -1193,7 +1263,18 @@ export class SessionStore {
      LIMIT ?
    `);

-    return stmt.all(limit);
+    return stmt.all(limit) as Array<{
+      id: number;
+      type: string;
+      title: string | null;
+      subtitle: string | null;
+      text: string;
+      project: string;
+      platform_source: string;
+      prompt_number: number | null;
+      created_at: string;
+      created_at_epoch: number;
+    }>;
  }

  /**
@@ -1237,7 +1318,22 @@ export class SessionStore {
      LIMIT ?
    `);

-    return stmt.all(limit);
+    return stmt.all(limit) as Array<{
+      id: number;
+      request: string | null;
+      investigated: string | null;
+      learned: string | null;
+      completed: string | null;
+      next_steps: string | null;
+      files_read: string | null;
+      files_edited: string | null;
+      notes: string | null;
+      project: string;
+      platform_source: string;
+      prompt_number: number | null;
+      created_at: string;
+      created_at_epoch: number;
+    }>;
  }

  /**
@@ -1269,7 +1365,16 @@ export class SessionStore {
      LIMIT ?
    `);

-    return stmt.all(limit);
+    return stmt.all(limit) as Array<{
+      id: number;
+      content_session_id: string;
+      project: string;
+      platform_source: string;
+      prompt_number: number;
+      prompt_text: string;
+      created_at: string;
+      created_at_epoch: number;
+    }>;
  }
  /**
@@ -1283,7 +1388,7 @@ export class SessionStore {
      WHERE project IS NOT NULL AND project != ''
        AND project != ?
    `;
-    const params: unknown[] = [OBSERVER_SESSIONS_PROJECT];
+    const params: SQLQueryBindings[] = [OBSERVER_SESSIONS_PROJECT];

    if (normalizedPlatformSource) {
      query += ' AND COALESCE(platform_source, ?) = ?';
@@ -1404,7 +1509,13 @@ export class SessionStore {
      ORDER BY started_at_epoch ASC
    `);

-    return stmt.all(project, limit);
+    return stmt.all(project, limit) as Array<{
+      memory_session_id: string | null;
+      status: string;
+      started_at: string;
+      user_prompt: string | null;
+      has_summary: boolean;
+    }>;
  }

  /**
@@ -1423,7 +1534,12 @@ export class SessionStore {
      ORDER BY created_at_epoch ASC
    `);

-    return stmt.all(memorySessionId);
+    return stmt.all(memorySessionId) as Array<{
+      title: string;
+      subtitle: string;
+      type: string;
+      prompt_number: number | null;
+    }>;
  }
  /**
@@ -1445,7 +1561,7 @@ export class SessionStore {
  getObservationsByIds(
    ids: number[],
    options: { orderBy?: 'date_desc' | 'date_asc'; limit?: number; project?: string; type?: string | string[]; concepts?: string | string[]; files?: string | string[] } = {}
-  ): ObservationRecord[] {
+  ): ObservationSearchResult[] {
    if (ids.length === 0) return [];

    const { orderBy = 'date_desc', limit, project, type, concepts, files } = options;
@@ -1509,7 +1625,7 @@ export class SessionStore {
      ${limitClause}
    `);

-    return stmt.all(...params) as ObservationRecord[];
+    return stmt.all(...params) as ObservationSearchResult[];
  }

  /**
@@ -1539,7 +1655,19 @@ export class SessionStore {
      LIMIT 1
    `);

-    return stmt.get(memorySessionId) || null;
+    return (stmt.get(memorySessionId) as {
+      request: string | null;
+      investigated: string | null;
+      learned: string | null;
+      completed: string | null;
+      next_steps: string | null;
+      files_read: string | null;
+      files_edited: string | null;
+      notes: string | null;
+      prompt_number: number | null;
+      created_at: string;
+      created_at_epoch: number;
+    } | null) || null;
  }

  /**
@@ -1599,7 +1727,16 @@ export class SessionStore {
      LIMIT 1
    `);

-    return stmt.get(id) || null;
+    return (stmt.get(id) as {
+      id: number;
+      content_session_id: string;
+      memory_session_id: string | null;
+      project: string;
+      platform_source: string;
+      user_prompt: string;
+      custom_title: string | null;
+      status: string;
+    } | null) || null;
  }
  /**
@@ -1805,12 +1942,9 @@ export class SessionStore {
    const timestampEpoch = overrideTimestampEpoch ?? Date.now();
    const timestampIso = new Date(timestampEpoch).toISOString();

-    // Content-hash deduplication
+    // DB-enforced dedup: UNIQUE(memory_session_id, content_hash) +
+    // ON CONFLICT DO NOTHING (Plan 01 Phase 4).
    const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
-    const existing = findDuplicateObservation(this.db, contentHash, timestampEpoch);
-    if (existing) {
-      return { id: existing.id, createdAtEpoch: existing.created_at_epoch };
-    }

    const stmt = this.db.prepare(`
      INSERT INTO observations
@@ -1818,9 +1952,11 @@ export class SessionStore {
      files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch,
      generated_by_model)
      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+      ON CONFLICT(memory_session_id, content_hash) DO NOTHING
+      RETURNING id, created_at_epoch
    `);

-    const result = stmt.run(
+    const inserted = stmt.get(
      memorySessionId,
      project,
      observation.type,
@@ -1839,12 +1975,22 @@ export class SessionStore {
      timestampIso,
      timestampEpoch,
      generatedByModel || null
-    );
+    ) as { id: number; created_at_epoch: number } | null;

-    return {
-      id: Number(result.lastInsertRowid),
-      createdAtEpoch: timestampEpoch
-    };
+    if (inserted) {
+      return { id: inserted.id, createdAtEpoch: inserted.created_at_epoch };
+    }
+
+    const existing = this.db.prepare(
+      'SELECT id, created_at_epoch FROM observations WHERE memory_session_id = ? AND content_hash = ?'
+    ).get(memorySessionId, contentHash) as { id: number; created_at_epoch: number } | null;
+
+    if (!existing) {
+      throw new Error(
+        `storeObservation: ON CONFLICT without existing row for content_hash=${contentHash}`
+      );
+    }
+    return { id: existing.id, createdAtEpoch: existing.created_at_epoch };
  }
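The upsert-or-lookup shape above hinges on a deterministic content hash. A sketch of what `computeObservationContentHash` implies (the real derivation lives in `observations/store.js`; this sha-256-over-NUL-joined-fields form is an assumption for illustration):

```typescript
import { createHash } from 'node:crypto';

// Hypothetical stand-in for computeObservationContentHash: a stable digest
// over the fields that define "the same observation". Joining with NUL
// prevents ('ab','c') and ('a','bc') from colliding.
function contentHashOf(memorySessionId: string, title: string, narrative: string): string {
  return createHash('sha256')
    .update([memorySessionId, title, narrative].join('\u0000'))
    .digest('hex');
}
```

Because the hash is a pure function of the row's identity fields, the UNIQUE index plus ON CONFLICT DO NOTHING can replace the old time-windowed `findDuplicateObservation` lookup entirely.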
  /**
@@ -1950,25 +2096,25 @@ export class SessionStore {
    const storeTx = this.db.transaction(() => {
      const observationIds: number[] = [];

-      // 1. Store all observations (with content-hash deduplication)
+      // 1. Store all observations.
+      // DB-enforced dedup via UNIQUE(memory_session_id, content_hash) +
+      // ON CONFLICT DO NOTHING (Plan 01 Phase 4).
      const obsStmt = this.db.prepare(`
        INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
        files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch,
        generated_by_model)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+        ON CONFLICT(memory_session_id, content_hash) DO NOTHING
+        RETURNING id
      `);
+      const lookupExistingStmt = this.db.prepare(
+        'SELECT id FROM observations WHERE memory_session_id = ? AND content_hash = ?'
+      );

      for (const observation of observations) {
-        // Content-hash deduplication (same logic as storeObservation singular)
        const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
-        const existing = findDuplicateObservation(this.db, contentHash, timestampEpoch);
-        if (existing) {
-          observationIds.push(existing.id);
-          continue;
-        }

-        const result = obsStmt.run(
+        const inserted = obsStmt.get(
          memorySessionId,
          project,
          observation.type,
@@ -1987,8 +2133,20 @@ export class SessionStore {
          timestampIso,
          timestampEpoch,
          generatedByModel || null
-        );
-        observationIds.push(Number(result.lastInsertRowid));
+        ) as { id: number } | null;
+
+        if (inserted) {
+          observationIds.push(inserted.id);
+          continue;
+        }
+
+        const existing = lookupExistingStmt.get(memorySessionId, contentHash) as { id: number } | null;
+        if (!existing) {
+          throw new Error(
+            `storeObservations: ON CONFLICT without existing row for content_hash=${contentHash}`
+          );
+        }
+        observationIds.push(existing.id);
      }

      // 2. Store summary if provided
@@ -2086,25 +2244,25 @@ export class SessionStore {
    const storeAndMarkTx = this.db.transaction(() => {
      const observationIds: number[] = [];

-      // 1. Store all observations (with content-hash deduplication)
+      // 1. Store all observations.
+      // DB-enforced dedup via UNIQUE(memory_session_id, content_hash) +
+      // ON CONFLICT DO NOTHING (Plan 01 Phase 4).
      const obsStmt = this.db.prepare(`
        INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
        files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch,
        generated_by_model)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+        ON CONFLICT(memory_session_id, content_hash) DO NOTHING
+        RETURNING id
      `);
+      const lookupExistingStmt = this.db.prepare(
+        'SELECT id FROM observations WHERE memory_session_id = ? AND content_hash = ?'
+      );

      for (const observation of observations) {
-        // Content-hash deduplication (same logic as storeObservation singular)
        const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
-        const existing = findDuplicateObservation(this.db, contentHash, timestampEpoch);
-        if (existing) {
-          observationIds.push(existing.id);
-          continue;
-        }

-        const result = obsStmt.run(
+        const inserted = obsStmt.get(
          memorySessionId,
          project,
          observation.type,
@@ -2123,8 +2281,20 @@ export class SessionStore {
          timestampIso,
          timestampEpoch,
          generatedByModel || null
-        );
-        observationIds.push(Number(result.lastInsertRowid));
+        ) as { id: number } | null;
+
+        if (inserted) {
+          observationIds.push(inserted.id);
+          continue;
+        }
+
+        const existing = lookupExistingStmt.get(memorySessionId, contentHash) as { id: number } | null;
+        if (!existing) {
+          throw new Error(
+            `storeObservationsAndMarkComplete: ON CONFLICT without existing row for content_hash=${contentHash}`
+          );
+        }
+        observationIds.push(existing.id);
      }

      // 2. Store summary if provided
@@ -2177,11 +2347,6 @@ export class SessionStore {


-
-  // REMOVED: cleanupOrphanedSessions - violates "EVERYTHING SHOULD SAVE ALWAYS"
-  // There's no such thing as an "orphaned" session. Sessions are created by hooks
-  // and managed by Claude Code's lifecycle. Worker restarts don't invalidate them.
-  // Marking all active sessions as 'failed' on startup destroys the user's current work.

  /**
   * Get session summaries by IDs (for hybrid Chroma search)
   * Returns summaries in specified temporal order
@@ -2189,7 +2354,7 @@ export class SessionStore {
  getSessionSummariesByIds(
    ids: number[],
    options: { orderBy?: 'date_desc' | 'date_asc'; limit?: number; project?: string } = {}
-  ): SessionSummaryRecord[] {
+  ): SessionSummarySearchResult[] {
    if (ids.length === 0) return [];

    const { orderBy = 'date_desc', limit, project } = options;
@@ -2211,7 +2376,7 @@ export class SessionStore {
      ${limitClause}
    `);

-    return stmt.all(...params) as SessionSummaryRecord[];
+    return stmt.all(...params) as SessionSummarySearchResult[];
  }

  /**
@@ -2443,7 +2608,15 @@ export class SessionStore {
      LIMIT 1
    `);

-    return stmt.get(id) || null;
+    return (stmt.get(id) as {
+      id: number;
+      content_session_id: string;
+      prompt_number: number;
+      prompt_text: string;
+      project: string;
+      created_at: string;
+      created_at_epoch: number;
+    } | null) || null;
  }

  /**
@@ -2519,7 +2692,18 @@ export class SessionStore {
      LIMIT 1
    `);

-    return stmt.get(id) || null;
+    return (stmt.get(id) as {
+      id: number;
+      memory_session_id: string | null;
+      content_session_id: string;
+      project: string;
+      user_prompt: string;
+      request_summary: string | null;
+      learned_summary: string | null;
+      status: string;
+      created_at: string;
+      created_at_epoch: number;
+    } | null) || null;
  }

  /**
@@ -30,7 +30,6 @@ export class MigrationRunner {
|
||||
this.ensureDiscoveryTokensColumn();
|
||||
this.createPendingMessagesTable();
|
||||
this.renameSessionIdColumns();
|
||||
this.repairSessionIdColumnRename();
|
||||
this.addFailedAtEpochColumn();
|
||||
this.addOnUpdateCascadeToForeignKeys();
|
||||
this.addObservationContentHashColumn();
|
||||
@@ -39,6 +38,8 @@ export class MigrationRunner {
|
||||
this.addSessionPlatformSourceColumn();
|
||||
this.ensureMergedIntoProjectColumns();
|
||||
this.addObservationSubagentColumns();
|
||||
this.rebuildPendingMessagesForSelfHealingClaim();
|
||||
this.addObservationsUniqueContentHashIndex();
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -533,7 +534,6 @@ export class MigrationRunner {
|
||||
status TEXT NOT NULL DEFAULT 'pending' CHECK(status IN ('pending', 'processing', 'processed', 'failed')),
|
||||
retry_count INTEGER NOT NULL DEFAULT 0,
|
||||
created_at_epoch INTEGER NOT NULL,
|
||||
started_processing_at_epoch INTEGER,
|
||||
completed_at_epoch INTEGER,
|
||||
FOREIGN KEY (session_db_id) REFERENCES sdk_sessions(id) ON DELETE CASCADE
|
||||
)
|
||||
@@ -613,23 +613,9 @@ export class MigrationRunner {
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Repair session ID column renames (migration 19)
|
||||
* DEPRECATED: Migration 17 is now fully idempotent and handles all cases.
|
||||
* This migration is kept for backwards compatibility but does nothing.
|
||||
*/
|
||||
private repairSessionIdColumnRename(): void {
|
||||
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(19) as SchemaVersion | undefined;
|
||||
if (applied) return;
|
||||
|
||||
// Migration 17 now handles all column rename cases idempotently.
|
||||
// Just record this migration as applied.
|
||||
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(19, new Date().toISOString());
|
||||
}
|
||||
|
||||
/**
|
||||
* Add failed_at_epoch column to pending_messages (migration 20)
|
||||
* Used by markSessionMessagesFailed() for error recovery tracking
|
||||
* Used by transitionMessagesTo() for error recovery tracking
|
||||
*/
|
||||
private addFailedAtEpochColumn(): void {
|
||||
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(20) as SchemaVersion | undefined;
|
||||
@@ -1015,4 +1001,207 @@ export class MigrationRunner {
|
||||
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(27, new Date().toISOString());
|
||||
}
|
||||
}
|
||||
|
||||
  /**
   * Rebuild pending_messages for self-healing claim (migration 28).
   *
   * PATHFINDER-2026-04-22 Plan 01 Phase 2.
   *
   * - Drops the legacy stale-reset epoch column (was the input to the
   *   60-s stale-reset; replaced by worker-PID liveness at claim time).
   * - Adds `worker_pid INTEGER` (set by claimNextMessage to the live
   *   worker's PID; rows whose worker_pid is no longer alive are
   *   immediately reclaimable).
   * - Adds `tool_use_id TEXT` so ingestion-time pairing of tool_use →
   *   tool_result can be DB-backed instead of an in-memory Map
   *   (Plan 03 dependency).
   * - Dedupes any existing rows that share (content_session_id,
   *   tool_use_id), then creates a partial UNIQUE index.
   *
   * Follows the table-rebuild precedent at runner.ts:691 (migration 21):
   * disable FKs, BEGIN, recreate, INSERT-SELECT, RENAME, COMMIT, re-enable.
   */
  private rebuildPendingMessagesForSelfHealingClaim(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(28) as SchemaVersion | undefined;
    if (applied) return;

    const pendingExists = (this.db.query("SELECT name FROM sqlite_master WHERE type='table' AND name='pending_messages'").all() as TableNameRow[]).length > 0;
    if (!pendingExists) {
      // pending_messages table never created on this DB — nothing to rebuild.
      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(28, new Date().toISOString());
      return;
    }

    logger.debug('DB', 'Rebuilding pending_messages for self-healing claim (migration 28)');

    // PRAGMA foreign_keys must be set outside a transaction.
    this.db.run('PRAGMA foreign_keys = OFF');
    this.db.run('BEGIN TRANSACTION');

    try {
      // Source columns may include legacy fields. We build the SELECT explicitly
      // using only columns we know are present in the source after migration 27.
      const sourceCols = this.db.query('PRAGMA table_info(pending_messages)').all() as TableColumnInfo[];
      const colNames = new Set(sourceCols.map(c => c.name));
      const has = (name: string) => colNames.has(name);

      // Clean up leftover temp from a previously-crashed run.
      this.db.run('DROP TABLE IF EXISTS pending_messages_new');

      this.db.run(`
        CREATE TABLE pending_messages_new (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          session_db_id INTEGER NOT NULL,
          content_session_id TEXT NOT NULL,
          tool_use_id TEXT,
          message_type TEXT NOT NULL CHECK(message_type IN ('observation', 'summarize')),
          tool_name TEXT,
          tool_input TEXT,
          tool_response TEXT,
          cwd TEXT,
          last_user_message TEXT,
          last_assistant_message TEXT,
          prompt_number INTEGER,
          status TEXT NOT NULL DEFAULT 'pending'
            CHECK(status IN ('pending', 'processing', 'processed', 'failed')),
          retry_count INTEGER NOT NULL DEFAULT 0,
          created_at_epoch INTEGER NOT NULL,
          failed_at_epoch INTEGER,
          completed_at_epoch INTEGER,
          worker_pid INTEGER,
          agent_type TEXT,
          agent_id TEXT,
          FOREIGN KEY (session_db_id) REFERENCES sdk_sessions(id) ON DELETE CASCADE
        )
      `);

      // INSERT-SELECT — note that the legacy stale-reset epoch column is
      // intentionally omitted. Any 'processing' row is left with worker_pid =
      // NULL so that a self-healing claim picks it up immediately on next
      // worker boot.
      this.db.run(`
        INSERT INTO pending_messages_new (
          id, session_db_id, content_session_id, tool_use_id, message_type,
          tool_name, tool_input, tool_response, cwd, last_user_message,
          last_assistant_message, prompt_number, status, retry_count,
          created_at_epoch, failed_at_epoch, completed_at_epoch, worker_pid,
          agent_type, agent_id
        )
        SELECT
          id,
          session_db_id,
          content_session_id,
          ${has('tool_use_id') ? 'tool_use_id' : 'NULL'},
          message_type,
          tool_name,
          tool_input,
          tool_response,
          cwd,
          ${has('last_user_message') ? 'last_user_message' : 'NULL'},
          ${has('last_assistant_message') ? 'last_assistant_message' : 'NULL'},
          ${has('prompt_number') ? 'prompt_number' : 'NULL'},
          status,
          retry_count,
          created_at_epoch,
          ${has('failed_at_epoch') ? 'failed_at_epoch' : 'NULL'},
          ${has('completed_at_epoch') ? 'completed_at_epoch' : 'NULL'},
          NULL,
          ${has('agent_type') ? 'agent_type' : 'NULL'},
          ${has('agent_id') ? 'agent_id' : 'NULL'}
        FROM pending_messages
      `);

      this.db.run('DROP TABLE pending_messages');
      this.db.run('ALTER TABLE pending_messages_new RENAME TO pending_messages');

      this.db.run('CREATE INDEX IF NOT EXISTS idx_pending_messages_session ON pending_messages(session_db_id)');
      this.db.run('CREATE INDEX IF NOT EXISTS idx_pending_messages_status ON pending_messages(status)');
      this.db.run('CREATE INDEX IF NOT EXISTS idx_pending_messages_claude_session ON pending_messages(content_session_id)');
      this.db.run('CREATE INDEX IF NOT EXISTS idx_pending_messages_worker_pid ON pending_messages(worker_pid)');

      // Dedup any pre-existing duplicate (content_session_id, tool_use_id) pairs
      // before adding the UNIQUE index. Keep the lowest id (oldest) per pair.
      this.db.run(`
        DELETE FROM pending_messages
        WHERE tool_use_id IS NOT NULL
          AND id NOT IN (
            SELECT MIN(id) FROM pending_messages
            WHERE tool_use_id IS NOT NULL
            GROUP BY content_session_id, tool_use_id
          )
      `);

      this.db.run(`
        CREATE UNIQUE INDEX IF NOT EXISTS ux_pending_session_tool
        ON pending_messages(content_session_id, tool_use_id)
        WHERE tool_use_id IS NOT NULL
      `);

      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(28, new Date().toISOString());
      this.db.run('COMMIT');
      this.db.run('PRAGMA foreign_keys = ON');

      logger.debug('DB', 'Rebuilt pending_messages for self-healing claim');
    } catch (error) {
      this.db.run('ROLLBACK');
      this.db.run('PRAGMA foreign_keys = ON');
      if (error instanceof Error) {
        throw error;
      }
      throw new Error(`Migration 28 failed: ${String(error)}`);
    }
  }

  /**
   * Add UNIQUE(memory_session_id, content_hash) on observations (migration 29).
   *
   * PATHFINDER-2026-04-22 Plan 01 Phase 2 + Phase 4.
   *
   * - Dedupes existing rows that share (memory_session_id, content_hash),
   *   keeping the lowest id (oldest) per pair.
   * - Creates a UNIQUE index that lets writers use
   *   INSERT … ON CONFLICT(memory_session_id, content_hash) DO NOTHING
   *   in place of the legacy dedup window scan.
   */
  private addObservationsUniqueContentHashIndex(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(29) as SchemaVersion | undefined;
    if (applied) return;

    // Need both columns to exist.
    const obsCols = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
    const hasMem = obsCols.some(c => c.name === 'memory_session_id');
    const hasHash = obsCols.some(c => c.name === 'content_hash');
    if (!hasMem || !hasHash) {
      // Nothing to do; record so we don't keep retrying.
      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(29, new Date().toISOString());
      return;
    }

    this.db.run('BEGIN TRANSACTION');
    try {
      // Dedup before adding the UNIQUE index — keep the lowest id per pair.
      this.db.run(`
        DELETE FROM observations
        WHERE id NOT IN (
          SELECT MIN(id) FROM observations
          GROUP BY memory_session_id, content_hash
        )
      `);

      this.db.run(`
        CREATE UNIQUE INDEX IF NOT EXISTS ux_observations_session_hash
        ON observations(memory_session_id, content_hash)
      `);

      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(29, new Date().toISOString());
      this.db.run('COMMIT');
      logger.debug('DB', 'Added UNIQUE(memory_session_id, content_hash) on observations');
    } catch (error) {
      this.db.run('ROLLBACK');
      if (error instanceof Error) {
        throw error;
      }
      throw new Error(`Migration 29 failed: ${String(error)}`);
    }
  }
}
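The claim rule the migration 28 comment describes — a `processing` row whose `worker_pid` is NULL or no longer alive is immediately reclaimable — can be sketched as a pure predicate. This is an illustrative model only (names like `isClaimable` and `claimNext` are hypothetical; the real path is the SQL claim query using `worker_pid NOT IN live_worker_pids` inside `claimNextMessage`):

```typescript
// Illustrative model of the self-healing claim rule (migration 28).
// A 'pending' row is always claimable; a 'processing' row is claimable
// only when its worker_pid is NULL or no longer belongs to a live worker.
interface PendingRow {
  id: number;
  status: 'pending' | 'processing' | 'processed' | 'failed';
  workerPid: number | null;
}

function isClaimable(row: PendingRow, livePids: Set<number>): boolean {
  if (row.status === 'pending') return true;
  if (row.status !== 'processing') return false;
  // Self-healing: a processing row whose worker died is reclaimable at once,
  // with no stale-reset timer involved.
  return row.workerPid === null || !livePids.has(row.workerPid);
}

function claimNext(rows: PendingRow[], livePids: Set<number>): PendingRow | undefined {
  // Oldest-first, mirroring an ORDER BY id ASC LIMIT 1 claim query.
  return rows
    .filter(r => isClaimable(r, livePids))
    .sort((a, b) => a.id - b.id)[0];
}
```

Under this model a worker crash needs no cleanup pass: the next claim simply sees the dead PID and takes the row, which is why the 60-s stale-reset block and its epoch column could be deleted.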
||||
@@ -9,9 +9,6 @@ import { logger } from '../../../utils/logger.js';
import { getProjectContext } from '../../../utils/project-name.js';
import type { ObservationInput, StoreObservationResult } from './types.js';

/** Deduplication window: observations with the same content hash within this window are skipped */
const DEDUP_WINDOW_MS = 30_000;

/**
 * Compute a short content hash for deduplication.
 * Uses (memory_session_id, title, narrative) as the semantic identity of an observation.
@@ -30,25 +27,13 @@ export function computeObservationContentHash(
}

/**
 * Check if a duplicate observation exists within the dedup window.
 * Returns the existing observation's id and timestamp if found, null otherwise.
 */
export function findDuplicateObservation(
  db: Database,
  contentHash: string,
  timestampEpoch: number
): { id: number; created_at_epoch: number } | null {
  const windowStart = timestampEpoch - DEDUP_WINDOW_MS;
  const stmt = db.prepare(
    'SELECT id, created_at_epoch FROM observations WHERE content_hash = ? AND created_at_epoch > ?'
  );
  return (stmt.get(contentHash, windowStart) as { id: number; created_at_epoch: number } | null);
}

/**
 * Store an observation (from SDK parsing)
 * Assumes session already exists (created by hook)
 * Performs content-hash deduplication: skips INSERT if an identical observation exists within 30s
 * Store an observation (from SDK parsing).
 *
 * Assumes session already exists (created by hook). Deduplication is enforced
 * by the database via UNIQUE(memory_session_id, content_hash) (Plan 01 Phase 4):
 * INSERT … ON CONFLICT DO NOTHING absorbs duplicates silently. The returned id
 * is the existing row's id when a conflict occurred, otherwise the freshly
 * inserted row.
 */
export function storeObservation(
  db: Database,
@@ -66,22 +51,18 @@ export function storeObservation(
  // Guard against empty project string (race condition where project isn't set yet)
  const resolvedProject = project || getProjectContext(process.cwd()).primary;

  // Content-hash deduplication
  const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
  const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
  if (existing) {
    logger.debug('DEDUP', `Skipped duplicate observation | contentHash=${contentHash} | existingId=${existing.id}`);
    return { id: existing.id, createdAtEpoch: existing.created_at_epoch };
  }

  const stmt = db.prepare(`
    INSERT INTO observations
      (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
       files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch)
    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    ON CONFLICT(memory_session_id, content_hash) DO NOTHING
    RETURNING id, created_at_epoch
  `);

  const result = stmt.run(
  const inserted = stmt.get(
    memorySessionId,
    resolvedProject,
    observation.type,
@@ -99,10 +80,24 @@ export function storeObservation(
    contentHash,
    timestampIso,
    timestampEpoch
  );
  ) as { id: number; created_at_epoch: number } | null;

  return {
    id: Number(result.lastInsertRowid),
    createdAtEpoch: timestampEpoch
  };
  if (inserted) {
    return { id: inserted.id, createdAtEpoch: inserted.created_at_epoch };
  }

  // Conflict — fetch the existing row's id for the (memory_session_id, content_hash) pair.
  const existing = db.prepare(
    'SELECT id, created_at_epoch FROM observations WHERE memory_session_id = ? AND content_hash = ?'
  ).get(memorySessionId, contentHash) as { id: number; created_at_epoch: number } | null;

  if (!existing) {
    // Unreachable in practice (UNIQUE conflict implies existing row), but be explicit.
    throw new Error(
      `storeObservation: ON CONFLICT fired but no row exists for (memory_session_id=${memorySessionId}, content_hash=${contentHash})`
    );
  }

  logger.debug('DEDUP', `Skipped duplicate observation | contentHash=${contentHash} | existingId=${existing.id}`);
  return { id: existing.id, createdAtEpoch: existing.created_at_epoch };
}

@@ -0,0 +1,188 @@
-- claude-mem SQLite schema
--
-- Authoritative shape of the database after all migrations through
-- runner.ts have been applied (current tip = migration 29). Fresh
-- databases boot directly into this shape; existing databases reach
-- it via the migration runner.
--
-- Source of truth: src/services/sqlite/migrations/runner.ts
-- Regenerated by: PATHFINDER-2026-04-22 Plan 01 (Data Integrity).
--
-- Invariants enforced here (Plan 01):
-- * pending_messages.UNIQUE(content_session_id, tool_use_id) — replaces
--   in-memory pendingTools Map for ingestion pairing (Plan 03 also depends).
-- * pending_messages.worker_pid INTEGER — populated by self-healing
--   claim query; replaces the legacy stale-reset epoch column.
-- * observations.UNIQUE(memory_session_id, content_hash) — replaces the
--   legacy dedup window; ON CONFLICT DO NOTHING absorbs duplicates.

CREATE TABLE IF NOT EXISTS schema_versions (
  id INTEGER PRIMARY KEY,
  version INTEGER UNIQUE NOT NULL,
  applied_at TEXT NOT NULL
);

-- ─────────────────────────────────────────────────────────────────────
-- sdk_sessions: one row per Claude/Codex session observed by claude-mem.
-- ─────────────────────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS sdk_sessions (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  content_session_id TEXT UNIQUE NOT NULL,
  memory_session_id TEXT UNIQUE,
  project TEXT NOT NULL,
  platform_source TEXT NOT NULL DEFAULT 'claude',
  user_prompt TEXT,
  started_at TEXT NOT NULL,
  started_at_epoch INTEGER NOT NULL,
  completed_at TEXT,
  completed_at_epoch INTEGER,
  status TEXT NOT NULL DEFAULT 'active'
    CHECK(status IN ('active', 'completed', 'failed')),
  worker_port INTEGER,
  prompt_counter INTEGER DEFAULT 0,
  custom_title TEXT
);
CREATE INDEX IF NOT EXISTS idx_sdk_sessions_claude_id ON sdk_sessions(content_session_id);
CREATE INDEX IF NOT EXISTS idx_sdk_sessions_sdk_id ON sdk_sessions(memory_session_id);
CREATE INDEX IF NOT EXISTS idx_sdk_sessions_project ON sdk_sessions(project);
CREATE INDEX IF NOT EXISTS idx_sdk_sessions_status ON sdk_sessions(status);
CREATE INDEX IF NOT EXISTS idx_sdk_sessions_started ON sdk_sessions(started_at_epoch DESC);
CREATE INDEX IF NOT EXISTS idx_sdk_sessions_platform_source ON sdk_sessions(platform_source);

-- ─────────────────────────────────────────────────────────────────────
-- observations: structured memory rows extracted from SDK output.
-- UNIQUE(memory_session_id, content_hash) replaces the legacy dedup window;
-- writes use INSERT … ON CONFLICT DO NOTHING.
-- ─────────────────────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS observations (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  memory_session_id TEXT NOT NULL,
  project TEXT NOT NULL,
  text TEXT,
  type TEXT NOT NULL,
  title TEXT,
  subtitle TEXT,
  facts TEXT,
  narrative TEXT,
  concepts TEXT,
  files_read TEXT,
  files_modified TEXT,
  prompt_number INTEGER,
  discovery_tokens INTEGER DEFAULT 0,
  content_hash TEXT,
  agent_type TEXT,
  agent_id TEXT,
  merged_into_project TEXT,
  generated_by_model TEXT,
  created_at TEXT NOT NULL,
  created_at_epoch INTEGER NOT NULL,
  FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id)
    ON DELETE CASCADE ON UPDATE CASCADE,
  UNIQUE(memory_session_id, content_hash)
);
CREATE INDEX IF NOT EXISTS idx_observations_sdk_session ON observations(memory_session_id);
CREATE INDEX IF NOT EXISTS idx_observations_project ON observations(project);
CREATE INDEX IF NOT EXISTS idx_observations_type ON observations(type);
CREATE INDEX IF NOT EXISTS idx_observations_created ON observations(created_at_epoch DESC);
CREATE INDEX IF NOT EXISTS idx_observations_content_hash ON observations(content_hash, created_at_epoch);
CREATE INDEX IF NOT EXISTS idx_observations_agent_type ON observations(agent_type);
CREATE INDEX IF NOT EXISTS idx_observations_agent_id ON observations(agent_id);
CREATE INDEX IF NOT EXISTS idx_observations_merged_into ON observations(merged_into_project);

-- ─────────────────────────────────────────────────────────────────────
-- session_summaries: one summary row per memory session.
-- ─────────────────────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS session_summaries (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  memory_session_id TEXT NOT NULL,
  project TEXT NOT NULL,
  request TEXT,
  investigated TEXT,
  learned TEXT,
  completed TEXT,
  next_steps TEXT,
  files_read TEXT,
  files_edited TEXT,
  notes TEXT,
  prompt_number INTEGER,
  discovery_tokens INTEGER DEFAULT 0,
  merged_into_project TEXT,
  created_at TEXT NOT NULL,
  created_at_epoch INTEGER NOT NULL,
  FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id)
    ON DELETE CASCADE ON UPDATE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_session_summaries_sdk_session ON session_summaries(memory_session_id);
CREATE INDEX IF NOT EXISTS idx_session_summaries_project ON session_summaries(project);
CREATE INDEX IF NOT EXISTS idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
CREATE INDEX IF NOT EXISTS idx_summaries_merged_into ON session_summaries(merged_into_project);

-- ─────────────────────────────────────────────────────────────────────
-- pending_messages: persistent work queue for SDK messages.
-- worker_pid + UNIQUE(content_session_id, tool_use_id) make the claim
-- query self-healing without any legacy stale-reset epoch column.
-- ─────────────────────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS pending_messages (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  session_db_id INTEGER NOT NULL,
  content_session_id TEXT NOT NULL,
  tool_use_id TEXT,
  message_type TEXT NOT NULL
    CHECK(message_type IN ('observation', 'summarize')),
  tool_name TEXT,
  tool_input TEXT,
  tool_response TEXT,
  cwd TEXT,
  last_user_message TEXT,
  last_assistant_message TEXT,
  prompt_number INTEGER,
  status TEXT NOT NULL DEFAULT 'pending'
    CHECK(status IN ('pending', 'processing', 'processed', 'failed')),
  retry_count INTEGER NOT NULL DEFAULT 0,
  created_at_epoch INTEGER NOT NULL,
  failed_at_epoch INTEGER,
  completed_at_epoch INTEGER,
  worker_pid INTEGER,
  agent_type TEXT,
  agent_id TEXT,
  FOREIGN KEY (session_db_id) REFERENCES sdk_sessions(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_pending_messages_session ON pending_messages(session_db_id);
CREATE INDEX IF NOT EXISTS idx_pending_messages_status ON pending_messages(status);
CREATE INDEX IF NOT EXISTS idx_pending_messages_claude_session ON pending_messages(content_session_id);
CREATE INDEX IF NOT EXISTS idx_pending_messages_worker_pid ON pending_messages(worker_pid);
CREATE UNIQUE INDEX IF NOT EXISTS ux_pending_session_tool
  ON pending_messages(content_session_id, tool_use_id)
  WHERE tool_use_id IS NOT NULL;

-- ─────────────────────────────────────────────────────────────────────
-- user_prompts: per-prompt history (UI + FTS search).
-- ─────────────────────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS user_prompts (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  content_session_id TEXT NOT NULL,
  prompt_number INTEGER NOT NULL,
  prompt_text TEXT NOT NULL,
  created_at TEXT NOT NULL,
  created_at_epoch INTEGER NOT NULL,
  FOREIGN KEY(content_session_id) REFERENCES sdk_sessions(content_session_id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_user_prompts_claude_session ON user_prompts(content_session_id);
CREATE INDEX IF NOT EXISTS idx_user_prompts_created ON user_prompts(created_at_epoch DESC);
CREATE INDEX IF NOT EXISTS idx_user_prompts_prompt_number ON user_prompts(prompt_number);
CREATE INDEX IF NOT EXISTS idx_user_prompts_lookup ON user_prompts(content_session_id, prompt_number);

-- ─────────────────────────────────────────────────────────────────────
-- observation_feedback: usage-signal tracking for tier routing.
-- ─────────────────────────────────────────────────────────────────────
CREATE TABLE IF NOT EXISTS observation_feedback (
  id INTEGER PRIMARY KEY AUTOINCREMENT,
  observation_id INTEGER NOT NULL,
  signal_type TEXT NOT NULL,
  session_db_id INTEGER,
  created_at_epoch INTEGER NOT NULL,
  metadata TEXT,
  FOREIGN KEY (observation_id) REFERENCES observations(id) ON DELETE CASCADE
);
CREATE INDEX IF NOT EXISTS idx_feedback_observation ON observation_feedback(observation_id);
CREATE INDEX IF NOT EXISTS idx_feedback_signal ON observation_feedback(signal_type);
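The observations invariant above makes dedup a property of the insert itself: the first write for a (memory_session_id, content_hash) pair wins, and a duplicate write hands back the existing row's id. A minimal in-memory sketch of those semantics (illustrative only; `ObservationTable` and `insertOrExisting` are hypothetical names, and the real path is SQLite's `INSERT … ON CONFLICT DO NOTHING RETURNING` plus a lookup on conflict):

```typescript
// In-memory model of UNIQUE(memory_session_id, content_hash) dedup.
class ObservationTable {
  private nextId = 1;
  private byKey = new Map<string, number>(); // (session, hash) -> row id

  // Mirrors INSERT ... ON CONFLICT(memory_session_id, content_hash) DO NOTHING:
  // a duplicate is absorbed silently and the caller gets the existing id.
  insertOrExisting(memorySessionId: string, contentHash: string): { id: number; inserted: boolean } {
    const key = `${memorySessionId}\u0000${contentHash}`;
    const existing = this.byKey.get(key);
    if (existing !== undefined) {
      return { id: existing, inserted: false };
    }
    const id = this.nextId++;
    this.byKey.set(key, id);
    return { id, inserted: true };
  }
}
```

Because the uniqueness lives in the schema rather than in a time-window scan, concurrent writers cannot race past each other the way they could with the old 30-s dedup window.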
@@ -10,7 +10,7 @@ import { Database } from 'bun:sqlite';
import { logger } from '../../utils/logger.js';
import type { ObservationInput } from './observations/types.js';
import type { SummaryInput } from './summaries/types.js';
import { computeObservationContentHash, findDuplicateObservation } from './observations/store.js';
import { computeObservationContentHash } from './observations/store.js';

/**
 * Result from storeObservations / storeObservationsAndMarkComplete transaction
@@ -64,23 +64,25 @@ export function storeObservationsAndMarkComplete(
  const storeAndMarkTx = db.transaction(() => {
    const observationIds: number[] = [];

    // 1. Store all observations (with content-hash deduplication)
    // 1. Store all observations.
    // UNIQUE(memory_session_id, content_hash) + ON CONFLICT DO NOTHING enforces
    // dedup at the DB layer (Plan 01 Phase 4). RETURNING gives us the row id
    // when the insert went through; on conflict we look up the existing id.
    const obsStmt = db.prepare(`
      INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
         files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch)
      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
      ON CONFLICT(memory_session_id, content_hash) DO NOTHING
      RETURNING id
    `);
    const lookupExistingStmt = db.prepare(
      'SELECT id FROM observations WHERE memory_session_id = ? AND content_hash = ?'
    );

    for (const observation of observations) {
      const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
      const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
      if (existing) {
        observationIds.push(existing.id);
        continue;
      }

      const result = obsStmt.run(
      const inserted = obsStmt.get(
        memorySessionId,
        project,
        observation.type,
@@ -98,8 +100,20 @@ export function storeObservationsAndMarkComplete(
        contentHash,
        timestampIso,
        timestampEpoch
      );
      observationIds.push(Number(result.lastInsertRowid));
      ) as { id: number } | null;

      if (inserted) {
        observationIds.push(inserted.id);
        continue;
      }

      const existing = lookupExistingStmt.get(memorySessionId, contentHash) as { id: number } | null;
      if (!existing) {
        throw new Error(
          `storeObservationsAndMarkComplete: ON CONFLICT without existing row for content_hash=${contentHash}`
        );
      }
      observationIds.push(existing.id);
    }

    // 2. Store summary if provided
@@ -185,23 +199,24 @@ export function storeObservations(
  const storeTx = db.transaction(() => {
    const observationIds: number[] = [];

    // 1. Store all observations (with content-hash deduplication)
    // 1. Store all observations.
    // UNIQUE(memory_session_id, content_hash) + ON CONFLICT DO NOTHING enforces
    // dedup at the DB layer (Plan 01 Phase 4).
    const obsStmt = db.prepare(`
      INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
         files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch)
      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
      ON CONFLICT(memory_session_id, content_hash) DO NOTHING
      RETURNING id
    `);
    const lookupExistingStmt = db.prepare(
      'SELECT id FROM observations WHERE memory_session_id = ? AND content_hash = ?'
    );

    for (const observation of observations) {
      const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
      const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
      if (existing) {
        observationIds.push(existing.id);
        continue;
      }

      const result = obsStmt.run(
      const inserted = obsStmt.get(
        memorySessionId,
        project,
        observation.type,
@@ -219,8 +234,20 @@ export function storeObservations(
        contentHash,
        timestampIso,
        timestampEpoch
      );
      observationIds.push(Number(result.lastInsertRowid));
      ) as { id: number } | null;

      if (inserted) {
        observationIds.push(inserted.id);
        continue;
      }

      const existing = lookupExistingStmt.get(memorySessionId, contentHash) as { id: number } | null;
      if (!existing) {
        throw new Error(
          `storeObservations: ON CONFLICT without existing row for content_hash=${contentHash}`
        );
      }
      observationIds.push(existing.id);
    }

    // 2. Store summary if provided