UX redesign: installer + provider rename + /learn-codebase + welcome card + SessionStart hint (#2255)

* feat(ux): claude-mem UX improvements with installer enhancements

Squashed PR #2156 commits for clean rebase onto main:
- feat(installer): add provider selection, model prompt, worker auto-start
- refactor: rename *Agent provider classes to *Provider
- feat: add /learn-codebase skill and viewer welcome card
- feat(worker): inject welcome hint when project has zero observations
- fix(pr-2156): address greptile review comments
- fix(pr-2156): address coderabbit review comments
- fix(pr-2156): persist CLAUDE_MEM_PROVIDER for non-claude in non-TTY mode
- fix(pr-2156): file-backed settings reads in installer + env-first SKILL doc

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* build: rebuild plugin artifacts after rebase onto v12.4.7

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(skills): strip claude-mem internals from learn-codebase

The learn-codebase skill, install next-step copy, WelcomeCard, and
welcome-hint previously walked the primary agent through worker endpoints
and synthetic observation payloads. The PostToolUse hook already captures
every Read/Edit the agent makes — the agent should have no awareness that
the memory layer exists. Collapse the skill to one instruction ("read every
source file in full") and rephrase touchpoints to describe only what the
user observes (Claude reading files), not what happens behind the scenes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): preflight version mismatch + settings-aware port resolution

Two related fixes for build-and-sync's worker restart step:

1. Read CLAUDE_MEM_WORKER_PORT from ~/.claude-mem/settings.json the same
   way the worker does, instead of computing the default port from the
   uid alone. Previously, users with a custom port saw a misleading
   "Worker not running" message because the restart POST hit the wrong
   port and got ECONNREFUSED.

2. Add a preflight check that aborts the sync when the running worker's
   reported version does not match the version we are about to build.
   Claude Code's plugin loader pins the worker to a specific cache
   version per session, so syncing into a newer cache directory has no
   effect until the user runs `claude plugin update thedotmack/claude-mem`
   to bump the pin. The preflight surfaces this explicitly with the exact
   command to run; --force bypasses it for intentional cases.
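The settings-first port resolution in item 1 can be sketched as a small helper (a sketch: `resolveWorkerPort` and the settings shape are illustrative; the `37700 + uid % 100` default matches the per-user formula documented elsewhere in this PR):

```typescript
import { existsSync, readFileSync } from "node:fs";

// Settings-first resolution, mirroring how the worker picks its port;
// the settings-file shape here is an assumption for illustration.
function resolveWorkerPort(uid: number, settingsPath: string): number {
  if (existsSync(settingsPath)) {
    try {
      const settings = JSON.parse(readFileSync(settingsPath, "utf8"));
      const fromSettings = Number(settings.CLAUDE_MEM_WORKER_PORT);
      if (Number.isInteger(fromSettings) && fromSettings > 0) return fromSettings;
    } catch {
      // unreadable/corrupt settings: fall through to the per-user default
    }
  }
  return 37700 + (uid % 100); // per-user default formula
}
```
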
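Item 2's preflight reduces to a small decision function (a minimal sketch; the function name and return shape are illustrative):

```typescript
interface PreflightResult {
  ok: boolean;
  message?: string;
}

// Abort the sync when the running worker's version differs from the build,
// unless --force was passed; surface the exact remediation command.
function checkVersionPreflight(
  runningVersion: string,
  buildVersion: string,
  force: boolean,
): PreflightResult {
  if (force || runningVersion === buildVersion) return { ok: true };
  return {
    ok: false,
    message:
      `Worker is pinned to ${runningVersion} but this build is ${buildVersion}. ` +
      `Run: claude plugin update thedotmack/claude-mem (or pass --force).`,
  };
}
```
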

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(learn-codebase): note sed for partial reads of large files

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide

Removed prose comments from all tracked source. Preserved directives
(@ts-ignore, eslint-disable, biome-ignore, prettier-ignore, triple-slash
references, webpack magic, shebangs). Deleted two tests that asserted
on comment text rather than runtime behavior.

Net: 401 files, -14,587 / +389 lines, -10.4% bytes.

Verified: typecheck passes, build passes, test count unchanged from
baseline (22 pre-existing fails, all unrelated).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(installer): move runtime setup into npx, eliminate hook dead air

Smart-install ran 3 times during a fresh install — the worst run was silent,
fired by Claude Code's Setup hook after `claude plugin install`, producing
~30s of dead air that made the plugin look hung.

This change makes `npx claude-mem install` the single place heavy work
happens, with a visible spinner. Hooks become runtime-only.

- New `src/npx-cli/install/setup-runtime.ts` module: ensureBun, ensureUv,
  installPluginDependencies, read/writeInstallMarker, isInstallCurrent.
  Marker schema preserved exactly ({version, bun, uv, installedAt}) so
  ContextBuilder and BranchManager readers keep working.
- `npx claude-mem install`: ungated copy/register/enable for every IDE;
  inserts a "Setting up runtime" task with an honest "first install can take
  ~30s" spinner. The claude-code shell-out to `claude plugin install` is
  removed — npx already populated everything Claude reads.
- New `npx claude-mem repair` command for post-`claude plugin update`
  recovery; it force-reinstalls the runtime.
- Setup hook now runs `plugin/scripts/version-check.js` (29ms wall) instead
  of smart-install. Mismatch prints "run: npx claude-mem repair" on stderr.
  Always exits 0 (non-blocking, per CLAUDE.md exit-code strategy).
- SessionStart loses the smart-install entry; 2 hooks remain (worker start,
  context fetch).
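A minimal sketch of the marker helpers named above — `writeInstallMarker`, `readInstallMarker`, `isInstallCurrent` — assuming the `{version, bun, uv, installedAt}` schema from the first bullet; error handling is illustrative:

```typescript
import { mkdtempSync, readFileSync, writeFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Marker schema preserved exactly, per the bullet above.
interface InstallMarker {
  version: string;
  bun: string;
  uv: string;
  installedAt: string;
}

function writeInstallMarker(path: string, marker: InstallMarker): void {
  writeFileSync(path, JSON.stringify(marker, null, 2));
}

function readInstallMarker(path: string): InstallMarker | null {
  try {
    return JSON.parse(readFileSync(path, "utf8")) as InstallMarker;
  } catch {
    return null; // missing or corrupt marker means "not current"
  }
}

function isInstallCurrent(path: string, version: string): boolean {
  return readInstallMarker(path)?.version === version;
}
```
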

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(installer): delete smart-install sources, retarget tests

- Delete scripts/smart-install.js + plugin/scripts/smart-install.js (both
  are source files kept in sync manually; both must go).
- Delete tests/smart-install.test.ts (covered surface is gone).
- tests/plugin-scripts-line-endings: drop smart-install.js entry.
- tests/infrastructure/plugin-distribution: retarget two assertions at
  version-check.js (the new Setup hook script).
- New tests/setup-runtime.test.ts: 9 tests covering marker read/write,
  isInstallCurrent semantics. Marker schema invariant verified.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(installer): describe npx-driven setup + version-check Setup hook

Sweep public docs and architecture notes to reflect the new flow:
npx installer does Bun/uv setup with a visible spinner; Setup hook runs
sub-100ms version-check.js; users hit `npx claude-mem repair` after a
`claude plugin update`.

- docs/architecture-overview.md: hook lifecycle table + npx flow paragraph
- docs/public/configuration.mdx: tree + hook config example
- docs/public/development.mdx: build output line
- docs/public/hooks-architecture.mdx: full rewrite of pre-hook section,
  timing table, performance table
- docs/public/architecture/{overview,hooks,worker-service}.mdx: tree
  comments, JSON config example, Bun requirement section

docs/reports/* untouched (historical incident reports).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): mergeSettings writes via USER_SETTINGS_PATH

Greptile P1 (#2156): `settingsFilePath()` only resolved
`process.env.CLAUDE_MEM_DATA_DIR`, while `getSetting()` reads via
`USER_SETTINGS_PATH` which `resolveDataDir()` populates from BOTH the env
var AND a `CLAUDE_MEM_DATA_DIR` entry persisted in
`~/.claude-mem/settings.json`. Result: a user with the data dir saved in
settings.json but not exported in their shell would have provider/model
settings silently written to `~/.claude-mem/settings.json` while
`getSetting()` read from `/custom/path/settings.json` — read/write split.

Drop `settingsFilePath()` and the now-unused `homedir` import; reuse the
already-imported `USER_SETTINGS_PATH` constant.
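The unified resolution described above might look like this (a sketch; the settings shape and fallback default are assumptions based on this commit's description):

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Both readers and writers must resolve through this one path:
// env var first, then the entry persisted in settings.json, then the default.
function resolveDataDir(
  env: Record<string, string | undefined>,
  persisted: Record<string, string | undefined>,
): string {
  return (
    env.CLAUDE_MEM_DATA_DIR ??
    persisted.CLAUDE_MEM_DATA_DIR ??
    join(homedir(), ".claude-mem")
  );
}
```
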

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(cli): parse --provider, --model, --no-auto-start install flags

Greptile P1 (#2156): InstallOptions has fields `provider`, `model`,
`noAutoStart`, but the install case in the npx-cli switch only parsed
`--ide`. The other three flags were silently dropped — `npx claude-mem
install --provider gemini` was a no-op.

Extract a `parseInstallOptions(argv)` helper, share it between the bare
`npx claude-mem` and `npx claude-mem install` paths, and validate
`--provider` against the allowed set. Update help text accordingly.
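A sketch of `parseInstallOptions` consistent with the description (the allowed provider set here is illustrative, not the project's actual list):

```typescript
// Illustrative provider list — the real allowed set lives in the installer.
const ALLOWED_PROVIDERS: readonly string[] = ["claude", "gemini", "openai"];

interface InstallOptions {
  ide?: string;
  provider?: string;
  model?: string;
  noAutoStart: boolean;
}

// Shared between bare `npx claude-mem` and `npx claude-mem install`.
function parseInstallOptions(argv: string[]): InstallOptions {
  const opts: InstallOptions = { noAutoStart: false };
  for (let i = 0; i < argv.length; i++) {
    switch (argv[i]) {
      case "--ide":
        opts.ide = argv[++i];
        break;
      case "--provider": {
        const value = argv[++i];
        if (!value || !ALLOWED_PROVIDERS.includes(value)) {
          throw new Error(`Invalid --provider: ${value ?? "(missing)"}`);
        }
        opts.provider = value;
        break;
      }
      case "--model":
        opts.model = argv[++i];
        break;
      case "--no-auto-start":
        opts.noAutoStart = true;
        break;
    }
  }
  return opts;
}
```
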

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): pipe runtime-setup output, always show IDE multiselect

Two issues caught in a docker test of the installer:

1. The bun.sh installer, uv installer, and `bun install` were using
   stdio: 'inherit', dumping their stdout/stderr through clack's spinner
   region — visible as raw "downloading uv 0.11.8…" / "Checked 58
   installs across 38 packages…" text streaming under the spinner. Switch
   to stdio: 'pipe' and surface captured stderr only on failure (via a
   shared describeExecError() helper that includes stdout when stderr is
   empty). Spinner stays clean on the happy path.

2. promptForIDESelection() silently picked claude-code when no IDEs were
   detected, never showing the user the multiselect. On a fresh machine
   with no IDEs present yet (e.g. our docker test container), the user
   never got to choose. Now: always show the full IDE list when
   interactive; mark detected ones with [detected] hints and pre-select
   them; show a warn line if zero are detected explaining they should pick
   what they plan to use. Non-TTY callers still get the silent
   claude-code default at the call site (unchanged).
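The `describeExecError()` behavior described in item 1 can be sketched as (signature assumed):

```typescript
// Prefer captured stderr; fall back to stdout when stderr is empty, so
// stdio:'pipe' failures remain debuggable while the spinner stays clean.
function describeExecError(command: string, stderr: string, stdout: string): string {
  const detail = stderr.trim() || stdout.trim() || "(no output captured)";
  return `${command} failed:\n${detail}`;
}
```
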

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): skip marketplace work for claude-code-only, offer to install Claude Code

Two related UX fixes from a docker test:

**Delay between "Saved Claude model=…" and "Plugin files copied OK"**

After dropping the needsManualInstall gate, every install was unconditionally
running `copyPluginToMarketplace` (which copied the entire root node_modules
tree — thousands of files, dozens of seconds) and `runNpmInstallInMarketplace`
(npm install --production) even when only claude-code was selected. Neither
is needed for claude-code: that path uses the plugin cache dir + the
installed_plugins.json + enabledPlugins flag, all of which we already write.

- Drop `node_modules` from `copyPluginToMarketplace`'s allowed-entries list;
  the dependency-install task populates it on the destination side anyway.
- Re-introduce `needsMarketplace = selectedIDEs.some(id => id !== 'claude-code')`
  scoped *only* to `copyPluginToMarketplace`, `runNpmInstallInMarketplace`,
  and the pre-install `shutdownWorkerAndWait` (also pointless for claude-code-
  only flows since we're not overwriting the worker's running cache dir
  source). All other tasks (cache copy, register, enable, runtime setup) stay
  unconditional.

**Claude Code missing → silent install of an IDE that isn't there**

When the user picked claude-code on a machine without it (e.g. a fresh
container), the install completed but `claude` was unavailable and the only
hint was a generic warn line. Replace with an explicit pre-flight prompt:

  Claude Code is not installed. Claude-mem works best in Claude Code, but
  also works with the IDEs below.
  ? Install Claude Code now?
    ◆ Yes — install Claude Code (recommended)
    ◯ No — pick another IDE below
    ◯ Cancel installation

If the user picks "Yes", run `curl -fsSL https://claude.ai/install.sh | bash`
(or the PowerShell equivalent on Windows), then re-detect IDEs and proceed
with claude-code pre-selected. If the install fails or the user picks "No",
the multiselect still appears with claude-code visible (just unmarked
[detected]), so they can opt in or pick another IDE.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): detect Claude Code via `claude` CLI, not ~/.claude dir

The directory `~/.claude` can exist (e.g. mounted in Docker, or created
by tooling) without Claude Code actually being installed. Detect the
`claude` command in PATH instead so the installer correctly offers to
install Claude Code when missing.
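The PATH-based detection might look like this (a sketch; the real installer may shell out to `command -v` instead — this walks PATH directly and ignores Windows executable extensions):

```typescript
import { existsSync } from "node:fs";
import { delimiter, join } from "node:path";

// Detect Claude Code by probing PATH for the `claude` executable,
// not by checking for ~/.claude (which can exist without an install).
function isClaudeCodeInstalled(pathEnv: string): boolean {
  return pathEnv
    .split(delimiter)
    .some((dir) => dir.length > 0 && existsSync(join(dir, "claude")));
}
```
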

* docs(learn-codebase): add reviewer note explaining the cost tradeoff

The skill intentionally reads every file in full to build a cognitive
cache that pays off across the rest of the project. Add a brief note
so reviewers (human or bot) understand the tradeoff before flagging
the unbounded read as a cost issue.

* fix: address Greptile P1 feedback on welcome hint and learn-codebase

- SearchRoutes: skip welcome hint when caller passes ?full=true so
  explicit full-context requests aren't intercepted by the hint.
- learn-codebase: replace `sed` instruction with the Read tool's
  offset/limit parameters, since Bash is gated in Claude Code by
  default.

* feat(install): ASCII-animated logo splash on interactive install

Plays a ~1s bloom animation of the claude-mem sunburst logomark when
the installer starts in an interactive terminal — geometrically rendered
via 12 ray curves around a center disc, in the brand orange. The
wordmark and tagline type on alongside the final frame.

Auto-skipped on non-TTY, in CI, when NO_COLOR or CLAUDE_MEM_NO_BANNER
is set, or when the terminal is too narrow.

Inspired by ghostty +boo.

* feat(banner): replace rotation frames with angular-sector bloom generator

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): three-act choreography renderer with radial gradient and diff redraw

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): update preview script to support small/medium/hero tier selection

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(docker): add COLORTERM=truecolor to test-installer sandbox

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): auto-apply PATH for Claude Code with spinner UX

The Claude Code install.sh prints a Setup notes block telling users to
manually edit "your shell config file" to add ~/.local/bin to PATH —
which left fresh installs unable to launch claude from the command line.

After a successful install, detect ~/.local/bin/claude on disk and, if
the dir is missing from PATH, append the right export line to .zshrc /
.bash_profile / .bashrc / fish config (idempotent, marked with a
comment). Also updates process.env.PATH for the current install run.

Wraps the curl|bash install in a clack spinner (interactive only) so the
~4 minute native-build download doesn't look frozen — output is captured
silently and dumped on failure for debuggability. Non-interactive mode
keeps inherited stdio for CI logs.
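The idempotent append can be modeled as a pure function over the rc-file contents (a sketch; the marker comment text is hypothetical):

```typescript
const PATH_MARKER = "# added by claude-mem installer"; // hypothetical marker text

// Append the export line once, or not at all: skip when ~/.local/bin is
// already on PATH or the marker comment shows a previous append.
function appendPathExport(rcContents: string, pathEnv: string, binDir: string): string {
  const alreadyInPath = pathEnv.split(":").includes(binDir);
  const alreadyAppended = rcContents.includes(PATH_MARKER);
  if (alreadyInPath || alreadyAppended) return rcContents;
  return `${rcContents}\n${PATH_MARKER}\nexport PATH="${binDir}:$PATH"\n`;
}
```
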

Verified end-to-end in the test-installer docker sandbox: spinner
animates, .bashrc gets the export, fresh login shell resolves claude.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): video-frame ASCII renderer with three-act choreography

Generator switched from a single Jimp-rendered logo to pre-extracted
video frames concatenated with \x01 separators and gzip-deflated, ported
from ghostty's boo wire format. Renderer rewritten around three acts
(ignite → stagger bloom → text reveal + breathe) with adaptive sizing,
radial gradient, and diff-based redraw.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(onboarding): unify install / SessionStart / viewer around one first-success moment

Three surfaces now point at the same north-star moment — open the viewer, do
anything in Claude Code, watch an observation appear within seconds — with the
same verbatim timing and privacy lines, and a single canonical "how it works"
explainer instead of three diverging copies.

- Canonical explainer at src/services/worker/onboarding-explainer.md served via
  GET /api/onboarding/explainer; mirrored into plugin/skills/how-it-works/SKILL.md
- SessionStart welcome hint rewritten as third-person status (no imperatives
  Claude tries to execute), pinned with a default-value regression test
- Post-install Next Steps reframed as "two paths": passive default + optional
  /learn-codebase front-load; drops /mem-search and /knowledge-agent from this
  surface; adds verbatim timing + privacy lines and /how-it-works link
- /api/stats response gains firstObservationAt for the viewer stat row
- Viewer WelcomeCard branches on observationCount === 0: empty state shows live
  worker-connection dot + "waiting for activity"; has-data state shows
  observations · projects · since [date] and two example prompts. v2 dismiss key
- jimp added to package.json to fix pre-existing banner-frame build break

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(banner): play unconditionally; only honor CLAUDE_MEM_NO_BANNER

The 128-col / TTY / CI / NO_COLOR gates silently swallowed the banner in
narrower terminals, CI logs, and any non-TTY pipe — including Docker runs
where -it should preserve the experience but column width was the wrong
gate. Remove the implicit gates; keep the explicit opt-out only.

If a frame wraps in a narrow terminal, that's better than the banner
not playing at all.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* revert(banner): restore 15:33 gating logic per user request

Reverts eb6fc157. Restores isBannerEnabled to the state at commit
8e448015 (2026-04-30 15:33): TTY check, !CI, !NO_COLOR, !CLAUDE_MEM_NO_BANNER,
and cols >= BANNER.width.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): wrap remaining slow steps with spinners

Each IDE installer (Cursor, Gemini CLI, OpenCode, Windsurf, OpenClaw,
Codex CLI, MCP integrations) now runs inside a clack task spinner with
per-step progress messages instead of silent dynamic-import + cpSync.
Pre-overwrite worker shutdown (up to 10s) and the post-install health
probe (up to 3s) also get spinners.

Internal console.log/error/warn from each IDE installer is buffered
during the spinner; if the install fails, captured output is replayed
afterward via log.warn so users can see what broke.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(review): observation count + IDE pre-selection regressions

WelcomeCard's "no observations yet" empty state was triggered when a
project filter narrowed the feed to zero rows, even with thousands of
observations elsewhere. Source the count from global stats.database
to match firstObservationAt's scope.

Restore initialValues: [] in the IDE multiselect — pre-selecting every
detected IDE was the exact regression #2106 was filed for.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): trichotomy worker state + cache fallback for script path

ensureWorkerStarted now returns 'ready' | 'warming' | 'dead' instead of
boolean. The spawned-but-still-warming case (common in Docker cold
starts and slow first-time inits) was being misreported as 'did not
start', which contradicted the next-steps panel saying 'still starting
up'. Install task message and Next Steps headline now agree on the
actual state.
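The trichotomy reduces to a pure decision (an illustrative sketch; the real `ensureWorkerStarted` polls a health endpoint against a deadline before classifying):

```typescript
type WorkerStartState = "ready" | "warming" | "dead";

// Spawned-but-still-warming is not a failure — it gets its own state
// so the install task message and Next Steps headline can agree.
function classifyWorkerStart(
  healthProbeSucceeded: boolean,
  processSpawned: boolean,
): WorkerStartState {
  if (healthProbeSucceeded) return "ready";
  return processSpawned ? "warming" : "dead";
}
```
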

Also fixes the actual root cause of 'Worker did not start' on
claude-code-only installs: the worker script path was hardcoded to the
marketplace dir, which is left empty when no non-claude-code IDE is
selected. Now falls back to pluginCacheDirectory(version) when the
marketplace copy isn't present.

Verified end-to-end in docker/claude-mem with --ide claude-code,
--ide cursor, and a fresh container — install task and headline
agree on 'Worker ready at http://localhost:<port>' in all cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: align CLAUDE.md and public docs with current code

Sweep across CLAUDE.md and 10 high-traffic docs/public/ MDX files to
remove point-in-time references and align with the actual current
shape of the codebase. Highlights:

- Hardcoded port 37777 → per-user formula (37700 + uid % 100) on the
  front-door pages (introduction, installation, configuration,
  architecture/overview, architecture/worker-service, troubleshooting,
  hooks-architecture, platform-integration).
- Default model 'sonnet' → 'claude-haiku-4-5-20251001' (matches
  SettingsDefaultsManager).
- Node 18 → 20 (matches package.json engines).
- Lifecycle hook count corrected (5 events).
- Removed the nonexistent 'Smart Install' component and pre-built
  directory tree referencing files that no longer exist
  (context-hook.ts, save-hook.ts, cleanup-hook.ts, etc.); replaced
  with the real worker dispatcher shape.
- Removed CLAUDE.md '#2101' issue tag (kept the design rationale).
- Replaced obsolete hooks.json example with a description of the real
  bun-runner.js / worker-service.cjs hook event shape.

Lower-traffic doc pages still hardcode 37777 — left for a separate
global pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): land strip-comments around real parsers (postcss, remark, parse5)

Each language gets a real parser to locate comments, then we splice the
comment ranges out of the original source. The tool never serializes the
AST back to text — that's how remark-stringify produced 243 reformat-noise
diffs in the first attempt, versus the 21 real strip targets here.

  JS/TS/JSX  -> ts.createSourceFile + getLeadingCommentRanges
  CSS/SCSS   -> postcss.parse + walkComments + node.source offsets
  MD/MDX     -> remark-parse (+ remark-mdx) + AST html / mdx-expression nodes
  HTML       -> parse5 with sourceCodeLocationInfo
  shell/py   -> kept hand-rolled hash stripper (no library worth the dep)

Preserves: shebangs, @ts-* directives, eslint-disable, biome-ignore,
prettier-ignore, triple-slash refs, webpack magic, /*! license keep,
@strip-comments-keep file marker. JS/TS handler runs a parse-roundtrip
check and refuses to write if syntax errors increased (catches the
worker-utils.ts breakage class from the 2026-04-29 attempt).
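The range-splice step shared by every handler above can be sketched independently of any one parser (ranges use `pos`/`end` offsets, end-exclusive, as TypeScript's comment ranges do):

```typescript
interface CommentRange {
  pos: number; // start offset, inclusive
  end: number; // end offset, exclusive (TypeScript convention)
}

// Splice back-to-front so earlier offsets stay valid; never re-serialize
// the AST — the original bytes outside the ranges are untouched.
function spliceRanges(source: string, ranges: CommentRange[]): string {
  let out = source;
  for (const { pos, end } of [...ranges].sort((a, b) => b.pos - a.pos)) {
    out = out.slice(0, pos) + out.slice(end);
  }
  return out;
}
```
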

npm scripts:
  strip-comments         (apply)
  strip-comments:check   (CI-style, exits non-zero if changes needed)
  strip-comments:dry-run (list, no writes)

Verified --check on this repo: 21 changes, -4.0% bytes, no parse-error
regressions, no reformat-suspect false positives.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide via parser-backed tool

21 files changed, -17,550 bytes (-4.0%) of narrative comments removed
across .ts / .tsx / .js / .mjs and the .gitignore. JS/TS comments stripped
via ts.createSourceFile + getLeadingCommentRanges — same canonical lexer,
same behavior as the 2026-04-29 strip, no reformat noise.

Preexisting baseline (unchanged):
  typecheck: 16 errors at HEAD, 16 errors after strip (line numbers shift,
             no new error classes — verified via diff of sorted error lists)
  build:     fails at HEAD with CrushHooksInstaller.js unresolved import
             (preexisting, unrelated to this strip)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): drop Crush integration references after extract

The Crush integration was extracted to its own branch on May 1, but the
import at install.ts:280 (and the case block + ide-detection entry +
McpIntegrations config + npx-cli help text) still referenced the now-
removed CrushHooksInstaller.js, breaking the build.

Removes:
- case 'crush' block in install.ts
- crush entry in ide-detection.ts
- CRUSH_CONFIG and registration in McpIntegrations.ts
- 'crush' from the IDE Identifiers help line in index.ts

Rebuilds worker-service.cjs to match.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(banner): mark generated banner-frames.ts with @strip-comments-keep

Without this, every build/strip cycle ping-pongs five lines of doc
comments in and out of the auto-generated output. The keep-marker tells
strip-comments.ts to skip the file entirely.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): drop banner-frame regen from build script

generate-banner-frames.mjs requires PNG frames in /tmp/cmem-banner-frames
that only exist after the maintainer runs ffmpeg locally on the source
video. CI has neither the video nor the frames, so the build broke on
Windows. The output (src/npx-cli/banner-frames.ts) is committed, so the
regen is a one-shot dev step — not a build step. Run the script directly
when the video changes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): unstick the spinner — kill claim-self-lock, wake on fail, auto-broadcast

Three surgical changes that cure the stuck-spinner bug at the source.

Phase 1.1 (L9): claimNextMessage no longer self-excludes its own worker_pid.
A single UPDATE-RETURNING grabs the oldest pending row by id. Removes the
LiveWorkerPidsProvider plumbing that was never injected — Supervisor enforces
single-worker via PID file, so the multi-worker SQL was defending against a
configuration the project does not support.
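The new claim semantics can be modeled in memory (the real code is a single `UPDATE ... RETURNING` statement; this sketch mirrors the behavior, not the implementation):

```typescript
interface PendingMessage {
  id: number;
  status: "pending" | "processing" | "failed";
}

// No worker_pid self-exclusion: just the oldest pending row by id.
function claimNextMessage(rows: PendingMessage[]): PendingMessage | null {
  const oldest = rows
    .filter((r) => r.status === "pending")
    .sort((a, b) => a.id - b.id)[0];
  if (!oldest) return null;
  oldest.status = "processing"; // the real UPDATE ... RETURNING does this atomically
  return oldest;
}
```
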

Phase 1.2 (L19): SessionManager.markMessageFailed wraps PendingMessageStore.markFailed
and emits 'message' on the per-session EventEmitter. The iterator's waitForMessage
now wakes immediately on re-pend instead of parking for 3 minutes. ResponseProcessor
and SessionRoutes routed through the new wrapper.

Phase 1.3 (L24): PendingMessageStore takes an optional onMutate callback fired
from every mutator (enqueue, claimNextMessage, confirmProcessed, markFailed,
transitionMessagesTo, clearFailedOlderThan). SessionManager wires it; WorkerService
passes broadcastProcessingStatus. Ten manual broadcast calls deleted across
SessionCleanupHelper, SessionEventBroadcaster, SessionRoutes, DataRoutes, and
worker-service. Caller discipline becomes structurally impossible to forget.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): delete dead code — legacy routes, processPendingQueues, decorative guards

Pure deletions. Phase 2 of kill-the-asshole-gates.

- Legacy /sessions/:sessionDbId/* routes (handleSessionInit, handleObservations,
  handleSummarize, handleSessionStatus, handleSessionDelete, handleSessionComplete)
  bypassed all five ingest gates and were a parallel write path. Folded the
  initializeSession + broadcastNewPrompt + syncUserPrompt + ensureGeneratorRunning
  + broadcastSessionStarted work into the canonical /api/sessions/init handler so
  the hook makes one round trip instead of two.
- processPendingQueues (~104 lines, zero callers) — replaced in Phase 6 by a
  one-statement startup sweep.
- spawnInProgress Map and crashRecoveryScheduled Set — decorative dedupe over
  generatorPromise and stillExists checks that already provide the real safety.
- STALE_GENERATOR_THRESHOLD_MS — pre-empted live generators and raced with the
  finally block; the 3min idle timeout already kills zombies.
- MAX_SESSION_WALL_CLOCK_MS — ran a SELECT on every observation to enforce 24h.
  Runaway-spend protection lives in the API key, not in claude-mem.
- Missing-id 400 in shared.ts ingestObservation — Zod already enforces min(1)
  on contentSessionId and toolName at the route schema.
- SessionCompletionHandler import + completionHandler field on SessionRoutes
  (orphaned after handler deletions).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): SQL-backed getTotalQueueDepth — single source of truth

Was: iterate this.sessions.values() and sum getPendingCount per session.
Now: SELECT COUNT(*) FROM pending_messages WHERE status IN ('pending','processing').

The in-memory sessions Map drifted from the DB rows whenever a generator exited
without confirm/fail, leading to false-positive isProcessing in the UI. Phase 1.3's
auto-broadcast fires on every mutation, but it broadcast a stale Map count.
Reading from the DB makes the UI's spinner state match what the queue actually holds.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): typed abortReason replaces wasAborted boolean

Was: a boolean wasAborted that lumped every abort together. The finally block
branched on !wasAborted, so any abort skipped restart — including idle aborts
with pending work, which is exactly the case where we DO want to restart.

Now: ActiveSession.abortReason is a typed enum 'idle' | 'shutdown' | 'overflow'
| 'restart-guard'. The finally block consumes the reason and only skips restart
for 'shutdown' and 'restart-guard'. Idle and overflow aborts fall through, so
if pending work exists they trigger restart correctly.

Dropped 'stale' and 'wall-clock' from the union — Phase 2 deleted those paths.
Natural-completion abort (post-success) intentionally has no reason; it's not
gating restart logic.
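The finally-block decision described above reduces to (a sketch; `shouldRestart` is an illustrative name):

```typescript
type AbortReason = "idle" | "shutdown" | "overflow" | "restart-guard";

// Only deliberate stops suppress restart; any other exit restarts
// iff pending work remains. Natural completion passes undefined.
function shouldRestart(reason: AbortReason | undefined, pendingCount: number): boolean {
  if (reason === "shutdown" || reason === "restart-guard") return false;
  return pendingCount > 0;
}
```
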

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): unify the two generator-exit finally blocks

Was: worker-service.ts:startSessionProcessor and SessionRoutes:ensureGeneratorRunning
each had their own ~70-line finally block with divergent restart-guard handling.
The worker-service path called terminateSession on RestartGuard trip and orphaned
pending rows (the L16 bug); the SessionRoutes path drained them. Two places to
update when rules changed.

Now: handleGeneratorExit in src/services/worker/session/GeneratorExitHandler.ts
owns the contract:
  1. Always kill the SDK subprocess if alive.
  2. Always drain processingMessageIds via sessionManager.markMessageFailed
     (which wakes the iterator — Phase 1.2).
  3. shutdown / restart-guard reasons: drain pending rows via
     transitionMessagesTo('failed'), finalize, remove from Map. Fixes L16.
  4. pendingCount=0: finalize normally and remove from Map.
  5. pendingCount>0: backoff respawn via per-session respawnTimer (no global Set;
     Phase 2.4 deleted that). RestartGuard trip drains to 'abandoned'.

Both finally blocks are now ~10-line wrappers that translate local state into the
canonical abortReason and delegate. Restored completionHandler injection into
SessionRoutes (was dropped in Phase 2 cleanup; needed by the unified helper for
finalizeSession).

Behavior change: SessionRoutes' previous "keep idle session in memory" was
deliberately replaced by the plan's "remove from Map on natural completion" —
next observation reinitializes via getMessageIterator → initializeSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(worker): startup orphan sweep — reset 'processing' rows at boot

When the worker dies (crash, kill, restart), any pending_messages rows it left
in 'processing' state are by definition orphans (the only worker is dead).
Single SQL UPDATE at boot resets them to 'pending' so the iterator can claim
them again. Replaces the deleted processPendingQueues function (Phase 2.2).

Runs in initializeBackground after dbManager.initialize() and before the
initializationComplete middleware releases blocked HTTP requests, so no
in-flight request can race the sweep. NOT on a periodic timer — after boot,
every 'processing' row has a live consumer and a periodic sweep would race.
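The sweep itself is one pass over the rows (modeled in memory here; the real code is a single SQL UPDATE at boot):

```typescript
interface QueueRow {
  id: number;
  status: "pending" | "processing" | "failed";
}

// At boot the only worker is this one, so every 'processing' row is an
// orphan left by a dead worker; reset it so the iterator can claim it again.
function sweepOrphans(rows: QueueRow[]): number {
  let reset = 0;
  for (const row of rows) {
    if (row.status === "processing") {
      row.status = "pending";
      reset++;
    }
  }
  return reset;
}
```
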

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): simplify enqueue catch, replace memorySessionId throw with re-pend

7.1: queueObservation's catch was logging two ERROR-level messages and rethrowing.
The rethrow is correct (FK violations / disk full / schema drift should crash
loudly), but the verbose ERROR logging pretended the error was recoverable.
Reduced to one INFO line + rethrow.

7.2: ResponseProcessor's memorySessionId guard was throwing if the SDK hadn't
included session_id on the first user-yield, terminal-failing the entire batch.
Now warns and re-pends in-flight messages via sessionManager.markMessageFailed
(which wakes the iterator — Phase 1.2). The next iteration tries again with
memorySessionId hopefully captured.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): mirror builds to installed-version cache for hot reload

When package.json bumps past Claude Code's installed pin, sync-marketplace
wrote new code to cache/<buildVersion>/ but the worker loaded from
cache/<installedVersion>/, so worker:restart reloaded the same old code.

Replace the exit-on-mismatch preflight with a mirror step: when versions
differ, also rsync plugin/ into cache/<installedVersion>/ so worker:restart
hot-reloads new code without a Claude Code session restart. The
build-version cache still gets written for the eventual
`claude plugin update`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: delete dead barrel files and orphan utilities

- src/sdk/index.ts (re-exports parser+prompts; nothing imported the barrel)
- src/services/Context.ts (re-exports ./context/index.js; no importers)
- src/services/integrations/index.ts (no importers)
- src/services/worker/Search.ts (3-line barrel of ./search/index.js)
- src/services/infrastructure/index.ts: drop CleanupV12_4_3 re-export
- src/utils/error-messages.ts (getWorkerRestartInstructions never imported)
- src/types/transcript.ts (170 LoC of types, zero importers)
- src/npx-cli/_preview.ts (banner dev preview, no script wires it)

Build + tests still pass; observations still flowing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(parser): drop unused detectLanguage

Only the user-grammar-aware variant detectLanguageWithUserGrammars()
is actually called.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(types): drop unused SdkSessionRecord + ObservationWithContext

Both interfaces in src/types/database.ts had zero importers anywhere
in src or tests.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): drop unused getDetectedIDEs + claudeMemDataDirectory

getDetectedIDEs has no callers — install.ts uses detectInstalledIDEs
directly. claudeMemDataDirectory has no callers either.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ProcessManager): drop dead orphan-reaper + signal-handler helpers

Each had zero callers in src/ or tests/:
  - cleanupOrphanedProcesses + enumerateOrphanedProcesses
  - ORPHAN_PROCESS_PATTERNS + ORPHAN_MAX_AGE_MINUTES
  - forceKillProcess
  - waitForProcessesExit
  - createSignalHandler
  - resetWorkerRuntimePathCache

The orphan reaper was retired in PATHFINDER Plan 02 ("OS process groups
replace hand-rolled reapers", commit 94d592f2) — these were the leftover
pieces. shutdown.ts uses the supervisor's own kill-pgid path instead.

parseElapsedTime kept (covered by tests/infrastructure/process-manager.test.ts).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete 11 unreferenced DX/forensic scripts

None of these are referenced by package.json npm scripts or docs/.
All last touched on Apr 29 only as part of the comment-stripping
pass — the feature code itself is older and orphaned:

  analyze-transformations-smart.js
  debug-transcript-structure.ts
  dump-transcript-readable.ts
  endless-mode-token-calculator.js
  extract-prompts-to-yaml.cjs
  extract-rich-context-examples.ts
  find-silent-failures.sh
  fix-all-timestamps.ts
  format-transcript-context.ts
  test-transcript-parser.ts
  transcript-to-markdown.ts

These are standalone tools — runtime behavior unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete unused extraction/ and types/ subdirs

- scripts/extraction/{extract-all-xml.py, filter-actual-xml.py, README.md}
  point at ~/Scripts/claude-mem/ — the user's pre-relocation path that no
  longer exists. Zero references in package.json, src/, or tests/.
- scripts/types/export.ts duplicates ObservationRecord etc. and has no
  importers (CodexCliInstaller imports transcripts/types, not this).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(BranchManager): drop dead getInstalledPluginPath

OpenCodeInstaller has its own (used) getInstalledPluginPath; the
BranchManager copy never had any external callers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ChromaSyncState): unexport DocKind (used internally only)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(gemini): drop stale earliestPendingTimestamp / processingMessageIds

Both fields were removed from ActiveSession in earlier queue-engine
cleanup. Tests had been silently keeping them because the mock sessions
use 'as any' to bypass strict typing, so the dead fields rode along
without complaint.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 3 unused module-level constants

- src/npx-cli/banner.ts: CURSOR_HOME, CLEAR_DOWN (banner uses
  CLEAR_SCREEN which combines clear-down + cursor-home into a single
  CSI sequence; the standalone constants were leftovers).
- src/services/worker/BranchManager.ts: DEFAULT_SHELL_TIMEOUT_MS
  (BranchManager only uses GIT_COMMAND_TIMEOUT_MS / NPM_INSTALL_TIMEOUT_MS).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(opencode-plugin): drop dead workerPost helper

Only the fire-and-forget variant (workerPostFireAndForget) is actually
called. workerPost was the await-result version with no remaining caller.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 8 truly-unused interface fields

Verified each by grepping for `.field`, `"field"`, `'field'`, and
`field:` patterns across src/ + tests/ + plugin/scripts. Where the
only remaining usage was the assignment site, removed the assignments too.

- GitHubStarsData: watchers_count, forks_count (only stargazers_count read)
- TableColumnInfo: dflt_value (PRAGMA returns it but no caller reads it)
- IndexInfo: seq (PRAGMA returns it but no caller reads it)
- ObservationRecord: source_files (legacy field, no readers)
- HookResult.hookSpecificOutput: permissionDecisionReason
- WatchTarget: rescanIntervalMs (set in config, never read)
- ShutdownResult: confirmedStopped (write-only — assigned but no
  reader; updated all 3 return sites to drop it)
- ModePrompts: language_instruction (multilingual support never wired)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): reuse InstallOptions type instead of inline duplicate

parseInstallOptions had its return type written out inline as an
anonymous duplicate of InstallOptions. Use the canonical type
(import type — zero bundle cost).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(integrations): drop unused Platform type alias

The detectPlatform() function that returned this type was deleted earlier
in the branch (along with getScriptExtension that consumed it). The type
itself outlived its consumer; only string literals "Platform:" survive in
console.log diagnostics, which don't reference the alias.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): broadcast processing_status when summarize is queued

broadcastSummarizeQueued was an empty no-op even though
handleSummarizeByClaudeId calls it after enqueueing. The PendingMessageStore
onMutate callback already fires broadcastProcessingStatus on enqueue, but
calling it explicitly from broadcastSummarizeQueued ensures the spinner
ticks the moment a summary is requested, even if the onMutate chain has
a timing race.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): keep spinner on while summary generates

ClaudeProvider's SDK can pull multiple synthetic prompts (e.g.
observation + summarize) before producing responses. Each pull pushed
an ID to session.processingMessageIds. When the SDK's first
observation response came back, ResponseProcessor.confirmProcessed
deleted ALL pending message rows — including the still-in-flight
summary — so getTotalQueueDepth dropped to 0 and the spinner turned
off, even though the summary took another ~22s to actually generate.

Tag each in-flight message with its type ({id, type}) so the response
processor can pop only the FIFO message of the matching type
(observation vs summarize). The summary row stays in 'processing'
until its own response arrives, keeping the spinner lit through the
entire summary window.

Also updates Gemini/OpenRouter providers and GeneratorExitHandler for
the new shape.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): clear summary from queue on any SDK response

Switch ResponseProcessor from type-aware FIFO matching to strict FIFO
popping (each SDK response → 1 in-flight message consumed). This way
the summary always clears when the SDK responds, even when the
response is unparseable or the summary doesn't actually generate
content — preventing stuck spinner / queue-depth-stuck-at-1.

Spinner behavior is preserved: messages enqueued after the summary
keep the queue depth elevated, and only when the SDK has responded
to every prompt does the queue drain to zero.

Also: when the consumed message is a 'summarize' and parsing fails,
treat it as best-effort and confirmProcessed (no retry) — summaries
that can't be parsed shouldn't keep retrying.
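The strict-FIFO rule can be sketched as follows (a minimal model with illustrative names; the retry behavior for unparseable observations is an assumption based on the re-pend fix later in this PR, not a verbatim copy of ResponseProcessor):

```typescript
// Toy model of strict FIFO consumption: each SDK response consumes exactly
// one in-flight message, oldest first, regardless of type.
interface InFlightMsg {
  id: number;
  type: "observation" | "summarize";
}

function consumeOnResponse(
  inFlight: InFlightMsg[],
  parsedOk: boolean
): { consumed: InFlightMsg | undefined; retry: boolean } {
  const consumed = inFlight.shift(); // one response -> one message, FIFO
  if (!consumed) return { consumed, retry: false };
  // Summaries are best-effort: an unparseable summary is confirmed, never
  // retried. Other message types may be retried on parse failure.
  const retry = !parsedOk && consumed.type !== "summarize";
  return { consumed, retry };
}
```

Because every response pops exactly one message, the queue depth drains to zero only once the SDK has answered every prompt, which is what keeps the spinner honest.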

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): redesign welcome card and remove source filters

The first-start welcome card now explains the three feed card types
(observation/summary/prompt) with color-coded badges, points users at
the gear icon for settings and the project dropdown for filtering, and
plugs /mem-search for recall — replacing the old two-line "ask:" prompts.

Source filter tabs (Claude/Codex/etc.) are removed from the header.
Filtering by AI provider was nonsense from a user POV; the project
dropdown is the only header filter now. Source tracking is also
stripped from useSSE, usePagination, App state, and CSS.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): keep welcome card in feed column, swap rows for 3 squares

Two visible problems in the previous design: the card stretched
edge-to-edge while feed cards sit in a centered 650px column, and
the body was a stack of long horizontal rows that scanned line-by-line.

Both fixed: Feed now accepts a pinnedTop slot so the welcome card
renders inside the same .feed-content column as observation cards.
Body is now a 3-column grid of square feature blocks — Live feed,
Tune it, Recall it — each with a custom inline SVG illustration
(stacked cards with color-coded stripes, gear+sliders, magnifier
over cards). Old text-row sections (welcome-card-types,
welcome-card-tips, welcome-card-section, welcome-card-tip-icon)
are removed. Squares stack to one column under 600px.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): convert welcome card to glassy modal with stylized logo

Card now opens as a centered modal with a frosted/glass backdrop
(blur + saturate) so it doubles as a proper help dialog when reopened
from the header's question-mark button. Removed the observation count,
project count, and "since" date — those don't make sense for a
first-launch surface and felt out of place in a help context.

Header art swapped from the small webp logomark to the new
high-resolution sun/sunburst PNG (claude-mem-logo-stylized.png),
shipped as a checked-in asset in src/ui and plugin/ui.

Bigger throughout: 28px h2, 16px tagline, 88px illustrations,
26px feature padding, 1:1 aspect-ratio squares. Backdrop click and
Esc both close. Mobile collapses the grid to one column and drops
the aspect-ratio constraint.

Reverted the unused pinnedTop slot on Feed.tsx since the welcome
card is now a true overlay rather than an in-feed pinned card.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): make welcome modal actually glassy

Previous version had a 55%-opacity black backdrop that almost fully
blocked the underlying UI — the "glass" was just a dark plate.

Now the backdrop is fully transparent (no darkening at all), the
panel itself drops to 55% bg-card opacity with its existing
backdrop-filter blur(28px) saturate(170%), and the feature squares
drop to 35% bg-tertiary so they layer as glass-on-glass over the
already-blurred panel. The header and feed below now read clearly
through the modal's frosted blur.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): bulletproof square features via padding-bottom + clamp() fluid type

Squares were rendering taller than wide because aspect-ratio is treated
as a minimum — content can push the box past 1:1. Switched to the
classic padding-bottom: 100% trick: percentage padding resolves against
the parent's width, so the box is ALWAYS W × W regardless of content.
Inner content sits in an absolutely-positioned flex column that can't
push the shell taller.

Whole modal is now desktop-first and fluid via clamp() — no media-query
stair-steps for type, padding, gaps, border-radius, illustration size,
or modal width. Single mobile breakpoint at <600px collapses the grid
to one column and reverts the padding-bottom trick so each feature can
grow to natural content height.

Tightened the three feature descriptions so they fit comfortably inside
the square at the desktop size.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* style(viewer): 15% black overlay + heavier modal shadow for elevation

Backdrop goes from transparent to rgba(0,0,0,0.15) — just enough
darkening to push the modal visually forward without burying the
underlying UI. Modal shadow stacked: 40px/120px ambient + 16px/48px
contact, both deeper, plus the existing inset 1px highlight.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): clear pending_messages queue on build-and-sync

Rewrites scripts/clear-failed-queue.ts to talk directly to SQLite via
bun:sqlite — the previous HTTP endpoints (/api/pending-queue/*) were
removed during the queue engine rewrite, so the script was orphaned.
Wires `npm run queue:clear` into `build-and-sync` so each rebuild
starts with a clean queue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): collapse parser to binary valid/invalid + clearPendingForSession model

- Parser: { valid: true, observations, summary } | { valid: false } — drops kind/skipped enum dispatch
- ResponseProcessor: two branches only (parseable → store + clearPendingForSession; else → no-op)
- Drop processingMessageIds + per-message claim/confirm/markFailed lifecycle across 3 providers
- PendingMessageStore: 226 → 140 lines; remove markFailed/transitionMessagesTo/confirmProcessed/clearFailedOlderThan/getAllPending (peekPendingTypes is kept)
- Schema migration v31+v32: drop retry_count, failed_at_epoch, completed_at_epoch, worker_pid columns
- SessionQueueProcessor: delete two 1s recovery sleeps (let iterator end on error)
- Server.ts/SettingsRoutes.ts: replace four magic-number setTimeout exit-flush patterns with flushResponseThen helper
- GeneratorExitHandler: 183 → 117 lines (drain in-flight loop gone)

Net: -181 lines. No more silent data loss via maxRetries=3.
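The collapsed parser contract is a single discriminated union. A minimal sketch (field names follow the commit's description; the parse body is a stand-in, not the real XML parser):

```typescript
// Binary parser result: either a valid payload or not. The old
// kind/skipped enum dispatch collapses into this one union.
type ParseResult =
  | { valid: true; observations: string[]; summary: string | null }
  | { valid: false };

function parseResponse(text: string): ParseResult {
  // Stand-in for the real XML parse: treat non-blank text as one observation.
  const trimmed = text.trim();
  if (trimmed.length === 0) return { valid: false };
  return { valid: true, observations: [trimmed], summary: null };
}
```

Callers then have exactly two branches: parseable (store and clear) or not (no-op), which is what lets ResponseProcessor shed its per-message lifecycle machinery.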

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): address review comments batch 1

- install.ts: needsMarketplace true when claude-code selected (P1, was no-op)
- install.ts: throw on invalid --model so CLI exits non-zero
- install.ts: skip worker health checks + adapt next-step copy when --no-auto-start
- install.ts: repair regenerates plugin cache when missing
- index.ts: readFlag rejects missing/flag-shaped values
- index.ts: route flag-first invocations (e.g. `--provider claude`) to install
- banner.ts: fail-open if frame payload decode throws
- SearchRoutes.ts: 5s TTL cache for settings reads on hot hook path (P2)
- detect-error-handling-antipatterns.ts: trailing-brace strip whitespace-tolerant
- investigate-timestamps.ts: compute Dec 2025 epochs at runtime (was Dec 2024)
- regenerate-claude-md.ts: include workingDir in fallback walker so root is covered
- sync-marketplace.cjs: parseWorkerPort validates 1..65535 before http.request
- sync-to-marketplace.sh: resolve SOURCE_DIR from script location, not cwd
- Dockerfile.test-installer: bash --login sources .bashrc via .bash_profile
- docs/configuration.mdx: drop nonexistent .worker.port file refs, use settings.json
- docs/architecture-overview.md: dynamic port + queue model after parser collapse
- docs/architecture/worker-service.mdx: dynamic port example + drop port-file claim
- docs/platform-integration.mdx: WORKER_BASE_URL pattern, drop hardcoded 37777
- install/public/install.sh: Node 20 floor (was 18) to match docs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): reset claimed messages to pending on early-return paths

ResponseProcessor returns early in two cases:
- parser invalid (unparseable response)
- memorySessionId not yet captured

Both paths previously left the just-claimed message in `status='processing'`,
which counts toward `getPendingCount`. The generator-exit handler then sees
`pendingCount > 0` and respawns the generator, looping until the restart
guard trips and `clearPendingForSession` deletes the message — silent data
loss.

Calling `resetProcessingToPending` on these paths lets the next generator
pass re-claim the message and try again, instead of burning the restart
budget on no-op respawns.
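The respawn loop the fix avoids can be modeled in a few lines (a toy model with assumed names and a made-up restart budget of 3; the real guard lives in the generator-exit handler):

```typescript
// Without the reset, the bail-out pass leaves the row in 'processing':
// every later pass finds nothing claimable, respawns for nothing, and the
// restart guard eventually trips and deletes the row (silent data loss).
// With the reset, the next pass re-claims the same row and can retry it.
type Outcome = "processed" | "lost";

interface Msg {
  status: "pending" | "processing" | "done";
}

function runPasses(
  msg: Msg,
  recoversAtPass: number, // pass index at which the transient condition clears
  resetOnBail: boolean,
  budget = 3
): Outcome {
  for (let pass = 0; pass < budget; pass++) {
    if (msg.status !== "pending") continue; // stuck row: respawn is a no-op
    msg.status = "processing"; // claim
    if (pass >= recoversAtPass) {
      msg.status = "done"; // response parsed, row cleared
      return "processed";
    }
    if (resetOnBail) msg.status = "pending"; // the fix: re-claimable next pass
  }
  return "lost"; // restart guard trips; clearPendingForSession deletes the row
}
```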

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): swebench fallback row + troubleshooting port path

- evals/swebench/run-batch.py: append fallback prediction row when
  orchestrator future raises, preserving "never drop an instance" guarantee
- docs/troubleshooting.mdx: drop nonexistent .worker.port / worker.port file
  references; use settings.json + /api/health for port discovery

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): memoize per-project observation count for welcome-hint hot path

handleContextInject runs on every PostToolUse hook (after every Read/Edit).
The welcome-hint block ran a COUNT(*) on observations for every call once
CLAUDE_MEM_WELCOME_HINT_ENABLED was true. Observation counts are
monotonically increasing — once a project has any observations it always
will — so cache the positive result in a Set and skip the COUNT(*) on
subsequent requests.

Combined with the 5s settings TTL added earlier, the steady-state cost on
the hook hot path drops to a Set lookup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): use clearProcessingForSession on AI-success path

clearPendingForSession deletes ALL rows for the session. On the success
path of processAgentResponse, that's wrong: messages that arrived as
'pending' during the (1-5s) AI response latency get deleted along with
the 'processing' row we just consumed. In a hook burst (three quick
PostToolUse hooks), B and C land while A is in flight; A's success then
nukes B and C — silent data loss.

Add a status-scoped clearProcessingForSession to PendingMessageStore +
SessionManager, and use it in ResponseProcessor's success path. The
unconditional clearPendingForSession remains correct in
GeneratorExitHandler for hard-stop / restart-guard-trip paths.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Revert "fix(pr-2255): use clearProcessingForSession on AI-success path"

This reverts commit a08995299c30cbad36bddc3e5bddda7af8604b35.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Authored by Alex Newman on 2026-05-02 16:05:56 -07:00, committed by GitHub.
parent 28b40c05f2
commit 9e2973059a
452 changed files with 6189 additions and 21059 deletions
-7
@@ -1,8 +1,4 @@
#!/usr/bin/env node
/**
* Cleanup duplicate observations and summaries from the database
* Keeps the earliest entry (MIN(id)) for each duplicate group
*/
import { SessionStore } from '../services/sqlite/SessionStore.js';
@@ -11,7 +7,6 @@ function main() {
const db = new SessionStore();
// Find and delete duplicate observations
console.log('Finding duplicate observations...');
const duplicateObsQuery = db['db'].prepare(`
@@ -46,7 +41,6 @@ function main() {
deletedObs += deleteIds.length;
}
// Find and delete duplicate summaries
console.log('\n\nFinding duplicate summaries...');
const duplicateSumQuery = db['db'].prepare(`
@@ -92,7 +86,6 @@ function main() {
console.log('='.repeat(60));
}
// Run if executed directly
if (import.meta.url === `file://${process.argv[1]}`) {
main();
}
-53
@@ -1,8 +1,4 @@
#!/usr/bin/env node
/**
* Import XML observations back into the database
* Parses actual_xml_only_with_timestamps.xml and inserts observations via SessionStore
*/
import { readFileSync, readdirSync } from 'fs';
import { join } from 'path';
@@ -39,10 +35,6 @@ interface TimestampMapping {
[timestamp: string]: SessionMetadata;
}
/**
* Build a map of timestamp (rounded to second) -> session metadata by reading all transcript files
* Since XML timestamps are rounded to seconds, we map by second
*/
function buildTimestampMap(): TimestampMapping {
const transcriptDir = join(homedir(), '.claude', 'projects', '-Users-alexnewman-Scripts-claude-mem');
const map: TimestampMapping = {};
@@ -76,7 +68,6 @@ function buildTimestampMap(): TimestampMapping {
const project = data.cwd;
if (timestamp && sessionId) {
// Round timestamp to second for matching with XML timestamps
const roundedTimestamp = new Date(timestamp);
if (Number.isNaN(roundedTimestamp.getTime())) {
continue;
@@ -84,7 +75,6 @@ function buildTimestampMap(): TimestampMapping {
roundedTimestamp.setMilliseconds(0);
const key = roundedTimestamp.toISOString();
// Only store first occurrence for each second (they're all the same session anyway)
if (!map[key]) {
map[key] = { sessionId, project };
}
@@ -96,18 +86,12 @@ function buildTimestampMap(): TimestampMapping {
return map;
}
/**
* Parse XML text content and extract tag value
*/
function extractTag(xml: string, tagName: string): string {
const regex = new RegExp(`<${tagName}>([\\s\\S]*?)</${tagName}>`, 'i');
const match = xml.match(regex);
return match ? match[1].trim() : '';
}
/**
* Parse XML array tags (facts, concepts, files, etc.)
*/
function extractArrayTags(xml: string, containerTag: string, itemTag: string): string[] {
const containerRegex = new RegExp(`<${containerTag}>([\\s\\S]*?)</${containerTag}>`, 'i');
const containerMatch = xml.match(containerRegex);
@@ -128,11 +112,7 @@ function extractArrayTags(xml: string, containerTag: string, itemTag: string): s
return items;
}
/**
* Parse an observation block from XML
*/
function parseObservation(xml: string): ObservationData | null {
// Must be a complete observation block
if (!xml.includes('<observation>') || !xml.includes('</observation>')) {
return null;
}
@@ -148,7 +128,6 @@ function parseObservation(xml: string): ObservationData | null {
files_modified: extractArrayTags(xml, 'files_modified', 'file'),
};
// Validate required fields
if (!observation.type || !observation.title) {
return null;
}
@@ -156,11 +135,7 @@ function parseObservation(xml: string): ObservationData | null {
return observation;
}
/**
* Parse a summary block from XML
*/
function parseSummary(xml: string): SummaryData | null {
// Must be a complete summary block
if (!xml.includes('<summary>') || !xml.includes('</summary>')) {
return null;
}
@@ -174,7 +149,6 @@ function parseSummary(xml: string): SummaryData | null {
notes: extractTag(xml, 'notes') || null,
};
// Validate required fields
if (!summary.request) {
return null;
}
@@ -182,33 +156,22 @@ function parseSummary(xml: string): SummaryData | null {
return summary;
}
/**
* Extract timestamp from XML comment
* Format: <!-- Block N | 2025-10-19 03:03:23 UTC -->
*/
function extractTimestamp(commentLine: string): string | null {
const match = commentLine.match(/<!-- Block \d+ \| (.+?) -->/);
if (match) {
// Convert "2025-10-19 03:03:23 UTC" to ISO format
const dateStr = match[1].replace(' UTC', '').replace(' ', 'T') + 'Z';
return new Date(dateStr).toISOString();
}
return null;
}
/**
* Main import function
*/
function main() {
console.log('Starting XML observation import...\n');
// Build timestamp map
const timestampMap = buildTimestampMap();
// Open database connection
const db = new SessionStore();
// Create SDK sessions for all unique Claude Code sessions
console.log('\nCreating SDK sessions for imported data...');
const claudeSessionToSdkSession = new Map<string, string>();
@@ -216,7 +179,6 @@ function main() {
if (!claudeSessionToSdkSession.has(sessionMeta.sessionId)) {
const syntheticSdkSessionId = `imported-${sessionMeta.sessionId}`;
// Try to find existing session first
const existingQuery = db['db'].prepare(`
SELECT memory_session_id
FROM sdk_sessions
@@ -225,22 +187,18 @@ function main() {
const existing = existingQuery.get(sessionMeta.sessionId) as { memory_session_id: string | null } | undefined;
if (existing && existing.memory_session_id) {
// Use existing SDK session ID
claudeSessionToSdkSession.set(sessionMeta.sessionId, existing.memory_session_id);
} else if (existing && !existing.memory_session_id) {
// Session exists but memory_session_id is NULL, update it
db['db'].prepare('UPDATE sdk_sessions SET memory_session_id = ? WHERE content_session_id = ?')
.run(syntheticSdkSessionId, sessionMeta.sessionId);
claudeSessionToSdkSession.set(sessionMeta.sessionId, syntheticSdkSessionId);
} else {
// Create new SDK session
db.createSDKSession(
sessionMeta.sessionId,
sessionMeta.project,
'Imported from transcript XML'
);
// Update with synthetic SDK session ID
db['db'].prepare('UPDATE sdk_sessions SET memory_session_id = ? WHERE content_session_id = ?')
.run(syntheticSdkSessionId, sessionMeta.sessionId);
@@ -251,12 +209,10 @@ function main() {
console.log(`Prepared ${claudeSessionToSdkSession.size} SDK sessions\n`);
// Read XML file
const xmlPath = join(process.cwd(), 'actual_xml_only_with_timestamps.xml');
console.log(`Reading XML file: ${xmlPath}`);
const xmlContent = readFileSync(xmlPath, 'utf-8');
// Split into blocks by comment markers
const blocks = xmlContent.split(/(?=<!-- Block \d+)/);
console.log(`Found ${blocks.length} blocks in XML file\n`);
@@ -272,14 +228,12 @@ function main() {
continue;
}
// Extract timestamp from comment
const timestampIso = extractTimestamp(block);
if (!timestampIso) {
skipped++;
continue;
}
// Look up session metadata
const sessionMeta = timestampMap[timestampIso];
if (!sessionMeta) {
noSession++;
@@ -290,17 +244,14 @@ function main() {
continue;
}
// Get SDK session ID
const memorySessionId = claudeSessionToSdkSession.get(sessionMeta.sessionId);
if (!memorySessionId) {
skipped++;
continue;
}
// Try parsing as observation first
const observation = parseObservation(block);
if (observation) {
// Check for duplicate
const existingObs = db['db'].prepare(`
SELECT id FROM observations
WHERE memory_session_id = ? AND title = ? AND subtitle = ? AND type = ?
@@ -329,10 +280,8 @@ function main() {
continue;
}
// Try parsing as summary
const summary = parseSummary(block);
if (summary) {
// Check for duplicate
const existingSum = db['db'].prepare(`
SELECT id FROM session_summaries
WHERE memory_session_id = ? AND request = ? AND completed = ? AND learned = ?
@@ -361,7 +310,6 @@ function main() {
continue;
}
// Neither observation nor summary - skip
skipped++;
}
@@ -379,7 +327,6 @@ function main() {
console.log('='.repeat(60));
}
// Run if executed directly
if (import.meta.url === `file://${process.argv[1]}`) {
main();
}
-9
@@ -1,11 +1,6 @@
import type { PlatformAdapter, NormalizedHookInput, HookResult } from '../types.js';
import { AdapterRejectedInput, isValidCwd } from './errors.js';
// Maps Claude Code stdin format (session_id, cwd, tool_name, etc.)
// SessionStart hooks receive no stdin, so we must handle undefined input gracefully
// Defensive cap: Claude Code's agent identifiers are short (e.g., "agent-abc123", "Explore").
// Ignore anything longer than 128 chars so a malformed payload cannot balloon DB rows.
const MAX_AGENT_FIELD_LEN = 128;
const pickAgentField = (v: unknown): string | undefined =>
typeof v === 'string' && v.length > 0 && v.length <= MAX_AGENT_FIELD_LEN ? v : undefined;
@@ -13,8 +8,6 @@ const pickAgentField = (v: unknown): string | undefined =>
export const claudeCodeAdapter: PlatformAdapter = {
normalizeInput(raw) {
const r = (raw ?? {}) as any;
// Plan 05 Phase 6 — cwd validation at the adapter boundary (single check,
// not duplicated in handlers). Falls back to process.cwd() when unset.
const cwd = r.cwd ?? process.cwd();
if (!isValidCwd(cwd)) {
throw new AdapterRejectedInput('invalid_cwd');
@@ -40,8 +33,6 @@ export const claudeCodeAdapter: PlatformAdapter = {
}
return output;
}
// Only emit fields in the Claude Code hook contract — unrecognized fields
// cause "JSON validation failed" in Stop hooks.
const output: Record<string, unknown> = {};
if (r.systemMessage) {
output.systemMessage = r.systemMessage;
-12
@@ -1,20 +1,10 @@
import type { PlatformAdapter, NormalizedHookInput, HookResult } from '../types.js';
import { AdapterRejectedInput, isValidCwd } from './errors.js';
// Maps Cursor stdin format - field names differ from Claude Code
// Cursor uses: conversation_id, workspace_roots[], result_json, command/output
// Handle undefined input gracefully for hooks that don't receive stdin
//
// Cursor payload variations (#838, #1049):
// Session ID: conversation_id, generation_id, or id
// Prompt: prompt, query, input, or message (varies by Cursor version/hook type)
// CWD: workspace_roots[0] or cwd
export const cursorAdapter: PlatformAdapter = {
normalizeInput(raw) {
const r = (raw ?? {}) as any;
// Cursor-specific: shell commands come as command/output instead of tool_name/input/response
const isShellCommand = !!r.command && !r.tool_name;
// Plan 05 Phase 6 — cwd validation at the adapter boundary.
const cwd = r.workspace_roots?.[0] ?? r.cwd ?? process.cwd();
if (!isValidCwd(cwd)) {
throw new AdapterRejectedInput('invalid_cwd');
@@ -27,13 +17,11 @@ export const cursorAdapter: PlatformAdapter = {
toolInput: isShellCommand ? { command: r.command } : r.tool_input,
toolResponse: isShellCommand ? { output: r.output } : r.result_json, // result_json not tool_response
transcriptPath: undefined, // Cursor doesn't provide transcript
// Cursor-specific fields for file edits
filePath: r.file_path,
edits: r.edits,
};
},
formatOutput(result) {
// Cursor expects simpler response - just continue flag
return { continue: result.continue ?? true };
}
};
-13
@@ -1,11 +1,3 @@
/**
* Adapter-layer rejection. Plan 05 Phase 6 (PATHFINDER-2026-04-22): cwd
* validation moves from per-handler `if (!cwd) throw …` to the adapter
* boundary. When normalization detects an invalid input, the adapter throws
* `AdapterRejectedInput`; the hook runner translates it into a graceful
* `{ continue: true }` so the user's session is never blocked by a malformed
* hook payload.
*/
export class AdapterRejectedInput extends Error {
constructor(public readonly reason: string) {
@@ -14,11 +6,6 @@ export class AdapterRejectedInput extends Error {
}
}
/**
* A cwd is valid when it is a non-empty string. The adapter normalizers fall
* back to `process.cwd()` when the inbound payload omits cwd, so the only way
* this returns false is when the payload supplies `null`/`''`/non-string.
*/
export function isValidCwd(cwd: unknown): cwd is string {
return typeof cwd === 'string' && cwd.length > 0;
}
+5 -47
@@ -1,45 +1,15 @@
import type { PlatformAdapter } from '../types.js';
import { AdapterRejectedInput, isValidCwd } from './errors.js';
/**
* Gemini CLI Platform Adapter
*
* Normalizes Gemini CLI's hook JSON to NormalizedHookInput.
* Gemini CLI supports 11 lifecycle hooks; we register 7:
*
* Lifecycle:
* SessionStart → context (inject memory context)
* PreCompress → summarize
* Notification → observation (system events like ToolPermission)
*
* Agent:
* BeforeAgent → session-init (initializes session, captures user prompt)
* AfterAgent → observation (full agent response)
*
* Tool:
* BeforeTool → observation (tool intent before execution)
* AfterTool → observation (tool result after execution)
*
* Unmapped (not useful for memory):
* BeforeModel, AfterModel, BeforeToolSelection — model-level events
* that fire per-LLM-call, too chatty for observation capture.
*
* Base fields (all events): session_id, transcript_path, cwd, hook_event_name, timestamp
*
* Output format: { continue, stopReason, suppressOutput, systemMessage, decision, reason, hookSpecificOutput }
* Advisory hooks (SessionStart, PreCompress, Notification) ignore flow-control fields.
*/
export const geminiCliAdapter: PlatformAdapter = {
normalizeInput(raw) {
const r = (raw ?? {}) as any;
// CWD resolution chain: JSON field → env vars → process.cwd()
const cwd = r.cwd
?? process.env.GEMINI_CWD
?? process.env.GEMINI_PROJECT_DIR
?? process.env.CLAUDE_PROJECT_DIR
?? process.cwd();
// Plan 05 Phase 6 — cwd validation at the adapter boundary.
if (!isValidCwd(cwd)) {
throw new AdapterRejectedInput('invalid_cwd');
}
@@ -50,25 +20,20 @@ export const geminiCliAdapter: PlatformAdapter = {
const hookEventName: string | undefined = r.hook_event_name;
// Tool fields — present in BeforeTool, AfterTool
let toolName: string | undefined = r.tool_name;
let toolInput: unknown = r.tool_input;
let toolResponse: unknown = r.tool_response;
// AfterAgent: synthesize observation shape from the full agent response
if (hookEventName === 'AfterAgent' && r.prompt_response) {
-      toolName = toolName ?? 'GeminiAgent';
+      toolName = toolName ?? 'GeminiProvider';
toolInput = toolInput ?? { prompt: r.prompt };
toolResponse = toolResponse ?? { response: r.prompt_response };
}
// BeforeTool: has tool_name and tool_input but no tool_response yet
// Synthesize a marker so the observation handler knows this is pre-execution
if (hookEventName === 'BeforeTool' && toolName && !toolResponse) {
toolResponse = { _preExecution: true };
}
// Notification: capture as an observation with notification details
if (hookEventName === 'Notification') {
toolName = toolName ?? 'GeminiNotification';
toolInput = toolInput ?? {
@@ -78,12 +43,11 @@ export const geminiCliAdapter: PlatformAdapter = {
toolResponse = toolResponse ?? { details: r.details };
}
// Collect platform-specific metadata
const metadata: Record<string, unknown> = {};
-    if (r.source) metadata.source = r.source; // SessionStart: startup|resume|clear
-    if (r.reason) metadata.reason = r.reason; // SessionEnd: exit|clear|logout|...
-    if (r.trigger) metadata.trigger = r.trigger; // PreCompress: auto|manual
-    if (r.mcp_context) metadata.mcp_context = r.mcp_context; // Tool hooks: MCP server context
+    if (r.source) metadata.source = r.source;
+    if (r.reason) metadata.reason = r.reason;
+    if (r.trigger) metadata.trigger = r.trigger;
+    if (r.mcp_context) metadata.mcp_context = r.mcp_context;
if (r.notification_type) metadata.notification_type = r.notification_type;
if (r.stop_hook_active !== undefined) metadata.stop_hook_active = r.stop_hook_active;
if (r.original_request_name) metadata.original_request_name = r.original_request_name;
@@ -102,10 +66,8 @@ export const geminiCliAdapter: PlatformAdapter = {
},
formatOutput(result) {
// Gemini CLI expects: { continue, stopReason, suppressOutput, systemMessage, decision, reason, hookSpecificOutput }
const output: Record<string, unknown> = {};
// Flow control — always include `continue` to prevent accidental agent termination
output.continue = result.continue ?? true;
if (result.suppressOutput !== undefined) {
@@ -113,14 +75,10 @@ export const geminiCliAdapter: PlatformAdapter = {
}
if (result.systemMessage) {
// Strip ANSI escape sequences: matches colors, text formatting, and terminal control codes
// Gemini CLI often has issues with ANSI escape sequences in tool output (showing them as raw text)
const ansiRegex = /[\u001b\u009b][[()#;?]*(?:[0-9]{1,4}(?:;[0-9]{0,4})*)?[0-9A-ORZcf-nqry=><]/g;
output.systemMessage = result.systemMessage.replace(ansiRegex, '');
}
// hookSpecificOutput is a first-class Gemini CLI field — pass through directly
// This includes additionalContext for context injection in SessionStart, BeforeAgent, AfterTool
if (result.hookSpecificOutput) {
output.hookSpecificOutput = {
additionalContext: result.hookSpecificOutput.additionalContext,
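The ANSI-stripping step in `formatOutput` can be exercised standalone. The regex below is copied from the adapter; the `stripAnsi` wrapper name is illustrative:

```typescript
// Gemini CLI renders raw ANSI escape sequences as literal text in tool
// output, so systemMessage is scrubbed before it crosses the boundary.
// Matches CSI color/formatting sequences and common terminal control codes.
const ansiRegex = /[\u001b\u009b][[()#;?]*(?:[0-9]{1,4}(?:;[0-9]{0,4})*)?[0-9A-ORZcf-nqry=><]/g;

function stripAnsi(message: string): string {
  return message.replace(ansiRegex, '');
}
```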
-1
@@ -13,7 +13,6 @@ export function getPlatformAdapter(platform: string): PlatformAdapter {
case 'gemini-cli': return geminiCliAdapter;
case 'windsurf': return windsurfAdapter;
case 'raw': return rawAdapter;
// Codex CLI and other compatible platforms use the raw adapter (accepts both camelCase and snake_case fields)
default: return rawAdapter;
}
}
-2
@@ -1,11 +1,9 @@
import type { PlatformAdapter, NormalizedHookInput, HookResult } from '../types.js';
import { AdapterRejectedInput, isValidCwd } from './errors.js';
// Raw adapter passes through with minimal transformation - useful for testing
export const rawAdapter: PlatformAdapter = {
normalizeInput(raw) {
const r = (raw ?? {}) as any;
// Plan 05 Phase 6 — cwd validation at the adapter boundary.
const cwd = r.cwd ?? process.cwd();
if (!isValidCwd(cwd)) {
throw new AdapterRejectedInput('invalid_cwd');
-15
@@ -1,24 +1,12 @@
import type { PlatformAdapter, NormalizedHookInput, HookResult } from '../types.js';
import { AdapterRejectedInput, isValidCwd } from './errors.js';
// Maps Windsurf stdin format — JSON envelope with agent_action_name + tool_info payload
//
// Common envelope (all hooks):
// { agent_action_name, trajectory_id, execution_id, timestamp, tool_info: { ... } }
//
// Event-specific tool_info payloads:
// pre_user_prompt: { user_prompt: string }
// post_write_code: { file_path, edits: [{ old_string, new_string }] }
// post_run_command: { command_line, cwd }
// post_mcp_tool_use: { mcp_server_name, mcp_tool_name, mcp_tool_arguments, mcp_result }
// post_cascade_response: { response }
export const windsurfAdapter: PlatformAdapter = {
normalizeInput(raw) {
const r = (raw ?? {}) as any;
const toolInfo = r.tool_info ?? {};
const actionName: string = r.agent_action_name ?? '';
// Plan 05 Phase 6 — cwd validation at the adapter boundary.
const cwd = toolInfo.cwd ?? process.cwd();
if (!isValidCwd(cwd)) {
throw new AdapterRejectedInput('invalid_cwd');
@@ -73,14 +61,11 @@ export const windsurfAdapter: PlatformAdapter = {
};
default:
// Unknown action — pass through what we can
return base;
}
},
formatOutput(result) {
// Windsurf exit codes: 0 = success, 2 = block (pre-hooks only)
// The CLI layer handles exit codes; here we just return a simple continue flag
return { continue: result.continue ?? true };
},
};
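The envelope mapping documented above can be sketched for a single event. The shape and the `normalizeWindsurf`/`RunCommand` names are illustrative assumptions, not the adapter's actual mapping table:

```typescript
interface NormalizedSketch {
  eventName: string;
  cwd: string;
  toolName?: string;
  toolInput?: unknown;
}

// Illustrative normalizer for one Windsurf envelope (post_run_command).
// The real adapter covers every agent_action_name listed above.
function normalizeWindsurf(raw: any): NormalizedSketch {
  const toolInfo = raw?.tool_info ?? {};
  const actionName: string = raw?.agent_action_name ?? '';
  const cwd = toolInfo.cwd ?? process.cwd();
  const base: NormalizedSketch = { eventName: actionName, cwd };
  if (actionName === 'post_run_command') {
    // Assumed tool name for illustration only.
    return { ...base, toolName: 'RunCommand', toolInput: { command: toolInfo.command_line } };
  }
  return base; // unknown action: pass through what we can
}
```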
-60
@@ -1,14 +1,3 @@
/**
* CLAUDE.md Generation and Cleanup Commands
*
* Shared module for CLAUDE.md file management that can be invoked from:
* - CLI: `claude-mem generate` / `claude-mem clean`
* - Worker service API endpoints
*
* Provides two main operations:
* - generateClaudeMd: Regenerate CLAUDE.md files for folders with observations
* - cleanClaudeMd: Remove auto-generated content from CLAUDE.md files
*/
import { Database } from 'bun:sqlite';
import path from 'path';
@@ -45,7 +34,6 @@ interface ObservationRow {
discovery_tokens: number | null;
}
// Type icon map (matches ModeManager)
const TYPE_ICONS: Record<string, string> = {
'bugfix': '🔴',
'feature': '🟣',
@@ -69,10 +57,6 @@ function estimateTokens(obs: ObservationRow): number {
return Math.ceil(size / 4);
}
/**
* Get tracked folders using git ls-files.
* Respects .gitignore and only returns folders within the project.
*/
function getTrackedFolders(workingDir: string): Set<string> {
const folders = new Set<string>();
@@ -105,9 +89,6 @@ function getTrackedFolders(workingDir: string): Set<string> {
return folders;
}
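`getTrackedFolders` derives the folder set from `git ls-files` output. The pure derivation step can be sketched in isolation; the `foldersFromFiles` helper name is hypothetical:

```typescript
// Given git ls-files output (one file path per line, already
// forward-slash-normalized by git), collect every ancestor directory.
// '.' stands for the project root.
function foldersFromFiles(files: string[]): Set<string> {
  const folders = new Set<string>(['.']);
  for (const file of files) {
    const parts = file.split('/');
    // Add each prefix directory: src/hooks/context.ts -> src, src/hooks
    for (let i = 1; i < parts.length; i++) {
      folders.add(parts.slice(0, i).join('/'));
    }
  }
  return folders;
}
```

Because the input comes from `git ls-files`, `.gitignore`d paths never enter the set, matching the behavior described in the stripped docblock.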
/**
* Fallback directory walker that skips common ignored patterns.
*/
function walkDirectoriesWithIgnore(dir: string, folders: Set<string>, depth: number = 0): void {
if (depth > 10) return;
@@ -133,9 +114,6 @@ function walkDirectoriesWithIgnore(dir: string, folders: Set<string>, depth: num
}
}
/**
* Check if an observation has any files that are direct children of the folder.
*/
function hasDirectChildFile(obs: ObservationRow, folderPath: string): boolean {
const checkFiles = (filesJson: string | null): boolean => {
if (!filesJson) return false;
@@ -153,10 +131,6 @@ function hasDirectChildFile(obs: ObservationRow, folderPath: string): boolean {
return checkFiles(obs.files_modified) || checkFiles(obs.files_read);
}
/**
* Query observations for a specific folder.
* Only returns observations with files directly in the folder (not in subfolders).
*/
function findObservationsByFolder(db: Database, relativeFolderPath: string, project: string, limit: number): ObservationRow[] {
const queryLimit = limit * 3;
@@ -169,7 +143,6 @@ function findObservationsByFolder(db: Database, relativeFolderPath: string, proj
LIMIT ?
`;
// Database stores paths with forward slashes (git-normalized)
const normalizedFolderPath = relativeFolderPath.split(path.sep).join('/');
const likePattern = `%"${normalizedFolderPath}/%`;
const allMatches = db.prepare(sql).all(project, likePattern, likePattern, queryLimit) as ObservationRow[];
@@ -177,10 +150,6 @@ function findObservationsByFolder(db: Database, relativeFolderPath: string, proj
return allMatches.filter(obs => hasDirectChildFile(obs, relativeFolderPath)).slice(0, limit);
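The LIKE pattern above relies on git-normalized forward slashes in the stored JSON. A sketch of the pattern builder (the `folderLikePattern` helper is hypothetical):

```typescript
import * as path from 'node:path';

// files_modified / files_read are stored as JSON arrays of forward-slash
// paths, e.g. '["src/hooks/context.ts"]'. Matching a folder therefore means
// finding the quoted prefix `"<folder>/` somewhere in that JSON string.
function folderLikePattern(relativeFolderPath: string): string {
  const normalized = relativeFolderPath.split(path.sep).join('/');
  return `%"${normalized}/%`;
}
```

The LIKE match over-selects (it also hits files in subfolders), which is why the code above still filters with `hasDirectChildFile` afterwards.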
}
/**
* Extract relevant file from an observation for display.
* Only returns files that are direct children of the folder.
*/
function extractRelevantFile(obs: ObservationRow, relativeFolder: string): string {
if (obs.files_modified) {
try {
@@ -215,9 +184,6 @@ function extractRelevantFile(obs: ObservationRow, relativeFolder: string): strin
return 'General';
}
/**
* Format observations for CLAUDE.md content.
*/
function formatObservationsForClaudeMd(observations: ObservationRow[], folderPath: string): string {
const lines: string[] = [];
lines.push('# Recent Activity');
@@ -268,14 +234,9 @@ function formatObservationsForClaudeMd(observations: ObservationRow[], folderPat
return lines.join('\n').trim();
}
/**
* Write CLAUDE.md file with tagged content preservation.
* Only writes to folders that exist — never creates directories.
*/
function writeClaudeMdToFolder(folderPath: string, newContent: string): void {
const resolvedPath = path.resolve(folderPath);
// Never write inside .git directories — corrupts refs (#1165)
if (resolvedPath.includes('/.git/') || resolvedPath.includes('\\.git\\') || resolvedPath.endsWith('/.git') || resolvedPath.endsWith('\\.git')) return;
const claudeMdPath = path.join(folderPath, 'CLAUDE.md');
@@ -313,9 +274,6 @@ function writeClaudeMdToFolder(folderPath: string, newContent: string): void {
renameSync(tempFile, claudeMdPath);
}
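The `.git` guard in `writeClaudeMdToFolder` (the #1165 fix) is pure string logic and easy to check in isolation; `isInsideGitDir` is an illustrative name for the inlined condition:

```typescript
// Never write CLAUDE.md inside .git directories: stray files there can
// corrupt refs (#1165). Checks both POSIX and Windows separators against
// the already-resolved absolute path.
function isInsideGitDir(resolvedPath: string): boolean {
  return resolvedPath.includes('/.git/')
    || resolvedPath.includes('\\.git\\')
    || resolvedPath.endsWith('/.git')
    || resolvedPath.endsWith('\\.git');
}
```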
/**
* Regenerate CLAUDE.md for a single folder.
*/
function regenerateFolder(
db: Database,
absoluteFolder: string,
@@ -329,7 +287,6 @@ function regenerateFolder(
return { success: false, observationCount: 0, error: 'Folder no longer exists' };
}
// Validate folder is within project root (prevent path traversal)
const resolvedFolder = path.resolve(absoluteFolder);
const resolvedWorkingDir = path.resolve(workingDir);
if (!resolvedFolder.startsWith(resolvedWorkingDir + path.sep)) {
@@ -413,12 +370,6 @@ function processAllFoldersForGeneration(
return 0;
}
/**
* Generate CLAUDE.md files for all folders with observations.
*
* @param dryRun - If true, only report what would be done without writing files
* @returns Exit code (0 for success, 1 for error)
*/
export async function generateClaudeMd(dryRun: boolean): Promise<number> {
const workingDir = process.cwd();
const settings = SettingsDefaultsManager.loadFromFile(SETTINGS_PATH);
@@ -508,17 +459,6 @@ function cleanSingleFile(file: string, relativePath: string, dryRun: boolean): '
}
}
/**
* Clean up auto-generated CLAUDE.md files.
*
* For each file with <claude-mem-context> tags:
* - Strip the tagged section
* - If empty after stripping, delete the file
* - If has remaining content, save the stripped version
*
* @param dryRun - If true, only report what would be done without modifying files
* @returns Exit code (0 for success, 1 for error)
*/
export async function cleanClaudeMd(dryRun: boolean): Promise<number> {
const workingDir = process.cwd();
-13
@@ -1,9 +1,3 @@
/**
* Context Handler - SessionStart
*
* Extracted from context-hook.ts - calls worker to generate context.
* Returns context as hookSpecificOutput for Claude Code to inject.
*/
import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js';
import {
@@ -22,11 +16,9 @@ export const contextHandler: EventHandler = {
const context = getProjectContext(cwd);
const port = getWorkerPort();
// Plan 05 Phase 4: settings via process-scope cache.
const settings = loadFromFileOnce();
const showTerminalOutput = settings.CLAUDE_MEM_CONTEXT_SHOW_TERMINAL_OUTPUT === 'true';
// Pass all projects (parent + worktree if applicable) for unified timeline
const projectsParam = context.allProjects.join(',');
const apiPath = `/api/context/inject?projects=${encodeURIComponent(projectsParam)}`;
const colorApiPath = input.platform === 'claude-code' ? `${apiPath}&colors=true` : apiPath;
@@ -36,7 +28,6 @@ export const contextHandler: EventHandler = {
exitCode: HOOK_EXIT_CODES.SUCCESS,
};
// Plan 05 Phase 2: single helper for ensure-worker-alive → request → fallback.
const contextResult = await executeWithWorkerFallback<string>(apiPath, 'GET');
if (isWorkerFallback(contextResult)) {
return emptyResult;
@@ -48,7 +39,6 @@ export const contextHandler: EventHandler = {
} else if (contextResult === undefined) {
additionalContext = '';
} else {
// Unexpected non-string body — log and fall back to empty.
logger.warn('HOOK', 'Context response was not a string', { type: typeof contextResult });
return emptyResult;
}
@@ -63,9 +53,6 @@ export const contextHandler: EventHandler = {
const platform = input.platform;
// Use colored timeline for display if available, otherwise fall back to
// plain markdown context (especially useful for platforms like Gemini
// where we want to ensure visibility even if colors aren't fetched).
const displayContent = coloredTimeline || (platform === 'gemini-cli' || platform === 'gemini' ? additionalContext : '');
const systemMessage = showTerminalOutput && displayContent
+1 -49
@@ -1,9 +1,3 @@
/**
* File Context Handler - PreToolUse
*
* Injects relevant observation history when Claude reads/edits a file,
* so it can avoid duplicating past work.
*/
import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js';
import { executeWithWorkerFallback, isWorkerFallback } from '../../shared/worker-utils.js';
@@ -14,13 +8,10 @@ import path from 'path';
import { shouldTrackProject } from '../../shared/should-track-project.js';
import { getProjectContext } from '../../utils/project-name.js';
/** Skip the gate for files smaller than this — timeline overhead exceeds file read cost. */
const FILE_READ_GATE_MIN_BYTES = 1_500;
/** Fetch more candidates than the display limit so dedup still fills 15 slots. */
const FETCH_LOOKAHEAD_LIMIT = 40;
/** Maximum observations to show in the timeline. */
const DISPLAY_LIMIT = 15;
const TYPE_ICONS: Record<string, string> = {
@@ -56,21 +47,11 @@ interface ObservationRow {
files_modified: string | null;
}
/**
* Deduplicate and rank observations for the timeline display.
*
* 1. Same-session dedup: keep only the most recent observation per session
* (input is already sorted newest-first by SQL).
* 2. Specificity scoring: rank by how specifically the observation is about
* the target file (modified > read-only, fewer total files > many).
* 3. Truncate to displayLimit.
*/
function deduplicateObservations(
observations: ObservationRow[],
targetPath: string,
displayLimit: number
): ObservationRow[] {
// Phase 1: Keep only the most recent observation per session
const seenSessions = new Set<string>();
const dedupedBySession: ObservationRow[] = [];
for (const obs of observations) {
@@ -81,7 +62,6 @@ function deduplicateObservations(
}
}
// Phase 2: Score by specificity to the target file
const scored = dedupedBySession.map(obs => {
const filesRead = parseJsonArray(obs.files_read);
const filesModified = parseJsonArray(obs.files_modified);
@@ -93,12 +73,10 @@ function deduplicateObservations(
if (inModified) specificityScore += 2;
if (totalFiles <= 3) specificityScore += 2;
else if (totalFiles <= 8) specificityScore += 1;
// totalFiles > 8: no bonus (survey-like observation)
return { obs, specificityScore };
});
// Stable sort: higher specificity first, preserve chronological order within same score
scored.sort((a, b) => b.specificityScore - a.specificityScore);
return scored.slice(0, displayLimit).map(s => s.obs);
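The dedup-then-score pipeline can be demonstrated with toy rows. This sketch simplifies `ObservationRow` (files are plain arrays here; the real rows store JSON strings), but the session dedup and specificity weights mirror the code above:

```typescript
interface Obs {
  memory_session_id: string;
  created_at_epoch: number;
  files_read: string[];
  files_modified: string[];
}

// Mirrors deduplicateObservations: keep the most recent observation per
// session (input arrives newest-first), then rank by specificity to the
// target file: modified beats read-only, few-file observations beat surveys.
function dedupe(observations: Obs[], targetPath: string, displayLimit: number): Obs[] {
  const seen = new Set<string>();
  const bySession: Obs[] = [];
  for (const obs of observations) {
    if (!seen.has(obs.memory_session_id)) {
      seen.add(obs.memory_session_id);
      bySession.push(obs);
    }
  }
  const scored = bySession.map(obs => {
    const totalFiles = obs.files_read.length + obs.files_modified.length;
    let score = 0;
    if (obs.files_modified.includes(targetPath)) score += 2;
    if (totalFiles <= 3) score += 2;
    else if (totalFiles <= 8) score += 1;
    // totalFiles > 8: no bonus (survey-like observation)
    return { obs, score };
  });
  scored.sort((a, b) => b.score - a.score); // Array.sort is stable, so ties keep order
  return scored.slice(0, displayLimit).map(s => s.obs);
}
```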
@@ -108,9 +86,7 @@ function formatFileTimeline(
observations: ObservationRow[],
filePath: string
): string {
// Escape filePath for safe interpolation into recovery hints (quotes, backslashes, newlines)
const safePath = filePath.replace(/\\/g, '\\\\').replace(/"/g, '\\"').replace(/\n/g, '\\n');
// Group observations by day
const byDay = new Map<string, ObservationRow[]>();
for (const obs of observations) {
const day = formatDate(obs.created_at_epoch);
@@ -120,16 +96,14 @@ function formatFileTimeline(
byDay.get(day)!.push(obs);
}
// Sort days chronologically (use earliest observation in each group, not first — which is specificity-sorted)
const sortedDays = Array.from(byDay.entries()).sort((a, b) => {
const aEpoch = Math.min(...a[1].map(o => o.created_at_epoch));
const bEpoch = Math.min(...b[1].map(o => o.created_at_epoch));
return aEpoch - bEpoch;
});
// Include current date/time so the model can judge recency of observations
const now = new Date();
-    const currentDate = now.toLocaleDateString('en-CA'); // YYYY-MM-DD
+    const currentDate = now.toLocaleDateString('en-CA');
const currentTime = now.toLocaleTimeString('en-US', {
hour: 'numeric',
minute: '2-digit',
@@ -137,9 +111,6 @@ function formatFileTimeline(
}).toLowerCase().replace(' ', '');
const currentTimezone = now.toLocaleTimeString('en-US', { timeZoneName: 'short' }).split(' ').pop();
// The hook never modifies the Read call (#2094) — Claude always sees the
// full requested section. The timeline below is supplementary priming, not
// a replacement for the file contents.
const lines: string[] = [
`Current: ${currentDate} ${currentTime} ${currentTimezone}`,
`This file has prior observations — supplementary context follows. The Read result below is the full requested section.`,
@@ -148,7 +119,6 @@ function formatFileTimeline(
];
for (const [day, dayObservations] of sortedDays) {
// Sort within each day chronologically (deduplicateObservations reorders by specificity)
const chronological = [...dayObservations].sort((a, b) => a.created_at_epoch - b.created_at_epoch);
lines.push(`### ${day}`);
for (const obs of chronological) {
@@ -164,7 +134,6 @@ function formatFileTimeline(
export const fileContextHandler: EventHandler = {
async execute(input: NormalizedHookInput): Promise<HookResult> {
// Extract file_path from toolInput
const toolInput = input.toolInput as Record<string, unknown> | undefined;
const filePath = toolInput?.file_path as string | undefined;
@@ -172,16 +141,12 @@ export const fileContextHandler: EventHandler = {
return { continue: true, suppressOutput: true };
}
// Stat the file once: size (gate) + mtime (cache invalidation).
// 0 = stat failed non-fatally (e.g. EPERM) — skip mtime check, fall through to context injection.
let fileMtimeMs = 0;
try {
const statPath = path.isAbsolute(filePath)
? filePath
: path.resolve(input.cwd || process.cwd(), filePath);
const stat = statSync(statPath);
// Skip gate for files below the token-economics threshold — timeline (~370 tokens)
// costs more than reading small files directly.
if (stat.size < FILE_READ_GATE_MIN_BYTES) {
return { continue: true, suppressOutput: true };
}
@@ -190,29 +155,24 @@ export const fileContextHandler: EventHandler = {
if (err instanceof Error && 'code' in err && (err as NodeJS.ErrnoException).code === 'ENOENT') {
return { continue: true, suppressOutput: true };
}
// Other errors (symlink, permission denied) — fall through and let gate proceed
logger.debug('HOOK', 'File stat failed, proceeding with gate', { error: err instanceof Error ? err.message : String(err) });
}
// Plan 05 Phase 5: project exclusion via single helper.
if (input.cwd && !shouldTrackProject(input.cwd)) {
logger.debug('HOOK', 'Project excluded from tracking, skipping file context', { cwd: input.cwd });
return { continue: true, suppressOutput: true };
}
// Query worker for observations related to this file
const context = getProjectContext(input.cwd);
const cwd = input.cwd || process.cwd();
const absolutePath = path.isAbsolute(filePath) ? filePath : path.resolve(cwd, filePath);
const relativePath = path.relative(cwd, absolutePath).split(path.sep).join("/");
const queryParams = new URLSearchParams({ path: relativePath });
// Pass all project names (parent + worktree) for unified lookup
if (context.allProjects.length > 0) {
queryParams.set('projects', context.allProjects.join(','));
}
queryParams.set('limit', String(FETCH_LOOKAHEAD_LIMIT));
// Plan 05 Phase 2: single helper for ensure-worker-alive → request → fallback.
const result = await executeWithWorkerFallback<{ observations: ObservationRow[]; count: number }>(
`/api/observations/by-file?${queryParams.toString()}`,
'GET',
@@ -230,8 +190,6 @@ export const fileContextHandler: EventHandler = {
return { continue: true, suppressOutput: true };
}
// mtime invalidation: skip the timeline injection when the file is newer than the latest
// observation — past observations are stale and adding them risks misleading the model.
if (fileMtimeMs > 0) {
const newestObservationMs = Math.max(...data.observations.map(o => o.created_at_epoch));
if (fileMtimeMs >= newestObservationMs) {
@@ -244,17 +202,11 @@ export const fileContextHandler: EventHandler = {
}
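The staleness check above reduces to one comparison; a sketch with a hypothetical helper name:

```typescript
// Skip timeline injection when the file was modified after the newest
// observation: those observations describe a version of the file that no
// longer exists, and injecting them risks misleading the model.
// fileMtimeMs === 0 means stat failed non-fatally, so we cannot judge.
function timelineIsStale(fileMtimeMs: number, observationEpochsMs: number[]): boolean {
  if (fileMtimeMs <= 0 || observationEpochsMs.length === 0) return false;
  const newest = Math.max(...observationEpochsMs);
  return fileMtimeMs >= newest;
}
```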
}
// Deduplicate: one per session, ranked by specificity to this file
const dedupedObservations = deduplicateObservations(data.observations, relativePath, DISPLAY_LIMIT);
if (dedupedObservations.length === 0) {
return { continue: true, suppressOutput: true };
}
// #2094: never modify the Read call. Returning `updatedInput` with `limit: 1` previously
// truncated unconstrained reads, leaving Claude with a stale 1-line snapshot in context
// while the timeline told it not to re-read. Subsequent Edit calls then deadlocked because
// Claude Code's read-state tracker reported the file as "read" but the actual content was
// missing. The hook now only injects supplementary context — the Read proceeds unmodified.
const timeline = formatFileTimeline(dedupedObservations, filePath);
return {
-9
@@ -1,9 +1,3 @@
/**
* File Edit Handler - Cursor-specific afterFileEdit
*
* Handles file edit observations from Cursor IDE.
* Similar to observation handler but with file-specific metadata.
*/
import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js';
import { executeWithWorkerFallback, isWorkerFallback } from '../../shared/worker-utils.js';
@@ -24,13 +18,10 @@ export const fileEditHandler: EventHandler = {
editCount: edits?.length ?? 0
});
// Plan 05 Phase 6: cwd is validated at the adapter boundary; this is a
// belt-and-suspenders type guard so TypeScript narrows.
if (!cwd) {
throw new Error(`Missing cwd in FileEdit hook input for session ${sessionId}, file ${filePath}`);
}
// Plan 05 Phase 2: single helper for ensure-worker-alive → request → fallback.
const result = await executeWithWorkerFallback<{ status?: string }>(
'/api/sessions/observations',
'POST',
+7 -23
@@ -1,8 +1,3 @@
/**
* Event Handler Factory
*
* Returns the appropriate handler for a given event type.
*/
import type { EventHandler } from '../types.js';
import { HOOK_EXIT_CODES } from '../../shared/hook-constants.js';
@@ -16,13 +11,13 @@ import { fileEditHandler } from './file-edit.js';
import { fileContextHandler } from './file-context.js';
export type EventType =
-  | 'context'       // SessionStart - inject context
-  | 'session-init'  // UserPromptSubmit - initialize session
-  | 'observation'   // PostToolUse - save observation
-  | 'summarize'     // Stop - generate summary (phase 1)
-  | 'user-message'  // SessionStart (parallel) - display to user
-  | 'file-edit'     // Cursor afterFileEdit
-  | 'file-context'; // PreToolUse - inject file observation history
+  | 'context'
+  | 'session-init'
+  | 'observation'
+  | 'summarize'
+  | 'user-message'
+  | 'file-edit'
+  | 'file-context';
const handlers: Record<EventType, EventHandler> = {
'context': contextHandler,
@@ -34,16 +29,6 @@ const handlers: Record<EventType, EventHandler> = {
'file-context': fileContextHandler
};
/**
* Get the event handler for a given event type.
*
* Returns a no-op handler for unknown event types instead of throwing (fix #984).
* Claude Code may send new event types that the plugin doesn't handle yet —
* throwing would surface as a BLOCKING_ERROR to the user.
*
* @param eventType The type of event to handle
* @returns The appropriate EventHandler, or a no-op handler for unknown types
*/
export function getEventHandler(eventType: string): EventHandler {
const handler = handlers[eventType as EventType];
if (!handler) {
@@ -57,7 +42,6 @@ export function getEventHandler(eventType: string): EventHandler {
return handler;
}
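The no-op fallback described in the stripped docblock (#984) can be sketched self-contained. This version is synchronous for brevity (the real handlers return `Promise<HookResult>`), and the handler table is a stub:

```typescript
interface HookResult { continue: boolean; suppressOutput?: boolean; }
interface EventHandler { execute: (input: unknown) => HookResult; }

// Stub table; the real module registers context, session-init, observation,
// summarize, user-message, file-edit, and file-context handlers.
const handlers: Record<string, EventHandler> = {
  context: { execute: () => ({ continue: true }) },
};

// Unknown event types get a no-op handler instead of a throw (#984):
// Claude Code may send event types this plugin does not handle yet, and a
// throw would surface as a blocking error in the user's session.
function getHandler(eventType: string): EventHandler {
  return handlers[eventType] ?? {
    execute: () => ({ continue: true, suppressOutput: true }),
  };
}
```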
// Re-export individual handlers for direct access if needed
export { contextHandler } from './context.js';
export { sessionInitHandler } from './session-init.js';
export { observationHandler } from './observation.js';
-14
@@ -1,8 +1,3 @@
/**
* Observation Handler - PostToolUse
*
* Extracted from save-hook.ts - sends tool usage to worker for storage.
*/
import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js';
import { executeWithWorkerFallback, isWorkerFallback } from '../../shared/worker-utils.js';
@@ -17,7 +12,6 @@ export const observationHandler: EventHandler = {
const platformSource = normalizePlatformSource(input.platform);
if (!toolName) {
// No tool name provided - skip observation gracefully
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
@@ -25,20 +19,15 @@ export const observationHandler: EventHandler = {
logger.dataIn('HOOK', `PostToolUse: ${toolStr}`, {});
// Plan 05 Phase 6: cwd is validated at the adapter boundary; the adapter
// rejects empty cwd before reaching the handler. We still type-narrow for
// TypeScript and as a belt-and-suspenders guard.
if (!cwd) {
throw new Error(`Missing cwd in PostToolUse hook input for session ${sessionId}, tool ${toolName}`);
}
// Plan 05 Phase 5: project exclusion via single helper.
if (!shouldTrackProject(cwd)) {
logger.debug('HOOK', 'Project excluded from tracking, skipping observation', { cwd, toolName });
return { continue: true, suppressOutput: true };
}
// Plan 05 Phase 2: single helper for ensure-worker-alive → request → fallback.
const result = await executeWithWorkerFallback<{ status?: string }>(
'/api/sessions/observations',
'POST',
@@ -55,9 +44,6 @@ export const observationHandler: EventHandler = {
);
if (isWorkerFallback(result)) {
// Worker unreachable — fail-loud counter has already been incremented
// and may have escalated to exit 2. If we got here, threshold not yet
// reached, so degrade gracefully.
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
+1 -43
@@ -1,8 +1,3 @@
/**
* Session Init Handler - UserPromptSubmit
*
* Extracted from new-hook.ts - initializes session and starts SDK agent.
*/
import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js';
import { executeWithWorkerFallback, isWorkerFallback } from '../../shared/worker-utils.js';
@@ -30,22 +25,18 @@ interface SemanticContextResponse {
export const sessionInitHandler: EventHandler = {
async execute(input: NormalizedHookInput): Promise<HookResult> {
const { sessionId, prompt: rawPrompt } = input;
-    const cwd = input.cwd ?? process.cwd(); // Match context.ts fallback (#1918)
+    const cwd = input.cwd ?? process.cwd();
// Guard: Codex CLI and other platforms may not provide a session_id (#744)
if (!sessionId) {
logger.warn('HOOK', 'session-init: No sessionId provided, skipping (Codex CLI or unknown platform)');
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
// Plan 05 Phase 5: project exclusion via single helper.
if (!shouldTrackProject(cwd)) {
logger.info('HOOK', 'Project excluded from tracking', { cwd });
return { continue: true, suppressOutput: true };
}
// Filter on the raw prompt so the check is independent of the
// [media prompt] substitution below.
if (rawPrompt && isInternalProtocolPayload(rawPrompt)) {
logger.debug('HOOK', 'session-init: skipping internal protocol payload', {
preview: rawPrompt.slice(0, 80),
@@ -53,8 +44,6 @@ export const sessionInitHandler: EventHandler = {
return { continue: true, suppressOutput: true };
}
// Handle image-only prompts (where text prompt is empty/undefined)
// Use placeholder so sessions still get created and tracked for memory
const prompt = (!rawPrompt || !rawPrompt.trim()) ? '[media prompt]' : rawPrompt;
const project = getProjectContext(cwd).primary;
@@ -62,7 +51,6 @@ export const sessionInitHandler: EventHandler = {
logger.debug('HOOK', 'session-init: Calling /api/sessions/init', { contentSessionId: sessionId, project });
// Plan 05 Phase 2: single helper for ensure-worker-alive → request → fallback.
const initResult = await executeWithWorkerFallback<SessionInitResponse>(
'/api/sessions/init',
'POST',
@@ -78,7 +66,6 @@ export const sessionInitHandler: EventHandler = {
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
// Worker may have returned a non-2xx body (parsed but missing fields). Fail-soft.
if (typeof initResult?.sessionDbId !== 'number') {
logger.failure('HOOK', 'Session initialization returned malformed response', { contentSessionId: sessionId, project });
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
@@ -89,10 +76,8 @@ export const sessionInitHandler: EventHandler = {
logger.debug('HOOK', 'session-init: Received from /api/sessions/init', { sessionDbId, promptNumber, skipped: initResult.skipped, contextInjected: initResult.contextInjected });
// Debug-level alignment log for detailed tracing
logger.debug('HOOK', `[ALIGNMENT] Hook Entry | contentSessionId=${sessionId} | prompt#=${promptNumber} | sessionDbId=${sessionDbId}`);
// Check if prompt was entirely private (worker performs privacy check)
if (initResult.skipped && initResult.reason === 'private') {
logger.info('HOOK', `INIT_COMPLETE | sessionDbId=${sessionDbId} | promptNumber=${promptNumber} | skipped=true | reason=private`, {
sessionId: sessionDbId
@@ -100,32 +85,6 @@ export const sessionInitHandler: EventHandler = {
return { continue: true, suppressOutput: true };
}
// Plan 05 Phase 7: agent init is idempotent — call unconditionally for
// every Claude Code session. Cursor still skipped (no SDK agent).
if (input.platform !== 'cursor' && sessionDbId) {
// Strip leading slash from commands for memory agent
// /review 101 -> review 101 (more semantic for observations)
const cleanedPrompt = prompt.startsWith('/') ? prompt.substring(1) : prompt;
logger.debug('HOOK', 'session-init: Calling /sessions/{sessionDbId}/init', { sessionDbId, promptNumber });
const agentInitResult = await executeWithWorkerFallback<{ status?: string }>(
`/sessions/${sessionDbId}/init`,
'POST',
{ userPrompt: cleanedPrompt, promptNumber },
);
if (isWorkerFallback(agentInitResult)) {
// Worker became unreachable mid-invocation; fail-loud counter handled it.
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
} else if (input.platform === 'cursor') {
logger.debug('HOOK', 'session-init: Skipping SDK agent init for Cursor platform', { sessionDbId, promptNumber });
}
// Semantic context injection: query Chroma for relevant past observations
// and inject as additionalContext so Claude receives relevant memory each prompt.
// Controlled by CLAUDE_MEM_SEMANTIC_INJECT setting (default: true).
// Plan 05 Phase 4: settings via process-scope cache.
const settings = loadFromFileOnce();
const semanticInject =
String(settings.CLAUDE_MEM_SEMANTIC_INJECT).toLowerCase() === 'true';
@@ -148,7 +107,6 @@ export const sessionInitHandler: EventHandler = {
sessionId: sessionDbId
});
// Return with semantic context if available
if (additionalContext) {
return {
continue: true,
-24
@@ -1,11 +1,3 @@
/**
* Summarize Handler - Stop
*
* Fire-and-forget: queue the summarize request and exit. The worker handles
* summary generation, storage, and session cleanup asynchronously. The Stop
* hook does not wait for any of it — Claude Code must exit immediately.
* Session-complete cleanup is performed by the SessionEnd handler.
*/
import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js';
import { executeWithWorkerFallback, isWorkerFallback } from '../../shared/worker-utils.js';
@@ -18,18 +10,10 @@ import { shouldTrackProject } from '../../shared/should-track-project.js';
export const summarizeHandler: EventHandler = {
async execute(input: NormalizedHookInput): Promise<HookResult> {
// Skip Stop hook entirely when firing from an excluded project (notably
// OBSERVER_SESSIONS_DIR). Without this, the SDK observer's own Stop hook
// queues summaries against its meta-session and triggers a recovery loop.
if (input.cwd && !shouldTrackProject(input.cwd)) {
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
// Skip summaries in subagent context — subagents do not own the session summary.
// Gate on agentId only: that field is present exclusively for Task-spawned subagents.
// agentType alone (no agentId) indicates `--agent`-started main sessions, which still
// own their summary. Do this BEFORE the worker call so a subagent Stop hook
// does not bootstrap the worker.
if (input.agentId) {
logger.debug('HOOK', 'Skipping summary: subagent context detected', {
sessionId: input.sessionId,
@@ -41,20 +25,15 @@ export const summarizeHandler: EventHandler = {
const { sessionId, transcriptPath } = input;
// Validate required fields before processing
if (!sessionId) {
logger.warn('HOOK', 'summarize: No sessionId provided, skipping');
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
if (!transcriptPath) {
// No transcript available - skip summary gracefully (not an error)
logger.debug('HOOK', `No transcriptPath in Stop hook input for session ${sessionId} - skipping summary`);
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
// Extract last assistant message from transcript (the work Claude did)
// Note: "user" messages in transcripts are mostly tool_results, not actual user input.
// The user's original request is already stored in user_prompts table.
let lastAssistantMessage = '';
try {
lastAssistantMessage = extractLastMessage(transcriptPath, 'assistant', true);
@@ -64,8 +43,6 @@ export const summarizeHandler: EventHandler = {
return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
// Skip summary if transcript has no assistant message (prevents repeated
// empty summarize requests that pollute logs — upstream bug)
if (!lastAssistantMessage || !lastAssistantMessage.trim()) {
logger.debug('HOOK', 'No assistant message in transcript - skipping summary', {
sessionId,
@@ -80,7 +57,6 @@ export const summarizeHandler: EventHandler = {
const platformSource = normalizePlatformSource(input.platform);
// 1. Queue summarize request — worker returns immediately with { status: 'queued' }
const queueResult = await executeWithWorkerFallback<{ status?: string }>(
'/api/sessions/summarize',
'POST',
-7
@@ -1,9 +1,3 @@
/**
* User Message Handler - SessionStart (parallel)
*
* Displays context info to user via stderr.
* Uses exit code 0 (SUCCESS) - stderr is not shown to Claude with exit 0.
*/
import { basename } from 'path';
import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js';
@@ -20,7 +14,6 @@ export const userMessageHandler: EventHandler = {
const project = basename(input.cwd ?? process.cwd());
const colorsParam = input.platform === 'claude-code' ? '&colors=true' : '';
// Plan 05 Phase 2: single helper for ensure-worker-alive → request → fallback.
const result = await executeWithWorkerFallback<string>(
`/api/context/inject?project=${encodeURIComponent(project)}${colorsParam}`,
'GET',
+3 -37
@@ -6,29 +6,13 @@ import { HOOK_EXIT_CODES } from '../shared/hook-constants.js';
import { logger } from '../utils/logger.js';
export interface HookCommandOptions {
/** If true, don't call process.exit() - let caller handle process lifecycle */
skipExit?: boolean;
}
/**
* Classify whether an error indicates the worker is unavailable (graceful degradation)
* vs a handler/client bug (blocking error that developers need to see).
*
* Exit 0 (graceful degradation):
* - Transport failures: ECONNREFUSED, ECONNRESET, EPIPE, ETIMEDOUT, fetch failed
* - Timeout errors: timed out, timeout
* - Server errors: HTTP 5xx status codes
*
* Exit 2 (blocking error — handler/client bug):
* - HTTP 4xx status codes (bad request, not found, validation error)
* - Programming errors (TypeError, ReferenceError, SyntaxError)
* - All other unexpected errors
*/
export function isWorkerUnavailableError(error: unknown): boolean {
const message = error instanceof Error ? error.message : String(error);
const lower = message.toLowerCase();
// Transport failures — worker unreachable
const transportPatterns = [
'econnrefused',
'econnreset',
@@ -44,25 +28,18 @@ export function isWorkerUnavailableError(error: unknown): boolean {
];
if (transportPatterns.some(p => lower.includes(p))) return true;
// Timeout errors — worker didn't respond in time
if (lower.includes('timed out') || lower.includes('timeout')) return true;
// HTTP 5xx server errors — worker has internal problems
if (/failed:\s*5\d{2}/.test(message) || /status[:\s]+5\d{2}/.test(message)) return true;
// HTTP 429 (rate limit) — treat as transient unavailability, not a bug
if (/failed:\s*429/.test(message) || /status[:\s]+429/.test(message)) return true;
// HTTP 4xx client errors — our bug, NOT worker unavailability
if (/failed:\s*4\d{2}/.test(message) || /status[:\s]+4\d{2}/.test(message)) return false;
// Programming errors — code bugs, not worker unavailability
// Note: TypeError('fetch failed') already handled by transport patterns above
if (error instanceof TypeError || error instanceof ReferenceError || error instanceof SyntaxError) {
return false;
}
// Default: treat unknown errors as blocking (conservative — surface bugs)
return false;
}
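The classification contract documented above (transport failures, timeouts, 5xx, and 429 degrade gracefully; 4xx, programming errors, and unknowns block) can be exercised in isolation. This is a simplified re-implementation for illustration, not the exported function:

```typescript
// Sketch of the error-classification contract: true = worker unavailable
// (exit 0, graceful), false = handler/client bug (exit 2, blocking).
function isWorkerUnavailable(error: unknown): boolean {
  const message = error instanceof Error ? error.message : String(error);
  const lower = message.toLowerCase();
  // Transport failures and timeouts: worker unreachable, degrade gracefully
  if (['econnrefused', 'econnreset', 'epipe', 'etimedout', 'fetch failed']
      .some((p) => lower.includes(p))) return true;
  if (lower.includes('timed out') || lower.includes('timeout')) return true;
  // 5xx and 429: transient server trouble, still graceful
  if (/failed:\s*(5\d{2}|429)/.test(message)) return true;
  // Everything else (4xx, programming errors, unknowns): surface the bug
  return false;
}

console.log(isWorkerUnavailable(new Error('fetch failed')));            // true
console.log(isWorkerUnavailable(new Error('POST failed: 503')));        // true
console.log(isWorkerUnavailable(new Error('POST failed: 404')));        // false
console.log(isWorkerUnavailable(new TypeError('x is not a function'))); // false
```

The conservative default (unknown errors return false) matches the comment in the real function: unclassified failures are treated as bugs so they get noticed.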
@@ -74,7 +51,7 @@ async function executeHookPipeline(
): Promise<number> {
const rawInput = await readJsonFromStdin();
const input = adapter.normalizeInput(rawInput);
input.platform = platform; // Inject platform for handler-level decisions
input.platform = platform;
const result = await handler.execute(input);
const output = adapter.formatOutput(result);
@@ -87,9 +64,6 @@ async function executeHookPipeline(
}
export async function hookCommand(platform: string, event: string, options: HookCommandOptions = {}): Promise<number> {
// Suppress stderr in hook context — Claude Code shows stderr as error UI (#1181)
// Exit 1: stderr shown to user. Exit 2: stderr fed to Claude for processing.
// All diagnostics go to log file via logger; stderr must stay clean.
const originalStderrWrite = process.stderr.write.bind(process.stderr);
process.stderr.write = (() => true) as typeof process.stderr.write;
@@ -99,10 +73,6 @@ export async function hookCommand(platform: string, event: string, options: Hook
try {
return await executeHookPipeline(adapter, handler, platform, options);
} catch (error) {
// Plan 05 Phase 6 — adapter rejected the input (invalid cwd or other
// boundary-detected payload defect). Treat as graceful: emit a continue
// envelope and exit 0 so the user's session is not blocked by a malformed
// hook payload from the platform.
if (error instanceof AdapterRejectedInput) {
logger.warn('HOOK', `Adapter rejected input (${error.reason}), skipping hook`);
console.log(JSON.stringify({ continue: true, suppressOutput: true }));
@@ -112,23 +82,19 @@ export async function hookCommand(platform: string, event: string, options: Hook
return HOOK_EXIT_CODES.SUCCESS;
}
if (isWorkerUnavailableError(error)) {
// Worker unavailable — degrade gracefully, don't block the user
// Log to file instead of stderr (#1181)
logger.warn('HOOK', `Worker unavailable, skipping hook: ${error instanceof Error ? error.message : error}`);
if (!options.skipExit) {
process.exit(HOOK_EXIT_CODES.SUCCESS); // = 0 (graceful)
process.exit(HOOK_EXIT_CODES.SUCCESS);
}
return HOOK_EXIT_CODES.SUCCESS;
}
// Handler/client bug — log to file instead of stderr (#1181)
logger.error('HOOK', `Hook error: ${error instanceof Error ? error.message : error}`, {}, error instanceof Error ? error : undefined);
if (!options.skipExit) {
process.exit(HOOK_EXIT_CODES.BLOCKING_ERROR); // = 2
process.exit(HOOK_EXIT_CODES.BLOCKING_ERROR);
}
return HOOK_EXIT_CODES.BLOCKING_ERROR;
} finally {
// Restore stderr for non-hook code paths (e.g., when skipExit is true and process continues as worker)
process.stderr.write = originalStderrWrite;
}
}
-57
@@ -1,58 +1,23 @@
// Stdin reading utility for Claude Code hooks
//
// Problem: Claude Code doesn't close stdin after writing hook input,
// so stdin.on('end') never fires and hooks hang indefinitely (#727).
//
// Solution: JSON is self-delimiting. We detect complete JSON by attempting
// to parse after each chunk. Once we have valid JSON, we resolve immediately
// without waiting for EOF. This is the proper fix, not a timeout workaround.
//
// Resolve/reject contract:
// - Resolves with parsed JSON value when stdin yields valid JSON.
// - Resolves with `undefined` when stdin is unavailable, closes empty,
// or emits a stream error.
// - Rejects with an Error when stdin closes (or the safety timeout fires)
// after non-empty bytes that never form valid JSON. Malformed input is
// a handler/client bug — surfacing it lets the upstream exit-code
// strategy treat it as a blocking error (exit 2) rather than silently
// proceeding as if no input was given. (#2089)
import { logger } from '../utils/logger.js';
/**
* Check if stdin is available and readable.
*
* Bun has a bug where accessing process.stdin can crash with EINVAL
* if Claude Code doesn't provide a valid stdin file descriptor (#646).
* This function safely checks if stdin is usable.
*/
function isStdinAvailable(): boolean {
try {
const stdin = process.stdin;
// If stdin is a TTY, we're running interactively (not from Claude Code hook)
if (stdin.isTTY) {
return false;
}
// Accessing stdin.readable triggers Bun's lazy initialization.
// If we get here without throwing, stdin is available.
// Note: We don't check the value since Node/Bun don't reliably set it to false.
// eslint-disable-next-line @typescript-eslint/no-unused-expressions
stdin.readable;
return true;
} catch (error) {
// Bun crashed trying to access stdin (EINVAL from fstat)
// This is expected when Claude Code doesn't provide valid stdin
logger.debug('HOOK', 'stdin not available (expected for some runtimes)', { error: error instanceof Error ? error.message : String(error) });
return false;
}
}
/**
* Try to parse the accumulated input as JSON.
* Returns the parsed value if successful, undefined if incomplete/invalid.
*/
function tryParseJson(input: string): { success: true; value: unknown } | { success: false } {
const trimmed = input.trim();
if (!trimmed) {
@@ -63,23 +28,16 @@ function tryParseJson(input: string): { success: true; value: unknown } | { succ
const value = JSON.parse(trimmed);
return { success: true, value };
} catch (error) {
// JSON is incomplete or invalid — expected during incremental parsing
logger.debug('HOOK', 'JSON parse attempt incomplete', { error: error instanceof Error ? error.message : String(error) });
return { success: false };
}
}
// Safety timeout - only kicks in if JSON never completes (malformed input).
// This should rarely/never be hit in normal operation since we detect complete JSON.
const SAFETY_TIMEOUT_MS = 30000;
// Short delay after last data chunk to try parsing
// This handles the case where JSON arrives in multiple chunks
const PARSE_DELAY_MS = 50;
export async function readJsonFromStdin(): Promise<unknown> {
// First, check if stdin is even available
// This catches the Bun EINVAL crash from issue #646
if (!isStdinAvailable()) {
return undefined;
}
@@ -126,16 +84,12 @@ export async function readJsonFromStdin(): Promise<unknown> {
return false;
};
// Safety timeout - fallback if JSON never completes
const safetyTimeoutId = setTimeout(() => {
if (!resolved) {
// Try one final parse attempt
if (!tryResolveWithJson()) {
// If we have data but it's not valid JSON, that's an error
if (input.trim()) {
rejectWith(new Error(`Incomplete JSON after ${SAFETY_TIMEOUT_MS}ms: ${input.slice(0, 100)}...`));
} else {
// No data received - resolve with undefined
resolveWith(undefined);
}
}
@@ -145,31 +99,23 @@ export async function readJsonFromStdin(): Promise<unknown> {
const onData = (chunk: Buffer | string) => {
input += chunk;
// Clear any pending parse delay
if (parseDelayId) {
clearTimeout(parseDelayId);
parseDelayId = null;
}
// Try to parse immediately - if JSON is complete, resolve now
if (tryResolveWithJson()) {
return;
}
// If immediate parse failed, set a short delay and try again
// This handles multi-chunk delivery where the last chunk completes the JSON
parseDelayId = setTimeout(() => {
tryResolveWithJson();
}, PARSE_DELAY_MS);
};
const onEnd = () => {
// stdin closed - parse whatever we have
if (!resolved) {
if (!tryResolveWithJson()) {
// Mirror the safety-timeout semantics (#2089):
// non-empty bytes that never parsed = malformed input, surface it.
// Empty stdin = "no input given", resolve undefined.
if (input.trim()) {
rejectWith(new Error(`Malformed JSON at stdin EOF: ${input.slice(0, 100)}...`));
} else {
@@ -181,8 +127,6 @@ export async function readJsonFromStdin(): Promise<unknown> {
const onError = () => {
if (!resolved) {
// Don't reject on stdin errors - just return undefined
// This is more graceful for hook execution
resolveWith(undefined);
}
};
@@ -192,7 +136,6 @@ export async function readJsonFromStdin(): Promise<unknown> {
process.stdin.on('end', onEnd);
process.stdin.on('error', onError);
} catch (error) {
// If attaching listeners fails (Bun stdin issue), resolve with undefined
logger.debug('HOOK', 'Failed to attach stdin listeners', { error: error instanceof Error ? error.message : String(error) });
resolved = true;
clearTimeout(safetyTimeoutId);
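The self-delimiting-JSON approach described in the stdin reader's header comment reduces to one idea: accumulate chunks and attempt a parse after each one, resolving as soon as a parse succeeds, with no EOF required. A minimal sketch (the real reader adds the TTY check, the Bun stdin guard, the parse delay, and the safety timeout; `makeChunkParser` is a hypothetical name):

```typescript
// Feed chunks in; get the parsed value back as soon as the accumulated
// buffer forms valid JSON, or undefined while it is still incomplete.
function makeChunkParser() {
  let buffer = '';
  return (chunk: string): unknown => {
    buffer += chunk;
    try {
      return JSON.parse(buffer.trim());
    } catch {
      return undefined; // incomplete so far, wait for the next chunk
    }
  };
}

const feed = makeChunkParser();
console.log(feed('{"sessionId":'));        // undefined (incomplete)
console.log(feed(' "abc", "cwd": "/x"}')); // parsed object with both keys
```

This works because JSON objects are self-delimiting in a way bare scalars are not: `{"n": 12` cannot parse early, whereas a top-level `12` could resolve before `123` finished arriving. Hook payloads are objects, so the technique is safe here.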
+5 -10
@@ -1,21 +1,17 @@
export interface NormalizedHookInput {
sessionId: string;
cwd: string;
platform?: string; // 'claude-code', 'cursor', 'gemini-cli', etc.
platform?: string;
prompt?: string;
toolName?: string;
toolInput?: unknown;
toolResponse?: unknown;
transcriptPath?: string;
// Cursor-specific fields
filePath?: string; // afterFileEdit
edits?: unknown[]; // afterFileEdit
// Platform-specific metadata (source, reason, trigger, mcp_context, etc.)
filePath?: string;
edits?: unknown[];
metadata?: Record<string, unknown>;
// Claude Code subagent identity — present only when hook fires inside a subagent.
// Main session has both undefined. Discriminator for subagent context.
agentId?: string; // Claude Code subagent agent_id (undefined in main session)
agentType?: string; // Claude Code subagent agent_type (undefined in main session)
agentId?: string;
agentType?: string;
}
export interface HookResult {
@@ -25,7 +21,6 @@ export interface HookResult {
hookEventName: string;
additionalContext: string;
permissionDecision?: 'allow' | 'deny';
permissionDecisionReason?: string;
updatedInput?: Record<string, unknown>;
};
systemMessage?: string;
-7
@@ -1,10 +1,3 @@
/**
* Standard hook response for all hooks.
* Tells Claude Code to continue processing and suppress the hook's output.
*
* Note: SessionStart uses context-hook.ts which constructs its own response
* with hookSpecificOutput for context injection.
*/
export const STANDARD_HOOK_RESPONSE = JSON.stringify({
continue: true,
suppressOutput: true
+1 -82
@@ -1,22 +1,4 @@
/**
* OpenCode Plugin for claude-mem
*
* Integrates claude-mem persistent memory with OpenCode (110k+ stars).
* Runs inside OpenCode's Bun-based plugin runtime.
*
* Plugin hooks:
* - tool.execute.after: Captures tool execution observations
* - Bus events: session.created, message.updated, session.compacted,
* file.edited, session.deleted (in-memory cleanup only; worker self-completes)
*
* Custom tool:
* - claude_mem_search: Search memory database from within OpenCode
*/
// ============================================================================
// Minimal type declarations for OpenCode Plugin SDK
// These match the runtime API provided by @opencode-ai/plugin
// ============================================================================
interface OpenCodeProject {
name?: string;
@@ -29,7 +11,7 @@ interface OpenCodePluginContext {
directory: string;
worktree: string;
serverUrl: URL;
$: unknown; // BunShell
$: unknown;
}
interface ToolExecuteAfterInput {
@@ -51,7 +33,6 @@ interface ToolDefinition {
execute: (args: Record<string, unknown>, context: unknown) => Promise<string>;
}
// Bus event payloads
interface SessionCreatedEvent {
event: {
sessionID: string;
@@ -90,17 +71,6 @@ interface SessionDeletedEvent {
};
}
// ============================================================================
// Constants
// ============================================================================
/**
* Resolve the worker port matching SettingsDefaultsManager's algorithm:
* process.env.CLAUDE_MEM_WORKER_PORT, else 37700 + (uid % 100).
* Required for multi-account isolation (#2101) and so this plugin talks to
* the same worker the rest of claude-mem (hooks, npx-cli) connects to.
* Inlined rather than imported to keep this OpenCode plugin standalone.
*/
function resolveWorkerPort(): string {
const fromEnv = process.env.CLAUDE_MEM_WORKER_PORT;
const parsed = fromEnv ? Number.parseInt(fromEnv.trim(), 10) : NaN;
@@ -114,39 +84,8 @@ function resolveWorkerPort(): string {
const WORKER_BASE_URL = `http://127.0.0.1:${resolveWorkerPort()}`;
const MAX_TOOL_RESPONSE_LENGTH = 1000;
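The port rule documented above (env override wins, else `37700 + (uid % 100)` so each local account gets a distinct default, per #2101) is small enough to sketch standalone; `defaultWorkerPort` is a hypothetical name for illustration, and a POSIX uid is assumed:

```typescript
// Env override if it parses to a sane port, else uid-derived default.
function defaultWorkerPort(envPort: string | undefined, uid: number): number {
  const parsed = envPort ? Number.parseInt(envPort.trim(), 10) : NaN;
  if (Number.isInteger(parsed) && parsed > 0 && parsed < 65536) return parsed;
  return 37700 + (uid % 100);
}

console.log(defaultWorkerPort(undefined, 501)); // 37701 (macOS first user)
console.log(defaultWorkerPort('4100', 501));    // 4100  (env override)
```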
// ============================================================================
// Worker HTTP Client
// ============================================================================
const JSON_HEADERS: Record<string, string> = { "Content-Type": "application/json" };
async function workerPost(
path: string,
body: Record<string, unknown>,
): Promise<Record<string, unknown> | null> {
let response: Response;
try {
response = await fetch(`${WORKER_BASE_URL}${path}`, {
method: "POST",
headers: JSON_HEADERS,
body: JSON.stringify(body),
});
} catch (error: unknown) {
// Gracefully handle ECONNREFUSED — worker may not be running
const message = error instanceof Error ? error.message : String(error);
if (!message.includes("ECONNREFUSED")) {
console.warn(`[claude-mem] Worker POST ${path} failed: ${message}`);
}
return null;
}
if (!response.ok) {
console.warn(`[claude-mem] Worker POST ${path} returned ${response.status}`);
return null;
}
return (await response.json()) as Record<string, unknown>;
}
function workerPostFireAndForget(
path: string,
body: Record<string, unknown>,
@@ -180,17 +119,12 @@ async function workerGetText(path: string): Promise<string | null> {
}
}
// ============================================================================
// Session tracking
// ============================================================================
const contentSessionIdsByOpenCodeSessionId = new Map<string, string>();
const MAX_SESSION_MAP_ENTRIES = 1000;
function getOrCreateContentSessionId(openCodeSessionId: string): string {
if (!contentSessionIdsByOpenCodeSessionId.has(openCodeSessionId)) {
// Evict oldest entries when the map exceeds the cap (Map preserves insertion order)
while (contentSessionIdsByOpenCodeSessionId.size >= MAX_SESSION_MAP_ENTRIES) {
const oldestKey = contentSessionIdsByOpenCodeSessionId.keys().next().value;
if (oldestKey !== undefined) {
@@ -207,19 +141,12 @@ function getOrCreateContentSessionId(openCodeSessionId: string): string {
return contentSessionIdsByOpenCodeSessionId.get(openCodeSessionId)!;
}
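The eviction loop above leans on a JavaScript Map guarantee: iteration follows insertion order, so the first key returned by `keys()` is always the oldest entry. A minimal sketch with the cap lowered to 3 for illustration:

```typescript
// Bounded map: before inserting, evict oldest entries until under the cap.
const CAP = 3;
const sessions = new Map<string, string>();

function remember(key: string, value: string): void {
  while (sessions.size >= CAP) {
    const oldest = sessions.keys().next().value; // insertion-order head
    if (oldest !== undefined) sessions.delete(oldest);
    else break;
  }
  sessions.set(key, value);
}

['a', 'b', 'c', 'd'].forEach((k) => remember(k, `session-${k}`));
console.log([...sessions.keys()]); // [ 'b', 'c', 'd' ] ('a' was evicted)
```

This gives O(1)-ish LRU-by-insertion behavior without pulling in a cache library, which matters for a standalone plugin that avoids external dependencies.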
// ============================================================================
// Plugin Entry Point
// ============================================================================
export const ClaudeMemPlugin = async (ctx: OpenCodePluginContext) => {
const projectName = ctx.project?.name || "opencode";
console.log(`[claude-mem] OpenCode plugin loading (project: ${projectName})`);
return {
// ------------------------------------------------------------------
// Direct interceptor hooks
// ------------------------------------------------------------------
hooks: {
tool: {
execute: {
@@ -229,7 +156,6 @@ export const ClaudeMemPlugin = async (ctx: OpenCodePluginContext) => {
) => {
const contentSessionId = getOrCreateContentSessionId(input.sessionID);
// Truncate long tool output
let toolResponseText = output.output || "";
if (toolResponseText.length > MAX_TOOL_RESPONSE_LENGTH) {
toolResponseText = toolResponseText.slice(0, MAX_TOOL_RESPONSE_LENGTH);
@@ -247,9 +173,6 @@ export const ClaudeMemPlugin = async (ctx: OpenCodePluginContext) => {
},
},
// ------------------------------------------------------------------
// Bus event handlers
// ------------------------------------------------------------------
event: (eventName: string, payload: unknown) => {
switch (eventName) {
case "session.created": {
@@ -267,7 +190,6 @@ export const ClaudeMemPlugin = async (ctx: OpenCodePluginContext) => {
case "message.updated": {
const { event } = payload as MessageUpdatedEvent;
// Only capture assistant messages as observations
if (event.role !== "assistant") break;
const contentSessionId = getOrCreateContentSessionId(event.sessionID);
@@ -322,9 +244,6 @@ export const ClaudeMemPlugin = async (ctx: OpenCodePluginContext) => {
}
},
// ------------------------------------------------------------------
// Custom tools
// ------------------------------------------------------------------
tool: {
claude_mem_search: {
description:
File diff suppressed because one or more lines are too long
+180
@@ -0,0 +1,180 @@
import { inflateRawSync } from 'zlib';
import { BANNER } from './banner-frames.js';
const HIDE_CURSOR = '\x1b[?25l';
const SHOW_CURSOR = '\x1b[?25h';
const CLEAR_SCREEN = '\x1b[2J\x1b[3J\x1b[H';
const RESET = '\x1b[0m';
const FRAME_SEP = '\x01';
function primaryColor(truecolor: boolean, brightness: number = 1.0): string {
if (!truecolor) return '\x1b[38;5;208m';
const r = Math.min(255, Math.round(230 * brightness));
const g = Math.min(255, Math.round(115 * brightness));
const b = Math.min(255, Math.round(70 * brightness));
return `\x1b[38;2;${r};${g};${b}m`;
}
function accentColor(truecolor: boolean, brightness: number = 1.0): string {
if (!truecolor) return '\x1b[38;5;215m';
const r = Math.min(255, Math.round(255 * brightness));
const g = Math.min(255, Math.round(180 * brightness));
const b = Math.min(255, Math.round(122 * brightness));
return `\x1b[38;2;${r};${g};${b}m`;
}
let frames: string[] | null = null;
function getFrames(): string[] {
if (frames) return frames;
// Banner is decorative — if frame payload decoding fails for any reason
// (corrupted bundle, mismatched zlib, etc.) we must not break the CLI.
// Fail open by returning an empty frame list; playBanner() bails on empty.
try {
const raw = inflateRawSync(Buffer.from(BANNER.compressed, 'base64')).toString('utf8');
frames = raw.split(FRAME_SEP).filter(Boolean);
} catch {
frames = [];
}
return frames;
}
function styleFrame(
frame: string,
truecolor: boolean,
brightness: number = 1.0,
): string {
const primary = primaryColor(truecolor, brightness);
const accent = accentColor(truecolor, brightness);
let out = primary;
let i = 0;
let inSpan = false;
while (i < frame.length) {
const ch = frame[i];
if (ch === '<') {
const isClosing = frame[i + 1] === '/';
while (i < frame.length && frame[i] !== '>') i++;
i++;
inSpan = !isClosing;
out += inSpan ? accent : primary;
continue;
}
out += ch;
i++;
}
return out + RESET;
}
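The single pass in `styleFrame` above treats angle-bracket tags as color toggles: an opening tag switches to the accent color, a closing tag back to primary, and the tags themselves are consumed. The same state machine can be shown with visible markers instead of ANSI escapes (`colorize` is a hypothetical standalone version):

```typescript
// Angle-bracket tags toggle between primary and accent; tags are dropped.
function colorize(frame: string, primary: string, accent: string): string {
  let out = primary;
  let inSpan = false;
  for (let i = 0; i < frame.length; i++) {
    if (frame[i] === '<') {
      const isClosing = frame[i + 1] === '/';
      while (i < frame.length && frame[i] !== '>') i++; // skip to tag end
      inSpan = !isClosing;
      out += inSpan ? accent : primary;
      continue; // for-loop i++ steps past the '>'
    }
    out += frame[i];
  }
  return out;
}

console.log(colorize('ab<x>cd</x>ef', '[P]', '[A]')); // [P]ab[A]cd[P]ef
```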
function detectTruecolor(): boolean {
return process.env.COLORTERM === 'truecolor' || process.env.COLORTERM === '24bit';
}
const WORDMARK_BUBBLE: readonly string[] = [
" _ _ ",
" ___| | __ _ _ _ __| | ___ _ __ ___ ___ _ __ ___ ",
" / __| |/ _` | | | |/ _` |/ _ \\_____| '_ ` _ \\ / _ \\ '_ ` _ \\ ",
"| (__| | (_| | |_| | (_| | __/_____| | | | | | __/ | | | | |",
" \\___|_|\\__,_|\\__,_|\\__,_|\\___| |_| |_| |_|\\___|_| |_| |_|",
] as const;
const BUBBLE_HEIGHT = WORDMARK_BUBBLE.length;
const BUBBLE_WIDTH = WORDMARK_BUBBLE[0].length;
const TAGLINE_GAP = 1;
const TOTAL_ROWS = BANNER.height + BUBBLE_HEIGHT + TAGLINE_GAP + 1;
function writeBubbleRow(rowIdx: number, colsRevealed: number): string {
const src = WORDMARK_BUBBLE[rowIdx];
const W = BANNER.width;
const visible = src.slice(0, Math.min(BUBBLE_WIDTH, colsRevealed)).padEnd(BUBBLE_WIDTH, ' ');
const pad = Math.max(0, Math.floor((W - BUBBLE_WIDTH) / 2));
return ' '.repeat(pad) + `\x1b[1;97m${visible}\x1b[0m` + ' '.repeat(Math.max(0, W - pad - BUBBLE_WIDTH));
}
function writeTaglineRow(text: string): string {
const W = BANNER.width;
const pad = Math.max(0, Math.floor((W - text.length) / 2));
return ' '.repeat(pad) + `\x1b[2;37m${text}\x1b[0m` + ' '.repeat(Math.max(0, W - pad - text.length));
}
export function isBannerEnabled(): boolean {
if (!process.stdout.isTTY) return false;
if (process.env.CI) return false;
if (process.env.CLAUDE_MEM_NO_BANNER) return false;
if (process.env.NO_COLOR) return false;
const cols = process.stdout.columns ?? 0;
return cols >= BANNER.width;
}
const sleep = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));
export async function playBanner(): Promise<void> {
if (!isBannerEnabled()) return;
const truecolor = detectTruecolor();
const allFrames = getFrames();
if (allFrames.length === 0) return;
let aborted = false;
const onResize = () => { aborted = true; };
process.stdout.on('resize', onResize);
process.stdout.write(CLEAR_SCREEN);
process.stdout.write(HIDE_CURSOR);
process.stdout.write('\n'.repeat(TOTAL_ROWS));
process.stdout.write(`\x1b[${TOTAL_ROWS}A`);
process.stdout.write('\x1b[s');
const blankRow = ' '.repeat(BANNER.width);
const writeFrame = (frameText: string, colsRevealed: number, tagline: string, brightness: number = 1.0) => {
process.stdout.write('\x1b[u');
process.stdout.write(styleFrame(frameText, truecolor, brightness));
process.stdout.write('\n');
for (let i = 0; i < BUBBLE_HEIGHT; i++) {
process.stdout.write(writeBubbleRow(i, colsRevealed));
process.stdout.write('\n');
}
for (let g = 0; g < TAGLINE_GAP; g++) {
process.stdout.write(blankRow);
process.stdout.write('\n');
}
process.stdout.write(writeTaglineRow(tagline));
};
try {
for (let i = 0; i < allFrames.length; i++) {
if (aborted) return;
writeFrame(allFrames[i], 0, '');
await sleep(BANNER.frameDelay);
}
const finalFrame = allFrames[allFrames.length - 1];
const TAGLINE = 'persistent memory across sessions';
const REVEAL_STEPS = 14;
for (let s = 1; s <= REVEAL_STEPS; s++) {
if (aborted) return;
const cols = Math.ceil(BUBBLE_WIDTH * (s / REVEAL_STEPS));
writeFrame(finalFrame, cols, '');
await sleep(45);
}
for (let s = 1; s <= 6; s++) {
if (aborted) return;
const chars = Math.ceil(TAGLINE.length * (s / 6));
writeFrame(finalFrame, BUBBLE_WIDTH, TAGLINE.slice(0, chars));
await sleep(33);
}
for (const brightness of [0.85, 0.95, 1.0]) {
if (aborted) return;
writeFrame(finalFrame, BUBBLE_WIDTH, TAGLINE, brightness);
await sleep(100);
}
await sleep(150);
} finally {
process.stdout.off('resize', onResize);
process.stdout.write(RESET);
process.stdout.write(SHOW_CURSOR);
process.stdout.write('\n');
}
}
+1 -49
@@ -1,45 +1,23 @@
/**
* IDE Auto-Detection
*
* Detects which AI coding IDEs / tools are installed on the system by
* probing known config directories and checking for binaries in PATH.
*
* Pure Node.js — no Bun APIs used.
*/
import { execSync } from 'child_process';
import { existsSync, readdirSync } from 'fs';
import { homedir } from 'os';
import { join } from 'path';
import { IS_WINDOWS } from '../utils/paths.js';
// ---------------------------------------------------------------------------
// IDE type and metadata
// ---------------------------------------------------------------------------
export interface IDEInfo {
/** Machine-readable identifier. */
id: string;
/** Human-readable label for display in prompts. */
label: string;
/** Whether the IDE was detected on this system. */
detected: boolean;
/** Whether claude-mem has implemented setup for this IDE. */
supported: boolean;
/** Short hint text shown in the multi-select. */
hint?: string;
}
// ---------------------------------------------------------------------------
// PATH helper
// ---------------------------------------------------------------------------
function isCommandInPath(command: string): boolean {
try {
const whichCommand = IS_WINDOWS ? 'where' : 'which';
execSync(`${whichCommand} ${command}`, { stdio: 'pipe' });
return true;
} catch (error: unknown) {
// Command not found in PATH — expected for non-installed IDEs
if (process.env.DEBUG) {
console.error(`[ide-detection] ${command} not in PATH:`, error instanceof Error ? error.message : String(error));
}
@@ -47,10 +25,6 @@ function isCommandInPath(command: string): boolean {
}
}
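The PATH probe above shells out to `which` on POSIX or `where` on Windows and treats a non-zero exit as "not installed". A hypothetical standalone version of the same idea:

```typescript
import { execSync } from 'child_process';

// execSync throws when the lookup command exits non-zero, i.e. the
// binary is not on PATH; stdio 'pipe' keeps the probe silent.
function commandExists(command: string): boolean {
  const probe = process.platform === 'win32' ? 'where' : 'which';
  try {
    execSync(`${probe} ${command}`, { stdio: 'pipe' });
    return true;
  } catch {
    return false;
  }
}

const hasGit = commandExists('git'); // environment-dependent
```

One caveat worth noting: callers should only pass trusted, hard-coded command names, since the probe interpolates into a shell command.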
// ---------------------------------------------------------------------------
// VS Code extension directory scanner
// ---------------------------------------------------------------------------
function hasVscodeExtension(extensionNameFragment: string): boolean {
const extensionsDirectory = join(homedir(), '.vscode', 'extensions');
if (!existsSync(extensionsDirectory)) return false;
@@ -63,15 +37,6 @@ function hasVscodeExtension(extensionNameFragment: string): boolean {
}
}
// ---------------------------------------------------------------------------
// Detection map
// ---------------------------------------------------------------------------
/**
* Detect all known IDEs and return an array of `IDEInfo` objects.
* Each entry indicates whether the IDE was found and whether claude-mem
* currently supports setting it up.
*/
export function detectInstalledIDEs(): IDEInfo[] {
const home = homedir();
@@ -79,7 +44,7 @@ export function detectInstalledIDEs(): IDEInfo[] {
{
id: 'claude-code',
label: 'Claude Code',
detected: existsSync(join(home, '.claude')),
detected: isCommandInPath('claude'),
supported: true,
hint: 'recommended',
},
@@ -146,13 +111,6 @@ export function detectInstalledIDEs(): IDEInfo[] {
supported: true,
hint: 'MCP-based integration',
},
{
id: 'crush',
label: 'Crush',
detected: isCommandInPath('crush'),
supported: true,
hint: 'MCP-based integration',
},
{
id: 'roo-code',
label: 'Roo Code',
@@ -170,9 +128,3 @@ export function detectInstalledIDEs(): IDEInfo[] {
];
}
/**
* Return only the IDEs that were detected on this system.
*/
export function getDetectedIDEs(): IDEInfo[] {
return detectInstalledIDEs().filter((ide) => ide.detected);
}
File diff suppressed because it is too large
-51
@@ -1,11 +1,3 @@
/**
* Runtime command routing for `npx claude-mem start|stop|restart|status|search|transcript`.
*
* These commands delegate to the installed plugin's worker-service.cjs via Bun,
* or hit the worker's HTTP API directly (for `search`).
*
* Pure Node.js — no Bun APIs used.
*/
import { spawn } from 'child_process';
import { existsSync } from 'fs';
import { join } from 'path';
@@ -14,10 +6,6 @@ import { resolveBunBinaryPath } from '../utils/bun-resolver.js';
import { isPluginInstalled, marketplaceDirectory } from '../utils/paths.js';
import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js';
// ---------------------------------------------------------------------------
// Installation guard
// ---------------------------------------------------------------------------
function ensureInstalledOrExit(): void {
if (!isPluginInstalled()) {
console.error(pc.red('claude-mem is not installed.'));
@@ -26,10 +14,6 @@ function ensureInstalledOrExit(): void {
}
}
// ---------------------------------------------------------------------------
// Bun guard
// ---------------------------------------------------------------------------
function resolveBunOrExit(): string {
const bunPath = resolveBunBinaryPath();
if (!bunPath) {
@@ -41,18 +25,10 @@ function resolveBunOrExit(): string {
return bunPath;
}
// ---------------------------------------------------------------------------
// Worker-service path
// ---------------------------------------------------------------------------
function workerServiceScriptPath(): string {
return join(marketplaceDirectory(), 'plugin', 'scripts', 'worker-service.cjs');
}
// ---------------------------------------------------------------------------
// Spawn helper
// ---------------------------------------------------------------------------
function spawnBunWorkerCommand(command: string, extraArgs: string[] = []): void {
ensureInstalledOrExit();
const bunPath = resolveBunOrExit();
@@ -82,10 +58,6 @@ function spawnBunWorkerCommand(command: string, extraArgs: string[] = []): void
});
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
export function runStartCommand(): void {
spawnBunWorkerCommand('start');
}
@@ -102,12 +74,6 @@ export function runStatusCommand(): void {
spawnBunWorkerCommand('status');
}
/**
* Stamp merged-worktree provenance on observations/summaries and keep Chroma
* metadata in lockstep. Delegates to the worker-service.cjs `adopt` subcommand
* so adoption runs in Bun (needed for bun:sqlite) while preserving the user's
* working directory — that's what the engine uses to locate the parent repo.
*/
export function runAdoptCommand(extraArgs: string[] = []): void {
ensureInstalledOrExit();
const bunPath = resolveBunOrExit();
@@ -119,8 +85,6 @@ export function runAdoptCommand(extraArgs: string[] = []): void {
process.exit(1);
}
// Pass user's cwd explicitly via --cwd because we override cwd on spawn to
// marketplaceDirectory() (required for the worker's own file resolution).
const userCwd = process.cwd();
const args = [workerScript, 'adopt', '--cwd', userCwd, ...extraArgs];
@@ -140,18 +104,10 @@ export function runAdoptCommand(extraArgs: string[] = []): void {
});
}
/**
* Run the one-time v12.4.3 pollution cleanup, or preview it via --dry-run.
* Delegates to the worker-service.cjs `cleanup` subcommand so the scan and
* (optional) deletion run in Bun (needed for bun:sqlite). (#2126 item 5)
*/
export function runCleanupCommand(extraArgs: string[] = []): void {
spawnBunWorkerCommand('cleanup', extraArgs);
}
/**
* Search the worker API at `GET /api/search?query=<query>`.
*/
export async function runSearchCommand(queryParts: string[]): Promise<void> {
ensureInstalledOrExit();
@@ -161,9 +117,6 @@ export async function runSearchCommand(queryParts: string[]): Promise<void> {
process.exit(1);
}
// Resolve port via SettingsDefaultsManager so CLAUDE_MEM_WORKER_PORT env
// takes priority and the per-UID default (37700 + uid % 100) is used
// otherwise. Required for multi-account isolation (#2101).
const workerPort = SettingsDefaultsManager.get('CLAUDE_MEM_WORKER_PORT');
const searchUrl = `http://127.0.0.1:${workerPort}/api/search?query=${encodeURIComponent(query)}`;
@@ -208,9 +161,6 @@ export async function runSearchCommand(queryParts: string[]): Promise<void> {
}
}
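The per-UID default mentioned in the comment above (37700 + uid % 100) keeps workers for different OS accounts on distinct ports. A minimal sketch of just that fallback computation; the real resolution lives in SettingsDefaultsManager and checks the CLAUDE_MEM_WORKER_PORT environment variable first, which this sketch omits:

```typescript
// Hypothetical helper illustrating the per-UID default port formula.
// The actual resolver honors CLAUDE_MEM_WORKER_PORT before falling
// back to this computation (multi-account isolation, #2101).
function defaultWorkerPort(uid: number): number {
  return 37700 + (uid % 100);
}
```

Two accounts with different UIDs land on distinct ports within 37700-37799, so their workers never collide.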
/**
* Start the transcript watcher via Bun.
*/
export function runTranscriptWatchCommand(): void {
ensureInstalledOrExit();
const bunPath = resolveBunOrExit();
@@ -223,7 +173,6 @@ export function runTranscriptWatchCommand(): void {
);
if (!existsSync(transcriptWatcherPath)) {
// Fall back to worker-service with transcript subcommand
spawnBunWorkerCommand('transcript', ['watch']);
return;
}
+1 -58
@@ -1,12 +1,3 @@
/**
* Uninstall command for `npx claude-mem uninstall`.
*
* Removes the plugin from the marketplace directory, cache, plugin
* registrations, and Claude settings. Optionally cleans up IDE-specific
* configurations.
*
* Pure Node.js — no Bun APIs used.
*/
import * as p from '@clack/prompts';
import pc from 'picocolors';
import { existsSync, readFileSync, readdirSync, rmSync, writeFileSync } from 'fs';
@@ -25,10 +16,6 @@ import { readJsonSafe } from '../../utils/json-utils.js';
import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js';
import { shutdownWorkerAndWait } from '../../services/install/shutdown-helper.js';
// ---------------------------------------------------------------------------
// Cleanup helpers
// ---------------------------------------------------------------------------
function removeMarketplaceDirectory(): boolean {
const marketplaceDir = marketplaceDirectory();
if (existsSync(marketplaceDir)) {
@@ -63,14 +50,6 @@ function removeFromInstalledPlugins(): void {
}
}
/**
* Strip the legacy `claude-mem` shell alias/function from common shell rc files
* (#2054). The alias used to be added by `installCLI()` in smart-install.js;
* that function was deleted, but existing users still have the line. This is
* a one-time best-effort cleanup — idempotent (no-op if the line is absent),
* and safely matches only lines that BEGIN with `alias claude-mem=` or
* `function claude-mem` to avoid mangling unrelated code.
*/
function stripLegacyClaudeMemAlias(): void {
const home = homedir();
const candidateFiles = [
@@ -79,9 +58,6 @@ function stripLegacyClaudeMemAlias(): void {
join(home, 'Documents', 'PowerShell', 'Microsoft.PowerShell_profile.ps1'),
];
// Only strip simple aliases. A function declaration would span multiple
// lines and can't be safely removed by a line filter — leave it for the
// user to remove manually.
const aliasLineRegex = /^\s*alias\s+claude-mem\s*=/;
for (const filePath of candidateFiles) {
@@ -95,7 +71,7 @@ function stripLegacyClaudeMemAlias(): void {
}
const lines = content.split('\n');
const filtered = lines.filter((line) => !aliasLineRegex.test(line));
if (filtered.length === lines.length) continue; // no match — leave file untouched
if (filtered.length === lines.length) continue;
try {
writeFileSync(filePath, filtered.join('\n'));
console.error(`Removed legacy claude-mem alias from ${filePath}`);
@@ -113,25 +89,10 @@ function removeFromClaudeSettings(): void {
}
}
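The aliasLineRegex used by stripLegacyClaudeMemAlias only matches lines that begin, after optional whitespace, with `alias claude-mem=`, which is what makes the cleanup safe against unrelated code. A quick sketch of its behavior (the example rc lines are invented for illustration):

```typescript
// Same pattern as aliasLineRegex in stripLegacyClaudeMemAlias.
const aliasLineRegex = /^\s*alias\s+claude-mem\s*=/;

const plain = aliasLineRegex.test('alias claude-mem="npx claude-mem"');      // matched, removed
const indented = aliasLineRegex.test('  alias claude-mem=$HOME/bin/cm');     // matched, removed
const commented = aliasLineRegex.test('# alias claude-mem="npx claude-mem"');// '#' blocks ^\s*, kept
const similar = aliasLineRegex.test('alias claude-memo=x');                  // different name, kept
```

Commented-out copies and similarly named aliases survive, so the line filter never mangles anything but the exact legacy alias.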
/**
* Best-effort cleanup of stray claude-mem residue (#2106 item 4) that
* accumulates outside of `~/.claude/plugins/marketplaces/thedotmack/`:
*
* - `~/.npm/_npx/<hash>/node_modules/claude-mem` (npx install caches)
* - `~/.cache/claude-cli-nodejs/<project>/mcp-logs-plugin-claude-mem-*`
* - `~/.claude/plugins/data/claude-mem-thedotmack/`
*
* Each step is wrapped in its own try/catch — a failure on one path
* (e.g. permissions denied on a single npx hash dir) must not abort
* the rest. We log the failure and continue.
*
* Returns the count of paths actually removed (purely for reporting).
*/
function removeStrayClaudeMemPaths(): number {
const home = homedir();
let removedCount = 0;
// 1. ~/.npm/_npx/*/node_modules/claude-mem
const npxRoot = join(home, '.npm', '_npx');
if (existsSync(npxRoot)) {
let hashDirs: string[] = [];
@@ -152,7 +113,6 @@ function removeStrayClaudeMemPaths(): number {
}
}
// 2. ~/.cache/claude-cli-nodejs/*/mcp-logs-plugin-claude-mem-*
const cacheRoot = join(home, '.cache', 'claude-cli-nodejs');
if (existsSync(cacheRoot)) {
let projectDirs: string[] = [];
@@ -183,7 +143,6 @@ function removeStrayClaudeMemPaths(): number {
}
}
// 3. ~/.claude/plugins/data/claude-mem-thedotmack/
const pluginDataDir = join(home, '.claude', 'plugins', 'data', 'claude-mem-thedotmack');
if (existsSync(pluginDataDir)) {
try {
@@ -197,17 +156,12 @@ function removeStrayClaudeMemPaths(): number {
return removedCount;
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
export async function runUninstallCommand(): Promise<void> {
p.intro(pc.bgRed(pc.white(' claude-mem uninstall ')));
if (!isPluginInstalled()) {
p.log.warn('claude-mem does not appear to be installed.');
// Still offer to clean up partial state
if (process.stdin.isTTY) {
const shouldCleanup = await p.confirm({
message: 'Clean up any remaining registration data anyway?',
@@ -234,14 +188,6 @@ export async function runUninstallCommand(): Promise<void> {
}
}
// Stop the worker and wait for it to exit before deleting files.
// Resolve port via SettingsDefaultsManager so CLAUDE_MEM_WORKER_PORT env
// takes priority and the per-UID default (37700 + uid % 100) is used
// otherwise. Required for multi-account isolation (#2101).
//
// The worker's graceful shutdown also stops chroma-mcp via
// GracefulShutdown -> ChromaMcpManager.stop(), so this single call
// cascades to the chroma-mcp subprocess as well.
const workerPort = SettingsDefaultsManager.get('CLAUDE_MEM_WORKER_PORT');
try {
const result = await shutdownWorkerAndWait(workerPort, 10000);
@@ -249,7 +195,6 @@ export async function runUninstallCommand(): Promise<void> {
p.log.info('Worker service stopped.');
}
} catch (error: unknown) {
// shutdownWorkerAndWait swallows its own errors, but guard anyway.
console.warn('[uninstall] Worker shutdown attempt failed:', error instanceof Error ? error.message : String(error));
}
@@ -311,7 +256,6 @@ export async function runUninstallCommand(): Promise<void> {
},
]);
// Remove IDE-specific hooks and config (best-effort, each is independent)
const ideCleanups: Array<{ label: string; fn: () => Promise<number> | number }> = [
{ label: 'Gemini CLI hooks', fn: async () => {
const { uninstallGeminiCliHooks } = await import('../../services/integrations/GeminiCliHooksInstaller.js');
@@ -342,7 +286,6 @@ export async function runUninstallCommand(): Promise<void> {
p.log.info(`${label}: removed.`);
}
} catch (error: unknown) {
// IDE not configured or uninstaller errored — log and continue
console.warn(`[uninstall] ${label} cleanup failed:`, error instanceof Error ? error.message : String(error));
}
}
+49 -55
@@ -1,36 +1,17 @@
/**
* NPX CLI entry point for claude-mem.
*
* Usage:
* npx claude-mem → interactive install
* npx claude-mem install → interactive install
* npx claude-mem install --ide <id> → direct IDE setup
* npx claude-mem update → update to latest version
* npx claude-mem uninstall → remove plugin and IDE configs
* npx claude-mem version → print version
* npx claude-mem start → start worker service
* npx claude-mem stop → stop worker service
* npx claude-mem restart → restart worker service
* npx claude-mem status → show worker status
* npx claude-mem search <query> → search observations
* npx claude-mem transcript watch → start transcript watcher
*
* This file is pure Node.js — Bun is NOT required for install commands.
* Runtime commands (`start`, `stop`, etc.) delegate to Bun via the installed plugin.
*/
import pc from 'picocolors';
import { readPluginVersion } from './utils/paths.js';
// ---------------------------------------------------------------------------
// Argument parsing
// ---------------------------------------------------------------------------
import type { InstallOptions } from './commands/install.js';
const args = process.argv.slice(2);
const command = args[0]?.toLowerCase() ?? '';
// ---------------------------------------------------------------------------
// Help text
// ---------------------------------------------------------------------------
const firstArg = args[0]?.toLowerCase() ?? '';
// If the first token is a flag (e.g. `npx claude-mem --provider claude`),
// treat the invocation as `install` with those flags. Help/version flags are
// handled directly so they don't get swallowed by the install path.
const HELP_OR_VERSION_FLAGS = new Set(['-h', '--help', '-v', '--version']);
const command =
firstArg.startsWith('-') && !HELP_OR_VERSION_FLAGS.has(firstArg)
? 'install'
: firstArg;
function printHelp(): void {
const version = readPluginVersion();
@@ -42,6 +23,10 @@ ${pc.bold('Install Commands')} (no Bun required):
${pc.cyan('npx claude-mem')} Interactive install
${pc.cyan('npx claude-mem install')} Interactive install
${pc.cyan('npx claude-mem install --ide <id>')} Install for specific IDE
${pc.cyan('npx claude-mem install --provider claude|gemini|openrouter')} Set LLM provider non-interactively
${pc.cyan('npx claude-mem install --model <id>')} Set Claude model (when provider=claude)
${pc.cyan('npx claude-mem install --no-auto-start')} Skip worker auto-start at the end
${pc.cyan('npx claude-mem repair')} Repair runtime (re-runs Bun/uv setup and bun install in plugin cache)
${pc.cyan('npx claude-mem update')} Update to latest version
${pc.cyan('npx claude-mem uninstall')} Remove plugin and configs
${pc.cyan('npx claude-mem version')} Print version
@@ -59,34 +44,52 @@ ${pc.bold('Runtime Commands')} (requires Bun, delegates to installed plugin):
${pc.bold('IDE Identifiers')}:
claude-code, cursor, gemini-cli, opencode, openclaw,
windsurf, codex-cli, copilot-cli, antigravity, goose,
crush, roo-code, warp
roo-code, warp
`);
}
// ---------------------------------------------------------------------------
// Command routing
// ---------------------------------------------------------------------------
function readFlag(argv: string[], name: string): string | undefined {
const i = argv.indexOf(name);
if (i === -1) return undefined;
const next = argv[i + 1];
// Reject missing or flag-shaped values so e.g. `--model --no-auto-start`
// doesn't silently treat `--no-auto-start` as the model name.
if (next === undefined || next.startsWith('-')) {
console.error(pc.red(`Flag ${name} requires a value.`));
process.exit(1);
}
return next;
}
function parseInstallOptions(argv: string[]): InstallOptions {
const provider = readFlag(argv, '--provider');
if (provider !== undefined && provider !== 'claude' && provider !== 'gemini' && provider !== 'openrouter') {
console.error(pc.red(`Unknown --provider: ${provider}. Allowed: claude, gemini, openrouter`));
process.exit(1);
}
return {
ide: readFlag(argv, '--ide'),
provider: provider as InstallOptions['provider'],
model: readFlag(argv, '--model'),
noAutoStart: argv.includes('--no-auto-start'),
};
}
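The readFlag guard above is what prevents `--model --no-auto-start` from silently treating `--no-auto-start` as the model name. A simplified, hypothetical variant that throws instead of calling process.exit(1), purely so the behavior can be shown in isolation:

```typescript
// Simplified sketch of readFlag: undefined when the flag is absent,
// throws (rather than process.exit(1)) when the value is missing or
// flag-shaped.
function readFlagSketch(argv: string[], name: string): string | undefined {
  const i = argv.indexOf(name);
  if (i === -1) return undefined;
  const next = argv[i + 1];
  if (next === undefined || next.startsWith('-')) {
    throw new Error(`Flag ${name} requires a value.`);
  }
  return next;
}
```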
async function main(): Promise<void> {
switch (command) {
// -- No command: default to install ------------------------------------
case '': {
const { runInstallCommand } = await import('./commands/install.js');
await runInstallCommand();
break;
}
// -- Install -----------------------------------------------------------
case '':
case 'install': {
const ideIndex = args.indexOf('--ide');
const ideValue = ideIndex !== -1 ? args[ideIndex + 1] : undefined;
const { runInstallCommand } = await import('./commands/install.js');
await runInstallCommand({ ide: ideValue });
await runInstallCommand(parseInstallOptions(args));
break;
}
case 'repair': {
const { runRepairCommand } = await import('./commands/install.js');
await runRepairCommand();
break;
}
// -- Update (alias for install — overwrite with latest) ----------------
case 'update':
case 'upgrade': {
const { runInstallCommand } = await import('./commands/install.js');
@@ -94,7 +97,6 @@ async function main(): Promise<void> {
break;
}
// -- Uninstall ---------------------------------------------------------
case 'uninstall':
case 'remove': {
const { runUninstallCommand } = await import('./commands/uninstall.js');
@@ -102,7 +104,6 @@ async function main(): Promise<void> {
break;
}
// -- Version -----------------------------------------------------------
case 'version':
case '--version':
case '-v': {
@@ -110,7 +111,6 @@ async function main(): Promise<void> {
break;
}
// -- Help --------------------------------------------------------------
case 'help':
case '--help':
case '-h': {
@@ -118,7 +118,6 @@ async function main(): Promise<void> {
break;
}
// -- Runtime: start / stop / restart / status --------------------------
case 'start': {
const { runStartCommand } = await import('./commands/runtime.js');
runStartCommand();
@@ -140,28 +139,24 @@ async function main(): Promise<void> {
break;
}
// -- Search ------------------------------------------------------------
case 'search': {
const { runSearchCommand } = await import('./commands/runtime.js');
await runSearchCommand(args.slice(1));
break;
}
// -- Adopt merged worktrees -------------------------------------------
case 'adopt': {
const { runAdoptCommand } = await import('./commands/runtime.js');
runAdoptCommand(args.slice(1));
break;
}
// -- One-time v12.4.3 cleanup ------------------------------------------
case 'cleanup': {
const { runCleanupCommand } = await import('./commands/runtime.js');
runCleanupCommand(args.slice(1));
break;
}
// -- Transcript --------------------------------------------------------
case 'transcript': {
const subCommand = args[1]?.toLowerCase();
if (subCommand === 'watch') {
@@ -175,7 +170,6 @@ async function main(): Promise<void> {
break;
}
// -- Unknown -----------------------------------------------------------
default: {
console.error(pc.red(`Unknown command: ${command}`));
console.error(`Run ${pc.bold('npx claude-mem --help')} for usage information.`);
+271
@@ -0,0 +1,271 @@
import { existsSync, readFileSync, writeFileSync } from 'fs';
import { execSync, spawnSync } from 'child_process';
import { join } from 'path';
import { homedir } from 'os';
const IS_WINDOWS = process.platform === 'win32';
const BUN_COMMON_PATHS = IS_WINDOWS
? [join(homedir(), '.bun', 'bin', 'bun.exe')]
: [join(homedir(), '.bun', 'bin', 'bun'), '/usr/local/bin/bun', '/opt/homebrew/bin/bun'];
const UV_COMMON_PATHS = IS_WINDOWS
? [join(homedir(), '.local', 'bin', 'uv.exe'), join(homedir(), '.cargo', 'bin', 'uv.exe')]
: [join(homedir(), '.local', 'bin', 'uv'), join(homedir(), '.cargo', 'bin', 'uv'), '/usr/local/bin/uv', '/opt/homebrew/bin/uv'];
interface MarkerSchema {
version: string;
bun?: string;
uv?: string;
installedAt?: string;
}
function markerPath(targetDir: string): string {
return join(targetDir, '.install-version');
}
function getBunPath(): string | null {
try {
const result = spawnSync('bun', ['--version'], {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'pipe'],
shell: IS_WINDOWS,
});
if (result.status === 0) return 'bun';
} catch {
// Not in PATH
}
return BUN_COMMON_PATHS.find(existsSync) || null;
}
function isBunInstalled(): boolean {
return getBunPath() !== null;
}
function getBunVersion(): string | null {
const bunPath = getBunPath();
if (!bunPath) return null;
try {
const result = spawnSync(bunPath, ['--version'], {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'pipe'],
shell: IS_WINDOWS,
});
return result.status === 0 ? result.stdout.trim() : null;
} catch {
return null;
}
}
function getUvPath(): string | null {
try {
const result = spawnSync('uv', ['--version'], {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'pipe'],
shell: IS_WINDOWS,
});
if (result.status === 0) return 'uv';
} catch {
// Not in PATH
}
return UV_COMMON_PATHS.find(existsSync) || null;
}
function isUvInstalled(): boolean {
return getUvPath() !== null;
}
function getUvVersion(): string | null {
const uvPath = getUvPath();
if (!uvPath) return null;
try {
const result = spawnSync(uvPath, ['--version'], {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'pipe'],
shell: IS_WINDOWS,
});
return result.status === 0 ? result.stdout.trim() : null;
} catch {
return null;
}
}
function describeExecError(error: unknown): string {
if (error && typeof error === 'object') {
const e = error as { message?: string; stdout?: Buffer | string; stderr?: Buffer | string };
const parts: string[] = [];
if (e.message) parts.push(e.message);
const stderr = e.stderr ? e.stderr.toString().trim() : '';
if (stderr) parts.push(`stderr: ${stderr}`);
const stdout = e.stdout ? e.stdout.toString().trim() : '';
if (!stderr && stdout) parts.push(`stdout: ${stdout}`);
return parts.join('\n');
}
return String(error);
}
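describeExecError prefers the error message plus stderr, and falls back to stdout only when stderr is empty, which matters because execSync failures often carry the useful detail in stderr. A self-contained copy of the helper, exercised on a typical child_process failure shape (the sample error object is invented):

```typescript
// Copy of describeExecError from above, for illustration only.
function describeExecError(error: unknown): string {
  if (error && typeof error === 'object') {
    const e = error as { message?: string; stdout?: Buffer | string; stderr?: Buffer | string };
    const parts: string[] = [];
    if (e.message) parts.push(e.message);
    const stderr = e.stderr ? e.stderr.toString().trim() : '';
    if (stderr) parts.push(`stderr: ${stderr}`);
    const stdout = e.stdout ? e.stdout.toString().trim() : '';
    if (!stderr && stdout) parts.push(`stdout: ${stdout}`); // stdout only when stderr is empty
    return parts.join('\n');
  }
  return String(error);
}

const msg = describeExecError({ message: 'Command failed', stderr: 'bun: not found\n' });
// msg === 'Command failed\nstderr: bun: not found'
```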
function installBun(): void {
try {
if (IS_WINDOWS) {
execSync('powershell -c "irm bun.sh/install.ps1 | iex"', {
stdio: 'pipe',
shell: process.env.ComSpec ?? 'cmd.exe',
});
} else {
execSync('curl -fsSL https://bun.sh/install | bash', {
stdio: 'pipe',
shell: '/bin/bash',
});
}
if (!isBunInstalled()) {
throw new Error(
'Bun installation completed but binary not found. Please restart your terminal and try again.',
);
}
} catch (error) {
const manualInstructions = IS_WINDOWS
? ' - winget install Oven-sh.Bun\n - Or: powershell -c "irm bun.sh/install.ps1 | iex"'
: ' - curl -fsSL https://bun.sh/install | bash\n - Or: brew install oven-sh/bun/bun';
throw new Error(
`Failed to install Bun. Please install manually:\n${manualInstructions}\nThen restart your terminal and try again.\n` +
`Underlying error: ${describeExecError(error)}`,
);
}
}
function installUv(): void {
try {
if (IS_WINDOWS) {
execSync('powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"', {
stdio: 'pipe',
shell: process.env.ComSpec ?? 'cmd.exe',
});
} else {
execSync('curl -LsSf https://astral.sh/uv/install.sh | sh', {
stdio: 'pipe',
shell: '/bin/bash',
});
}
if (!isUvInstalled()) {
throw new Error(
'uv installation completed but binary not found. Please restart your terminal and try again.',
);
}
} catch (error) {
const manualInstructions = IS_WINDOWS
? ' - winget install astral-sh.uv\n - Or: powershell -c "irm https://astral.sh/uv/install.ps1 | iex"'
: ' - curl -LsSf https://astral.sh/uv/install.sh | sh\n - Or: brew install uv (macOS)';
throw new Error(
`Failed to install uv. Please install manually:\n${manualInstructions}\nThen restart your terminal and try again.\n` +
`Underlying error: ${describeExecError(error)}`,
);
}
}
function verifyCriticalModules(targetDir: string): void {
const pkg = JSON.parse(readFileSync(join(targetDir, 'package.json'), 'utf-8'));
const dependencies = Object.keys(pkg.dependencies || {});
const missing: string[] = [];
for (const dep of dependencies) {
const modulePath = join(targetDir, 'node_modules', ...dep.split('/'));
if (!existsSync(modulePath)) {
missing.push(dep);
}
}
if (missing.length > 0) {
throw new Error(`Post-install check failed: missing modules: ${missing.join(', ')}`);
}
}
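A subtle detail in verifyCriticalModules above: spreading `dep.split('/')` into join means scoped packages like `@clack/prompts` resolve to the nested `node_modules/@clack/prompts` directory rather than a single literal path segment. A sketch of just that mapping (the helper name is invented):

```typescript
import { join } from 'path';

// Hypothetical extraction of the path mapping used by
// verifyCriticalModules: scoped deps become nested directories.
function moduleDirFor(targetDir: string, dep: string): string {
  return join(targetDir, 'node_modules', ...dep.split('/'));
}
```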
export async function ensureBun(): Promise<{ bunPath: string; version: string }> {
if (!isBunInstalled()) {
installBun();
}
const bunPath = getBunPath();
if (!bunPath) {
throw new Error('Bun executable not found after install attempt.');
}
const version = getBunVersion();
if (!version) {
throw new Error('Bun installed but version probe failed.');
}
return { bunPath, version };
}
export async function ensureUv(): Promise<{ uvPath: string; version: string }> {
if (!isUvInstalled()) {
installUv();
}
const uvPath = getUvPath();
if (!uvPath) {
throw new Error('uv executable not found after install attempt.');
}
const version = getUvVersion();
if (!version) {
throw new Error('uv installed but version probe failed.');
}
return { uvPath, version };
}
export async function installPluginDependencies(targetDir: string, bunPath: string): Promise<void> {
if (!existsSync(join(targetDir, 'package.json'))) {
throw new Error(`installPluginDependencies: no package.json at ${targetDir}`);
}
const bunCmd = IS_WINDOWS && bunPath.includes(' ') ? `"${bunPath}"` : bunPath;
try {
execSync(`${bunCmd} install`, {
cwd: targetDir,
stdio: 'pipe',
...(IS_WINDOWS ? { shell: process.env.ComSpec ?? 'cmd.exe' } : {}),
});
} catch (error) {
throw new Error(`bun install failed in ${targetDir}\n${describeExecError(error)}`);
}
verifyCriticalModules(targetDir);
}
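The bunCmd expression in installPluginDependencies quotes the Bun path only on Windows and only when it contains a space, since an unquoted `C:\Program Files\...` path would split at the space under cmd.exe. A pure sketch of that rule (parameterized on the platform flag so it can be shown on any OS):

```typescript
// Hypothetical extraction of the quoting rule from
// installPluginDependencies: quote only when Windows AND the path
// contains a space; POSIX shells get the raw path.
function quoteBunCmd(bunPath: string, isWindows: boolean): string {
  return isWindows && bunPath.includes(' ') ? `"${bunPath}"` : bunPath;
}
```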
export function readInstallMarker(targetDir: string): MarkerSchema | null {
const path = markerPath(targetDir);
if (!existsSync(path)) return null;
try {
return JSON.parse(readFileSync(path, 'utf-8')) as MarkerSchema;
} catch {
return null;
}
}
export function writeInstallMarker(
targetDir: string,
version: string,
bunVersion: string,
uvVersion: string,
): void {
const payload: MarkerSchema = {
version,
bun: bunVersion,
uv: uvVersion,
installedAt: new Date().toISOString(),
};
writeFileSync(markerPath(targetDir), JSON.stringify(payload));
}
export function isInstallCurrent(targetDir: string, expectedVersion: string): boolean {
if (!existsSync(join(targetDir, 'node_modules'))) return false;
const marker = readInstallMarker(targetDir);
if (!marker) return false;
if (marker.version !== expectedVersion) return false;
const currentBun = getBunVersion();
if (currentBun && marker.bun && currentBun !== marker.bun) return false;
return true;
}
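The marker logic above treats an install as stale when either the recorded plugin version or the recorded Bun version drifts, while tolerating a missing Bun version on either side. An in-memory sketch of that staleness check (simplified: no filesystem, no node_modules probe, which the real isInstallCurrent also requires):

```typescript
// Hypothetical in-memory version of the isInstallCurrent comparison.
interface MarkerSketch { version: string; bun?: string; }

function markerIsCurrent(
  marker: MarkerSketch | null,
  expectedVersion: string,
  currentBun: string | null,
): boolean {
  if (!marker) return false;                                      // no marker: stale
  if (marker.version !== expectedVersion) return false;           // plugin version drift
  if (currentBun && marker.bun && currentBun !== marker.bun) return false; // Bun drift
  return true;                                                    // unknown Bun is tolerated
}
```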
+1 -24
@@ -1,21 +1,9 @@
/**
* Bun binary resolution utility.
*
* Extracted from `plugin/scripts/bun-runner.js` so that the NPX CLI
* can locate Bun without duplicating the search logic.
*
* Pure Node.js — no Bun APIs used.
*/
import { spawnSync } from 'child_process';
import { existsSync } from 'fs';
import { homedir } from 'os';
import { join } from 'path';
import { IS_WINDOWS } from './paths.js';
/**
* Well-known locations where Bun might be installed, beyond PATH.
* Order matches the search priority in bun-runner.js and smart-install.js.
*/
function bunCandidatePaths(): string[] {
if (IS_WINDOWS) {
return [
@@ -32,17 +20,7 @@ function bunCandidatePaths(): string[] {
];
}
/**
* Attempt to locate the Bun executable.
*
* 1. Check PATH via `which` / `where`.
* 2. Probe well-known installation directories.
*
* Returns the absolute path to the binary, `'bun'` if it is in PATH,
* or `null` if Bun cannot be found.
*/
export function resolveBunBinaryPath(): string | null {
// Try PATH first
const whichCommand = IS_WINDOWS ? 'where' : 'which';
const pathCheck = spawnSync(whichCommand, ['bun'], {
encoding: 'utf-8',
@@ -51,10 +29,9 @@ export function resolveBunBinaryPath(): string | null {
});
if (pathCheck.status === 0 && pathCheck.stdout.trim()) {
return 'bun'; // Available in PATH — use short name
return 'bun';
}
// Probe known install locations
for (const candidatePath of bunCandidatePaths()) {
if (existsSync(candidatePath)) {
return candidatePath;
-64
@@ -1,78 +1,40 @@
/**
* Shared path utilities for the NPX CLI.
*
* All platform-specific path logic is centralized here so that every command
* resolves directories in exactly the same way, regardless of OS.
*/
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'fs';
import { homedir } from 'os';
import { dirname, join } from 'path';
import { fileURLToPath } from 'url';
// ---------------------------------------------------------------------------
// Platform detection
// ---------------------------------------------------------------------------
export const IS_WINDOWS = process.platform === 'win32';
// ---------------------------------------------------------------------------
// Core paths
// ---------------------------------------------------------------------------
/** Root of the Claude Code config directory. */
export function claudeConfigDirectory(): string {
return process.env.CLAUDE_CONFIG_DIR || join(homedir(), '.claude');
}
/** Marketplace install directory for thedotmack. */
export function marketplaceDirectory(): string {
return join(claudeConfigDirectory(), 'plugins', 'marketplaces', 'thedotmack');
}
/** Top-level plugins directory. */
export function pluginsDirectory(): string {
return join(claudeConfigDirectory(), 'plugins');
}
/** Path to `known_marketplaces.json`. */
export function knownMarketplacesPath(): string {
return join(pluginsDirectory(), 'known_marketplaces.json');
}
/** Path to `installed_plugins.json`. */
export function installedPluginsPath(): string {
return join(pluginsDirectory(), 'installed_plugins.json');
}
/** Path to `~/.claude/settings.json`. */
export function claudeSettingsPath(): string {
return join(claudeConfigDirectory(), 'settings.json');
}
/** Plugin cache directory for a specific version. */
export function pluginCacheDirectory(version: string): string {
return join(pluginsDirectory(), 'cache', 'thedotmack', 'claude-mem', version);
}
/** claude-mem data directory (default `~/.claude-mem`). */
export function claudeMemDataDirectory(): string {
return join(homedir(), '.claude-mem');
}
// ---------------------------------------------------------------------------
// NPM package root (where the NPX package lives on disk)
// ---------------------------------------------------------------------------
/**
* Resolve the root of the installed npm package.
*
* After bundling, the CLI entry point lives at `<pkg>/dist/npx-cli/index.js`.
* Walking up 2 levels from `import.meta.url` reaches the package root
* where `plugin/` and `package.json` can be found.
*/
export function npmPackageRootDirectory(): string {
const currentFilePath = fileURLToPath(import.meta.url);
// <pkg>/dist/npx-cli/index.js -> up 2 levels -> <pkg>
const root = join(dirname(currentFilePath), '..', '..');
if (!existsSync(join(root, 'package.json'))) {
throw new Error(
@@ -83,23 +45,11 @@ export function npmPackageRootDirectory(): string {
return root;
}
/**
* Path to the `plugin/` directory bundled inside the npm package.
*/
export function npmPackagePluginDirectory(): string {
return join(npmPackageRootDirectory(), 'plugin');
}
// ---------------------------------------------------------------------------
// Version helpers
// ---------------------------------------------------------------------------
/**
* Read the current plugin version from the npm package's
* `plugin/.claude-plugin/plugin.json` (preferred) or from `package.json`.
*/
export function readPluginVersion(): string {
// Try plugin.json first (authoritative for plugin version)
const pluginJsonPath = join(npmPackagePluginDirectory(), '.claude-plugin', 'plugin.json');
if (existsSync(pluginJsonPath)) {
try {
@@ -110,7 +60,6 @@ export function readPluginVersion(): string {
}
}
// Fall back to package.json at package root
const packageJsonPath = join(npmPackageRootDirectory(), 'package.json');
if (existsSync(packageJsonPath)) {
try {
@@ -124,30 +73,17 @@ export function readPluginVersion(): string {
return '0.0.0';
}
// ---------------------------------------------------------------------------
// Installation detection
// ---------------------------------------------------------------------------
/** Returns true if the plugin appears to be installed in the marketplace dir. */
export function isPluginInstalled(): boolean {
const marketplaceDir = marketplaceDirectory();
return existsSync(join(marketplaceDir, 'plugin', '.claude-plugin', 'plugin.json'));
}
// ---------------------------------------------------------------------------
// JSON file helpers
// ---------------------------------------------------------------------------
export function ensureDirectoryExists(directoryPath: string): void {
if (!existsSync(directoryPath)) {
mkdirSync(directoryPath, { recursive: true });
}
}
/**
* @deprecated Use `readJsonSafe` from `../../utils/json-utils.js` instead.
* Kept as re-export for backward compatibility.
*/
export { readJsonSafe } from '../../utils/json-utils.js';
export function writeJsonFileAtomic(filepath: string, data: any): void {
-2
@@ -1,2 +0,0 @@
export * from './parser.js';
export * from './prompts.js';
+11 -84
@@ -1,14 +1,3 @@
/**
* XML Parser Module
*
* Single fail-fast entry point for SDK agent XML responses.
*
* Per PATHFINDER-2026-04-22 plan 03 phase 1:
* - One function (`parseAgentXml`) for all agent responses.
* - Discriminated-union return: `{ valid: true, kind, data }` or `{ valid: false, reason }`.
* - No coercion. No silent passthrough. No "lenient mode".
* - `<skip_summary reason="…"/>` is a first-class summary case (skipped: true).
*/
import { logger } from '../utils/logger.js';
import { ModeManager } from '../services/domain/ModeManager.js';
@@ -31,43 +20,25 @@ export interface ParsedSummary {
completed: string | null;
next_steps: string | null;
notes: string | null;
/** True when the response was an explicit `<skip_summary reason="…"/>` bypass. */
skipped?: boolean;
/** Non-null when `skipped: true`. */
skip_reason?: string | null;
}
export type ParseResult =
| { valid: true; kind: 'observation'; data: ParsedObservation[] }
| { valid: true; kind: 'summary'; data: ParsedSummary }
| { valid: false; reason: string };
| { valid: true; observations: ParsedObservation[]; summary: ParsedSummary | null }
| { valid: false };
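Callers narrow the new ParseResult shape on the `valid` discriminant and then read observations and summary directly, instead of branching on a `kind` tag. A sketch with simplified copies of the types (the countStoredItems handler is hypothetical):

```typescript
// Simplified copies of the parser's result types, narrowed on `valid`.
interface SummarySketch { request: string | null; skipped?: boolean; }
interface ObservationSketch { title: string; }

type ParseResultSketch =
  | { valid: true; observations: ObservationSketch[]; summary: SummarySketch | null }
  | { valid: false };

function countStoredItems(result: ParseResultSketch): number {
  if (!result.valid) return 0; // failed parse stores nothing
  return result.observations.length + (result.summary ? 1 : 0);
}
```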
/**
* Parse an SDK agent response. Inspects the first significant XML root element
* and returns a discriminated union. Never coerces. Never returns null/undefined.
*
* Recognised roots:
* <observation> … </observation> → { kind: 'observation', data: ParsedObservation[] }
* <summary> … </summary> → { kind: 'summary', data: ParsedSummary }
* <skip_summary reason="…" /> → { kind: 'summary', data: { skipped: true, … } }
*
* Anything else → { valid: false, reason }. The caller is responsible for
* surfacing the reason (markFailed, log, etc.). No retry coercion.
*/
export function parseAgentXml(raw: string, correlationId?: string | number): ParseResult {
if (typeof raw !== 'string' || !raw.trim()) {
-    return { valid: false, reason: 'empty: response had no content' };
+    return { valid: false };
}
// Skip-summary is recognised even when wrapped in other text, but only as the
// sole structural signal. It outranks <observation> / <summary> matches because
// it is an explicit protocol bypass. `reason` is optional.
const skipMatch = /<skip_summary(?:\s+reason="([^"]*)")?\s*\/>/.exec(raw);
if (skipMatch) {
return {
valid: true,
-      kind: 'summary',
-      data: {
+      observations: [],
+      summary: {
request: null,
investigated: null,
learned: null,
@@ -80,45 +51,27 @@ export function parseAgentXml(raw: string, correlationId?: string | number): Par
};
}
// Find the first significant element by scanning for the first `<…>` opener
// that is one of the recognised roots. This tolerates leading prose / debug
// output from the model while still failing fast on entirely-non-XML payloads.
const firstRoot = /<(observation|summary)\b/i.exec(raw);
if (!firstRoot) {
-    const preview = raw.length > 120 ? `${raw.slice(0, 120)}` : raw;
-    return {
-      valid: false,
-      reason: `unknown root: response contained no <observation>, <summary>, or <skip_summary/> element (preview: ${preview.replace(/\s+/g, ' ')})`,
-    };
+    return { valid: false };
}
const rootName = firstRoot[1].toLowerCase();
if (rootName === 'observation') {
const observations = parseObservationBlocks(raw, correlationId);
if (observations.length === 0) {
-      return {
-        valid: false,
-        reason: '<observation>: no parseable observation block (every block was empty or ghost)',
-      };
+      return { valid: false };
}
-    return { valid: true, kind: 'observation', data: observations };
+    return { valid: true, observations, summary: null };
}
// rootName === 'summary'
const summary = parseSummaryBlock(raw, correlationId);
if (!summary) {
-    return {
-      valid: false,
-      reason: '<summary>: empty or missing every required sub-tag (request/investigated/learned/completed/next_steps)',
-    };
+    return { valid: false };
}
-  return { valid: true, kind: 'summary', data: summary };
+  return { valid: true, observations: [], summary };
}
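For readers skimming the diff: the new `ParseResult` drops the `kind` discriminator in favour of always-present `observations`/`summary` fields. A self-contained sketch of a caller (the union is copied from the diff; `countSaved` and the slimmed-down interfaces are illustrative placeholders, not code from this PR):

```typescript
// Minimal stand-ins for the real parsed types (fields trimmed for brevity).
interface ParsedObservation { title: string | null }
interface ParsedSummary { request: string | null }

// The new union from the diff above.
type ParseResult =
  | { valid: true; observations: ParsedObservation[]; summary: ParsedSummary | null }
  | { valid: false };

// Hypothetical caller: note there is no reason string any more;
// callers rely on the parser's own log lines for diagnostics.
function countSaved(result: ParseResult): number {
  if (!result.valid) return 0;
  return result.observations.length + (result.summary ? 1 : 0);
}
```

Because both `observations` and `summary` are always present on the `valid: true` arm, callers no longer need to branch on `kind` before reading `data`.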
/**
* Parse all <observation>…</observation> blocks. Filters out ghost
* observations (every content field empty). Returns the surviving list.
*/
function parseObservationBlocks(text: string, correlationId?: string | number): ParsedObservation[] {
const observations: ParsedObservation[] = [];
@@ -137,10 +90,6 @@ function parseObservationBlocks(text: string, correlationId?: string | number):
const files_read = extractArrayElements(obsContent, 'files_read', 'file');
const files_modified = extractArrayElements(obsContent, 'files_modified', 'file');
// Type fallback: per existing semantics, missing/invalid type degrades to the
// first type in the active mode. This is parser-internal validation, not
// recovery from a contract violation: every mode's first type is intentionally
// the catch-all bucket.
const mode = ModeManager.getInstance().getActiveMode();
const validTypes = mode.observation_types.map(t => t.id);
const fallbackType = validTypes[0];
@@ -155,7 +104,6 @@ function parseObservationBlocks(text: string, correlationId?: string | number):
logger.error('PARSER', `Observation missing type field, using "${fallbackType}"`, { correlationId });
}
// Filter out type from concepts array (types and concepts are separate dimensions)
const cleanedConcepts = concepts.filter(c => c !== finalType);
if (cleanedConcepts.length !== concepts.length) {
@@ -167,9 +115,6 @@ function parseObservationBlocks(text: string, correlationId?: string | number):
});
}
// Skip ghost observations — records where every content field is null/empty.
// (subtitle and file lists are intentionally excluded from this guard:
// an observation with only a subtitle is still too thin to be useful.)
if (!title && !narrative && facts.length === 0 && cleanedConcepts.length === 0) {
logger.warn('PARSER', 'Skipping empty observation (all content fields null)', {
correlationId,
@@ -193,11 +138,6 @@ function parseObservationBlocks(text: string, correlationId?: string | number):
return observations;
}
/**
* Parse a single <summary>…</summary> block. Returns null when the block has
* no usable sub-tags (every required field empty) — the caller maps this to
* a fail-fast `{ valid: false, reason }` result.
*/
function parseSummaryBlock(text: string, correlationId?: string | number): ParsedSummary | null {
const summaryRegex = /<summary>([\s\S]*?)<\/summary>/;
const summaryMatch = summaryRegex.exec(text);
@@ -210,11 +150,8 @@ function parseSummaryBlock(text: string, correlationId?: string | number): Parse
const learned = extractField(summaryContent, 'learned');
const completed = extractField(summaryContent, 'completed');
const next_steps = extractField(summaryContent, 'next_steps');
-  const notes = extractField(summaryContent, 'notes'); // optional
+  const notes = extractField(summaryContent, 'notes');
// Per maintainer note: a summary with at least one populated sub-tag must be
// saved. Missing sub-tags are tolerated; an entirely empty <summary> block is
// a false-positive (covered the #1360 regression) and is rejected.
if (!request && !investigated && !learned && !completed && !next_steps) {
logger.warn('PARSER', 'Summary block has no sub-tags — rejecting false positive', { correlationId });
return null;
@@ -230,12 +167,6 @@ function parseSummaryBlock(text: string, correlationId?: string | number): Parse
};
}
/**
* Extract a simple field value from XML content
* Returns null for missing or empty/whitespace-only fields
*
* Uses non-greedy match to handle nested tags and code snippets (Issue #798)
*/
function extractField(content: string, fieldName: string): string | null {
const regex = new RegExp(`<${fieldName}>([\\s\\S]*?)</${fieldName}>`);
const match = regex.exec(content);
@@ -245,10 +176,6 @@ function extractField(content: string, fieldName: string): string | null {
return trimmed === '' ? null : trimmed;
}
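The non-greedy `[\s\S]*?` noted above is what keeps code snippets inside a field intact (Issue #798): the match stops at the first closing tag instead of swallowing sibling fields. A runnable sketch, with the two middle lines (`if (!match)` guard and `trim()`) reconstructed from context since the diff elides them:

```typescript
// Reconstruction of extractField from the diff above; the null-guard and
// trim lines are assumptions inferred from the surrounding code.
function extractField(content: string, fieldName: string): string | null {
  // Non-greedy match: stops at the FIRST </fieldName>, so `<` characters
  // and nested snippets inside the value survive.
  const regex = new RegExp(`<${fieldName}>([\\s\\S]*?)</${fieldName}>`);
  const match = regex.exec(content);
  if (!match) return null;
  const trimmed = match[1].trim();
  return trimmed === '' ? null : trimmed;
}

const xml = '<learned>use `a < b` guard</learned><notes>n</notes>';
```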
/**
* Extract array of elements from XML content
* Handles nested tags and code snippets (Issue #798)
*/
function extractArrayElements(content: string, arrayName: string, elementName: string): string[] {
const elements: string[] = [];
@@ -1,18 +1,7 @@
/**
* SDK Prompts Module
* Generates prompts for the Claude Agent SDK memory worker
*/
import { logger } from '../utils/logger.js';
import type { ModeConfig } from '../services/domain/types.js';
/**
* Marker string embedded in summary prompts — historically used by
* ResponseProcessor to detect summary turns for the (now-deleted) coercion
* fallback. Kept here because `buildSummaryPrompt` still embeds it as the
* mode-switch banner; deleting the constant would require rewriting the
* prompt builder, which is out of scope for plan 03.
*/
export const SUMMARY_MODE_MARKER = 'MODE SWITCH: PROGRESS SUMMARY';
export interface Observation {
@@ -32,9 +21,6 @@ export interface SDKSession {
last_assistant_message?: string;
}
/**
* Build initial prompt to initialize the SDK agent
*/
export function buildInitPrompt(project: string, sessionId: string, userPrompt: string, mode: ModeConfig): string {
return `${mode.prompts.system_identity}
@@ -94,11 +80,7 @@ ${mode.prompts.footer}
${mode.prompts.header_memory_start}`;
}
/**
* Build prompt to send tool observation to SDK agent
*/
export function buildObservationPrompt(obs: Observation): string {
// Safely parse tool_input and tool_output - they're already JSON strings
let toolInput: any;
let toolOutput: any;
@@ -132,9 +114,6 @@ Concrete debugging findings from logs, queue state, database rows, session routi
Never reply with prose such as "Skipping", "No substantive tool executions", or any explanation outside XML. Non-XML text is discarded.`;
}
/**
* Build prompt to generate progress summary
*/
export function buildSummaryPrompt(session: SDKSession, mode: ModeConfig): string {
const lastAssistantMessage = session.last_assistant_message || (() => {
logger.error('SDK', 'Missing last_assistant_message in session for summary prompt', {
@@ -169,27 +148,6 @@ REMINDER: Your response MUST use <summary> as the root tag, NOT <observation>.
${mode.prompts.summary_footer}`;
}
/**
* Build prompt for continuation of existing session
*
* CRITICAL: Why contentSessionId Parameter is Required
* ====================================================
* This function receives contentSessionId from SDKAgent.ts, which comes from:
* - SessionManager.initializeSession (fetched from database)
* - SessionStore.createSDKSession (stored by new-hook.ts)
* - new-hook.ts receives it from Claude Code's hook context
*
* The contentSessionId is the SAME session_id used by:
* - NEW hook (to create/fetch session)
* - SAVE hook (to store observations)
* - This continuation prompt (to maintain session context)
*
* This is how everything stays connected - ONE session_id threading through
* all hooks and prompts in the same conversation.
*
* Called when: promptNumber > 1 (see SDKAgent.ts line 150)
* First prompt: Uses buildInitPrompt instead (promptNumber === 1)
*/
export function buildContinuationPrompt(userPrompt: string, promptNumber: number, contentSessionId: string, mode: ModeConfig): string {
return `${mode.prompts.continuation_greeting}
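The invariant described in the (now-removed) comment above — one `session_id` threading through the NEW hook, the SAVE hook, and the continuation prompt — can be sketched in miniature. Every name here is a placeholder; only the shared-identifier invariant reflects the real code:

```typescript
// Illustrative sketch (not code from this PR): the same session id is the
// key for session creation, observation storage, and prompt continuation.
type SessionId = string;

const observationsBySession = new Map<SessionId, string[]>();

function newHook(id: SessionId): SessionId {
  observationsBySession.set(id, []);        // NEW hook creates the session
  return id;
}

function saveHook(id: SessionId, obs: string): void {
  observationsBySession.get(id)?.push(obs); // SAVE hook stores under the SAME id
}

function buildContinuation(id: SessionId, promptNumber: number): string {
  return `session=${id} prompt#${promptNumber}`; // continuation reuses the SAME id
}

const id = newHook('abc-123');
saveHook(id, 'read src/sdk/parser.ts');
const prompt = buildContinuation(id, 2);
```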
@@ -1,21 +1,9 @@
/**
* Claude-mem MCP Search Server - Thin HTTP Wrapper
*
* Refactored from 2,718 lines to ~600-800 lines
* Delegates all business logic to Worker HTTP API at localhost:37777
* Maintains MCP protocol handling and tool schemas
*/
// Version injected at build time by esbuild define
declare const __DEFAULT_PACKAGE_VERSION__: string;
const packageVersion = typeof __DEFAULT_PACKAGE_VERSION__ !== 'undefined' ? __DEFAULT_PACKAGE_VERSION__ : '0.0.0-dev';
// Import logger first
import { logger } from '../utils/logger.js';
// CRITICAL: Redirect console to stderr BEFORE other imports
// MCP uses stdio transport where stdout is reserved for JSON-RPC protocol messages.
// Any logs to stdout break the protocol (Claude Desktop parses "[2025..." as JSON array).
console['log'] = (...args: any[]) => {
logger.error('CONSOLE', 'Intercepted console output (MCP protocol protection)', undefined, { args });
};
@@ -36,52 +24,19 @@ import { dirname, resolve } from 'node:path';
import { homedir } from 'node:os';
import { fileURLToPath } from 'node:url';
// Resolve the path to worker-service.cjs, which lives alongside mcp-server.cjs
// in the plugin's scripts directory. We need an explicit path because the MCP
// server runs under Node while the worker must run under Bun, so we can't rely
// on `__filename` pointing to a self-spawnable script.
//
// In the deployed CJS bundle, `__dirname` is always defined — the import.meta
// fallback only exists to keep the source future-proof against an eventual
// ESM port. Both fallback branches should be functionally unreachable today.
let mcpServerDirResolutionFailed = false;
const mcpServerDir = (() => {
if (typeof __dirname !== 'undefined') return __dirname;
try {
return dirname(fileURLToPath(import.meta.url));
} catch {
// Last-ditch fallback: cwd is almost certainly wrong, but throwing here
// would crash the MCP server before it can serve a single request. Mark
// the failure so the existence check below can produce a single, loud,
// root-cause-attributing log line instead of a confusing "missing worker
// bundle" warning that hides the dirname resolution failure.
mcpServerDirResolutionFailed = true;
return process.cwd();
}
})();
const WORKER_SCRIPT_PATH = resolve(mcpServerDir, 'worker-service.cjs');
/**
* Surface a clear, actionable error if the worker bundle isn't where we
* expect. Without this check, a missing or partial install only fails later
* inside spawnDaemon as a generic "failed to spawn" message.
*
* If dirname resolution itself failed (extremely unlikely in CJS), attribute
* the missing-bundle warning to the root cause so the user doesn't waste time
* looking for an install bug that doesn't exist.
*
* Called lazily from `ensureWorkerConnection` (not at module load) so that
* tests or tools that import this module without booting the MCP server
* don't see noisy ERROR-level log lines for a worker they never intended
* to start. The check is cheap and idempotent, so calling it on every
* auto-start attempt is fine.
*/
function errorIfWorkerScriptMissing(): void {
// Only log here when the dirname resolution itself failed — that's the
// mcp-server-specific root cause attribution that the spawner cannot
// provide. The plain "missing bundle" case is already covered by the
// existsSync guard inside ensureWorkerStarted, and logging from both
// sites would produce a confusing double-log on the same code path.
if (!mcpServerDirResolutionFailed) return;
if (existsSync(WORKER_SCRIPT_PATH)) return;
@@ -92,17 +47,11 @@ function errorIfWorkerScriptMissing(): void {
);
}
/**
* Map tool names to Worker HTTP endpoints
*/
const TOOL_ENDPOINT_MAP: Record<string, string> = {
'search': '/api/search',
'timeline': '/api/timeline'
};
/**
* Call Worker HTTP API endpoint (uses socket or TCP automatically)
*/
async function callWorkerAPI(
endpoint: string,
params: Record<string, any>
@@ -111,7 +60,6 @@ async function callWorkerAPI(
const searchParams = new URLSearchParams();
// Convert params to query string
for (const [key, value] of Object.entries(params)) {
if (value !== undefined && value !== null) {
searchParams.append(key, String(value));
@@ -132,7 +80,6 @@ async function callWorkerAPI(
logger.debug('SYSTEM', '← Worker API success', undefined, { endpoint });
// Worker returns { content: [...] } format directly
return data;
} catch (error: unknown) {
logger.error('SYSTEM', '← Worker API error', { endpoint }, error instanceof Error ? error : new Error(String(error)));
@@ -173,9 +120,6 @@ async function executeWorkerPostRequest(
};
}
/**
* Call Worker HTTP API with POST body
*/
async function callWorkerAPIPost(
endpoint: string,
body: Record<string, any>
@@ -196,24 +140,16 @@ async function callWorkerAPIPost(
}
}
/**
* Verify Worker is accessible
*/
async function verifyWorkerConnection(): Promise<boolean> {
try {
const response = await workerHttpRequest('/api/health');
return response.ok;
} catch (error: unknown) {
// Expected during worker startup or if worker is down
logger.debug('SYSTEM', 'Worker health check failed', {}, error instanceof Error ? error : new Error(String(error)));
return false;
}
}
/**
* Ensure Worker is available for Codex and other MCP-only clients.
* Claude hooks already start the worker; this path makes Codex turnkey.
*/
async function ensureWorkerConnection(): Promise<boolean> {
if (await verifyWorkerConnection()) {
return true;
@@ -221,22 +157,18 @@ async function ensureWorkerConnection(): Promise<boolean> {
logger.warn('SYSTEM', 'Worker not available, attempting auto-start for MCP client');
// Validate the worker bundle path lazily here (rather than at module load)
// so that tests/tools that import this module without booting the MCP
// server don't see noisy ERROR-level log lines for a worker they never
// intended to start.
errorIfWorkerScriptMissing();
try {
const port = getWorkerPort();
-    const started = await ensureWorkerStarted(port, WORKER_SCRIPT_PATH);
-    if (!started) {
+    const result = await ensureWorkerStarted(port, WORKER_SCRIPT_PATH);
+    if (result === 'dead') {
logger.error(
'SYSTEM',
-        'Worker auto-start returned false — MCP tools that require the worker (search, timeline, get_observations) will fail until the worker is running. Check earlier log lines for the specific failure reason (Bun not found, missing worker bundle, port conflict, etc.).'
+        'Worker auto-start failed — MCP tools that require the worker (search, timeline, get_observations) will fail until the worker is running. Check earlier log lines for the specific failure reason (Bun not found, missing worker bundle, port conflict, etc.).'
);
}
-    return started;
+    return result !== 'dead';
} catch (error: unknown) {
logger.error(
'SYSTEM',
@@ -248,10 +180,6 @@ async function ensureWorkerConnection(): Promise<boolean> {
}
}
/**
* Tool definitions with HTTP-based handlers
* Minimal descriptions - use help() tool with operation parameter for detailed docs
*/
const tools = [
{
name: '__IMPORTANT',
@@ -411,7 +339,6 @@ NEVER fetch full details without filtering first. 10x token savings.`,
content: [{ type: 'text' as const, text: unfolded }]
};
}
// Symbol not found — show available symbols
const parsed = parseFile(content, filePath);
if (parsed.symbols.length > 0) {
const available = parsed.symbols.map(s => ` - ${s.name} (${s.kind})`).join('\n');
@@ -567,7 +494,6 @@ NEVER fetch full details without filtering first. 10x token savings.`,
}
];
// Create the MCP server
const server = new Server(
{
name: 'claude-mem',
@@ -580,7 +506,6 @@ const server = new Server(
}
);
// Register tools/list handler
server.setRequestHandler(ListToolsRequestSchema, async () => {
return {
tools: tools.map(tool => ({
@@ -591,7 +516,6 @@ server.setRequestHandler(ListToolsRequestSchema, async () => {
};
});
// Register tools/call handler
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const tool = tools.find(t => t.name === request.params.name);
@@ -613,8 +537,6 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
}
});
// Parent heartbeat: self-exit when parent dies (ppid=1 on Unix means orphaned)
// Prevents orphaned MCP server processes when Claude Code exits unexpectedly
const HEARTBEAT_INTERVAL_MS = 30_000;
let heartbeatTimer: ReturnType<typeof setInterval> | null = null;
let isCleaningUp = false;
@@ -643,7 +565,6 @@ function detachStdioLifecycle() {
}
function startParentHeartbeat() {
// ppid-based orphan detection only works on Unix
if (process.platform === 'win32') return;
const initialPpid = process.ppid;
@@ -657,12 +578,9 @@ function startParentHeartbeat() {
}
}, HEARTBEAT_INTERVAL_MS);
// Don't let the heartbeat timer keep the process alive
if (heartbeatTimer.unref) heartbeatTimer.unref();
}
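The heartbeat above relies on a Unix convention: a process whose parent dies is re-parented to PID 1. A minimal sketch of that check, separated from the timer and cleanup plumbing (`isOrphaned` is a hypothetical helper, not a function in this codebase):

```typescript
// Sketch of the orphan-detection predicate behind the heartbeat above.
// On Unix, getppid() of an orphaned process returns 1 (init/launchd).
function isOrphaned(initialPpid: number, currentPpid: number): boolean {
  return currentPpid !== initialPpid && currentPpid === 1;
}
```

Comparing against the ppid captured at startup (rather than just checking for 1) avoids false positives for processes that were legitimately started under PID 1.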
// Cleanup function — synchronous to ensure consistent behavior whether called
// from signal handlers, heartbeat interval, or awaited in async context
function cleanup(reason: string = 'shutdown') {
if (isCleaningUp) return;
isCleaningUp = true;
@@ -673,28 +591,11 @@ function cleanup(reason: string = 'shutdown') {
process.exit(0);
}
// Register cleanup handlers for graceful shutdown
process.on('SIGTERM', cleanup);
process.on('SIGINT', cleanup);
/**
* Issue #2174: When the IDE extension (e.g. Cursor's Claude Code) loses its
* marketplace directory at ~/.claude/plugins/marketplaces/<source>/, the
* extension's hook loader silently skips claude-mem hooks while the MCP
* server (this process) keeps working. The session becomes invisible to
* memory with no error surfaced.
*
* The MCP server is the one piece that DOES boot in this state, so we use
* it as the canary: detect the missing marketplace dir and emit a single
* loud, actionable warning. We don't run smart-install.js from here — the
* MCP server runs under the IDE's permission model, not the user's shell,
* so attempting an install at MCP startup creates more failure modes than
* it fixes. Instead we tell the user exactly what to do.
*/
function checkMarketplaceMarker(): void {
try {
// Use os.homedir() so this works on Windows (HOME is unset there;
// USERPROFILE is the Windows convention and homedir() picks it up).
const home = homedir();
const marketplaceCandidates = [
resolve(home, '.claude', 'plugins', 'marketplaces', 'thedotmack'),
@@ -716,25 +617,19 @@ function checkMarketplaceMarker(): void {
);
}
} catch {
// Self-heal probe is best-effort; never fail MCP startup for it.
}
}
// Start the server
async function main() {
// Start the MCP server
const transport = new StdioServerTransport();
attachStdioLifecycle();
await server.connect(transport);
logger.info('SYSTEM', 'Claude-mem search server started');
// Surface marketplace-dir corruption that silently disables hook loading
checkMarketplaceMarker();
// Start parent heartbeat to detect orphaned MCP servers
startParentHeartbeat();
// Check Worker availability in background
setTimeout(async () => {
const workerAvailable = await ensureWorkerConnection();
if (!workerAvailable) {
@@ -749,7 +644,5 @@ async function main() {
main().catch((error) => {
logger.error('SYSTEM', 'Fatal error', undefined, error);
// Exit gracefully: Windows Terminal won't keep tab open on exit 0
// The wrapper/plugin will handle restart logic if needed
process.exit(0);
});
@@ -1,8 +0,0 @@
/**
* Context - Named re-export facade
*
* Provides a clean import path for context generation functionality.
* Import from './Context.js' or './context/index.js'.
*/
export * from './context/index.js';
@@ -1,19 +1,4 @@
/**
* Context Generator - DEPRECATED
*
* This file is maintained for backward compatibility.
* New code should import from './Context.js' or './context/index.js'.
*
* The context generation logic has been restructured into:
* - src/services/context/ContextBuilder.ts - Main orchestrator
* - src/services/context/ContextConfigLoader.ts - Configuration loading
* - src/services/context/TokenCalculator.ts - Token economics
* - src/services/context/ObservationCompiler.ts - Data retrieval
* - src/services/context/formatters/ - Output formatting
* - src/services/context/sections/ - Section rendering
*/
import { logger } from '../utils/logger.js';
// Re-export everything from the new context module
export { generateContext } from './context/index.js';
export type { ContextInput, ContextConfig } from './context/types.js';
@@ -1,9 +1,3 @@
/**
* ContextBuilder - Main orchestrator for context generation
*
* Coordinates all context generation components to build the final output.
* This is the primary entry point for context generation.
*/
import path from 'path';
import { homedir } from 'os';
@@ -32,7 +26,6 @@ import { renderPreviouslySection, renderFooter } from './sections/FooterRenderer
import { renderAgentEmptyState } from './formatters/AgentFormatter.js';
import { renderHumanEmptyState } from './formatters/HumanFormatter.js';
// Version marker path for native module error handling
const VERSION_MARKER_PATH = path.join(
homedir(),
'.claude',
@@ -43,9 +36,6 @@ const VERSION_MARKER_PATH = path.join(
'.install-version'
);
/**
* Initialize database connection with error handling
*/
function initializeDatabase(): SessionStore | null {
try {
return new SessionStore();
@@ -67,16 +57,10 @@ function initializeDatabase(): SessionStore | null {
}
}
/**
* Render empty state when no data exists
*/
function renderEmptyState(project: string, forHuman: boolean): string {
return forHuman ? renderHumanEmptyState(project) : renderAgentEmptyState(project);
}
/**
* Build context output from loaded data
*/
function buildContextOutput(
project: string,
observations: Observation[],
@@ -88,22 +72,17 @@ function buildContextOutput(
): string {
const output: string[] = [];
// Calculate token economics
const economics = calculateTokenEconomics(observations);
// Render header section
output.push(...renderHeader(project, economics, config, forHuman));
// Prepare timeline data
const displaySummaries = summaries.slice(0, config.sessionCount);
const summariesForTimeline = prepareSummariesForTimeline(displaySummaries, summaries);
const timeline = buildTimeline(observations, summariesForTimeline);
const fullObservationIds = getFullObservationIds(observations, config.fullObservationCount);
// Render timeline
output.push(...renderTimeline(timeline, fullObservationIds, config, cwd, forHuman));
// Render most recent summary if applicable
const mostRecentSummary = summaries[0];
const mostRecentObservation = observations[0];
@@ -111,22 +90,14 @@ function buildContextOutput(
output.push(...renderSummaryFields(mostRecentSummary, forHuman));
}
// Render previously section (prior assistant message)
const priorMessages = getPriorSessionMessages(observations, config, sessionId, cwd);
output.push(...renderPreviouslySection(priorMessages, forHuman));
// Render footer
output.push(...renderFooter(economics, config, forHuman));
return output.join('\n').trimEnd();
}
/**
* Generate context for a project
*
* Main entry point for context generation. Orchestrates loading config,
* querying data, and rendering the final context string.
*/
export async function generateContext(
input?: ContextInput,
forHuman: boolean = false
@@ -135,27 +106,20 @@ export async function generateContext(
const cwd = input?.cwd ?? process.cwd();
const context = getProjectContext(cwd);
// Single source of truth: explicit projects override cwd-derived context.
// `project` (used for header + single-project query) is always the last entry
// of `projects` so the empty-state header and the query target stay in sync
// when a caller passes `projects` without a matching cwd (e.g. worker route).
const projects = input?.projects?.length ? input.projects : context.allProjects;
const project = projects[projects.length - 1] ?? context.primary;
// Full mode: fetch all observations but keep normal rendering (level 1 summaries)
if (input?.full) {
config.totalObservationCount = 999999;
config.sessionCount = 999999;
}
// Initialize database
const db = initializeDatabase();
if (!db) {
return '';
}
try {
// Query data for all projects (supports worktree: parent + worktree combined)
const observations = projects.length > 1
? queryObservationsMulti(db, projects, config)
: queryObservations(db, project, config);
@@ -163,12 +127,10 @@ export async function generateContext(
? querySummariesMulti(db, projects, config)
: querySummaries(db, project, config);
// Handle empty state
if (observations.length === 0 && summaries.length === 0) {
return renderEmptyState(project, forHuman);
}
// Build and return context
const output = buildContextOutput(
project,
observations,
@@ -1,8 +1,3 @@
/**
* ContextConfigLoader - Loads and validates context configuration
*
* Handles loading settings from file with mode-based filtering for observation types.
*/
import path from 'path';
import { homedir } from 'os';
@@ -10,15 +5,10 @@ import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js
import { ModeManager } from '../domain/ModeManager.js';
import type { ContextConfig } from './types.js';
/**
* Load all context configuration settings
* Priority: ~/.claude-mem/settings.json > env var > defaults
*/
export function loadContextConfig(): ContextConfig {
const settingsPath = path.join(homedir(), '.claude-mem', 'settings.json');
const settings = SettingsDefaultsManager.loadFromFile(settingsPath);
// Always read types/concepts from the active mode definition
const mode = ModeManager.getInstance().getActiveMode();
const observationTypes = new Set(mode.observation_types.map(t => t.id));
const observationConcepts = new Set(mode.observation_concepts.map(c => c.id));
@@ -1,8 +1,3 @@
/**
* ObservationCompiler - Query building and data retrieval for context
*
* Handles database queries for observations and summaries, plus transcript extraction.
*/
import path from 'path';
import { existsSync, readFileSync } from 'fs';
@@ -20,9 +15,6 @@ import type {
} from './types.js';
import { SUMMARY_LOOKAHEAD } from './types.js';
/**
* Query observations from database with type and concept filtering
*/
export function queryObservations(
db: SessionStore,
project: string,
@@ -68,9 +60,6 @@ export function queryObservations(
) as Observation[];
}
/**
* Query recent session summaries from database
*/
export function querySummaries(
db: SessionStore,
project: string,
@@ -96,12 +85,6 @@ export function querySummaries(
`).all(project, project, config.sessionCount + SUMMARY_LOOKAHEAD) as SessionSummary[];
}
/**
* Query observations from multiple projects (for worktree support)
*
* Returns observations from all specified projects, interleaved chronologically.
* Used when running in a worktree to show both parent repo and worktree observations.
*/
export function queryObservationsMulti(
db: SessionStore,
projects: string[],
@@ -112,7 +95,6 @@ export function queryObservationsMulti(
const conceptArray = Array.from(config.observationConcepts);
const conceptPlaceholders = conceptArray.map(() => '?').join(',');
// Build IN clause for projects
const projectPlaceholders = projects.map(() => '?').join(',');
return db.db.prepare(`
@@ -152,18 +134,22 @@ export function queryObservationsMulti(
) as Observation[];
}
-/**
- * Query session summaries from multiple projects (for worktree support)
- *
- * Returns summaries from all specified projects, interleaved chronologically.
- * Used when running in a worktree to show both parent repo and worktree summaries.
- */
+export function countObservationsByProjects(db: SessionStore, projects: string[]): number {
+  if (projects.length === 0) return 0;
+  const projectPlaceholders = projects.map(() => '?').join(',');
+  const row = db.db.prepare(`
+    SELECT COUNT(*) as count FROM observations
+    WHERE project IN (${projectPlaceholders})
+       OR merged_into_project IN (${projectPlaceholders})
+  `).get(...projects, ...projects) as { count: number } | undefined;
+  return row?.count ?? 0;
+}
export function querySummariesMulti(
db: SessionStore,
projects: string[],
config: ContextConfig
): SessionSummary[] {
// Build IN clause for projects
const projectPlaceholders = projects.map(() => '?').join(',');
return db.db.prepare(`
@@ -188,16 +174,10 @@ export function querySummariesMulti(
`).all(...projects, ...projects, config.sessionCount + SUMMARY_LOOKAHEAD) as SessionSummary[];
}
/**
* Convert cwd path to dashed format for transcript lookup
*/
function cwdToDashed(cwd: string): string {
return cwd.replace(/\//g, '-');
}
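`cwdToDashed` above maps an absolute working directory to the dashed folder name used for transcript lookup under the Claude config dir's `projects/` folder. Copied verbatim from the diff, with an example:

```typescript
// Copied from the diff above: every slash becomes a dash, including the
// leading one, so absolute paths gain a leading dash.
function cwdToDashed(cwd: string): string {
  return cwd.replace(/\//g, '-');
}

const dashed = cwdToDashed('/Users/me/project');
// '/Users/me/project' becomes '-Users-me-project'
```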
/**
* Find the last assistant message text from parsed transcript lines.
*/
function parseAssistantTextFromLine(line: string): string | null {
if (!line.includes('"type":"assistant"')) return null;
@@ -230,9 +210,6 @@ function findLastAssistantMessage(lines: string[]): string {
return '';
}
/**
* Extract prior messages from transcript file
*/
export function extractPriorMessages(transcriptPath: string): PriorMessages {
try {
if (!existsSync(transcriptPath)) return { userMessage: '', assistantMessage: '' };
@@ -252,9 +229,6 @@ export function extractPriorMessages(transcriptPath: string): PriorMessages {
}
}
/**
* Get prior session messages if enabled
*/
export function getPriorSessionMessages(
observations: Observation[],
config: ContextConfig,
@@ -272,14 +246,10 @@ export function getPriorSessionMessages(
const priorSessionId = priorSessionObs.memory_session_id;
const dashedCwd = cwdToDashed(cwd);
// Use CLAUDE_CONFIG_DIR to support custom Claude config directories
const transcriptPath = path.join(CLAUDE_CONFIG_DIR, 'projects', dashedCwd, `${priorSessionId}.jsonl`);
return extractPriorMessages(transcriptPath);
}
/**
* Prepare summaries for timeline display
*/
export function prepareSummariesForTimeline(
displaySummaries: SessionSummary[],
allSummaries: SessionSummary[]
@@ -297,9 +267,6 @@ export function prepareSummariesForTimeline(
});
}
/**
* Build unified timeline from observations and summaries
*/
export function buildTimeline(
observations: Observation[],
summaries: SummaryTimelineItem[]
@@ -309,7 +276,6 @@ export function buildTimeline(
...summaries.map(summary => ({ type: 'summary' as const, data: summary }))
];
// Sort chronologically
timeline.sort((a, b) => {
const aEpoch = a.type === 'observation' ? a.data.created_at_epoch : a.data.displayEpoch;
const bEpoch = b.type === 'observation' ? b.data.created_at_epoch : b.data.displayEpoch;
@@ -319,9 +285,6 @@ export function buildTimeline(
return timeline;
}
/**
* Get set of observation IDs that should show full details
*/
export function getFullObservationIds(observations: Observation[], count: number): Set<number> {
return new Set(
observations
@@ -1,16 +1,8 @@
/**
* TokenCalculator - Token budget calculations for context economics
*
* Handles estimation of token counts for observations and context economics.
*/
import type { Observation, TokenEconomics, ContextConfig } from './types.js';
import { CHARS_PER_TOKEN_ESTIMATE } from './types.js';
import { ModeManager } from '../domain/ModeManager.js';
/**
* Calculate token count for a single observation
*/
export function calculateObservationTokens(obs: Observation): number {
const obsSize = (obs.title?.length || 0) +
(obs.subtitle?.length || 0) +
@@ -19,9 +11,6 @@ export function calculateObservationTokens(obs: Observation): number {
return Math.ceil(obsSize / CHARS_PER_TOKEN_ESTIMATE);
}
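A minimal sketch of the estimation above (fields simplified to `title`/`subtitle`; the 4-chars-per-token ratio is taken from `CHARS_PER_TOKEN_ESTIMATE`, and `estimateTokens` is a hypothetical name):

```typescript
// Estimate tokens as total characters across text fields / 4, rounded up.
const CHARS_PER_TOKEN = 4; // assumed from CHARS_PER_TOKEN_ESTIMATE

function estimateTokens(obs: { title?: string; subtitle?: string }): number {
  const size = (obs.title?.length ?? 0) + (obs.subtitle?.length ?? 0);
  return Math.ceil(size / CHARS_PER_TOKEN);
}

// "Fix parser" (10 chars) + "in CLI" (6 chars) = 16 chars -> 4 tokens
console.log(estimateTokens({ title: "Fix parser", subtitle: "in CLI" }));
```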
/**
* Calculate context economics for a set of observations
*/
export function calculateTokenEconomics(observations: Observation[]): TokenEconomics {
const totalObservations = observations.length;
@@ -47,16 +36,10 @@ export function calculateTokenEconomics(observations: Observation[]): TokenEcono
};
}
/**
* Get work emoji for an observation type
*/
export function getWorkEmoji(obsType: string): string {
return ModeManager.getInstance().getWorkEmoji(obsType);
}
/**
* Format token display for an observation
*/
export function formatObservationTokenDisplay(
obs: Observation,
config: ContextConfig
@@ -69,9 +52,6 @@ export function formatObservationTokenDisplay(
return { readTokens, discoveryTokens, discoveryDisplay, workEmoji };
}
/**
* Check if context economics should be shown
*/
export function shouldShowContextEconomics(config: ContextConfig): boolean {
return config.showReadTokens || config.showWorkTokens ||
config.showSavingsAmount || config.showSavingsPercent;
@@ -1,9 +1,3 @@
/**
* AgentFormatter - Formats context output as compact markdown for LLM injection
*
* Optimized for token efficiency: flat lines instead of tables, no repeated headers.
* The terminal formatter (HumanFormatter.ts) handles human-readable display separately.

*/
import type {
ContextConfig,
@@ -15,12 +9,9 @@ import type {
import { ModeManager } from '../../domain/ModeManager.js';
import { formatObservationTokenDisplay } from '../TokenCalculator.js';
/**
* Format current date/time for header display
*/
function formatHeaderDateTime(): string {
const now = new Date();
-const date = now.toLocaleDateString('en-CA'); // YYYY-MM-DD format
+const date = now.toLocaleDateString('en-CA');
const time = now.toLocaleTimeString('en-US', {
hour: 'numeric',
minute: '2-digit',
@@ -30,9 +21,6 @@ function formatHeaderDateTime(): string {
return `${date} ${time} ${tz}`;
}
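Worth noting why `'en-CA'` appears here: that locale's date format is already `YYYY-MM-DD`, so no manual padding is needed. A standalone sketch of the same idea (`headerStamp` is a hypothetical name, not the function in the diff):

```typescript
// 'en-CA' renders dates as YYYY-MM-DD, giving an ISO-like stamp without manual padding.
function headerStamp(now: Date): string {
  const date = now.toLocaleDateString("en-CA");
  const time = now.toLocaleTimeString("en-US", { hour: "numeric", minute: "2-digit" });
  return `${date} ${time}`;
}

console.log(headerStamp(new Date(2024, 0, 15, 9, 5))); // e.g. "2024-01-15 9:05 AM"
```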
/**
* Render agent header
*/
export function renderAgentHeader(project: string): string[] {
return [
`# [${project}] recent context, ${formatHeaderDateTime()}`,
@@ -40,9 +28,6 @@ export function renderAgentHeader(project: string): string[] {
];
}
/**
* Render agent legend
*/
export function renderAgentLegend(): string[] {
const mode = ModeManager.getInstance().getActiveMode();
const typeLegendItems = mode.observation_types.map(t => `${t.emoji}${t.id}`).join(' ');
@@ -55,23 +40,14 @@ export function renderAgentLegend(): string[] {
];
}
/**
* Render agent column key - no longer needed in compact format
*/
export function renderAgentColumnKey(): string[] {
return [];
}
/**
* Render agent context index instructions - folded into legend
*/
export function renderAgentContextIndex(): string[] {
return [];
}
/**
* Render agent context economics
*/
export function renderAgentContextEconomics(
economics: TokenEconomics,
config: ContextConfig
@@ -97,33 +73,20 @@ export function renderAgentContextEconomics(
return output;
}
/**
* Render agent day header
*/
export function renderAgentDayHeader(day: string): string[] {
return [
`### ${day}`,
];
}
/**
* Render agent file header - no longer renders table headers in compact format
*/
export function renderAgentFileHeader(_file: string): string[] {
// File grouping eliminated in compact format - file context is in observation titles
return [];
}
/**
* Format compact time: "9:23 AM" → "9:23a", "12:05 PM" → "12:05p"
*/
function compactTime(time: string): string {
return time.toLowerCase().replace(' am', 'a').replace(' pm', 'p');
}
/**
* Render compact flat line for observation (replaces table row)
*/
export function renderAgentTableRow(
obs: Observation,
timeDisplay: string,
@@ -136,9 +99,6 @@ export function renderAgentTableRow(
return `${obs.id} ${time} ${icon} ${title}`;
}
/**
* Render agent full observation
*/
export function renderAgentFullObservation(
obs: Observation,
timeDisplay: string,
@@ -171,9 +131,6 @@ export function renderAgentFullObservation(
return output;
}
/**
* Render agent summary item in timeline
*/
export function renderAgentSummaryItem(
summary: { id: number; request: string | null },
formattedTime: string
@@ -183,17 +140,11 @@ export function renderAgentSummaryItem(
];
}
/**
* Render agent summary field
*/
export function renderAgentSummaryField(label: string, value: string | null): string[] {
if (!value) return [];
return [`**${label}**: ${value}`, ''];
}
/**
* Render agent previously section
*/
export function renderAgentPreviouslySection(priorMessages: PriorMessages): string[] {
if (!priorMessages.assistantMessage) return [];
@@ -208,9 +159,6 @@ export function renderAgentPreviouslySection(priorMessages: PriorMessages): stri
];
}
/**
* Render agent footer
*/
export function renderAgentFooter(totalDiscoveryTokens: number, totalReadTokens: number): string[] {
const workTokensK = Math.round(totalDiscoveryTokens / 1000);
return [
@@ -219,9 +167,6 @@ export function renderAgentFooter(totalDiscoveryTokens: number, totalReadTokens:
];
}
/**
* Render agent empty state
*/
export function renderAgentEmptyState(project: string): string {
return `# [${project}] recent context, ${formatHeaderDateTime()}\n\nNo previous sessions found.`;
}
@@ -1,8 +1,3 @@
/**
* HumanFormatter - Formats context output with ANSI colors for terminal
*
* Handles all colored formatting for context injection (terminal display).
*/
import type {
ContextConfig,
@@ -14,12 +9,9 @@ import { colors } from '../types.js';
import { ModeManager } from '../../domain/ModeManager.js';
import { formatObservationTokenDisplay } from '../TokenCalculator.js';
/**
* Format current date/time for header display
*/
function formatHeaderDateTime(): string {
const now = new Date();
-const date = now.toLocaleDateString('en-CA'); // YYYY-MM-DD format
+const date = now.toLocaleDateString('en-CA');
const time = now.toLocaleTimeString('en-US', {
hour: 'numeric',
minute: '2-digit',
@@ -29,9 +21,6 @@ function formatHeaderDateTime(): string {
return `${date} ${time} ${tz}`;
}
/**
* Render human-readable header
*/
export function renderHumanHeader(project: string): string[] {
return [
'',
@@ -41,9 +30,6 @@ export function renderHumanHeader(project: string): string[] {
];
}
/**
* Render human-readable legend
*/
export function renderHumanLegend(): string[] {
const mode = ModeManager.getInstance().getActiveMode();
const typeLegendItems = mode.observation_types.map(t => `${t.emoji} ${t.id}`).join(' | ');
@@ -54,9 +40,6 @@ export function renderHumanLegend(): string[] {
];
}
/**
* Render human-readable column key
*/
export function renderHumanColumnKey(): string[] {
return [
`${colors.bright}Column Key${colors.reset}`,
@@ -66,9 +49,6 @@ export function renderHumanColumnKey(): string[] {
];
}
/**
* Render human-readable context index instructions
*/
export function renderHumanContextIndex(): string[] {
return [
`${colors.dim}Context Index: This semantic index (titles, types, files, tokens) is usually sufficient to understand past work.${colors.reset}`,
@@ -81,9 +61,6 @@ export function renderHumanContextIndex(): string[] {
];
}
/**
* Render human-readable context economics
*/
export function renderHumanContextEconomics(
economics: TokenEconomics,
config: ContextConfig
@@ -110,9 +87,6 @@ export function renderHumanContextEconomics(
return output;
}
/**
* Render human-readable day header
*/
export function renderHumanDayHeader(day: string): string[] {
return [
`${colors.bright}${colors.cyan}${day}${colors.reset}`,
@@ -120,18 +94,12 @@ export function renderHumanDayHeader(day: string): string[] {
];
}
/**
* Render human-readable file header
*/
export function renderHumanFileHeader(file: string): string[] {
return [
`${colors.dim}${file}${colors.reset}`
];
}
/**
* Render human-readable table row for observation
*/
export function renderHumanTableRow(
obs: Observation,
time: string,
@@ -149,9 +117,6 @@ export function renderHumanTableRow(
return ` ${colors.dim}#${obs.id}${colors.reset} ${timePart} ${icon} ${title} ${readPart} ${discoveryPart}`;
}
/**
* Render human-readable full observation
*/
export function renderHumanFullObservation(
obs: Observation,
time: string,
@@ -180,9 +145,6 @@ export function renderHumanFullObservation(
return output;
}
/**
* Render human-readable summary item in timeline
*/
export function renderHumanSummaryItem(
summary: { id: number; request: string | null },
formattedTime: string
@@ -194,17 +156,11 @@ export function renderHumanSummaryItem(
];
}
/**
* Render human-readable summary field
*/
export function renderHumanSummaryField(label: string, value: string | null, color: string): string[] {
if (!value) return [];
return [`${color}${label}:${colors.reset} ${value}`, ''];
}
/**
* Render human-readable previously section
*/
export function renderHumanPreviouslySection(priorMessages: PriorMessages): string[] {
if (!priorMessages.assistantMessage) return [];
@@ -219,9 +175,6 @@ export function renderHumanPreviouslySection(priorMessages: PriorMessages): stri
];
}
/**
* Render human-readable footer
*/
export function renderHumanFooter(totalDiscoveryTokens: number, totalReadTokens: number): string[] {
const workTokensK = Math.round(totalDiscoveryTokens / 1000);
return [
@@ -230,9 +183,6 @@ export function renderHumanFooter(totalDiscoveryTokens: number, totalReadTokens:
];
}
/**
* Render human-readable empty state
*/
export function renderHumanEmptyState(project: string): string {
return `\n${colors.bright}${colors.cyan}[${project}] recent context, ${formatHeaderDateTime()}${colors.reset}\n${colors.gray}${'─'.repeat(60)}${colors.reset}\n\n${colors.dim}No previous sessions found for this project yet.${colors.reset}\n`;
}
@@ -1,13 +1,7 @@
/**
* Context Module - Public API
*
* Re-exports the main context generation functionality.
*/
export { generateContext } from './ContextBuilder.js';
export type { ContextInput, ContextConfig } from './types.js';
// Component exports for advanced usage
export { loadContextConfig } from './ContextConfigLoader.js';
export { calculateTokenEconomics, calculateObservationTokens } from './TokenCalculator.js';
export {
@@ -1,17 +1,9 @@
/**
* FooterRenderer - Renders the context footer sections
*
* Handles rendering of previously section and token savings footer.
*/
import type { ContextConfig, TokenEconomics, PriorMessages } from '../types.js';
import { shouldShowContextEconomics } from '../TokenCalculator.js';
import * as Agent from '../formatters/AgentFormatter.js';
import * as Human from '../formatters/HumanFormatter.js';
/**
* Render the previously section (prior assistant message)
*/
export function renderPreviouslySection(
priorMessages: PriorMessages,
forHuman: boolean
@@ -22,15 +14,11 @@ export function renderPreviouslySection(
return Agent.renderAgentPreviouslySection(priorMessages);
}
/**
* Render the footer with token savings info
*/
export function renderFooter(
economics: TokenEconomics,
config: ContextConfig,
forHuman: boolean
): string[] {
// Only show footer if we have savings to display
if (!shouldShowContextEconomics(config) || economics.totalDiscoveryTokens <= 0 || economics.savings <= 0) {
return [];
}
@@ -1,17 +1,9 @@
/**
* HeaderRenderer - Renders the context header sections
*
* Handles rendering of header, legend, column key, context index, and economics.
*/
import type { ContextConfig, TokenEconomics } from '../types.js';
import { shouldShowContextEconomics } from '../TokenCalculator.js';
import * as Agent from '../formatters/AgentFormatter.js';
import * as Human from '../formatters/HumanFormatter.js';
/**
* Render the complete header section
*/
export function renderHeader(
project: string,
economics: TokenEconomics,
@@ -20,35 +12,30 @@ export function renderHeader(
): string[] {
const output: string[] = [];
// Main header
if (forHuman) {
output.push(...Human.renderHumanHeader(project));
} else {
output.push(...Agent.renderAgentHeader(project));
}
// Legend
if (forHuman) {
output.push(...Human.renderHumanLegend());
} else {
output.push(...Agent.renderAgentLegend());
}
// Column key
if (forHuman) {
output.push(...Human.renderHumanColumnKey());
} else {
output.push(...Agent.renderAgentColumnKey());
}
// Context index instructions
if (forHuman) {
output.push(...Human.renderHumanContextIndex());
} else {
output.push(...Agent.renderAgentContextIndex());
}
// Context economics
if (shouldShowContextEconomics(config)) {
if (forHuman) {
output.push(...Human.renderHumanContextEconomics(economics, config));
@@ -1,17 +1,9 @@
/**
* SummaryRenderer - Renders the summary section at the end of context
*
* Handles rendering of the most recent session summary fields.
*/
import type { ContextConfig, Observation, SessionSummary } from '../types.js';
import { colors } from '../types.js';
import * as Agent from '../formatters/AgentFormatter.js';
import * as Human from '../formatters/HumanFormatter.js';
/**
* Check if summary should be displayed
*/
export function shouldShowSummary(
config: ContextConfig,
mostRecentSummary: SessionSummary | undefined,
@@ -32,7 +24,6 @@ export function shouldShowSummary(
return false;
}
// Only show if summary is more recent than observations
if (mostRecentObservation && mostRecentSummary.created_at_epoch <= mostRecentObservation.created_at_epoch) {
return false;
}
@@ -40,9 +31,6 @@ export function shouldShowSummary(
return true;
}
/**
* Render summary fields
*/
export function renderSummaryFields(
summary: SessionSummary,
forHuman: boolean
@@ -1,9 +1,3 @@
/**
* TimelineRenderer - Renders the chronological timeline of observations and summaries
*
* Handles day grouping and rendering. In agent (LLM) mode, uses flat compact lines.
* In human (terminal) mode, uses file grouping with visual formatting.
*/
import type {
ContextConfig,
@@ -15,9 +9,6 @@ import { formatTime, formatDate, formatDateTime, extractFirstFile, parseJsonArra
import * as Agent from '../formatters/AgentFormatter.js';
import * as Human from '../formatters/HumanFormatter.js';
/**
* Group timeline items by day
*/
export function groupTimelineByDay(timeline: TimelineItem[]): Map<string, TimelineItem[]> {
const itemsByDay = new Map<string, TimelineItem[]>();
@@ -30,7 +21,6 @@ export function groupTimelineByDay(timeline: TimelineItem[]): Map<string, Timeli
itemsByDay.get(day)!.push(item);
}
// Sort days chronologically
const sortedEntries = Array.from(itemsByDay.entries()).sort((a, b) => {
const aDate = new Date(a[0]).getTime();
const bDate = new Date(b[0]).getTime();
@@ -40,9 +30,6 @@ export function groupTimelineByDay(timeline: TimelineItem[]): Map<string, Timeli
return new Map(sortedEntries);
}
/**
* Get detail field content for full observation display
*/
function getDetailField(obs: Observation, config: ContextConfig): string | null {
if (config.fullObservationField === 'narrative') {
return obs.narrative;
@@ -50,9 +37,6 @@ function getDetailField(obs: Observation, config: ContextConfig): string | null
return obs.facts ? parseJsonArray(obs.facts).join('\n') : null;
}
/**
* Render a single day's timeline items (agent/LLM mode - flat compact lines)
*/
function renderDayTimelineAgent(
day: string,
dayItems: TimelineItem[],
@@ -91,9 +75,6 @@ function renderDayTimelineAgent(
return output;
}
/**
* Render a single day's timeline items (human/terminal mode - file grouped with tables)
*/
function renderDayTimelineHuman(
day: string,
dayItems: TimelineItem[],
@@ -125,7 +106,6 @@ function renderDayTimelineHuman(
const shouldShowFull = fullObservationIds.has(obs.id);
// Check if we need a new file section
if (file !== currentFile) {
output.push(...Human.renderHumanFileHeader(file));
currentFile = file;
@@ -145,9 +125,6 @@ function renderDayTimelineHuman(
return output;
}
/**
* Render a single day's timeline items
*/
export function renderDayTimeline(
day: string,
dayItems: TimelineItem[],
@@ -162,9 +139,6 @@ export function renderDayTimeline(
return renderDayTimelineAgent(day, dayItems, fullObservationIds, config);
}
/**
* Render the complete timeline
*/
export function renderTimeline(
timeline: TimelineItem[],
fullObservationIds: Set<number>,
@@ -1,51 +1,33 @@
/**
* Context Types - Shared types for context generation module
*/
/**
* Input parameters for context generation
*/
export interface ContextInput {
session_id?: string;
transcript_path?: string;
cwd?: string;
hook_event_name?: string;
source?: "startup" | "resume" | "clear" | "compact";
/** Array of projects to query (for worktree support: [parent, worktree]) */
projects?: string[];
/** When true, return ALL observations with no limit */
full?: boolean;
[key: string]: any;
}
/**
* Configuration for context generation
*/
export interface ContextConfig {
// Display counts
totalObservationCount: number;
fullObservationCount: number;
sessionCount: number;
// Token display toggles
showReadTokens: boolean;
showWorkTokens: boolean;
showSavingsAmount: boolean;
showSavingsPercent: boolean;
// Filters
observationTypes: Set<string>;
observationConcepts: Set<string>;
// Display options
fullObservationField: 'narrative' | 'facts';
showLastSummary: boolean;
showLastMessage: boolean;
}
/**
* Observation record from database
*/
export interface Observation {
id: number;
memory_session_id: string;
@@ -61,13 +43,9 @@ export interface Observation {
discovery_tokens: number | null;
created_at: string;
created_at_epoch: number;
/** Project this observation belongs to (for multi-project queries) */
project?: string;
}
/**
* Session summary record from database
*/
export interface SessionSummary {
id: number;
memory_session_id: string;
@@ -79,29 +57,19 @@ export interface SessionSummary {
next_steps: string | null;
created_at: string;
created_at_epoch: number;
/** Project this summary belongs to (for multi-project queries) */
project?: string;
}
/**
* Summary with timeline display info
*/
export interface SummaryTimelineItem extends SessionSummary {
displayEpoch: number;
displayTime: string;
shouldShowLink: boolean;
}
/**
* Timeline item - either observation or summary
*/
export type TimelineItem =
| { type: 'observation'; data: Observation }
| { type: 'summary'; data: SummaryTimelineItem };
/**
* Token economics data
*/
export interface TokenEconomics {
totalObservations: number;
totalReadTokens: number;
@@ -110,17 +78,11 @@ export interface TokenEconomics {
savingsPercent: number;
}
/**
* Prior messages from transcript
*/
export interface PriorMessages {
userMessage: string;
assistantMessage: string;
}
/**
* ANSI color codes for terminal output
*/
export const colors = {
reset: '\x1b[0m',
bright: '\x1b[1m',
@@ -134,8 +96,5 @@ export const colors = {
red: '\x1b[31m',
};
/**
* Configuration constants
*/
export const CHARS_PER_TOKEN_ESTIMATE = 4;
export const SUMMARY_LOOKAHEAD = 1;
@@ -1,10 +1,3 @@
/**
* ModeManager - Singleton for loading and managing mode profiles
*
* Mode profiles define observation types, concepts, and prompts for different use cases.
* Default mode is 'code' (software development). Other modes like 'email-investigation'
* can be selected via CLAUDE_MEM_MODE setting.
*/
import { readFileSync, existsSync } from 'fs';
import { join } from 'path';
@@ -18,12 +11,8 @@ export class ModeManager {
private modesDir: string;
private constructor() {
// Modes are in plugin/modes/
// getPackageRoot() points to plugin/ in production and src/ in development
// We want to ensure we find the modes directory which is at the project root/plugin/modes
const packageRoot = getPackageRoot();
// Check for plugin/modes relative to package root (covers both dev and prod if paths are right)
const possiblePaths = [
join(packageRoot, 'modes'), // Production (plugin/modes)
join(packageRoot, '..', 'plugin', 'modes'), // Development (src/../plugin/modes)
@@ -33,9 +22,6 @@ export class ModeManager {
this.modesDir = foundPath || possiblePaths[0];
}
/**
* Get singleton instance
*/
static getInstance(): ModeManager {
if (!ModeManager.instance) {
ModeManager.instance = new ModeManager();
@@ -43,9 +29,6 @@ export class ModeManager {
return ModeManager.instance;
}
/**
* Parse mode ID for inheritance pattern (parent--override)
*/
private parseInheritance(modeId: string): {
hasParent: boolean;
parentId: string;
@@ -57,7 +40,6 @@ export class ModeManager {
return { hasParent: false, parentId: '', overrideId: '' };
}
// Support only one level: code--ko, not code--ko--verbose
if (parts.length > 2) {
throw new Error(
`Invalid mode inheritance: ${modeId}. Only one level of inheritance supported (parent--override)`
@@ -67,13 +49,10 @@ export class ModeManager {
return {
hasParent: true,
parentId: parts[0],
-overrideId: modeId // Use the full modeId (e.g., code--es) to find the override file
+overrideId: modeId
};
}
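A self-contained sketch of the inheritance-ID parsing above (`parseModeId` is a hypothetical name for the private `parseInheritance` helper):

```typescript
// "parent--override" IDs inherit from the parent; only one level is allowed.
function parseModeId(modeId: string) {
  const parts = modeId.split("--");
  if (parts.length === 1) return { hasParent: false, parentId: "", overrideId: "" };
  if (parts.length > 2) throw new Error(`Invalid mode inheritance: ${modeId}`);
  // The override file is keyed by the full ID (e.g. code--ko.json), not just "ko".
  return { hasParent: true, parentId: parts[0], overrideId: modeId };
}

console.log(parseModeId("code--ko")); // { hasParent: true, parentId: 'code', overrideId: 'code--ko' }
```

Keeping the full ID as `overrideId` is what lets `loadModeFile` resolve `code--ko.json` directly from the modes directory.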
/**
* Check if value is a plain object (not array, not null)
*/
private isPlainObject(value: unknown): boolean {
return (
value !== null &&
@@ -82,12 +61,6 @@ export class ModeManager {
);
}
/**
* Deep merge two objects
* - Recursively merge nested objects
* - Replace arrays completely (no merging)
* - Override primitives
*/
private deepMerge<T>(base: T, override: Partial<T>): T {
const result = { ...base } as T;
@@ -96,10 +69,8 @@ export class ModeManager {
const baseValue = base[key];
if (this.isPlainObject(overrideValue) && this.isPlainObject(baseValue)) {
// Recursively merge nested objects
result[key] = this.deepMerge(baseValue, overrideValue as any);
} else {
// Replace arrays and primitives completely
result[key] = overrideValue as T[Extract<keyof T, string>];
}
}
@@ -107,9 +78,6 @@ export class ModeManager {
return result;
}
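The merge semantics described above — nested objects merge recursively, arrays and primitives replace wholesale — can be sketched without the class's generics (a simplified illustration, not the exact implementation):

```typescript
// Nested plain objects merge recursively; arrays and primitives replace wholesale.
function isPlainObject(v: unknown): v is Record<string, unknown> {
  return v !== null && typeof v === "object" && !Array.isArray(v);
}

function deepMerge(
  base: Record<string, unknown>,
  override: Record<string, unknown>,
): Record<string, unknown> {
  const result = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const baseValue = base[key];
    result[key] = isPlainObject(value) && isPlainObject(baseValue)
      ? deepMerge(baseValue, value) // recurse into nested objects
      : value;                      // arrays/primitives: override wins outright
  }
  return result;
}

const merged = deepMerge(
  { prompts: { footer: "a", header: "b" }, types: [1, 2] },
  { prompts: { footer: "x" }, types: [3] },
);
console.log(merged); // { prompts: { footer: 'x', header: 'b' }, types: [ 3 ] }
```

Replacing arrays instead of concatenating them is the right call for mode overrides: a `code--ko` override that lists observation types should define the full list, not append duplicates to the parent's.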
/**
* Load a mode file from disk without inheritance processing
*/
private loadModeFile(modeId: string): ModeConfig {
const modePath = join(this.modesDir, `${modeId}.json`);
@@ -121,19 +89,9 @@ export class ModeManager {
return JSON.parse(jsonContent) as ModeConfig;
}
/**
* Load a mode profile by ID with inheritance support
* Caches the result for subsequent calls
*
* Supports inheritance via parent--override pattern (e.g., code--ko)
* - Loads parent mode recursively
* - Loads override file from modes directory
* - Deep merges override onto parent
*/
loadMode(modeId: string): ModeConfig {
const inheritance = this.parseInheritance(modeId);
// No inheritance - load file directly (existing behavior)
if (!inheritance.hasParent) {
try {
const mode = this.loadModeFile(modeId);
@@ -149,7 +107,6 @@ export class ModeManager {
} else {
logger.warn('WORKER', `Mode file not found: ${modeId}, falling back to 'code'`, { error: String(error) });
}
// If we're already trying to load 'code', throw to prevent infinite recursion
if (modeId === 'code') {
throw new Error('Critical: code.json mode file missing');
}
@@ -157,10 +114,8 @@ export class ModeManager {
}
}
// Has inheritance - load parent and merge with override
const { parentId, overrideId } = inheritance;
// Load parent mode recursively
let parentMode: ModeConfig;
try {
parentMode = this.loadMode(parentId);
@@ -173,7 +128,6 @@ export class ModeManager {
parentMode = this.loadMode('code');
}
// Load override file
let overrideConfig: Partial<ModeConfig>;
try {
overrideConfig = this.loadModeFile(overrideId);
@@ -188,14 +142,12 @@ export class ModeManager {
return parentMode;
}
// Validate override file loaded successfully
if (!overrideConfig) {
logger.warn('SYSTEM', `Invalid override file: ${overrideId}, using parent mode '${parentId}' only`);
this.activeMode = parentMode;
return parentMode;
}
// Deep merge override onto parent
const mergedMode = this.deepMerge(parentMode, overrideConfig);
this.activeMode = mergedMode;
@@ -209,9 +161,6 @@ export class ModeManager {
return mergedMode;
}
/**
* Get currently active mode
*/
getActiveMode(): ModeConfig {
if (!this.activeMode) {
throw new Error('No mode loaded. Call loadMode() first.');
@@ -219,46 +168,28 @@ export class ModeManager {
return this.activeMode;
}
/**
* Get all observation types from active mode
*/
getObservationTypes(): ObservationType[] {
return this.getActiveMode().observation_types;
}
/**
* Get all observation concepts from active mode
*/
getObservationConcepts(): ObservationConcept[] {
return this.getActiveMode().observation_concepts;
}
/**
* Get icon for a specific observation type
*/
getTypeIcon(typeId: string): string {
const type = this.getObservationTypes().find(t => t.id === typeId);
return type?.emoji || '📝';
}
/**
* Get work emoji for a specific observation type
*/
getWorkEmoji(typeId: string): string {
const type = this.getObservationTypes().find(t => t.id === typeId);
return type?.work_emoji || '📝';
}
/**
* Validate that a type ID exists in the active mode
*/
validateType(typeId: string): boolean {
return this.getObservationTypes().some(t => t.id === typeId);
}
/**
* Get label for a specific observation type
*/
getTypeLabel(typeId: string): string {
const type = this.getObservationTypes().find(t => t.id === typeId);
return type?.label || typeId;
@@ -1,6 +1,3 @@
/**
* TypeScript interfaces for mode configuration system
*/
export interface ObservationType {
id: string;
@@ -17,49 +14,43 @@ export interface ObservationConcept {
}
export interface ModePrompts {
-system_identity: string; // Base persona and role definition
-language_instruction?: string; // Optional language constraints (e.g., "Write in Korean")
-spatial_awareness: string; // Working directory context guidance
-observer_role: string; // What the observer's job is in this mode
-recording_focus: string; // What to record and how to think about it
-skip_guidance: string; // What to skip recording
-type_guidance: string; // Valid observation types for this mode
-concept_guidance: string; // Valid concept categories for this mode
-field_guidance: string; // Guidance for facts/files fields
-output_format_header: string; // Text introducing the XML schema
-format_examples: string; // Optional additional XML examples (empty string if not needed)
-footer: string; // Closing instructions and encouragement
+system_identity: string;
+spatial_awareness: string;
+observer_role: string;
+recording_focus: string;
+skip_guidance: string;
+type_guidance: string;
+concept_guidance: string;
+field_guidance: string;
+output_format_header: string;
+format_examples: string;
+footer: string;
// Observation XML placeholders
-xml_title_placeholder: string; // e.g., "[**title**: Short title capturing the core action or topic]"
-xml_subtitle_placeholder: string; // e.g., "[**subtitle**: One sentence explanation (max 24 words)]"
-xml_fact_placeholder: string; // e.g., "[Concise, self-contained statement]"
-xml_narrative_placeholder: string; // e.g., "[**narrative**: Full context: What was done, how it works, why it matters]"
-xml_concept_placeholder: string; // e.g., "[knowledge-type-category]"
-xml_file_placeholder: string; // e.g., "[path/to/file]"
+xml_title_placeholder: string;
+xml_subtitle_placeholder: string;
+xml_fact_placeholder: string;
+xml_narrative_placeholder: string;
+xml_concept_placeholder: string;
+xml_file_placeholder: string;
// Summary XML placeholders
-xml_summary_request_placeholder: string; // e.g., "[Short title capturing the user's request AND...]"
-xml_summary_investigated_placeholder: string; // e.g., "[What has been explored so far? What was examined?]"
-xml_summary_learned_placeholder: string; // e.g., "[What have you learned about how things work?]"
-xml_summary_completed_placeholder: string; // e.g., "[What work has been completed so far? What has shipped or changed?]"
-xml_summary_next_steps_placeholder: string; // e.g., "[What are you actively working on or planning to work on next in this session?]"
-xml_summary_notes_placeholder: string; // e.g., "[Additional insights or observations about the current progress]"
+xml_summary_request_placeholder: string;
+xml_summary_investigated_placeholder: string;
+xml_summary_learned_placeholder: string;
+xml_summary_completed_placeholder: string;
+xml_summary_next_steps_placeholder: string;
+xml_summary_notes_placeholder: string;
// Section headers (with separator lines)
-header_memory_start: string; // e.g., "MEMORY PROCESSING START\n======================="
-header_memory_continued: string; // e.g., "MEMORY PROCESSING CONTINUED\n==========================="
-header_summary_checkpoint: string; // e.g., "PROGRESS SUMMARY CHECKPOINT\n==========================="
+header_memory_start: string;
+header_memory_continued: string;
+header_summary_checkpoint: string;
// Continuation prompts
-continuation_greeting: string; // e.g., "Hello memory agent, you are continuing to observe the primary Claude session."
-continuation_instruction: string; // e.g., "IMPORTANT: Continue generating observations from tool use messages using the XML structure below."
+continuation_greeting: string;
+continuation_instruction: string;
// Summary prompts
-summary_instruction: string; // Instructions for writing progress summary
-summary_context_label: string; // Label for Claude's response section (e.g., "Claude's Full Response to User:")
-summary_format_instruction: string; // Instruction to use XML format (e.g., "Respond in this XML format:")
-summary_footer: string; // Footer with closing instructions and language requirement
+summary_instruction: string;
+summary_context_label: string;
+summary_format_instruction: string;
+summary_footer: string;
}
export interface ModeConfig {
@@ -1,25 +1,3 @@
/**
* One-time v12.4.3 pollution cleanup.
*
* Removes accumulated junk that v12.4.0/v12.4.2 fixes prevent from ever recurring:
* 1. observer-sessions: rows that polluted user-facing search/timeline before
* the observer-sessions filter shipped. Cascades to user_prompts, observations,
* and session_summaries via existing FK ON DELETE CASCADE.
* 2. Stuck pending_messages: poisoned chains where 10 rows for a single
* session_db_id are stuck in 'failed' or 'processing'. Threshold spares
* legitimate transient failures while clearing the cascade-failure cases
* from the pre-v12.4.2 context-overflow loop.
*
* After SQLite is cleaned, ~/.claude-mem/chroma/ and ~/.claude-mem/chroma-sync-state.json
* are removed so backfillAllProjects rebuilds the vector store from the cleaned SQLite.
*
* Marker-file gated. Idempotent. Opt-out via CLAUDE_MEM_SKIP_CLEANUP_V12_4_3=1.
*
* Mirrors the runOneTimeChromaMigration / runOneTimeCwdRemap pattern in
* ProcessManager.ts. Must run AFTER dbManager.initialize() (so migrations have
* applied) and BEFORE ChromaSync.backfillAllProjects (so backfill sees the
* cleaned state).
*/
import path from 'path';
import { existsSync, writeFileSync, mkdirSync, rmSync, statSync, copyFileSync, statfsSync } from 'fs';
@@ -45,16 +23,6 @@ interface MarkerPayload {
skipped?: string;
}
/**
* Run the one-time v12.4.3 cleanup. Safe to call on every worker startup;
* the marker file ensures the work runs at most once per data directory.
*
* @param dataDirectory - Override for DATA_DIR (used in tests)
* @param options.dryRun - When true, scans + reports counts but performs NO
* DB writes, NO backup, NO chroma wipe, and does NOT write the marker.
* Used by `claude-mem cleanup --dry-run` to preview what would happen
* without mutating user state. (#2126 item 5)
*/
export function runOneTimeV12_4_3Cleanup(
dataDirectory?: string,
options: { dryRun?: boolean } = {},
@@ -106,11 +74,6 @@ export function runOneTimeV12_4_3Cleanup(
}
}
/**
* Read-only scan: count what runOneTimeV12_4_3Cleanup *would* delete.
* Mirrors the COUNT(*) queries from runObserverSessionsPurge and
* runStuckPendingPurge. Opens the DB read-only; never mutates.
*/
function scanCleanupCounts(dbPath: string): CleanupCounts {
const counts = emptyCounts();
const db = new Database(dbPath, { readonly: true });
@@ -152,8 +115,6 @@ function executeCleanup(dbPath: string, effectiveDataDir: string, markerPath: st
const fs = statfsSync(effectiveDataDir);
const free = Number(fs.bavail) * Number(fs.bsize);
if (free < required) {
// Don't write the marker — once the user frees disk space, the next
// worker startup should retry the cleanup rather than skipping forever.
logger.error('SYSTEM', 'Insufficient disk for v12.4.3 backup; skipping cleanup (will retry on next startup)', { dbSize, free, required });
return;
}
@@ -177,17 +138,12 @@ function executeCleanup(dbPath: string, effectiveDataDir: string, markerPath: st
vacuumFailed = true;
vacuumError = err instanceof Error ? err : new Error(String(err));
}
// Close before any fallback: on Windows an open SQLite handle holds a
// file lock that can prevent copyFileSync from reading the source.
backupDb.close();
if (vacuumFailed) {
logger.warn('SYSTEM', 'VACUUM INTO failed, falling back to copyFileSync', {}, vacuumError ?? undefined);
try {
copyFileSync(dbPath, backupPath);
// The DB is in WAL mode; recent committed pages may live in -wal/-shm.
// VACUUM INTO captures them automatically; copyFileSync does not, so we
// mirror those files alongside the backup so it represents the same state.
const walPath = `${dbPath}-wal`;
const shmPath = `${dbPath}-shm`;
if (existsSync(walPath)) copyFileSync(walPath, `${backupPath}-wal`);
@@ -202,7 +158,6 @@ function executeCleanup(dbPath: string, effectiveDataDir: string, markerPath: st
const counts = emptyCounts();
const db = new Database(dbPath);
// PRAGMA foreign_keys must be set OUTSIDE a transaction to take effect on this connection.
db.run('PRAGMA foreign_keys = ON');
try {
@@ -212,9 +167,6 @@ function executeCleanup(dbPath: string, effectiveDataDir: string, markerPath: st
db.close();
}
// SQLite purge succeeded; chroma wipe failure must NOT re-run the migration
// on the next startup or we accumulate one new backup per boot. Capture the
// failure on the marker instead.
let chromaWiped = false;
let chromaWipeError: string | undefined;
try {
@@ -244,9 +196,6 @@ function executeCleanup(dbPath: string, effectiveDataDir: string, markerPath: st
function runObserverSessionsPurge(db: Database, counts: CleanupCounts): void {
db.run('BEGIN IMMEDIATE');
try {
// Count rows before the delete: bun:sqlite's result.changes inflates with
// FTS-trigger and cascade row counts, so it can't stand in for a session
// count or a cascade-row count on its own.
const sessionCount = (db.prepare(`SELECT COUNT(*) AS n FROM sdk_sessions WHERE project = ?`).get(OBSERVER_SESSIONS_PROJECT) as { n: number }).n;
const cascadeRows =
(db.prepare(`SELECT COUNT(*) AS n FROM user_prompts WHERE content_session_id IN (SELECT content_session_id FROM sdk_sessions WHERE project = ?)`).get(OBSERVER_SESSIONS_PROJECT) as { n: number }).n
@@ -263,8 +212,6 @@ function runObserverSessionsPurge(db: Database, counts: CleanupCounts): void {
cascadeRows: counts.observerCascadeRows,
});
} catch (err: unknown) {
// Defensive: SQLite may have already auto-rolled back on certain
// constraint failures. Don't let a no-op ROLLBACK shadow the real error.
try { db.run('ROLLBACK'); } catch { /* already rolled back */ }
throw err;
}
@@ -273,9 +220,6 @@ function runObserverSessionsPurge(db: Database, counts: CleanupCounts): void {
function runStuckPendingPurge(db: Database, counts: CleanupCounts): void {
db.run('BEGIN IMMEDIATE');
try {
// Pre-count for consistency with runObserverSessionsPurge: result.changes
// would be reliable today (no FTS on pending_messages) but the explicit
// count protects against future schema changes.
const stuckCount = (db.prepare(
`SELECT COUNT(*) AS n FROM pending_messages
WHERE status IN ('failed', 'processing')
@@ -302,8 +246,6 @@ function runStuckPendingPurge(db: Database, counts: CleanupCounts): void {
db.run('COMMIT');
logger.info('SYSTEM', 'v12.4.3: stuck pending_messages purge committed', { rows: counts.stuckPendingMessages });
} catch (err: unknown) {
// Defensive: SQLite may have already auto-rolled back on certain
// constraint failures. Don't let a no-op ROLLBACK shadow the real error.
try { db.run('ROLLBACK'); } catch { /* already rolled back */ }
throw err;
}
@@ -1,12 +1,3 @@
/**
* GracefulShutdown - Cleanup utilities for graceful exit
*
* Extracted from worker-service.ts to provide centralized shutdown coordination.
* Handles:
* - HTTP server closure (with Windows-specific delays)
* - Session manager shutdown coordination
* - Child process cleanup (Windows zombie port fix)
*/
import http from 'http';
import { logger } from '../../utils/logger.js';
@@ -24,16 +15,10 @@ export interface CloseableDatabase {
close(): Promise<void>;
}
/**
* Stoppable service interface for ChromaMcpManager
*/
export interface StoppableService {
stop(): Promise<void>;
}
/**
* Configuration for graceful shutdown
*/
export interface GracefulShutdownConfig {
server: http.Server | null;
sessionManager: ShutdownableService;
@@ -42,71 +27,47 @@ export interface GracefulShutdownConfig {
chromaMcpManager?: StoppableService;
}
/**
* Perform graceful shutdown of all services
*
* IMPORTANT: On Windows, we must kill all child processes before exiting
* to prevent zombie ports. The socket handle can be inherited by children,
* and if not properly closed, the port stays bound after process death.
*/
export async function performGracefulShutdown(config: GracefulShutdownConfig): Promise<void> {
logger.info('SYSTEM', 'Shutdown initiated');
// STEP 1: Close HTTP server first
if (config.server) {
await closeHttpServer(config.server);
logger.info('SYSTEM', 'HTTP server closed');
}
// STEP 2: Shutdown active sessions
await config.sessionManager.shutdownAll();
// STEP 3: Close MCP client connection (signals child to exit gracefully)
if (config.mcpClient) {
await config.mcpClient.close();
logger.info('SYSTEM', 'MCP client closed');
}
// STEP 4: Stop Chroma MCP connection
if (config.chromaMcpManager) {
logger.info('SHUTDOWN', 'Stopping Chroma MCP connection...');
await config.chromaMcpManager.stop();
logger.info('SHUTDOWN', 'Chroma MCP connection stopped');
}
// STEP 5: Close database connection (includes ChromaSync cleanup)
if (config.dbManager) {
await config.dbManager.close();
}
// STEP 6: Supervisor handles tracked child termination, PID cleanup, and stale sockets.
// Plan 06 Phase 8 — call the supervisor singleton directly; the wrapper
// re-export from supervisor/index.ts was deleted (one wrapper, one caller,
// no value).
await getSupervisor().stop();
logger.info('SYSTEM', 'Worker shutdown complete');
}
/**
* Close HTTP server with Windows-specific delays
* Windows needs extra time to release sockets properly
*/
async function closeHttpServer(server: http.Server): Promise<void> {
// Close all active connections
server.closeAllConnections();
// Give Windows time to close connections before closing server (prevents zombie ports)
if (process.platform === 'win32') {
await new Promise(r => setTimeout(r, 500));
}
// Close the server
await new Promise<void>((resolve, reject) => {
server.close(err => err ? reject(err) : resolve());
});
// Extra delay on Windows to ensure port is fully released
if (process.platform === 'win32') {
await new Promise(r => setTimeout(r, 500));
logger.info('SYSTEM', 'Waited for Windows port cleanup');
@@ -1,13 +1,3 @@
/**
* HealthMonitor - Port monitoring, health checks, and version checking
*
* Extracted from worker-service.ts monolith to provide centralized health monitoring.
* Handles:
* - Port availability checking
* - Worker health/readiness polling
* - Version mismatch detection (critical for plugin updates)
* - HTTP-based shutdown requests
*/
import path from 'path';
import net from 'net';
@@ -15,17 +5,12 @@ import { readFileSync } from 'fs';
import { logger } from '../../utils/logger.js';
import { MARKETPLACE_ROOT } from '../../shared/paths.js';
/**
* Make an HTTP request to the worker via TCP.
* Returns { ok, statusCode, body } or throws on transport error.
*/
async function httpRequestToWorker(
port: number,
endpointPath: string,
method: string = 'GET'
): Promise<{ ok: boolean; statusCode: number; body: string }> {
const response = await fetch(`http://127.0.0.1:${port}${endpointPath}`, { method });
// Gracefully handle cases where response body isn't available (e.g., test mocks)
let body = '';
try {
body = await response.text();
@@ -35,21 +20,8 @@ async function httpRequestToWorker(
return { ok: response.ok, statusCode: response.status, body };
}
/**
* Check if a port is in use by attempting an atomic socket bind.
* More reliable than an HTTP health check for daemon spawn guards: it
* prevents the TOCTOU race where two daemons both see "port free" via
* HTTP and then both try to listen() (upstream bug workaround).
*
* Falls back to HTTP health check on Windows where socket bind
* behavior differs.
*/
export async function isPortInUse(port: number): Promise<boolean> {
if (process.platform === 'win32') {
// APPROVED OVERRIDE: Windows keeps HTTP health check because socket bind
// semantics differ (SO_REUSEADDR defaults, firewall prompts). The TOCTOU
// race remains on Windows but is an accepted limitation — the atomic
// socket approach would cause false positives or UAC popups.
try {
const response = await fetch(`http://127.0.0.1:${port}/api/health`);
return response.ok;
@@ -63,7 +35,6 @@ export async function isPortInUse(port: number): Promise<boolean> {
}
}
// Unix: atomic socket bind check — no TOCTOU race
return new Promise((resolve) => {
const server = net.createServer();
server.once('error', (err: NodeJS.ErrnoException) => {
@@ -80,10 +51,6 @@ export async function isPortInUse(port: number): Promise<boolean> {
});
}
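The Unix branch above resolves via an atomic bind attempt. A standalone sketch of that pattern (assumption: only EADDRINUSE/EACCES count as "in use"; a successful bind proves the port was free and is released immediately):

```typescript
import net from 'node:net';

// Illustrative sketch of the atomic-bind port check, not the exported function.
function portInUseSketch(port: number): Promise<boolean> {
  return new Promise((resolve) => {
    const server = net.createServer();
    server.once('error', (err: NodeJS.ErrnoException) => {
      // Bind refused: something already owns the port (or we lack permission).
      resolve(err.code === 'EADDRINUSE' || err.code === 'EACCES');
    });
    server.listen(port, '127.0.0.1', () => {
      // Bind succeeded, so the port was free; release it immediately.
      server.close(() => resolve(false));
    });
  });
}
```

Because the kernel arbitrates the bind, two concurrent callers cannot both see "free", which is the TOCTOU property the HTTP probe lacks.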
/**
* Poll a worker endpoint until it returns 200 OK or timeout.
* Shared implementation for liveness and readiness checks.
*/
async function pollEndpointUntilOk(
port: number,
endpointPath: string,
@@ -96,7 +63,6 @@ async function pollEndpointUntilOk(
const result = await httpRequestToWorker(port, endpointPath);
if (result.ok) return true;
} catch (error) {
// [ANTI-PATTERN IGNORED]: Retry loop - expected failures during startup, will retry
if (error instanceof Error) {
logger.debug('SYSTEM', retryLogMessage, {}, error);
} else {
@@ -108,29 +74,14 @@ async function pollEndpointUntilOk(
return false;
}
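The shared liveness/readiness loop above can be reduced to a generic poll helper. A minimal sketch, with the HTTP probe swapped for an injected async predicate so it runs without a live worker (helper name is illustrative):

```typescript
// Generic poll-until-ok sketch mirroring the loop above.
async function pollUntilOk(
  probe: () => Promise<boolean>,
  timeoutMs: number,
  intervalMs: number,
): Promise<boolean> {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    try {
      if (await probe()) return true;
    } catch {
      // Expected while the target is still starting up; retry until timeout.
    }
    await new Promise((r) => setTimeout(r, intervalMs));
  }
  return false;
}
```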
/**
* Wait for the worker HTTP server to become responsive (liveness check).
* Uses /api/health which returns 200 as soon as the HTTP server is listening.
* For full initialization (DB + search), use waitForReadiness() instead.
*/
export function waitForHealth(port: number, timeoutMs: number = 30000): Promise<boolean> {
return pollEndpointUntilOk(port, '/api/health', timeoutMs, 'Service not ready yet, will retry');
}
/**
* Wait for the worker to be fully initialized (DB + search ready).
* Uses /api/readiness which returns 200 only after core initialization completes.
* Now that initializationCompleteFlag is set after DB/search init (not MCP),
* this typically completes in a few seconds.
*/
export function waitForReadiness(port: number, timeoutMs: number = 30000): Promise<boolean> {
return pollEndpointUntilOk(port, '/api/readiness', timeoutMs, 'Worker not ready yet, will retry');
}
/**
* Wait for a port to become free (no longer responding to health checks)
* Used after shutdown to confirm the port is available for restart
*/
export async function waitForPortFree(port: number, timeoutMs: number = 10000): Promise<boolean> {
const start = Date.now();
while (Date.now() - start < timeoutMs) {
@@ -140,10 +91,6 @@ export async function waitForPortFree(port: number, timeoutMs: number = 10000):
return false;
}
/**
* Send HTTP shutdown request to a running worker
* @returns true if shutdown request was acknowledged, false otherwise
*/
export async function httpShutdown(port: number): Promise<boolean> {
try {
const result = await httpRequestToWorker(port, '/api/admin/shutdown', 'POST');
@@ -153,22 +100,15 @@ export async function httpShutdown(port: number): Promise<boolean> {
}
return true;
} catch (error) {
// Connection refused is expected if worker already stopped
if (error instanceof Error && error.message?.includes('ECONNREFUSED')) {
logger.debug('SYSTEM', 'Worker already stopped', {}, error);
return false;
}
// Unexpected error - log full details
logger.error('SYSTEM', 'Shutdown request failed unexpectedly', {}, error as Error);
return false;
}
}
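The error triage in httpShutdown distinguishes "worker already gone" from real failures. A pure sketch of that classification (hypothetical helper name; matching on the message substring as the code above does):

```typescript
// Connection-refused means the worker already stopped (a normal outcome);
// anything else is an unexpected failure worth logging at error level.
function classifyShutdownError(err: unknown): 'already-stopped' | 'unexpected' {
  return err instanceof Error && err.message?.includes('ECONNREFUSED')
    ? 'already-stopped'
    : 'unexpected';
}
```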
/**
* Get the plugin version from the installed marketplace package.json
* This is the "expected" version that should be running.
* Returns 'unknown' on ENOENT/EBUSY (shutdown race condition, fix #1042).
*/
export function getInstalledPluginVersion(): string {
try {
const packageJsonPath = path.join(MARKETPLACE_ROOT, 'package.json');
@@ -187,10 +127,6 @@ export function getInstalledPluginVersion(): string {
}
}
/**
* Get the running worker's version via API
* This is the "actual" version currently running.
*/
export async function getRunningWorkerVersion(port: number): Promise<string | null> {
try {
const result = await httpRequestToWorker(port, '/api/version');
@@ -198,7 +134,6 @@ export async function getRunningWorkerVersion(port: number): Promise<string | nu
const data = JSON.parse(result.body) as { version: string };
return data.version;
} catch {
// Expected: worker not running or version endpoint unavailable
logger.debug('SYSTEM', 'Could not fetch worker version', {});
return null;
}
@@ -210,16 +145,10 @@ export interface VersionCheckResult {
workerVersion: string | null;
}
/**
* Check if worker version matches plugin version
* Critical for detecting when plugin is updated but worker is still running old code
* Returns true if versions match or if we can't determine (assume match for graceful degradation)
*/
export async function checkVersionMatch(port: number): Promise<VersionCheckResult> {
const pluginVersion = getInstalledPluginVersion();
const workerVersion = await getRunningWorkerVersion(port);
// If either version is unknown/null, assume match (graceful degradation, fix #1042)
if (!workerVersion || pluginVersion === 'unknown') {
return { matches: true, pluginVersion, workerVersion };
}
@@ -1,12 +1,3 @@
/**
* ProcessManager - PID files, signal handlers, and child process lifecycle management
*
* Extracted from worker-service.ts monolith to provide centralized process management.
* Handles:
* - PID file management for daemon coordination
* - Signal handler registration for graceful shutdown
* - Child process enumeration and cleanup (especially for Windows zombie port fix)
*/
import path from 'path';
import { homedir } from 'os';
@@ -20,21 +11,9 @@ import { getSupervisor, validateWorkerPidFile, type ValidateWorkerPidStatus } fr
const execAsync = promisify(exec);
// Standard paths for PID file management
const DATA_DIR = path.join(homedir(), '.claude-mem');
const PID_FILE = path.join(DATA_DIR, 'worker.pid');
// Orphaned process cleanup patterns and thresholds
// These are claude-mem processes that can accumulate if not properly terminated
const ORPHAN_PROCESS_PATTERNS = [
'mcp-server.cjs', // Main MCP server process
'worker-service.cjs', // Background worker daemon
'chroma-mcp' // ChromaDB MCP subprocess
];
// Only kill processes older than this to avoid killing the current session
const ORPHAN_MAX_AGE_MINUTES = 30;
interface RuntimeResolverOptions {
platform?: NodeJS.Platform;
execPath?: string;
@@ -77,43 +56,9 @@ function lookupBinaryInPath(binaryName: string, platform: NodeJS.Platform): stri
return firstMatch || null;
}
// Memoize the resolved runtime path for the no-options call site (which is
// what spawnDaemon uses). Caches successful resolutions so repeated spawn
// attempts (crash loops, health thrashing) don't repeatedly hit `statSync`
// on the candidate paths.
//
// IMPORTANT: only success is cached. A `null` result (Bun not found) is
// never cached so that a long-running MCP server can recover if the user
// installs Bun in another terminal between the first failed lookup and a
// subsequent retry. Caching `null` would permanently break the process
// until restart. Per PR #1645 round-10 review.
//
// `undefined` means "not yet resolved"; tests that pass options bypass the
// cache entirely.
let cachedWorkerRuntimePath: string | undefined = undefined;
/**
* Reset the memoized runtime path. Exported for test isolation only;
* production code never needs to call this.
*/
export function resetWorkerRuntimePathCache(): void {
cachedWorkerRuntimePath = undefined;
}
/**
* Resolve the runtime executable for spawning the worker daemon.
*
* worker-service.cjs imports `bun:sqlite`, so it MUST run under Bun on every
* platform, not just Windows. When the caller is already running under Bun
* (e.g. the worker self-spawning from a hook), we reuse process.execPath to
* avoid an extra PATH lookup. Otherwise (notably when the MCP server running
* under Node spawns the worker for the first time) we locate the Bun binary
* via env vars, well-known install locations, and finally the system PATH.
*/
export function resolveWorkerRuntimePath(options: RuntimeResolverOptions = {}): string | null {
// Memoization fast path — only when called with no injected options. Tests
// that pass options always run the full resolution (and never populate or
// read the cache) to keep the existing test cases deterministic.
const isMemoizable = Object.keys(options).length === 0;
if (isMemoizable && cachedWorkerRuntimePath !== undefined) {
return cachedWorkerRuntimePath;
@@ -121,8 +66,6 @@ export function resolveWorkerRuntimePath(options: RuntimeResolverOptions = {}):
const result = resolveWorkerRuntimePathUncached(options);
// Only cache successful resolutions. See the comment on
// `cachedWorkerRuntimePath` above for the rationale.
if (isMemoizable && result !== null) {
cachedWorkerRuntimePath = result;
}
@@ -133,7 +76,6 @@ function resolveWorkerRuntimePathUncached(options: RuntimeResolverOptions): stri
const platform = options.platform ?? process.platform;
const execPath = options.execPath ?? process.execPath;
// If already running under Bun, reuse it directly.
if (isBunExecutablePath(execPath)) {
return execPath;
}
@@ -172,11 +114,6 @@ function resolveWorkerRuntimePathUncached(options: RuntimeResolverOptions): stri
return normalized;
}
// Allow command-style values from env (e.g. BUN=bun). The previous branch
// would also match this candidate via isBunExecutablePath('bun') === true,
// but pathExists('bun') is false because it's a relative name — so this
// branch is what actually fires for the bare-command case. We return the
// bare name unchanged so child_process.spawn() resolves it via PATH.
if (normalized.toLowerCase() === 'bun') {
return normalized;
}
@@ -192,14 +129,6 @@ import {
} from '../../supervisor/process-registry.js';
export { captureProcessStartToken, verifyPidFileOwnership, type PidInfo };
/**
* Write PID info to the standard PID file location.
*
* Automatically captures a process-start token for `info.pid` if the caller
* didn't supply one. The token lets future readers detect PID reuse across
* reboots/container restarts; see captureProcessStartToken in
* supervisor/process-registry.ts.
*/
export function writePidFile(info: PidInfo): void {
mkdirSync(DATA_DIR, { recursive: true });
const resolvedToken = info.startToken ?? captureProcessStartToken(info.pid);
@@ -207,10 +136,6 @@ export function writePidFile(info: PidInfo): void {
writeFileSync(PID_FILE, JSON.stringify(payload, null, 2));
}
/**
* Read PID info from the standard PID file location
* Returns null if file doesn't exist or is corrupted
*/
export function readPidFile(): PidInfo | null {
if (!existsSync(PID_FILE)) return null;
@@ -226,16 +151,12 @@ export function readPidFile(): PidInfo | null {
}
}
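The write/read pair above persists the payload as pretty-printed JSON and treats a missing or corrupted file as "no worker recorded". A round-trip sketch under that assumption (field names illustrative, not the exported PidInfo shape):

```typescript
import { mkdtempSync, writeFileSync, readFileSync } from 'node:fs';
import { tmpdir } from 'node:os';
import path from 'node:path';

interface PidPayloadSketch { pid: number; startToken?: string }

// Mirror of writePidFile's serialization: pretty-printed JSON.
function writePidSketch(file: string, info: PidPayloadSketch): void {
  writeFileSync(file, JSON.stringify(info, null, 2));
}

// Mirror of readPidFile's tolerance: any read/parse failure reads as null.
function readPidSketch(file: string): PidPayloadSketch | null {
  try {
    return JSON.parse(readFileSync(file, 'utf8')) as PidPayloadSketch;
  } catch {
    return null;
  }
}
```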
/**
* Remove the PID file (called during shutdown)
*/
export function removePidFile(): void {
if (!existsSync(PID_FILE)) return;
try {
unlinkSync(PID_FILE);
} catch (error: unknown) {
// [ANTI-PATTERN IGNORED]: Cleanup function - PID file removal failure is non-critical
if (error instanceof Error) {
logger.warn('SYSTEM', 'Failed to remove PID file', { path: PID_FILE }, error);
} else {
@@ -244,36 +165,22 @@ export function removePidFile(): void {
}
}
/**
* Get platform-adjusted timeout for worker-side socket operations (2.0x on Windows).
*
* Note: Two platform multiplier functions exist intentionally:
* - getTimeout() in hook-constants.ts uses 1.5x for hook-side operations (fast path)
* - getPlatformTimeout() here uses 2.0x for worker-side socket operations (slower path)
*/
export function getPlatformTimeout(baseMs: number): number {
const WINDOWS_MULTIPLIER = 2.0;
return process.platform === 'win32' ? Math.round(baseMs * WINDOWS_MULTIPLIER) : baseMs;
}
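The note above describes two intentional multipliers (1.5x hook-side, 2.0x worker-side). A parameterized sketch of the shared shape (assumption: the hook-side getTimeout in hook-constants.ts rounds the same way):

```typescript
// Both multiplier functions reduce to this: scale on Windows, pass through elsewhere.
function platformTimeoutSketch(baseMs: number, isWindows: boolean, multiplier: number): number {
  return isWindows ? Math.round(baseMs * multiplier) : baseMs;
}
```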
/**
* Get all child process PIDs (Windows-specific)
* Used for cleanup to prevent zombie ports when parent exits
*/
export async function getChildProcesses(parentPid: number): Promise<number[]> {
if (process.platform !== 'win32') {
return [];
}
// SECURITY: Validate PID is a positive integer to prevent command injection
if (!Number.isInteger(parentPid) || parentPid <= 0) {
logger.warn('SYSTEM', 'Invalid parent PID for child process enumeration', { parentPid });
return [];
}
try {
// Use WQL -Filter to avoid $_ pipeline syntax that breaks in Git Bash (#1062, #1024).
// Get-CimInstance with server-side filtering is also more efficient than piping through Where-Object.
const cmd = `powershell -NoProfile -NonInteractive -Command "Get-CimInstance Win32_Process -Filter 'ParentProcessId=${parentPid}' | Select-Object -ExpandProperty ProcessId"`;
const { stdout } = await execAsync(cmd, { timeout: HOOK_TIMEOUTS.POWERSHELL_COMMAND, windowsHide: true });
return stdout
@@ -283,7 +190,6 @@ export async function getChildProcesses(parentPid: number): Promise<number[]> {
.map(line => parseInt(line, 10))
.filter(pid => pid > 0);
} catch (error: unknown) {
// Shutdown cleanup - failure is non-critical, continue without child process cleanup
if (error instanceof Error) {
logger.error('SYSTEM', 'Failed to enumerate child processes', { parentPid }, error);
} else {
@@ -293,77 +199,12 @@ export async function getChildProcesses(parentPid: number): Promise<number[]> {
}
}
/**
* Force kill a process by PID
* Windows: uses taskkill /F /T to kill process tree
* Unix: uses SIGKILL
*/
export async function forceKillProcess(pid: number): Promise<void> {
// SECURITY: Validate PID is a positive integer to prevent command injection
if (!Number.isInteger(pid) || pid <= 0) {
logger.warn('SYSTEM', 'Invalid PID for force kill', { pid });
return;
}
try {
if (process.platform === 'win32') {
// /T kills entire process tree, /F forces termination
await execAsync(`taskkill /PID ${pid} /T /F`, { timeout: HOOK_TIMEOUTS.POWERSHELL_COMMAND, windowsHide: true });
} else {
process.kill(pid, 'SIGKILL');
}
logger.info('SYSTEM', 'Killed process', { pid });
} catch (error: unknown) {
// [ANTI-PATTERN IGNORED]: Shutdown cleanup - process already exited, continue
if (error instanceof Error) {
logger.debug('SYSTEM', 'Process already exited during force kill', { pid }, error);
} else {
logger.debug('SYSTEM', 'Process already exited during force kill', { pid }, new Error(String(error)));
}
}
}
/**
* Wait for processes to fully exit
*/
export async function waitForProcessesExit(pids: number[], timeoutMs: number): Promise<void> {
const start = Date.now();
while (Date.now() - start < timeoutMs) {
const stillAlive = pids.filter(pid => {
try {
process.kill(pid, 0);
return true;
} catch {
// process.kill(pid, 0) throws when PID doesn't exist — expected during cleanup
// [ANTI-PATTERN IGNORED]: Tight loop checking 100s of PIDs every 100ms during cleanup
return false;
}
});
if (stillAlive.length === 0) {
logger.info('SYSTEM', 'All child processes exited');
return;
}
logger.debug('SYSTEM', 'Waiting for processes to exit', { stillAlive });
await new Promise(r => setTimeout(r, 100));
}
logger.warn('SYSTEM', 'Timeout waiting for child processes to exit');
}
/**
* Parse process elapsed time from ps etime format: [[DD-]HH:]MM:SS
* Returns age in minutes, or -1 if parsing fails
*/
export function parseElapsedTime(etime: string): number {
if (!etime || etime.trim() === '') return -1;
const cleaned = etime.trim();
let totalMinutes = 0;
// DD-HH:MM:SS format
const dayMatch = cleaned.match(/^(\d+)-(\d+):(\d+):(\d+)$/);
if (dayMatch) {
totalMinutes = parseInt(dayMatch[1], 10) * 24 * 60 +
@@ -372,14 +213,12 @@ export function parseElapsedTime(etime: string): number {
return totalMinutes;
}
// HH:MM:SS format
const hourMatch = cleaned.match(/^(\d+):(\d+):(\d+)$/);
if (hourMatch) {
totalMinutes = parseInt(hourMatch[1], 10) * 60 + parseInt(hourMatch[2], 10);
return totalMinutes;
}
// MM:SS format
const minMatch = cleaned.match(/^(\d+):(\d+)$/);
if (minMatch) {
return parseInt(minMatch[1], 10);
@@ -388,171 +227,8 @@ export function parseElapsedTime(etime: string): number {
return -1;
}
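The three accepted etime shapes can be condensed into a single regex pass. A standalone re-implementation for illustration only (slightly looser than the exported function: it would also accept a DD-MM:SS shape that `ps` never emits):

```typescript
// One regex covering DD-HH:MM:SS, HH:MM:SS, and MM:SS; seconds are dropped.
function parseEtimeSketch(etime: string): number {
  const m = etime.trim().match(/^(?:(\d+)-)?(?:(\d+):)?(\d+):(\d+)$/);
  if (!m) return -1;
  const days = m[1] ? parseInt(m[1], 10) : 0;
  const hours = m[2] ? parseInt(m[2], 10) : 0;
  return days * 24 * 60 + hours * 60 + parseInt(m[3], 10);
}
```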
/**
* Enumerate orphaned claude-mem processes matching ORPHAN_PROCESS_PATTERNS.
* Returns PIDs of processes older than ORPHAN_MAX_AGE_MINUTES.
*/
async function enumerateOrphanedProcesses(isWindows: boolean, currentPid: number): Promise<number[]> {
const pidsToKill: number[] = [];
if (isWindows) {
// Windows: Use WQL -Filter for server-side filtering (no $_ pipeline syntax).
// Avoids Git Bash $_ interpretation (#1062) and PowerShell syntax errors (#1024).
const wqlPatternConditions = ORPHAN_PROCESS_PATTERNS
.map(p => `CommandLine LIKE '%${p}%'`)
.join(' OR ');
const cmd = `powershell -NoProfile -NonInteractive -Command "Get-CimInstance Win32_Process -Filter '(${wqlPatternConditions}) AND ProcessId != ${currentPid}' | Select-Object ProcessId, CreationDate | ConvertTo-Json"`;
const { stdout } = await execAsync(cmd, { timeout: HOOK_TIMEOUTS.POWERSHELL_COMMAND, windowsHide: true });
if (!stdout.trim() || stdout.trim() === 'null') {
logger.debug('SYSTEM', 'No orphaned claude-mem processes found (Windows)');
return [];
}
const processes = JSON.parse(stdout);
const processList = Array.isArray(processes) ? processes : [processes];
const now = Date.now();
for (const proc of processList) {
const pid = proc.ProcessId;
// SECURITY: Validate PID is positive integer and not current process
if (!Number.isInteger(pid) || pid <= 0 || pid === currentPid) continue;
// Parse Windows WMI date format: /Date(1234567890123)/
const creationMatch = proc.CreationDate?.match(/\/Date\((\d+)\)\//);
if (creationMatch) {
const creationTime = parseInt(creationMatch[1], 10);
const ageMinutes = (now - creationTime) / (1000 * 60);
if (ageMinutes >= ORPHAN_MAX_AGE_MINUTES) {
pidsToKill.push(pid);
logger.debug('SYSTEM', 'Found orphaned process', { pid, ageMinutes: Math.round(ageMinutes) });
}
}
}
} else {
// Unix: Use ps with elapsed time for age-based filtering
const patternRegex = ORPHAN_PROCESS_PATTERNS.join('|');
const { stdout } = await execAsync(
`ps -eo pid,etime,command | grep -E "${patternRegex}" | grep -v grep || true`
);
if (!stdout.trim()) {
logger.debug('SYSTEM', 'No orphaned claude-mem processes found (Unix)');
return [];
}
const lines = stdout.trim().split('\n');
for (const line of lines) {
// Parse: " 1234 01:23:45 /path/to/process"
const match = line.trim().match(/^(\d+)\s+(\S+)\s+(.*)$/);
if (!match) continue;
const pid = parseInt(match[1], 10);
const etime = match[2];
// SECURITY: Validate PID is positive integer and not current process
if (!Number.isInteger(pid) || pid <= 0 || pid === currentPid) continue;
const ageMinutes = parseElapsedTime(etime);
if (ageMinutes >= ORPHAN_MAX_AGE_MINUTES) {
pidsToKill.push(pid);
logger.debug('SYSTEM', 'Found orphaned process', { pid, ageMinutes, command: match[3].substring(0, 80) });
}
}
}
return pidsToKill;
}
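The Windows branch above composes a WQL filter server-side and decodes WMI's `/Date(ms)/` timestamps; both steps are pure string work, sketched here in isolation (hypothetical helper names):

```typescript
// Build the server-side WQL condition mirroring the branch above.
function buildWqlFilter(patterns: string[], currentPid: number): string {
  const conds = patterns.map((p) => `CommandLine LIKE '%${p}%'`).join(' OR ');
  return `(${conds}) AND ProcessId != ${currentPid}`;
}

// WMI CreationDate arrives as /Date(1234567890123)/, i.e. epoch milliseconds.
function wmiDateToAgeMinutes(creationDate: string, nowMs: number): number {
  const m = creationDate.match(/\/Date\((\d+)\)\//);
  return m ? (nowMs - parseInt(m[1], 10)) / 60000 : -1;
}
```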
/**
* Clean up orphaned claude-mem processes from previous worker sessions
*
* Targets mcp-server.cjs, worker-service.cjs, and chroma-mcp processes
* that survived a previous daemon crash. Only kills processes older than
* ORPHAN_MAX_AGE_MINUTES to avoid killing the current session.
*
* The periodic ProcessRegistry reaper handles in-session orphans;
* this function handles cross-session orphans at startup.
*/
export async function cleanupOrphanedProcesses(): Promise<void> {
const isWindows = process.platform === 'win32';
const currentPid = process.pid;
let pidsToKill: number[];
try {
pidsToKill = await enumerateOrphanedProcesses(isWindows, currentPid);
} catch (error: unknown) {
// Orphan cleanup is non-critical - log and continue
if (error instanceof Error) {
logger.error('SYSTEM', 'Failed to enumerate orphaned processes', {}, error);
} else {
logger.error('SYSTEM', 'Failed to enumerate orphaned processes', {}, new Error(String(error)));
}
return;
}
if (pidsToKill.length === 0) {
return;
}
logger.info('SYSTEM', 'Cleaning up orphaned claude-mem processes', {
platform: isWindows ? 'Windows' : 'Unix',
count: pidsToKill.length,
pids: pidsToKill,
maxAgeMinutes: ORPHAN_MAX_AGE_MINUTES
});
// Kill all found processes
if (isWindows) {
for (const pid of pidsToKill) {
// SECURITY: Double-check PID validation before using in taskkill command
if (!Number.isInteger(pid) || pid <= 0) {
logger.warn('SYSTEM', 'Skipping invalid PID', { pid });
continue;
}
try {
execSync(`taskkill /PID ${pid} /T /F`, { timeout: HOOK_TIMEOUTS.POWERSHELL_COMMAND, stdio: 'ignore', windowsHide: true });
} catch (error: unknown) {
// [ANTI-PATTERN IGNORED]: Cleanup loop - process may have exited, continue to next PID
if (error instanceof Error) {
logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, error);
} else {
logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, new Error(String(error)));
}
}
}
} else {
for (const pid of pidsToKill) {
try {
process.kill(pid, 'SIGKILL');
} catch (error: unknown) {
// [ANTI-PATTERN IGNORED]: Cleanup loop - process may have exited, continue to next PID
if (error instanceof Error) {
logger.debug('SYSTEM', 'Process already exited', { pid }, error);
} else {
logger.debug('SYSTEM', 'Process already exited', { pid }, new Error(String(error)));
}
}
}
}
logger.info('SYSTEM', 'Orphaned processes cleaned up', { count: pidsToKill.length });
}
const CHROMA_MIGRATION_MARKER_FILENAME = '.chroma-cleaned-v10.3';
/**
* One-time chroma data wipe for users upgrading from versions with duplicate
* worker bugs that could corrupt chroma data. Since chroma is always rebuildable
* from SQLite (via backfillAllProjects), this is safe.
*
* Checks for a marker file. If absent, wipes ~/.claude-mem/chroma/ and writes
* the marker. If present, skips. Idempotent.
*
* @param dataDirectory - Override for DATA_DIR (used in tests)
*/
export function runOneTimeChromaMigration(dataDirectory?: string): void {
const effectiveDataDir = dataDirectory ?? DATA_DIR;
const markerPath = path.join(effectiveDataDir, CHROMA_MIGRATION_MARKER_FILENAME);
@@ -570,7 +246,6 @@ export function runOneTimeChromaMigration(dataDirectory?: string): void {
logger.info('SYSTEM', 'Chroma data directory removed', { chromaDir });
}
// Write marker file to prevent future wipes
mkdirSync(effectiveDataDir, { recursive: true });
writeFileSync(markerPath, new Date().toISOString());
logger.info('SYSTEM', 'Chroma migration marker written', { markerPath });
@@ -616,17 +291,6 @@ function classifyCwdForRemap(cwd: string): CwdClassification {
return { kind: 'worktree', project: `${parent}/${leaf}` };
}
/**
* One-time remap of sdk_sessions.project (+ observations.project,
* session_summaries.project) using the cwd captured in pending_messages.cwd
* as the source of truth. Required because pre-worktree builds stored bare
* project names that collide across parent/worktree checkouts.
*
* Backs up the DB before writes. Idempotent via marker file. Skips silently
* if the DB or pending_messages table doesn't exist yet (fresh install).
*
* @param dataDirectory - Override for DATA_DIR (used in tests)
*/
export function runOneTimeCwdRemap(dataDirectory?: string): void {
const effectiveDataDir = dataDirectory ?? DATA_DIR;
const markerPath = path.join(effectiveDataDir, CWD_REMAP_MARKER_FILENAME);
@@ -657,10 +321,6 @@ export function runOneTimeCwdRemap(dataDirectory?: string): void {
}
}
/**
* Execute the cwd-remap DB migration. Extracted to keep the try block small.
* Opens, queries, and updates the DB, then writes the marker file on success.
*/
function executeCwdRemap(dbPath: string, effectiveDataDir: string, markerPath: string): void {
const { Database } = require('bun:sqlite') as typeof import('bun:sqlite');
@@ -743,25 +403,6 @@ function executeCwdRemap(dbPath: string, effectiveDataDir: string, markerPath: s
}
}
/**
* Spawn a detached daemon process.
*
* Uses Node's child_process.spawn with the arg-array form on every platform.
* The arg-array form bypasses the shell entirely on Windows, so no quoting
* heuristics or PowerShell wrappers are needed (handles paths with spaces
* like `C:\Users\Alex Newman\...` natively).
*
* On Unix, prefer setsid to detach from the controlling terminal so SIGHUP
* can't reach the daemon even if the in-process handler fails. The
* `detached: true` option already creates a new process group on POSIX;
* setsid is the belt-and-suspenders extra.
*
* Bun.spawn is intentionally NOT used here: it does not support detached
* spawning (see comment in process-registry.ts:633-639).
*
* PID file is written by the worker itself after listen() succeeds,
* not by the spawner (race-free, works on all platforms).
*/
export function spawnDaemon(
scriptPath: string,
port: number,
@@ -775,9 +416,6 @@ export function spawnDaemon(
...extraEnv
});
// worker-service.cjs imports `bun:sqlite`, so the spawned runtime MUST be
// Bun on every platform — never the current process.execPath, which may be
// Node when the caller is the MCP server.
const runtimePath = resolveWorkerRuntimePath();
if (!runtimePath) {
logger.error(
@@ -787,16 +425,7 @@ export function spawnDaemon(
return undefined;
}
// On Windows, child_process.spawn with `detached: true` ignores
// `windowsHide: true` (Node docs: behavior is undefined). Spawning the
// worker via PowerShell `Start-Process -WindowStyle Hidden` is the only
// approach that reliably hides the console window AND inherits parent
// env vars (WMIC was tried in PR #751 but is deprecated/absent on
// modern Windows 11). Re-applies the fix that PR #751 (e6ae0176)
// introduced and commit d13662d5 reverted. See issues #2150, #2186,
// #2187, #2190, #2198.
if (process.platform === 'win32') {
// Use -EncodedCommand so paths with spaces don't need shell quoting.
const psScript = `Start-Process -FilePath '${runtimePath.replace(/'/g, "''")}' -ArgumentList @('${scriptPath.replace(/'/g, "''")}','--daemon') -WindowStyle Hidden`;
const encodedCommand = Buffer.from(psScript, 'utf16le').toString('base64');
@@ -806,13 +435,6 @@ export function spawnDaemon(
windowsHide: true,
env
});
// Windows success sentinel: PowerShell `Start-Process` does not return
// the spawned PID, and we don't want to pay for an extra `Get-Process`
// round-trip just to discover it. Return 0 (a conventionally invalid
// Unix PID) so callers can distinguish "spawn dispatched" from "spawn
// failed". Callers MUST use `pid === undefined` to detect failure —
// never falsy checks like `if (!pid)`, which would silently treat
// success as failure here.
return 0;
} catch (error: unknown) {
logger.error(
@@ -825,7 +447,6 @@ export function spawnDaemon(
}
}
// On Unix, prefer setsid to fully detach from the controlling terminal.
const setsidPath = '/usr/bin/setsid';
const useSetsid = existsSync(setsidPath);
@@ -848,21 +469,9 @@ export function spawnDaemon(
return child.pid;
}
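The Windows sentinel contract described above (return 0 for "spawn dispatched, PID unknown") is easy to get wrong with a truthiness check, since 0 is falsy. A hypothetical caller-side sketch:

```typescript
// Hypothetical caller check: PID 0 is the Windows "dispatched but unknown"
// sentinel, so only `undefined` signals a failed spawn. A truthiness check
// like `if (!pid)` would misreport the sentinel as failure.
function spawnSucceeded(pid: number | undefined): boolean {
  return pid !== undefined;
}
```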
/**
* Check if a process with the given PID is alive.
*
* Uses the process.kill(pid, 0) idiom: signal 0 doesn't send a signal,
* it just checks if the process exists and is reachable.
*
* EPERM is treated as "alive" because it means the process exists but
* belongs to a different user/session (common in multi-user setups).
* PID 0 (Windows sentinel for unknown PID) is treated as alive.
*/
export function isProcessAlive(pid: number): boolean {
// PID 0 is the Windows sentinel value — process was spawned but PID unknown
if (pid === 0) return true;
// Invalid PIDs are not alive
if (!Number.isInteger(pid) || pid < 0) return false;
try {
@@ -871,27 +480,15 @@ export function isProcessAlive(pid: number): boolean {
} catch (error: unknown) {
if (error instanceof Error) {
const code = (error as NodeJS.ErrnoException).code;
// EPERM = process exists but different user/session — treat as alive
if (code === 'EPERM') return true;
logger.debug('SYSTEM', 'Process not alive', { pid, code });
} else {
logger.debug('SYSTEM', 'Process not alive (non-Error thrown)', { pid }, new Error(String(error)));
}
// ESRCH = no such process — it's dead
return false;
}
}
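The signal-0 liveness idiom, shown here fragmented across hunks, condenses to a short sketch (an assumption-labeled restatement, not the exact shipped code):

```typescript
// Condensed sketch of the signal-0 liveness check: signal 0 sends nothing,
// it only probes whether the PID exists. EPERM means the process exists but
// belongs to another user, so it still counts as alive.
function isAlive(pid: number): boolean {
  if (pid === 0) return true; // Windows sentinel: treated as alive
  if (!Number.isInteger(pid) || pid < 0) return false;
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    return (err as { code?: string }).code === 'EPERM';
  }
}
```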
/**
* Check if the PID file was written recently (within thresholdMs).
*
* Used to coordinate restarts across concurrent sessions: if the PID file
* was recently written, another session likely just restarted the worker.
* Callers should poll /api/health instead of attempting their own restart.
*
* @param thresholdMs - Maximum age in ms to consider "recent" (default: 15000)
* @returns true if the PID file exists and was modified within thresholdMs
*/
export function isPidFileRecent(thresholdMs: number = 15000): boolean {
try {
const stats = statSync(PID_FILE);
@@ -906,10 +503,6 @@ export function isPidFileRecent(thresholdMs: number = 15000): boolean {
}
}
/**
* Touch the PID file to update its mtime without changing contents.
* Used after a restart to signal other sessions that a restart just completed.
*/
export function touchPidFile(): void {
try {
if (!existsSync(PID_FILE)) return;
@@ -920,46 +513,7 @@ export function touchPidFile(): void {
}
}
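The mtime-based coordination that `isPidFileRecent` and `touchPidFile` rely on reduces to one `statSync` call; a minimal sketch, with a hypothetical generic name:

```typescript
import { statSync } from 'fs';

// Illustrative recency check: a file whose mtime falls within thresholdMs
// signals that another session just touched it (e.g. just restarted the
// worker). A missing file is never "recent".
function isFileRecent(filePath: string, thresholdMs = 15000): boolean {
  try {
    return Date.now() - statSync(filePath).mtimeMs <= thresholdMs;
  } catch {
    return false;
  }
}
```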
/**
* Read the PID file and remove it if the recorded process is dead (stale).
*
* This is a cheap operation: one filesystem read + one signal-0 check.
* Called at the top of ensureWorkerStarted() to clean up after WSL2
* hibernate, OOM kills, or other ungraceful worker deaths.
*/
export function cleanStalePidFile(): ValidateWorkerPidStatus {
return validateWorkerPidFile({ logAlive: false });
}
/**
* Create signal handler factory for graceful shutdown
* Returns a handler function that can be passed to process.on('SIGTERM') etc.
*/
export function createSignalHandler(
shutdownFn: () => Promise<void>,
isShuttingDownRef: { value: boolean }
): (signal: string) => Promise<void> {
return async (signal: string) => {
if (isShuttingDownRef.value) {
logger.warn('SYSTEM', `Received ${signal} but shutdown already in progress`);
return;
}
isShuttingDownRef.value = true;
logger.info('SYSTEM', `Received ${signal}, shutting down...`);
try {
await shutdownFn();
process.exit(0);
} catch (error: unknown) {
// Top-level signal handler - log any shutdown error and exit
if (error instanceof Error) {
logger.error('SYSTEM', 'Error during shutdown', {}, error);
} else {
logger.error('SYSTEM', 'Error during shutdown', {}, new Error(String(error)));
}
// Exit gracefully: Windows Terminal won't keep tab open on exit 0
// Even on shutdown errors, exit cleanly to prevent tab accumulation
process.exit(0);
}
};
}
@@ -1,24 +1,3 @@
/**
* WorktreeAdoption - Stamp observations from merged worktrees into their parent project.
*
* Given a parent repo path, this engine:
* 1. Uses git to enumerate worktrees of the parent repo.
* 2. Classifies each worktree's branch as "merged" (in `git branch --merged HEAD`)
* or manually overridden via `onlyBranch` (for squash-merge detection).
* 3. Stamps `merged_into_project` on `observations` and `session_summaries` rows
* whose `project` matches the composite `parent/worktree` name.
* 4. Propagates the same metadata to Chroma so semantic search includes the
* adopted rows under the parent project.
*
* `project` is never overwritten; it remains immutable provenance. The

* `merged_into_project` column is a virtual pointer that query layers OR into
* their WHERE predicates.
*
* DB lifecycle mirrors `runOneTimeCwdRemap` in ProcessManager.ts: we manage our
* own Database handle (open -> transaction -> close in finally) so this engine
* can be called on worker startup before `dbManager.initialize()` without
* contending on the shared handle.
*/
import path from 'path';
import { homedir } from 'os';
@@ -86,10 +65,6 @@ function gitCapture(cwd: string, args: string[]): string | null {
return (r.stdout ?? '').trim();
}
/**
* Resolve the main working-tree root for an arbitrary cwd inside a repo or worktree.
* Mirrors the handling in `scripts/cwd-remap.ts:48-51`.
*/
function resolveMainRepoPath(cwd: string): string | null {
const commonDir = gitCapture(cwd, [
'rev-parse',
@@ -98,7 +73,6 @@ function resolveMainRepoPath(cwd: string): string | null {
]);
if (!commonDir) return null;
// Normal: common-dir is "<repo>/.git". Bare: strip the trailing ".git".
const mainRoot = commonDir.endsWith('/.git')
? path.dirname(commonDir)
: commonDir.replace(/\.git$/, '');
@@ -116,7 +90,6 @@ function listWorktrees(mainRepo: string): WorktreeEntry[] {
if (current.path) entries.push({ path: current.path, branch: current.branch ?? null });
current = { path: line.slice('worktree '.length).trim(), branch: null };
} else if (line.startsWith('branch ')) {
// `branch refs/heads/<name>` — strip the ref prefix.
const refName = line.slice('branch '.length).trim();
current.branch = refName.startsWith('refs/heads/')
? refName.slice('refs/heads/'.length)
@@ -143,22 +116,6 @@ function listMergedBranches(mainRepo: string): Set<string> {
);
}
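The `git worktree list --porcelain` parsing shown in `listWorktrees` can be sketched as a pure function over the porcelain text, which makes the `refs/heads/` stripping easy to see in isolation (names here are illustrative):

```typescript
interface WorktreeEntry { path: string; branch: string | null }

// Illustrative parser for `git worktree list --porcelain` output: each entry
// starts with a `worktree <path>` line, and an optional `branch refs/heads/<name>`
// line carries the ref-prefixed branch, which is stripped to the bare name.
function parsePorcelain(output: string): WorktreeEntry[] {
  const entries: WorktreeEntry[] = [];
  let current: { path?: string; branch: string | null } = { branch: null };
  for (const line of output.split('\n')) {
    if (line.startsWith('worktree ')) {
      if (current.path) entries.push({ path: current.path, branch: current.branch });
      current = { path: line.slice('worktree '.length).trim(), branch: null };
    } else if (line.startsWith('branch ')) {
      const ref = line.slice('branch '.length).trim();
      current.branch = ref.startsWith('refs/heads/') ? ref.slice('refs/heads/'.length) : ref;
    }
  }
  if (current.path) entries.push({ path: current.path, branch: current.branch });
  return entries;
}
```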
/**
* Stamp `merged_into_project` on observations and session_summaries for every
* worktree of `opts.repoPath` whose branch has been merged into the parent's HEAD.
*
* SQL writes are idempotent: an UPDATE only touches rows where
* `merged_into_project IS NULL`. `result.adoptedObservations` / `adoptedSummaries`
* reflect the actual SQL changes on each run.
*
* Chroma patches are self-healing: the Chroma id set is built from ALL
* observations whose `project` matches a merged worktree (both unadopted rows
* AND rows previously stamped to this parent), and `updateMergedIntoProject`
* is idempotent, so a transient Chroma failure on an earlier run is retried
* automatically on the next adoption pass. `result.chromaUpdates` therefore
* counts the total Chroma writes performed this pass (which may exceed
* `adoptedObservations` when retries happen).
*/
export async function adoptMergedWorktrees(opts: {
repoPath?: string;
dataDirectory?: string;
@@ -227,11 +184,6 @@ export async function adoptMergedWorktrees(opts: {
const { Database } = require('bun:sqlite') as typeof import('bun:sqlite');
db = new Database(dbPath);
// Schema guard: adoption may be invoked on worker startup before
// DatabaseManager runs migrations. If the `merged_into_project` column
// isn't present yet, prepared statements below will fail with
// "no such column", silently skipping adoption until the next restart.
// Return early so the next boot (post-migration) picks this up.
interface ColumnInfo { name: string }
const obsColumns = db
.prepare('PRAGMA table_info(observations)')
@@ -250,12 +202,6 @@ export async function adoptMergedWorktrees(opts: {
return result;
}
// Select ALL observations for the worktree project (both unadopted rows
// AND rows already stamped to this parent), not just unadopted ones. This
// ensures a transient Chroma failure on a prior run gets retried the next
// time adoption executes: SQL may already be stamped, but we re-include
// those ids in the Chroma patch set (updateMergedIntoProject is idempotent
// — it replays the same metadata write).
const selectObsForPatch = db.prepare(
`SELECT id FROM observations
WHERE project = ?
@@ -297,7 +243,6 @@ export async function adoptMergedWorktrees(opts: {
}
}
if (dryRun) {
// Throw a dedicated error to force rollback. Caught below by instanceof check.
throw new DryRunRollback();
}
});
@@ -367,20 +312,6 @@ export async function adoptMergedWorktrees(opts: {
return result;
}
/**
* Run adoption once per distinct parent repo referenced by recorded cwds.
*
 * Worker startup adoption cannot use `process.cwd()` as a seed: the daemon is
* spawned with cwd=marketplace-plugin-dir, which isn't a git repo. Instead, we
* derive candidate parent repos from `pending_messages.cwd` (the user's actual
* working directories), dedupe via `resolveMainRepoPath`, and run adoption
* against each. Failures on individual repos are logged but don't short-circuit
* the others.
*
* Safe to call before `dbManager.initialize()`: opens its own short-lived DB
* handle (readonly) to enumerate cwds, then delegates to `adoptMergedWorktrees`
* which opens its own writable handle.
*/
export async function adoptMergedWorktreesForAllKnownRepos(opts: {
dataDirectory?: string;
dryRun?: boolean;
@@ -1,8 +1,4 @@
/**
* Infrastructure module - Process management, health monitoring, and shutdown utilities
*/
export * from './ProcessManager.js';
export * from './HealthMonitor.js';
export * from './GracefulShutdown.js';
export * from './CleanupV12_4_3.js';
@@ -1,20 +1,6 @@
/**
* Shared worker-shutdown helper used by both `install` (to clear out a
* running worker before overwriting plugin files) and `uninstall` (to
* release file locks before deletion).
*
* Posts to `/api/admin/shutdown`, then polls `/api/health` until the
* connection is refused (= worker is gone) or the timeout elapses.
*
* Best-effort: if the worker is not running, the POST throws and we
* return immediately. Callers should never depend on this throwing.
*/
export interface ShutdownResult {
/** True if we actively shut down a worker; false if none was running. */
workerWasRunning: boolean;
/** True if we observed the worker stop responding before the timeout. */
confirmedStopped: boolean;
}
export async function shutdownWorkerAndWait(
@@ -31,9 +17,7 @@ export async function shutdownWorkerAndWait(
});
workerWasRunning = true;
} catch {
// Worker not running (connection refused) or shutdown POST timed out.
// Either way, nothing more to do.
return { workerWasRunning: false, confirmedStopped: true };
return { workerWasRunning: false };
}
const pollIntervalMs = 500;
@@ -44,15 +28,11 @@ export async function shutdownWorkerAndWait(
await fetch(`${baseUrl}/api/health`, {
signal: AbortSignal.timeout(1000),
});
// Health endpoint still responding — worker is still alive, keep waiting.
} catch (err) {
// AbortError = health endpoint timed out (worker still accepting
// connections but slow). Keep polling. Any other error
// (ECONNREFUSED, ECONNRESET) means the worker is gone.
if (err instanceof Error && err.name === 'AbortError') continue;
return { workerWasRunning, confirmedStopped: true };
return { workerWasRunning };
}
}
return { workerWasRunning, confirmedStopped: false };
return { workerWasRunning };
}
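The poll loop's error handling above hinges on one distinction: an AbortError from the health fetch means the worker is slow but still accepting connections, while any other failure means it is gone. That decision can be isolated as a pure helper (a sketch with hypothetical names, not the shipped code):

```typescript
type PollDecision = 'keep-polling' | 'worker-gone';

// Illustrative classifier for the health-poll loop: AbortError = request
// timed out but the socket connected (worker alive, keep polling); anything
// else (ECONNREFUSED, ECONNRESET, non-Error throws) = worker is gone.
function classifyHealthPollError(err: unknown): PollDecision {
  return err instanceof Error && err.name === 'AbortError'
    ? 'keep-polling'
    : 'worker-gone';
}
```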
@@ -1,18 +1,3 @@
/**
* CodexCliInstaller - Codex CLI integration for claude-mem
*
* Uses transcript-only watching (no notify hook). The watcher infrastructure
* already exists in src/services/transcripts/. This installer:
*
* 1. Writes/merges transcript-watch config to ~/.claude-mem/transcript-watch.json
* 2. Sets up watch for ~/.codex/sessions/**\/*.jsonl using existing watcher
* 3. Injects context via workspace-local AGENTS.md files (Codex reads these natively)
*
* Anti-patterns:
* - Does NOT add notify hooks -- transcript watching is sufficient
* - Does NOT modify existing transcript watcher infrastructure
* - Does NOT overwrite existing transcript-watch.json -- merges only
*/
import path from 'path';
import { homedir } from 'os';
@@ -26,28 +11,12 @@ import {
} from '../transcripts/config.js';
import type { TranscriptWatchConfig, WatchTarget } from '../transcripts/types.js';
// ---------------------------------------------------------------------------
// Constants
// ---------------------------------------------------------------------------
const CODEX_DIR = path.join(homedir(), '.codex');
const CODEX_AGENTS_MD_PATH = path.join(CODEX_DIR, 'AGENTS.md');
const CLAUDE_MEM_DIR = path.join(homedir(), '.claude-mem');
/**
* The watch name used to identify the Codex CLI entry in transcript-watch.json.
* Must match the name in SAMPLE_CONFIG for merging to work correctly.
*/
const CODEX_WATCH_NAME = 'codex';
// ---------------------------------------------------------------------------
// Transcript Watch Config Merging
// ---------------------------------------------------------------------------
/**
* Load existing transcript-watch.json, or return an empty config scaffold.
* Never throws -- returns a valid empty config on any parse error.
*/
function loadExistingTranscriptWatchConfig(): TranscriptWatchConfig {
const configPath = DEFAULT_CONFIG_PATH;
@@ -59,7 +28,6 @@ function loadExistingTranscriptWatchConfig(): TranscriptWatchConfig {
const raw = readFileSync(configPath, 'utf-8');
const parsed = JSON.parse(raw) as TranscriptWatchConfig;
// Ensure required fields exist
if (!parsed.version) parsed.version = 1;
if (!parsed.watches) parsed.watches = [];
if (!parsed.schemas) parsed.schemas = {};
@@ -73,7 +41,6 @@ function loadExistingTranscriptWatchConfig(): TranscriptWatchConfig {
logger.error('WORKER', 'Corrupt transcript-watch.json, creating backup', { path: configPath }, new Error(String(parseError)));
}
// Back up corrupt file
const backupPath = `${configPath}.backup.${Date.now()}`;
writeFileSync(backupPath, readFileSync(configPath));
console.warn(` Backed up corrupt transcript-watch.json to ${backupPath}`);
@@ -82,24 +49,15 @@ function loadExistingTranscriptWatchConfig(): TranscriptWatchConfig {
}
}
/**
* Merge Codex watch configuration into existing transcript-watch.json.
*
* - If a watch with name 'codex' already exists, it is replaced in-place.
* - If the 'codex' schema already exists, it is replaced in-place.
* - All other watches and schemas are preserved untouched.
*/
function mergeCodexWatchConfig(existingConfig: TranscriptWatchConfig): TranscriptWatchConfig {
const merged = { ...existingConfig };
// Merge schemas: add/replace the codex schema
merged.schemas = { ...merged.schemas };
const codexSchema = SAMPLE_CONFIG.schemas?.[CODEX_WATCH_NAME];
if (codexSchema) {
merged.schemas[CODEX_WATCH_NAME] = codexSchema;
}
// Merge watches: add/replace the codex watch entry
const codexWatchFromSample = SAMPLE_CONFIG.watches.find(
(w: WatchTarget) => w.name === CODEX_WATCH_NAME,
);
@@ -110,10 +68,8 @@ function mergeCodexWatchConfig(existingConfig: TranscriptWatchConfig): Transcrip
);
if (existingWatchIndex !== -1) {
// Replace existing codex watch in-place
merged.watches[existingWatchIndex] = codexWatchFromSample;
} else {
// Append new codex watch
merged.watches.push(codexWatchFromSample);
}
}
@@ -121,23 +77,11 @@ function mergeCodexWatchConfig(existingConfig: TranscriptWatchConfig): Transcrip
return merged;
}
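The replace-in-place-or-append merge semantics of `mergeCodexWatchConfig` generalize to any array of named entries; a minimal generic sketch (the helper name is hypothetical):

```typescript
interface NamedEntry { name: string }

// Illustrative upsert-by-name merge, mirroring the watch-list logic above:
// an entry with a matching name is replaced in place, otherwise appended;
// every other entry (and the input array) is left untouched.
function upsertByName<T extends NamedEntry>(entries: T[], entry: T): T[] {
  const merged = [...entries];
  const idx = merged.findIndex((e) => e.name === entry.name);
  if (idx !== -1) merged[idx] = entry;
  else merged.push(entry);
  return merged;
}
```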
/**
* Write the merged transcript-watch.json config atomically.
*/
function writeTranscriptWatchConfig(config: TranscriptWatchConfig): void {
mkdirSync(CLAUDE_MEM_DIR, { recursive: true });
writeFileSync(DEFAULT_CONFIG_PATH, JSON.stringify(config, null, 2) + '\n');
}
// ---------------------------------------------------------------------------
// Context Injection (AGENTS.md)
// ---------------------------------------------------------------------------
/**
* Remove legacy claude-mem context from ~/.codex/AGENTS.md.
* Codex now uses workspace-local AGENTS.md files to avoid cross-project bleed.
* Preserves any existing user content outside the tags.
*/
function removeCodexAgentsMdContext(): void {
if (!existsSync(CODEX_AGENTS_MD_PATH)) return;
@@ -173,28 +117,11 @@ function readAndStripContextTags(startTag: string, endTag: string): void {
console.log(` Removed legacy global context from ${CODEX_AGENTS_MD_PATH}`);
}
/**
* @deprecated Codex now uses workspace-local AGENTS.md via transcript processor fallback.
* Preserves user content outside the <claude-mem-context> tags.
*/
const cleanupLegacyCodexAgentsMdContext = removeCodexAgentsMdContext;
// ---------------------------------------------------------------------------
// Public API: Install
// ---------------------------------------------------------------------------
/**
* Install Codex CLI integration for claude-mem.
*
* 1. Merges Codex transcript-watch config into ~/.claude-mem/transcript-watch.json
* 2. Cleans up any legacy global context block in ~/.codex/AGENTS.md
*
* @returns 0 on success, 1 on failure
*/
export async function installCodexCli(): Promise<number> {
console.log('\nInstalling Claude-Mem for Codex CLI (transcript watching)...\n');
// Step 1: Merge transcript-watch config
const existingConfig = loadExistingTranscriptWatchConfig();
const mergedConfig = mergeCodexWatchConfig(existingConfig);
@@ -233,22 +160,9 @@ Next steps:
`);
}
// ---------------------------------------------------------------------------
// Public API: Uninstall
// ---------------------------------------------------------------------------
/**
* Remove Codex CLI integration from claude-mem.
*
* 1. Removes the codex watch and schema from transcript-watch.json (preserves others)
* 2. Removes context section from AGENTS.md (preserves user content)
*
* @returns 0 on success, 1 on failure
*/
export function uninstallCodexCli(): number {
console.log('\nUninstalling Claude-Mem Codex CLI integration...\n');
// Step 1: Remove codex watch from transcript-watch.json
if (existsSync(DEFAULT_CONFIG_PATH)) {
const config = loadExistingTranscriptWatchConfig();
@@ -272,7 +186,6 @@ export function uninstallCodexCli(): number {
console.log(' No transcript-watch.json found -- nothing to remove.');
}
// Step 2: Remove legacy global context section from AGENTS.md
cleanupLegacyCodexAgentsMdContext();
console.log('\nUninstallation complete!');
@@ -1,13 +1,3 @@
/**
* CursorHooksInstaller - Cursor IDE integration for claude-mem
*
* Extracted from worker-service.ts monolith to provide centralized Cursor integration.
* Handles:
* - Cursor hooks installation/uninstallation
* - MCP server configuration
* - Context file generation
* - Project registry management
*/
import path from 'path';
import { homedir } from 'os';
@@ -27,48 +17,24 @@ import type { CursorInstallTarget, CursorHooksJson, CursorMcpConfig, Platform }
const execAsync = promisify(exec);
// Standard paths
const CURSOR_REGISTRY_FILE = path.join(DATA_DIR, 'cursor-projects.json');
// ============================================================================
// Platform Detection
// ============================================================================
/**
* Detect platform for script selection
*/
export function detectPlatform(): Platform {
return process.platform === 'win32' ? 'windows' : 'unix';
}
/**
* Get script extension based on platform
*/
export function getScriptExtension(): string {
return detectPlatform() === 'windows' ? '.ps1' : '.sh';
}
// ============================================================================
// Project Registry
// ============================================================================
/**
* Read the Cursor project registry
*/
export function readCursorRegistry(): CursorProjectRegistry {
return readCursorRegistryFromFile(CURSOR_REGISTRY_FILE);
}
/**
* Write the Cursor project registry
*/
export function writeCursorRegistry(registry: CursorProjectRegistry): void {
writeCursorRegistryToFile(CURSOR_REGISTRY_FILE, registry);
}
/**
* Register a project for auto-context updates
*/
export function registerCursorProject(projectName: string, workspacePath: string): void {
const registry = readCursorRegistry();
registry[projectName] = {
@@ -79,9 +45,6 @@ export function registerCursorProject(projectName: string, workspacePath: string
logger.info('CURSOR', 'Registered project for auto-context updates', { projectName, workspacePath });
}
/**
* Unregister a project from auto-context updates
*/
export function unregisterCursorProject(projectName: string): void {
const registry = readCursorRegistry();
if (registry[projectName]) {
@@ -91,18 +54,13 @@ export function unregisterCursorProject(projectName: string): void {
}
}
/**
* Update Cursor context files for all registered projects matching this project name.
* Called by SDK agents after saving a summary.
*/
export async function updateCursorContextForProject(projectName: string, _port: number): Promise<void> {
const registry = readCursorRegistry();
const entry = registry[projectName];
if (!entry) return; // Project doesn't have Cursor hooks installed
if (!entry) return;
try {
// Fetch fresh context from worker (uses socket or TCP automatically)
const response = await workerHttpRequest(
`/api/context/inject?project=${encodeURIComponent(projectName)}`
);
@@ -112,11 +70,9 @@ export async function updateCursorContextForProject(projectName: string, _port:
const context = await response.text();
if (!context || !context.trim()) return;
// Write to the project's Cursor rules file using shared utility
writeContextFile(entry.workspacePath, context);
logger.debug('CURSOR', 'Updated context file', { projectName, workspacePath: entry.workspacePath });
} catch (error) {
// [ANTI-PATTERN IGNORED]: Background context update - failure is non-critical, user workflow continues
if (error instanceof Error) {
logger.error('WORKER', 'Failed to update context file', { projectName }, error);
} else {
@@ -125,19 +81,9 @@ export async function updateCursorContextForProject(projectName: string, _port:
}
}
// ============================================================================
// Path Finding
// ============================================================================
/**
* Find MCP server script path
* Searches in order: marketplace install, source repo
*/
export function findMcpServerPath(): string | null {
const possiblePaths = [
// Marketplace install location
path.join(MARKETPLACE_ROOT, 'plugin', 'scripts', 'mcp-server.cjs'),
// Development/source location
path.join(process.cwd(), 'plugin', 'scripts', 'mcp-server.cjs'),
];
@@ -149,15 +95,9 @@ export function findMcpServerPath(): string | null {
return null;
}
/**
* Find worker-service.cjs path for unified CLI
* Searches in order: marketplace install, source repo
*/
export function findWorkerServicePath(): string | null {
const possiblePaths = [
// Marketplace install location
path.join(MARKETPLACE_ROOT, 'plugin', 'scripts', 'worker-service.cjs'),
// Development/source location
path.join(process.cwd(), 'plugin', 'scripts', 'worker-service.cjs'),
];
@@ -169,19 +109,11 @@ export function findWorkerServicePath(): string | null {
return null;
}
/**
* Find the Bun executable path
* Required because worker-service.cjs uses bun:sqlite which is Bun-specific
* Searches common installation locations across platforms
*/
export function findBunPath(): string {
const possiblePaths = [
// Standard user install location (most common)
path.join(homedir(), '.bun', 'bin', 'bun'),
// Global install locations
'/usr/local/bin/bun',
'/usr/bin/bun',
// Windows locations
...(process.platform === 'win32' ? [
path.join(homedir(), '.bun', 'bin', 'bun.exe'),
path.join(process.env.LOCALAPPDATA || '', 'bun', 'bun.exe'),
@@ -194,15 +126,9 @@ export function findBunPath(): string {
}
}
// Fallback to 'bun' and hope it's in PATH
// This allows the installation to proceed even if we can't find bun
// The user will get a clear error when the hook runs if bun isn't available
return 'bun';
}
/**
* Get the target directory for Cursor hooks based on install target
*/
export function getTargetDir(target: CursorInstallTarget): string | null {
switch (target) {
case 'project':
@@ -223,15 +149,6 @@ export function getTargetDir(target: CursorInstallTarget): string | null {
}
}
// ============================================================================
// MCP Configuration
// ============================================================================
/**
* Configure MCP server in Cursor's mcp.json
* @param target 'project' or 'user'
* @returns 0 on success, 1 on failure
*/
export function configureCursorMcp(target: CursorInstallTarget): number {
const mcpServerPath = findMcpServerPath();
@@ -250,10 +167,8 @@ export function configureCursorMcp(target: CursorInstallTarget): number {
const mcpJsonPath = path.join(targetDir, 'mcp.json');
try {
// Create directory if needed
mkdirSync(targetDir, { recursive: true });
// Load existing config or create new
let config: CursorMcpConfig = { mcpServers: {} };
if (existsSync(mcpJsonPath)) {
try {
@@ -262,7 +177,6 @@ export function configureCursorMcp(target: CursorInstallTarget): number {
config.mcpServers = {};
}
} catch (error) {
// [ANTI-PATTERN IGNORED]: Fallback behavior - corrupt config, continue with empty
if (error instanceof Error) {
logger.error('WORKER', 'Corrupt mcp.json, creating new config', { path: mcpJsonPath }, error);
} else {
@@ -272,7 +186,6 @@ export function configureCursorMcp(target: CursorInstallTarget): number {
}
}
// Add claude-mem MCP server
config.mcpServers['claude-mem'] = {
command: 'node',
args: [mcpServerPath]
@@ -289,14 +202,6 @@ export function configureCursorMcp(target: CursorInstallTarget): number {
}
}
// ============================================================================
// Hook Installation
// ============================================================================
/**
* Install Cursor hooks using unified CLI
* No longer copies shell scripts - uses node CLI directly
*/
export async function installCursorHooks(target: CursorInstallTarget): Promise<number> {
console.log(`\nInstalling Claude-Mem Cursor hooks (${target} level)...\n`);
@@ -306,7 +211,6 @@ export async function installCursorHooks(target: CursorInstallTarget): Promise<n
return 1;
}
// Find the worker-service.cjs path
const workerServicePath = findWorkerServicePath();
if (!workerServicePath) {
console.error('Could not find worker-service.cjs');
@@ -316,18 +220,13 @@ export async function installCursorHooks(target: CursorInstallTarget): Promise<n
const workspaceRoot = process.cwd();
// Generate hooks.json with unified CLI commands
const hooksJsonPath = path.join(targetDir, 'hooks.json');
// Find bun executable - required because worker-service.cjs uses bun:sqlite
const bunPath = findBunPath();
const escapedBunPath = bunPath.replace(/\\/g, '\\\\');
// Use the absolute path to worker-service.cjs
// Escape backslashes for JSON on Windows
const escapedWorkerPath = workerServicePath.replace(/\\/g, '\\\\');
// Helper to create hook command using unified CLI with bun runtime
const makeHookCommand = (command: string) => {
return `"${escapedBunPath}" "${escapedWorkerPath}" hook cursor ${command}`;
};
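The backslash escaping applied to `bunPath` and `workerServicePath` above exists because these strings end up inside JSON string values in hooks.json; a one-line sketch makes the doubling explicit:

```typescript
// Illustrative sketch of the hooks.json escaping: Windows paths contain
// backslashes, which must be doubled before being embedded in a JSON
// string value. Unix paths pass through unchanged.
function escapeForJsonString(p: string): string {
  return p.replace(/\\/g, '\\\\');
}
```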
@@ -357,7 +256,6 @@ export async function installCursorHooks(target: CursorInstallTarget): Promise<n
};
try {
// Create target directory inside try to catch EACCES/EPERM
mkdirSync(targetDir, { recursive: true });
await writeHooksJsonAndSetupProject(hooksJsonPath, hooksJson, workerServicePath, target, targetDir, workspaceRoot);
return 0;
@@ -383,7 +281,6 @@ async function writeHooksJsonAndSetupProject(
console.log(` Created hooks.json (unified CLI mode)`);
console.log(` Worker service: ${workerServicePath}`);
// For project-level: create initial context file
if (target === 'project') {
await setupProjectContext(targetDir, workspaceRoot);
}
@@ -405,9 +302,6 @@ Context Injection:
`);
}
/**
* Setup initial context file for project-level installation
*/
async function setupProjectContext(targetDir: string, workspaceRoot: string): Promise<void> {
const rulesDir = path.join(targetDir, 'rules');
mkdirSync(rulesDir, { recursive: true });
@@ -420,7 +314,6 @@ async function setupProjectContext(targetDir: string, workspaceRoot: string): Pr
try {
contextGenerated = await fetchInitialContextFromWorker(projectName, workspaceRoot);
} catch (error) {
// [ANTI-PATTERN IGNORED]: Fallback behavior - worker not running, use placeholder
if (error instanceof Error) {
logger.debug('WORKER', 'Worker not running during install', {}, error);
} else {
@@ -429,7 +322,6 @@ async function setupProjectContext(targetDir: string, workspaceRoot: string): Pr
}
if (!contextGenerated) {
// Create placeholder context file
const rulesFile = path.join(rulesDir, 'claude-mem-context.mdc');
const placeholderContent = `---
alwaysApply: true
@@ -446,7 +338,6 @@ Use claude-mem's MCP search tools for manual memory queries.
console.log(` Created placeholder context file (will populate after first session)`);
}
// Register project for automatic context updates after summaries
registerCursorProject(projectName, workspaceRoot);
console.log(` Registered for auto-context updates`);
}
@@ -472,9 +363,6 @@ async function fetchInitialContextFromWorker(
return false;
}
/**
* Uninstall Cursor hooks
*/
export function uninstallCursorHooks(target: CursorInstallTarget): number {
console.log(`\nUninstalling Claude-Mem Cursor hooks (${target} level)...\n`);
@@ -487,7 +375,6 @@ export function uninstallCursorHooks(target: CursorInstallTarget): number {
const hooksDir = path.join(targetDir, 'hooks');
const hooksJsonPath = path.join(targetDir, 'hooks.json');
// Remove legacy shell scripts if they exist (from old installations)
const bashScripts = ['common.sh', 'session-init.sh', 'context-inject.sh',
'save-observation.sh', 'save-file-edit.sh', 'session-summary.sh'];
const psScripts = ['common.ps1', 'session-init.ps1', 'context-inject.ps1',
@@ -541,9 +428,6 @@ function removeCursorHooksFiles(
console.log('Restart Cursor to apply changes.');
}
/**
* Check Cursor hooks installation status
*/
export function checkCursorHooksStatus(): number {
console.log('\nClaude-Mem Cursor Hooks Status\n');
@@ -569,7 +453,6 @@ export function checkCursorHooksStatus(): number {
console.log(`${loc.name}: Installed`);
console.log(` Config: ${hooksJson}`);
// Check if using unified CLI mode or legacy shell scripts
let hooksContent: any = null;
try {
hooksContent = JSON.parse(readFileSync(hooksJson, 'utf-8'));
@@ -588,7 +471,6 @@ export function checkCursorHooksStatus(): number {
if (firstCommand.includes('worker-service.cjs') && firstCommand.includes('hook cursor')) {
console.log(` Mode: Unified CLI (bun worker-service.cjs)`);
} else {
// Detect legacy shell scripts
const bashScripts = ['session-init.sh', 'context-inject.sh', 'save-observation.sh'];
const psScripts = ['session-init.ps1', 'context-inject.ps1', 'save-observation.ps1'];
@@ -610,7 +492,6 @@ export function checkCursorHooksStatus(): number {
}
}
// Check for context file (project only)
if (loc.name === 'Project') {
const contextFile = path.join(loc.dir, 'rules', 'claude-mem-context.mdc');
if (existsSync(contextFile)) {
@@ -632,19 +513,13 @@ export function checkCursorHooksStatus(): number {
return 0;
}
/**
* Detect if Claude Code is available
* Checks for the Claude Code CLI and plugin directory
*/
export async function detectClaudeCode(): Promise<boolean> {
try {
// Check for Claude Code CLI
const { stdout } = await execAsync('which claude || where claude', { timeout: 5000 });
if (stdout.trim()) {
return true;
}
} catch (error) {
// [ANTI-PATTERN IGNORED]: Fallback behavior - CLI not found, continue to directory check
if (error instanceof Error) {
logger.debug('WORKER', 'Claude CLI not in PATH', {}, error);
} else {
@@ -652,7 +527,6 @@ export async function detectClaudeCode(): Promise<boolean> {
}
}
// Check for Claude Code plugin directory (respects CLAUDE_CONFIG_DIR)
const pluginDir = path.join(CLAUDE_CONFIG_DIR, 'plugins');
if (existsSync(pluginDir)) {
return true;
@@ -661,9 +535,6 @@ export async function detectClaudeCode(): Promise<boolean> {
return false;
}
/**
* Handle cursor subcommand for hooks installation
*/
export async function handleCursorCommand(subcommand: string, args: string[]): Promise<number> {
switch (subcommand) {
case 'install': {
@@ -681,8 +552,6 @@ export async function handleCursorCommand(subcommand: string, args: string[]): P
}
case 'setup': {
// Interactive guided setup - handled by main() in worker-service.ts
// This is a placeholder that should not be reached
console.log('Use the main entry point for setup');
return 0;
}
@@ -1,26 +1,3 @@
/**
* GeminiCliHooksInstaller - Gemini CLI integration for claude-mem
*
* Installs hooks into ~/.gemini/settings.json using the unified CLI:
* bun worker-service.cjs hook gemini-cli <event>
*
* This routes through the hook-command.ts framework:
* readJsonFromStdin() → gemini-cli adapter → event handler → POST to worker
*
* Gemini CLI supports 11 lifecycle hooks; we register 8 that map to
* useful memory events. See src/cli/adapters/gemini-cli.ts for the
* adapter that normalizes Gemini's stdin JSON to NormalizedHookInput.
*
* Hook config format (verified against Gemini CLI source):
* {
* "hooks": {
* "AfterTool": [{
* "matcher": "*",
* "hooks": [{ "name": "claude-mem", "type": "command", "command": "...", "timeout": 5000 }]
* }]
* }
* }
*/
import path from 'path';
import { homedir } from 'os';
@@ -28,11 +5,6 @@ import { existsSync, readFileSync, writeFileSync, mkdirSync } from 'fs';
import { logger } from '../../utils/logger.js';
import { findWorkerServicePath, findBunPath } from './CursorHooksInstaller.js';
// ============================================================================
// Types
// ============================================================================
/** A single hook entry in a Gemini CLI hook group */
interface GeminiHookEntry {
name: string;
type: 'command';
@@ -40,27 +12,20 @@ interface GeminiHookEntry {
timeout: number;
}
/** A hook group — matcher selects which tools/events this applies to */
interface GeminiHookGroup {
matcher: string;
hooks: GeminiHookEntry[];
}
/** The hooks section in ~/.gemini/settings.json */
interface GeminiHooksConfig {
[eventName: string]: GeminiHookGroup[];
}
/** Full ~/.gemini/settings.json structure (partial — we only care about hooks) */
interface GeminiSettingsJson {
hooks?: GeminiHooksConfig;
[key: string]: unknown;
}
// ============================================================================
// Constants
// ============================================================================
const GEMINI_CONFIG_DIR = path.join(homedir(), '.gemini');
const GEMINI_SETTINGS_PATH = path.join(GEMINI_CONFIG_DIR, 'settings.json');
const GEMINI_MD_PATH = path.join(GEMINI_CONFIG_DIR, 'GEMINI.md');
@@ -68,16 +33,6 @@ const GEMINI_MD_PATH = path.join(GEMINI_CONFIG_DIR, 'GEMINI.md');
const HOOK_NAME = 'claude-mem';
const HOOK_TIMEOUT_MS = 10000;
/**
* Mapping from Gemini CLI hook events to internal claude-mem event types.
*
* These events are processed by hookCommand() in src/cli/hook-command.ts,
* which reads stdin via readJsonFromStdin(), normalizes through the
* gemini-cli adapter, and dispatches to the matching event handler.
*
* Events NOT mapped (too chatty for memory capture):
* BeforeModel, AfterModel, BeforeToolSelection
*/
const GEMINI_EVENT_TO_INTERNAL_EVENT: Record<string, string> = {
'SessionStart': 'context',
'BeforeAgent': 'session-init',
@@ -88,26 +43,6 @@ const GEMINI_EVENT_TO_INTERNAL_EVENT: Record<string, string> = {
'Notification': 'observation',
};
// ============================================================================
// Hook Command Builder
// ============================================================================
/**
* Build the hook command string for a given Gemini CLI event.
*
* The command invokes worker-service.cjs with the `hook` subcommand,
* which delegates to hookCommand('gemini-cli', event), the same
* framework used by Claude Code and Cursor hooks.
*
* Pipeline: bun worker-service.cjs hook gemini-cli <event>
*   → worker-service.ts parses args, ensures worker daemon is running
*   → hookCommand('gemini-cli', '<event>')
*   → readJsonFromStdin() reads Gemini's JSON payload
*   → geminiCliAdapter.normalizeInput() → NormalizedHookInput
*   → eventHandler.execute(input)
*   → geminiCliAdapter.formatOutput(result)
*   → JSON.stringify to stdout
*/
function buildHookCommand(
bunPath: string,
workerServicePath: string,
@@ -118,20 +53,12 @@ function buildHookCommand(
throw new Error(`Unknown Gemini CLI event: ${geminiEventName}`);
}
// Double-escape backslashes intentionally: this command string is embedded inside
// a JSON value, so `\\` in the source becomes `\` when the JSON is parsed by the
// IDE. Without double-escaping, Windows paths like C:\Users would lose their
// backslashes and break when the IDE deserializes the hook configuration.
const escapedBunPath = bunPath.replace(/\\/g, '\\\\');
const escapedWorkerPath = workerServicePath.replace(/\\/g, '\\\\');
return `"${escapedBunPath}" "${escapedWorkerPath}" hook gemini-cli ${internalEvent}`;
}
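/*
 * Illustrative example (paths hypothetical). With bunPath 'C:\Users\me\bun.exe',
 * workerServicePath 'C:\cm\worker-service.cjs', and geminiEventName 'SessionStart'
 * (mapped to 'context' above), the returned command string is:
 *
 *   "C:\\Users\\me\\bun.exe" "C:\\cm\\worker-service.cjs" hook gemini-cli context
 *
 * The doubled backslashes are consumed when Gemini CLI parses settings.json,
 * restoring single-backslash Windows paths at execution time.
 */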
/**
* Create a hook group entry for a Gemini CLI event.
* Uses matcher "*" to match all tools/contexts for that event.
*/
function createHookGroup(hookCommand: string): GeminiHookGroup {
return {
matcher: '*',
@@ -144,14 +71,6 @@ function createHookGroup(hookCommand: string): GeminiHookGroup {
};
}
// ============================================================================
// Settings JSON Management
// ============================================================================
/**
* Read ~/.gemini/settings.json, returning empty object if missing.
* Throws on corrupt JSON to prevent silent data loss.
*/
function readGeminiSettings(): GeminiSettingsJson {
if (!existsSync(GEMINI_SETTINGS_PATH)) {
return {};
@@ -170,24 +89,11 @@ function readGeminiSettings(): GeminiSettingsJson {
}
}
/**
* Write settings back to ~/.gemini/settings.json.
* Creates the directory if it doesn't exist.
*/
function writeGeminiSettings(settings: GeminiSettingsJson): void {
mkdirSync(GEMINI_CONFIG_DIR, { recursive: true });
writeFileSync(GEMINI_SETTINGS_PATH, JSON.stringify(settings, null, 2) + '\n');
}
/**
* Deep-merge claude-mem hooks into existing settings.
*
* For each event:
* - If the event already has a hook group with a claude-mem hook, update it
* - Otherwise, append a new hook group
*
* Preserves all non-claude-mem hooks and all non-hook settings.
*/
function mergeHooksIntoSettings(
existingSettings: GeminiSettingsJson,
newHooks: GeminiHooksConfig,
@@ -200,15 +106,12 @@ function mergeHooksIntoSettings(
for (const [eventName, newGroups] of Object.entries(newHooks)) {
const existingGroups: GeminiHookGroup[] = settings.hooks[eventName] ?? [];
// For each new hook group, check if there's already a group
// containing a claude-mem hook — update it in place
for (const newGroup of newGroups) {
const existingGroupIndex = existingGroups.findIndex((group: GeminiHookGroup) =>
group.hooks.some((hook: GeminiHookEntry) => hook.name === HOOK_NAME)
);
if (existingGroupIndex >= 0) {
// Update existing group: replace the claude-mem hook entry
const existingGroup: GeminiHookGroup = existingGroups[existingGroupIndex];
const hookIndex = existingGroup.hooks.findIndex((hook: GeminiHookEntry) => hook.name === HOOK_NAME);
if (hookIndex >= 0) {
@@ -217,7 +120,6 @@ function mergeHooksIntoSettings(
existingGroup.hooks.push(newGroup.hooks[0]);
}
} else {
// No existing claude-mem group — append
existingGroups.push(newGroup);
}
}
@@ -228,14 +130,6 @@ function mergeHooksIntoSettings(
return settings;
}
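/*
 * Illustrative merge (values hypothetical). Given existing settings:
 *
 *   { "theme": "dark",
 *     "hooks": { "AfterTool": [
 *       { "matcher": "*", "hooks": [{ "name": "other-tool", "type": "command",
 *         "command": "other", "timeout": 5000 }] } ] } }
 *
 * merging a claude-mem AfterTool group appends a second group under "AfterTool";
 * the "other-tool" group and the "theme" key are untouched. If a group containing
 * a claude-mem hook already exists for the event, only that hook entry is replaced.
 */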
// ============================================================================
// GEMINI.md Context Injection
// ============================================================================
/**
* Append or update the claude-mem context section in ~/.gemini/GEMINI.md.
* Uses the same <claude-mem-context> tag pattern as CLAUDE.md.
*/
function setupGeminiMdContextSection(): void {
const contextTag = '<claude-mem-context>';
const contextEndTag = '</claude-mem-context>';
@@ -251,11 +145,9 @@ ${contextEndTag}`;
}
if (content.includes(contextTag)) {
// Already has claude-mem section — leave it alone (may have real context)
return;
}
// Append the section
const separator = content.length > 0 && !content.endsWith('\n') ? '\n\n' : content.length > 0 ? '\n' : '';
const newContent = content + separator + placeholder + '\n';
@@ -263,22 +155,9 @@ ${contextEndTag}`;
writeFileSync(GEMINI_MD_PATH, newContent);
}
// ============================================================================
// Public API
// ============================================================================
/**
* Install claude-mem hooks into ~/.gemini/settings.json.
*
* Merges hooks non-destructively: existing settings and non-claude-mem
* hooks are preserved. Existing claude-mem hooks are updated in place.
*
* @returns 0 on success, 1 on failure
*/
export async function installGeminiCliHooks(): Promise<number> {
console.log('\nInstalling Claude-Mem Gemini CLI hooks...\n');
// Find required paths
const workerServicePath = findWorkerServicePath();
if (!workerServicePath) {
console.error('Could not find worker-service.cjs');
@@ -291,14 +170,12 @@ export async function installGeminiCliHooks(): Promise<number> {
console.log(` Worker service: ${workerServicePath}`);
try {
// Build hook commands for all mapped events
const hooksConfig: GeminiHooksConfig = {};
for (const geminiEvent of Object.keys(GEMINI_EVENT_TO_INTERNAL_EVENT)) {
const command = buildHookCommand(bunPath, workerServicePath, geminiEvent);
hooksConfig[geminiEvent] = [createHookGroup(command)];
}
// Read existing settings and merge
const existingSettings = readGeminiSettings();
const mergedSettings = mergeHooksIntoSettings(existingSettings, hooksConfig);
@@ -342,13 +219,6 @@ Context Injection:
`);
}
/**
* Uninstall claude-mem hooks from ~/.gemini/settings.json.
*
* Removes only claude-mem hooks; other hooks and settings are preserved.
*
* @returns 0 on success, 1 on failure
*/
export function uninstallGeminiCliHooks(): number {
console.log('\nUninstalling Claude-Mem Gemini CLI hooks...\n');
@@ -366,7 +236,6 @@ export function uninstallGeminiCliHooks(): number {
let removedCount = 0;
// Remove claude-mem hooks from within each group, preserving other hooks
for (const [eventName, groups] of Object.entries(settings.hooks)) {
const filteredGroups = groups
.map(group => {
@@ -383,7 +252,6 @@ export function uninstallGeminiCliHooks(): number {
}
}
// Clean up empty hooks object
if (Object.keys(settings.hooks).length === 0) {
delete settings.hooks;
}
@@ -418,11 +286,6 @@ function writeSettingsAndCleanupGeminiContext(
console.log('Restart Gemini CLI to apply changes.');
}
/**
* Check Gemini CLI hooks installation status.
*
* @returns 0 always (informational)
*/
export function checkGeminiCliHooksStatus(): number {
console.log('\nClaude-Mem Gemini CLI Hooks Status\n');
@@ -453,7 +316,6 @@ export function checkGeminiCliHooksStatus(): number {
return 0;
}
// Check for claude-mem hooks
const installedEvents: string[] = [];
for (const [eventName, groups] of Object.entries(settings.hooks)) {
const hasClaudeMem = groups.some(group =>
@@ -478,7 +340,6 @@ export function checkGeminiCliHooksStatus(): number {
console.log(` ${event} → ${internalEvent}`);
}
// Check GEMINI.md context
if (existsSync(GEMINI_MD_PATH)) {
const mdContent = readFileSync(GEMINI_MD_PATH, 'utf-8');
if (mdContent.includes('<claude-mem-context>')) {
@@ -494,9 +355,6 @@ export function checkGeminiCliHooksStatus(): number {
return 0;
}
/**
* Handle gemini-cli subcommand for hooks management.
*/
export async function handleGeminiCliCommand(subcommand: string, _args: string[]): Promise<number> {
switch (subcommand) {
case 'install':
@@ -1,20 +1,3 @@
/**
* McpIntegrations - MCP-based IDE integrations for claude-mem
*
* Handles MCP config writing and context injection for IDEs that support
* the Model Context Protocol. These are "MCP-only" integrations: they provide
* search tools and context injection but do NOT capture transcripts.
*
* Supported IDEs:
* - Copilot CLI
* - Antigravity (Gemini)
* - Goose
* - Crush
* - Roo Code
* - Warp
*
* All IDEs point to the same MCP server: plugin/scripts/mcp-server.cjs
*/
import path from 'path';
import { homedir } from 'os';
@@ -24,24 +7,12 @@ import { findMcpServerPath } from './CursorHooksInstaller.js';
import { readJsonSafe } from '../../utils/json-utils.js';
import { injectContextIntoMarkdownFile } from '../../utils/context-injection.js';
// ============================================================================
// Shared Constants
// ============================================================================
const PLACEHOLDER_CONTEXT = `# claude-mem: Cross-Session Memory
*No context yet. Complete your first session and context will appear here.*
Use claude-mem's MCP search tools for manual memory queries.`;
// ============================================================================
// Shared Utilities
// ============================================================================
/**
* Build the standard MCP server entry that all IDEs use.
* Points to the same mcp-server.cjs script.
*/
function buildMcpServerEntry(mcpServerPath: string): { command: string; args: string[] } {
return {
command: process.execPath,
@@ -49,10 +20,6 @@ function buildMcpServerEntry(mcpServerPath: string): { command: string; args: st
};
}
/**
* Write a standard MCP JSON config file, merging with existing config.
* Supports both { "mcpServers": { ... } } and { "servers": { ... } } formats.
*/
function writeMcpJsonConfig(
configFilePath: string,
mcpServerPath: string,
@@ -72,13 +39,6 @@ function writeMcpJsonConfig(
writeFileSync(configFilePath, JSON.stringify(existingConfig, null, 2) + '\n');
}
// ============================================================================
// MCP Installer Factory (Phase 1D)
// ============================================================================
/**
* Configuration for a JSON-based MCP IDE integration.
*/
interface McpInstallerConfig {
ideId: string;
ideLabel: string;
@@ -90,10 +50,6 @@ interface McpInstallerConfig {
};
}
/**
* Factory function that creates an MCP installer for any JSON-config-based IDE.
* Handles MCP config writing and optional context injection.
*/
function installMcpIntegration(config: McpInstallerConfig): () => Promise<number> {
return async (): Promise<number> => {
console.log(`\nInstalling Claude-Mem MCP integration for ${config.ideLabel}...\n`);
@@ -107,7 +63,6 @@ function installMcpIntegration(config: McpInstallerConfig): () => Promise<number
const configPath = config.configPath;
// Warp special case: skip config write if ~/.warp/ doesn't exist
const skipWarpConfigWrite = config.ideId === 'warp' && !existsSync(path.dirname(configPath));
let contextPath: string | undefined;
@@ -164,10 +119,6 @@ function writeMcpConfigAndContext(
console.log(summaryLines.join('\n'));
}
// ============================================================================
// Factory Configs for JSON-based IDEs
// ============================================================================
const COPILOT_CLI_CONFIG: McpInstallerConfig = {
ideId: 'copilot-cli',
ideLabel: 'Copilot CLI',
@@ -190,13 +141,6 @@ const ANTIGRAVITY_CONFIG: McpInstallerConfig = {
},
};
const CRUSH_CONFIG: McpInstallerConfig = {
ideId: 'crush',
ideLabel: 'Crush',
configPath: path.join(homedir(), '.config', 'crush', 'mcp.json'),
configKey: 'mcpServers',
};
const ROO_CODE_CONFIG: McpInstallerConfig = {
ideId: 'roo-code',
ideLabel: 'Roo Code',
@@ -219,34 +163,16 @@ const WARP_CONFIG: McpInstallerConfig = {
},
};
// ============================================================================
// Goose (YAML-based — separate handler)
// ============================================================================
/**
* Get the Goose config path.
* Goose stores its config at ~/.config/goose/config.yaml.
*/
function getGooseConfigPath(): string {
return path.join(homedir(), '.config', 'goose', 'config.yaml');
}
/**
* Check if a YAML string already has a claude-mem entry under mcpServers.
* Uses string matching to avoid needing a YAML parser.
*/
function gooseConfigHasClaudeMemEntry(yamlContent: string): boolean {
// Approximate check: require both "claude-mem:" and "mcpServers:" to appear
// somewhere in the file (no YAML parse, so nesting is not verified)
return yamlContent.includes('claude-mem:') &&
yamlContent.includes('mcpServers:');
}
/**
* Build the Goose YAML MCP server block as a string.
* Produces properly indented YAML without needing a parser.
*/
function buildGooseMcpYamlBlock(mcpServerPath: string): string {
// Goose expects the mcpServers section at the top level
return [
'mcpServers:',
' claude-mem:',
@@ -256,9 +182,6 @@ function buildGooseMcpYamlBlock(mcpServerPath: string): string {
].join('\n');
}
/**
* Build just the claude-mem server entry (for appending under existing mcpServers).
*/
function buildGooseClaudeMemEntryYaml(mcpServerPath: string): string {
return [
' claude-mem:',
@@ -268,14 +191,6 @@ function buildGooseClaudeMemEntryYaml(mcpServerPath: string): string {
].join('\n');
}
/**
* Install claude-mem MCP integration for Goose.
*
* - Writes/merges MCP config into ~/.config/goose/config.yaml
* - Uses string manipulation for YAML (no parser dependency)
*
* @returns 0 on success, 1 on failure
*/
export async function installGooseMcpIntegration(): Promise<number> {
console.log('\nInstalling Claude-Mem MCP integration for Goose...\n');
@@ -352,19 +267,10 @@ Next steps:
`);
}
// ============================================================================
// Unified Installer (used by npx install command)
// ============================================================================
/**
* Map of IDE identifiers to their install functions.
* Used by the install command to dispatch to the correct integration.
*/
export const MCP_IDE_INSTALLERS: Record<string, () => Promise<number>> = {
'copilot-cli': installMcpIntegration(COPILOT_CLI_CONFIG),
'antigravity': installMcpIntegration(ANTIGRAVITY_CONFIG),
'goose': installGooseMcpIntegration,
'crush': installMcpIntegration(CRUSH_CONFIG),
'roo-code': installMcpIntegration(ROO_CODE_CONFIG),
'warp': installMcpIntegration(WARP_CONFIG),
};
@@ -1,18 +1,3 @@
/**
* OpenClawInstaller - OpenClaw gateway integration installer for claude-mem
*
* Installs the pre-built claude-mem plugin into OpenClaw's extension directory
* and registers it in ~/.openclaw/openclaw.json.
*
* Install strategy: File-based
* - Copies the pre-built plugin from the npm package's openclaw/dist/ directory
* to ~/.openclaw/extensions/claude-mem/dist/
* - Registers the plugin in openclaw.json under plugins.entries.claude-mem
* - Sets the memory slot to claude-mem
*
* Important: The OpenClaw plugin ships pre-built from the npm package.
* It must NOT be rebuilt at install time.
*/
import path from 'path';
import { homedir } from 'os';
@@ -28,55 +13,28 @@ import {
import { logger } from '../../utils/logger.js';
import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js';
// ============================================================================
// Path Resolution
// ============================================================================
/**
* Resolve the OpenClaw config directory (~/.openclaw).
*/
export function getOpenClawConfigDirectory(): string {
return path.join(homedir(), '.openclaw');
}
/**
* Resolve the OpenClaw extensions directory where plugins are installed.
*/
export function getOpenClawExtensionsDirectory(): string {
return path.join(getOpenClawConfigDirectory(), 'extensions');
}
/**
* Resolve the claude-mem extension install directory.
*/
export function getOpenClawClaudeMemExtensionDirectory(): string {
return path.join(getOpenClawExtensionsDirectory(), 'claude-mem');
}
/**
* Resolve the path to openclaw.json config file.
*/
export function getOpenClawConfigFilePath(): string {
return path.join(getOpenClawConfigDirectory(), 'openclaw.json');
}
// ============================================================================
// Pre-built Plugin Location
// ============================================================================
/**
* Find the pre-built OpenClaw plugin bundle in the npm package.
* Searches in: openclaw/dist/index.js relative to package root,
* then the marketplace install location.
*/
export function findPreBuiltPluginDirectory(): string | null {
const possibleRoots = [
// Marketplace install location (production — after `npx claude-mem install`)
path.join(
process.env.CLAUDE_CONFIG_DIR || path.join(homedir(), '.claude'),
'plugins', 'marketplaces', 'thedotmack',
),
// Development location (relative to project root)
process.cwd(),
];
@@ -91,9 +49,6 @@ export function findPreBuiltPluginDirectory(): string | null {
return null;
}
/**
* Find the openclaw.plugin.json file for copying alongside the plugin.
*/
export function findPluginManifestPath(): string | null {
const possibleRoots = [
path.join(
@@ -113,9 +68,6 @@ export function findPluginManifestPath(): string | null {
return null;
}
/**
* Find the openclaw skills directory for copying alongside the plugin.
*/
export function findPluginSkillsDirectory(): string | null {
const possibleRoots = [
path.join(
@@ -135,13 +87,6 @@ export function findPluginSkillsDirectory(): string | null {
return null;
}
// ============================================================================
// OpenClaw Config (openclaw.json) Management
// ============================================================================
/**
* Read openclaw.json safely, returning an empty object if missing or invalid.
*/
function readOpenClawConfig(): Record<string, any> {
const configFilePath = getOpenClawConfigFilePath();
if (!existsSync(configFilePath)) return {};
@@ -154,20 +99,12 @@ function readOpenClawConfig(): Record<string, any> {
}
}
/**
* Write openclaw.json atomically, creating the directory if needed.
*/
function writeOpenClawConfig(config: Record<string, any>): void {
const configDirectory = getOpenClawConfigDirectory();
mkdirSync(configDirectory, { recursive: true });
writeFileSync(getOpenClawConfigFilePath(), JSON.stringify(config, null, 2) + '\n', 'utf-8');
}
/**
* Register claude-mem in openclaw.json by merging into the existing config.
* Does NOT overwrite the entire file -- only touches the claude-mem entry
* and the memory slot.
*/
function registerPluginInOpenClawConfig(
workerPort: number,
project: string = 'openclaw',
@@ -175,15 +112,12 @@ function registerPluginInOpenClawConfig(
): void {
const config = readOpenClawConfig();
// Ensure the plugins structure exists
if (!config.plugins) config.plugins = {};
if (!config.plugins.slots) config.plugins.slots = {};
if (!config.plugins.entries) config.plugins.entries = {};
// Set the memory slot to claude-mem
config.plugins.slots.memory = 'claude-mem';
// Create or update the claude-mem plugin entry
if (!config.plugins.entries['claude-mem']) {
config.plugins.entries['claude-mem'] = {
enabled: true,
@@ -194,13 +128,11 @@ function registerPluginInOpenClawConfig(
},
};
} else {
// Merge: enable and update config without losing existing user settings
config.plugins.entries['claude-mem'].enabled = true;
if (!config.plugins.entries['claude-mem'].config) {
config.plugins.entries['claude-mem'].config = {};
}
const existingPluginConfig = config.plugins.entries['claude-mem'].config;
// Only set defaults if not already configured
if (existingPluginConfig.workerPort === undefined) existingPluginConfig.workerPort = workerPort;
if (existingPluginConfig.project === undefined) existingPluginConfig.project = project;
if (existingPluginConfig.syncMemoryFile === undefined) existingPluginConfig.syncMemoryFile = syncMemoryFile;
@@ -209,21 +141,16 @@ function registerPluginInOpenClawConfig(
writeOpenClawConfig(config);
}
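/*
 * Resulting openclaw.json shape after registration (values illustrative;
 * workerPort defaults to 37700 + uid % 100 unless configured otherwise):
 *
 *   {
 *     "plugins": {
 *       "slots": { "memory": "claude-mem" },
 *       "entries": {
 *         "claude-mem": {
 *           "enabled": true,
 *           "config": { "workerPort": 37742, "project": "openclaw", "syncMemoryFile": false }
 *         }
 *       }
 *     }
 *   }
 */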
/**
* Remove claude-mem from openclaw.json without deleting other config.
*/
function unregisterPluginFromOpenClawConfig(): void {
const configFilePath = getOpenClawConfigFilePath();
if (!existsSync(configFilePath)) return;
const config = readOpenClawConfig();
// Remove claude-mem entry
if (config.plugins?.entries?.['claude-mem']) {
delete config.plugins.entries['claude-mem'];
}
// Clear memory slot if it points to claude-mem
if (config.plugins?.slots?.memory === 'claude-mem') {
delete config.plugins.slots.memory;
}
@@ -231,16 +158,6 @@ function unregisterPluginFromOpenClawConfig(): void {
writeOpenClawConfig(config);
}
// ============================================================================
// Plugin Installation
// ============================================================================
/**
* Install the claude-mem plugin into OpenClaw's extensions directory.
* Copies the pre-built plugin bundle and registers it in openclaw.json.
*
* @returns 0 on success, 1 on failure
*/
export function installOpenClawPlugin(): number {
const preBuiltDistDirectory = findPreBuiltPluginDirectory();
if (!preBuiltDistDirectory) {
@@ -253,7 +170,6 @@ export function installOpenClawPlugin(): number {
const extensionDirectory = getOpenClawClaudeMemExtensionDirectory();
const destinationDistDirectory = path.join(extensionDirectory, 'dist');
// Locate optional assets before entering the try block
const manifestPath = findPluginManifestPath();
const skillsDirectory = findPluginSkillsDirectory();
@@ -266,7 +182,6 @@ export function installOpenClawPlugin(): number {
};
try {
// Create the extension directory structure inside try to catch EACCES/ENOSPC
mkdirSync(destinationDistDirectory, { recursive: true });
copyPluginFilesAndRegister(preBuiltDistDirectory, destinationDistDirectory, extensionDirectory, manifestPath, skillsDirectory, extensionPackageJson);
return 0;
@@ -306,9 +221,6 @@ function copyPluginFilesAndRegister(
'utf-8',
);
// Resolve port via SettingsDefaultsManager so CLAUDE_MEM_WORKER_PORT env
// takes priority and the per-UID default (37700 + uid % 100) is used
// otherwise. Required for multi-account isolation (#2101).
const workerPort = SettingsDefaultsManager.getInt('CLAUDE_MEM_WORKER_PORT');
registerPluginInOpenClawConfig(workerPort);
console.log(` Registered in openclaw.json`);
@@ -316,20 +228,9 @@ function copyPluginFilesAndRegister(
logger.info('OPENCLAW', 'Plugin installed', { destination: extensionDirectory });
}
// ============================================================================
// Uninstallation
// ============================================================================
/**
* Remove the claude-mem plugin from OpenClaw.
* Removes extension files and unregisters from openclaw.json.
*
* @returns 0 on success, 1 on failure
*/
export function uninstallOpenClawPlugin(): number {
let hasErrors = false;
// Remove extension directory
const extensionDirectory = getOpenClawClaudeMemExtensionDirectory();
if (existsSync(extensionDirectory)) {
try {
@@ -342,7 +243,6 @@ export function uninstallOpenClawPlugin(): number {
}
}
// Unregister from openclaw.json
try {
unregisterPluginFromOpenClawConfig();
console.log(` Unregistered from openclaw.json`);
@@ -355,15 +255,6 @@ export function uninstallOpenClawPlugin(): number {
return hasErrors ? 1 : 0;
}
// ============================================================================
// Status Check
// ============================================================================
/**
* Check OpenClaw integration status.
*
* @returns 0 always (informational only)
*/
export function checkOpenClawStatus(): number {
console.log('\nClaude-Mem OpenClaw Integration Status\n');
@@ -409,19 +300,9 @@ export function checkOpenClawStatus(): number {
return 0;
}
// ============================================================================
// Full Install Flow (used by npx install command)
// ============================================================================
/**
* Run the full OpenClaw installation: copy plugin + register in config.
*
* @returns 0 on success, 1 on failure
*/
export async function installOpenClawIntegration(): Promise<number> {
console.log('\nInstalling Claude-Mem for OpenClaw...\n');
// Step 1: Install plugin files and register in config
const pluginResult = installOpenClawPlugin();
if (pluginResult !== 0) {
return pluginResult;
@@ -1,18 +1,3 @@
/**
* OpenCodeInstaller - OpenCode IDE integration installer for claude-mem
*
* Installs the claude-mem plugin into OpenCode's plugin directory and
* sets up context injection via AGENTS.md.
*
* Install strategy: File-based (Option A)
* - Copies the built plugin to the OpenCode plugins directory
* - Plugins in that directory are auto-loaded at startup
*
* Context injection:
* - Appends/updates <claude-mem-context> section in AGENTS.md
*
* Respects OPENCODE_CONFIG_DIR env var for config directory resolution.
*/
import path from 'path';
import { homedir } from 'os';
@@ -22,14 +7,6 @@ import { logger } from '../../utils/logger.js';
import { CONTEXT_TAG_OPEN, CONTEXT_TAG_CLOSE, injectContextIntoMarkdownFile } from '../../utils/context-injection.js';
import { getWorkerPort } from '../../shared/worker-utils.js';
// ============================================================================
// Path Resolution
// ============================================================================
/**
* Resolve the OpenCode config directory.
* Respects OPENCODE_CONFIG_DIR env var, falls back to ~/.config/opencode.
*/
export function getOpenCodeConfigDirectory(): string {
if (process.env.OPENCODE_CONFIG_DIR) {
return process.env.OPENCODE_CONFIG_DIR;
@@ -37,45 +14,25 @@ export function getOpenCodeConfigDirectory(): string {
return path.join(homedir(), '.config', 'opencode');
}
/**
* Resolve the OpenCode plugins directory.
*/
export function getOpenCodePluginsDirectory(): string {
return path.join(getOpenCodeConfigDirectory(), 'plugins');
}
/**
* Resolve the AGENTS.md path for context injection.
*/
export function getOpenCodeAgentsMdPath(): string {
return path.join(getOpenCodeConfigDirectory(), 'AGENTS.md');
}
/**
* Resolve the path to the installed plugin file.
*/
export function getInstalledPluginPath(): string {
return path.join(getOpenCodePluginsDirectory(), 'claude-mem.js');
}
// ============================================================================
// Plugin Installation
// ============================================================================
/**
* Find the built OpenCode plugin bundle.
 * Searches the marketplace install location (production) first,
 * then the development build output (dist/opencode-plugin/index.js).
*/
export function findBuiltPluginPath(): string | null {
const possiblePaths = [
// Marketplace install location (production)
path.join(
process.env.CLAUDE_CONFIG_DIR || path.join(homedir(), '.claude'),
'plugins', 'marketplaces', 'thedotmack',
'dist', 'opencode-plugin', 'index.js',
),
// Development location (relative to this module's package root)
path.join(path.dirname(fileURLToPath(import.meta.url)), '..', '..', '..', 'dist', 'opencode-plugin', 'index.js'),
];
@@ -88,12 +45,6 @@ export function findBuiltPluginPath(): string | null {
return null;
}
/**
* Install the claude-mem plugin into OpenCode's plugins directory.
* Copies the built plugin bundle to ~/.config/opencode/plugins/claude-mem.js
*
* @returns 0 on success, 1 on failure
*/
export function installOpenCodePlugin(): number {
const builtPluginPath = findBuiltPluginPath();
if (!builtPluginPath) {
@@ -107,10 +58,8 @@ export function installOpenCodePlugin(): number {
const destinationPath = getInstalledPluginPath();
try {
// Create plugins directory if needed
mkdirSync(pluginsDirectory, { recursive: true });
// Copy plugin bundle
copyFileSync(builtPluginPath, destinationPath);
console.log(` Plugin installed to: ${destinationPath}`);
@@ -124,20 +73,6 @@ export function installOpenCodePlugin(): number {
}
}
// ============================================================================
// Context Injection (AGENTS.md)
// ============================================================================
/**
* Inject or update claude-mem context in OpenCode's AGENTS.md file.
*
* If the file doesn't exist, creates it with the context section.
* If the file exists, replaces the existing <claude-mem-context> section
* or appends one at the end.
*
* @param contextContent - The context content to inject (without tags)
* @returns 0 on success, 1 on failure
*/
export function injectContextIntoAgentsMd(contextContent: string): number {
const agentsMdPath = getOpenCodeAgentsMdPath();
@@ -152,13 +87,6 @@ export function injectContextIntoAgentsMd(contextContent: string): number {
}
}
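The replace-or-append behaviour described in the docstring above is handled by the imported injectContextIntoMarkdownFile helper, whose body is elided in this hunk. A rough self-contained sketch of that pattern follows; the tag constants, function name, and error handling here are illustrative assumptions, not the module's actual implementation:

```typescript
import { readFileSync, writeFileSync, existsSync } from 'fs';

// Hypothetical stand-ins for the imported CONTEXT_TAG_OPEN / CONTEXT_TAG_CLOSE constants.
const TAG_OPEN = '<claude-mem-context>';
const TAG_CLOSE = '</claude-mem-context>';

// Replace an existing tagged section in the file, or append one at the end.
function upsertTaggedSection(filePath: string, content: string): void {
  const section = `${TAG_OPEN}\n${content}\n${TAG_CLOSE}`;
  if (!existsSync(filePath)) {
    // File missing: create it containing only the tagged section.
    writeFileSync(filePath, section + '\n');
    return;
  }
  const existing = readFileSync(filePath, 'utf-8');
  const start = existing.indexOf(TAG_OPEN);
  const end = existing.indexOf(TAG_CLOSE);
  const updated = (start !== -1 && end !== -1)
    // Both tags present: splice the new section over the old one.
    ? existing.slice(0, start) + section + existing.slice(end + TAG_CLOSE.length)
    // No section yet: append at the end.
    : existing + '\n' + section + '\n';
  writeFileSync(filePath, updated);
}
```

Keying the splice on both the open and close tags means repeated syncs stay idempotent: each run leaves exactly one tagged section in AGENTS.md.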
/**
* Sync context from the worker into OpenCode's AGENTS.md.
* Fetches context from the worker API and writes it to AGENTS.md.
*
* @param port - Worker port number
* @param project - Project name for context filtering
*/
export async function syncContextToAgentsMd(
port: number,
project: string,
@@ -166,7 +94,6 @@ export async function syncContextToAgentsMd(
try {
await fetchAndInjectOpenCodeContext(port, project);
} catch (error) {
// Worker not available — non-critical
if (error instanceof Error) {
logger.debug('WORKER', 'Worker not available during context sync', {}, error);
} else {
@@ -204,10 +131,6 @@ async function fetchAndInjectOpenCodeContext(port: number, project: string): Pro
}
}
// ============================================================================
// Uninstallation
// ============================================================================
function writeOrRemoveCleanedAgentsMd(agentsMdPath: string, trimmedContent: string): void {
if (
trimmedContent.length === 0 ||
@@ -221,16 +144,9 @@ function writeOrRemoveCleanedAgentsMd(agentsMdPath: string, trimmedContent: stri
}
}
/**
* Remove the claude-mem plugin from OpenCode.
* Removes the plugin file and cleans up the AGENTS.md context section.
*
* @returns 0 on success, 1 on failure
*/
export function uninstallOpenCodePlugin(): number {
let hasErrors = false;
// Remove plugin file
const pluginPath = getInstalledPluginPath();
if (existsSync(pluginPath)) {
try {
@@ -243,7 +159,6 @@ export function uninstallOpenCodePlugin(): number {
}
}
// Remove context section from AGENTS.md
const agentsMdPath = getOpenCodeAgentsMdPath();
if (existsSync(agentsMdPath)) {
let content: string;
@@ -279,15 +194,6 @@ export function uninstallOpenCodePlugin(): number {
return hasErrors ? 1 : 0;
}
// ============================================================================
// Status Check
// ============================================================================
/**
* Check OpenCode integration status.
*
* @returns 0 always (informational only)
*/
export function checkOpenCodeStatus(): number {
console.log('\nClaude-Mem OpenCode Integration Status\n');
@@ -317,32 +223,20 @@ export function checkOpenCodeStatus(): number {
return 0;
}
// ============================================================================
// Full Install Flow (used by npx install command)
// ============================================================================
/**
* Run the full OpenCode installation: plugin + context injection.
*
* @returns 0 on success, 1 on failure
*/
export async function installOpenCodeIntegration(): Promise<number> {
console.log('\nInstalling Claude-Mem for OpenCode...\n');
// Step 1: Install plugin
const pluginResult = installOpenCodePlugin();
if (pluginResult !== 0) {
return pluginResult;
}
// Step 2: Create initial context in AGENTS.md
const placeholderContext = `# Memory Context from Past Sessions
*No context yet. Complete your first session and context will appear here.*
Use claude-mem search tools for manual memory queries.`;
// Try to fetch real context from worker first
let contextToInject = placeholderContext;
let contextSource = 'placeholder';
try {
@@ -352,7 +246,6 @@ Use claude-mem search tools for manual memory queries.`;
contextSource = 'existing memory';
}
} catch (error) {
// Worker not available — use placeholder
if (error instanceof Error) {
logger.debug('WORKER', 'Worker not available during OpenCode install', {}, error);
} else {
@@ -1,10 +1,3 @@
/**
* TelegramNotifier
*
* Fire-and-forget Telegram notification module. Fires one message per observation
* whose type or concepts match user-configured triggers. Never throws; all errors
* are caught per-observation and logged as warnings. Bot token is never logged.
*/
import { ParsedObservation } from '../../sdk/parser.js';
import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js';
@@ -20,8 +13,6 @@ export interface TelegramNotifyInput {
const MARKDOWN_V2_RESERVED = /[_*\[\]()~`>#+\-=|{}.!\\]/g;
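For illustration, a regex like the one above is typically applied with a backslash-escaping replacer before text is sent with Telegram's MarkdownV2 parse mode. The helper below is a self-contained sketch (the module's actual escape function may differ):

```typescript
// Telegram MarkdownV2 requires every reserved character to be backslash-escaped.
const MARKDOWN_V2_RESERVED = /[_*\[\]()~`>#+\-=|{}.!\\]/g;

// Sketch of an escaping helper built on the reserved-character regex.
function escapeMarkdownV2(text: string): string {
  return text.replace(MARKDOWN_V2_RESERVED, (ch) => `\\${ch}`);
}
```

Unescaped reserved characters cause the Bot API to reject the message with a 400, so escaping must happen before interpolating observation text into the notification body.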
// Emoji per observation type. Unknown types fall back to the generic 🔔 so
// the message is still readable rather than misleadingly loud.
const TYPE_EMOJI: Record<string, string> = {
security_alert: '🚨',
security_note: '🔐',
@@ -73,9 +64,6 @@ async function postOne(botToken: string, chatId: string, text: string): Promise<
}
export async function notifyTelegram(input: TelegramNotifyInput): Promise<void> {
// loadFromFile merges env > settings.json > defaults so values stored in
// ~/.claude-mem/settings.json actually take effect. SettingsDefaultsManager.get()
// alone skips the file and would silently ignore user-configured credentials.
const settings = SettingsDefaultsManager.loadFromFile(USER_SETTINGS_PATH);
if (settings.CLAUDE_MEM_TELEGRAM_ENABLED !== 'true') {
@@ -1,25 +1,3 @@
/**
* WindsurfHooksInstaller - Windsurf IDE integration for claude-mem
*
* Handles:
* - Windsurf hooks installation/uninstallation to ~/.codeium/windsurf/hooks.json
* - Context file generation (.windsurf/rules/claude-mem-context.md)
* - Project registry management for auto-context updates
*
* Windsurf hooks.json format:
* {
* "hooks": {
* "<event_name>": [{ "command": "...", "show_output": false, "working_directory": "..." }]
* }
* }
*
* Events registered (all post-action, non-blocking):
 * - pre_user_prompt: session init + context injection
 * - post_write_code: code generation observation
 * - post_run_command: command execution observation
 * - post_mcp_tool_use: MCP tool results
 * - post_cascade_response: full AI response
*/
import path from 'path';
import { homedir } from 'os';
@@ -29,10 +7,6 @@ import { getWorkerPort } from '../../shared/worker-utils.js';
import { DATA_DIR } from '../../shared/paths.js';
import { findBunPath, findWorkerServicePath } from './CursorHooksInstaller.js';
// ============================================================================
// Types
// ============================================================================
interface WindsurfHookEntry {
command: string;
show_output: boolean;
@@ -51,21 +25,13 @@ interface WindsurfProjectRegistry {
};
}
// ============================================================================
// Constants
// ============================================================================
/** User-level hooks config — global coverage across all Windsurf workspaces */
const WINDSURF_HOOKS_DIR = path.join(homedir(), '.codeium', 'windsurf');
const WINDSURF_HOOKS_JSON_PATH = path.join(WINDSURF_HOOKS_DIR, 'hooks.json');
/** Windsurf context rule limit: 6,000 chars per file */
const WINDSURF_CONTEXT_CHAR_LIMIT = 6000;
/** Registry file for tracking projects with Windsurf hooks */
const WINDSURF_REGISTRY_FILE = path.join(DATA_DIR, 'windsurf-projects.json');
/** Hook events we register */
const WINDSURF_HOOK_EVENTS = [
'pre_user_prompt',
'post_write_code',
@@ -74,13 +40,6 @@ const WINDSURF_HOOK_EVENTS = [
'post_cascade_response',
] as const;
// ============================================================================
// Project Registry
// ============================================================================
/**
* Read the Windsurf project registry
*/
export function readWindsurfRegistry(): WindsurfProjectRegistry {
try {
if (!existsSync(WINDSURF_REGISTRY_FILE)) return {};
@@ -95,19 +54,12 @@ export function readWindsurfRegistry(): WindsurfProjectRegistry {
}
}
/**
* Write the Windsurf project registry
*/
export function writeWindsurfRegistry(registry: WindsurfProjectRegistry): void {
const dir = path.dirname(WINDSURF_REGISTRY_FILE);
mkdirSync(dir, { recursive: true });
writeFileSync(WINDSURF_REGISTRY_FILE, JSON.stringify(registry, null, 2));
}
/**
* Register a project for auto-context updates.
* Keys by full workspacePath to avoid collisions between directories with the same basename.
*/
export function registerWindsurfProject(workspacePath: string): void {
const registry = readWindsurfRegistry();
registry[workspacePath] = {
@@ -117,9 +69,6 @@ export function registerWindsurfProject(workspacePath: string): void {
logger.info('WINDSURF', 'Registered project for auto-context updates', { workspacePath });
}
/**
* Unregister a project from auto-context updates
*/
export function unregisterWindsurfProject(workspacePath: string): void {
const registry = readWindsurfRegistry();
if (registry[workspacePath]) {
@@ -129,15 +78,11 @@ export function unregisterWindsurfProject(workspacePath: string): void {
}
}
/**
* Update Windsurf context files for a registered project.
* Called by SDK agents after saving a summary.
*/
export async function updateWindsurfContextForProject(projectName: string, workspacePath: string, port: number): Promise<void> {
const registry = readWindsurfRegistry();
const entry = registry[workspacePath];
-if (!entry) return; // Project doesn't have Windsurf hooks installed
+if (!entry) return;
try {
const response = await fetch(
@@ -152,7 +97,6 @@ export async function updateWindsurfContextForProject(projectName: string, works
writeWindsurfContextFile(workspacePath, context);
logger.debug('WINDSURF', 'Updated context file', { projectName, workspacePath });
} catch (error) {
// Background context update — failure is non-critical
if (error instanceof Error) {
logger.error('WORKER', 'Failed to update context file', { projectName, workspacePath }, error);
} else {
@@ -161,15 +105,6 @@ export async function updateWindsurfContextForProject(projectName: string, works
}
}
// ============================================================================
// Context File
// ============================================================================
/**
* Write context to the workspace-level Windsurf rules directory.
* Windsurf rules are workspace-scoped: .windsurf/rules/claude-mem-context.md
* Rule file limit: 6,000 chars per file.
*/
export function writeWindsurfContextFile(workspacePath: string, context: string): void {
const rulesDir = path.join(workspacePath, '.windsurf', 'rules');
const rulesFile = path.join(rulesDir, 'claude-mem-context.md');
@@ -187,27 +122,16 @@ ${context}
*Auto-updated by claude-mem after each session. Use MCP search tools for detailed queries.*
`;
// Enforce Windsurf's 6K char limit
if (content.length > WINDSURF_CONTEXT_CHAR_LIMIT) {
content = content.slice(0, WINDSURF_CONTEXT_CHAR_LIMIT - 50) +
'\n\n*[Truncated — use MCP search for full history]*\n';
}
// Atomic write: temp file + rename
writeFileSync(tempFile, content);
renameSync(tempFile, rulesFile);
}
// ============================================================================
// Hook Installation
// ============================================================================
/**
* Build the hook command string for a given event.
* Uses bun to run worker-service.cjs with the windsurf platform adapter.
*/
function buildHookCommand(bunPath: string, workerServicePath: string, eventName: string): string {
// Map Windsurf event names to unified CLI hook commands
const eventToCommand: Record<string, string> = {
'pre_user_prompt': 'session-init',
'post_write_code': 'file-edit',
@@ -221,10 +145,6 @@ function buildHookCommand(bunPath: string, workerServicePath: string, eventName:
return `"${bunPath}" "${workerServicePath}" hook windsurf ${hookCommand}`;
}
/**
* Read existing hooks.json, merge our hooks, and write back.
* Preserves any existing hooks from other tools.
*/
function mergeAndWriteHooksJson(
bunPath: string,
workerServicePath: string,
@@ -232,7 +152,6 @@ function mergeAndWriteHooksJson(
): void {
mkdirSync(WINDSURF_HOOKS_DIR, { recursive: true });
// Read existing hooks.json if present
let existingConfig: WindsurfHooksJson = { hooks: {} };
if (existsSync(WINDSURF_HOOKS_JSON_PATH)) {
try {
@@ -250,7 +169,6 @@ function mergeAndWriteHooksJson(
}
}
// For each event, add our hook entry (remove any previous claude-mem entries first)
for (const eventName of WINDSURF_HOOK_EVENTS) {
const command = buildHookCommand(bunPath, workerServicePath, eventName);
@@ -260,7 +178,6 @@ function mergeAndWriteHooksJson(
working_directory: workingDirectory,
};
// Get existing hooks for this event, filtering out old claude-mem ones
const existingHooks = (existingConfig.hooks[eventName] ?? []).filter(
(hook) => !hook.command.includes('worker-service') || !hook.command.includes('windsurf')
);
@@ -271,14 +188,9 @@ function mergeAndWriteHooksJson(
writeFileSync(WINDSURF_HOOKS_JSON_PATH, JSON.stringify(existingConfig, null, 2));
}
/**
* Install Windsurf hooks to ~/.codeium/windsurf/hooks.json (user-level).
* Merges with existing hooks.json to preserve other integrations.
*/
export async function installWindsurfHooks(): Promise<number> {
console.log('\nInstalling Claude-Mem Windsurf hooks (user level)...\n');
// Find the worker-service.cjs path
const workerServicePath = findWorkerServicePath();
if (!workerServicePath) {
console.error('Could not find worker-service.cjs');
@@ -286,7 +198,6 @@ export async function installWindsurfHooks(): Promise<number> {
return 1;
}
// Find bun executable — required because worker-service.cjs uses bun:sqlite
const bunPath = findBunPath();
if (!bunPath) {
console.error('Could not find Bun runtime');
@@ -294,7 +205,6 @@ export async function installWindsurfHooks(): Promise<number> {
return 1;
}
// IMPORTANT: Tilde expansion is NOT supported in working_directory — use absolute paths
const workingDirectory = path.dirname(workerServicePath);
console.log(` Using Bun runtime: ${bunPath}`);
@@ -343,9 +253,6 @@ Next steps:
`);
}
/**
 * Set up the initial context file for a Windsurf workspace
*/
async function setupWindsurfProjectContext(workspaceRoot: string): Promise<void> {
const port = getWorkerPort();
const projectName = path.basename(workspaceRoot);
@@ -356,7 +263,6 @@ async function setupWindsurfProjectContext(workspaceRoot: string): Promise<void>
try {
contextGenerated = await fetchWindsurfContextFromWorker(port, projectName, workspaceRoot);
} catch (error) {
// Worker not running during install — non-critical
if (error instanceof Error) {
logger.debug('WORKER', 'Worker not running during install', {}, error);
} else {
@@ -365,7 +271,6 @@ async function setupWindsurfProjectContext(workspaceRoot: string): Promise<void>
}
if (!contextGenerated) {
// Create placeholder context file
const rulesDir = path.join(workspaceRoot, '.windsurf', 'rules');
mkdirSync(rulesDir, { recursive: true });
const rulesFile = path.join(rulesDir, 'claude-mem-context.md');
@@ -379,7 +284,6 @@ Use claude-mem's MCP search tools for manual memory queries.
console.log(` Created placeholder context file (will populate after first session)`);
}
// Register project for automatic context updates after summaries
registerWindsurfProject(workspaceRoot);
console.log(` Registered for auto-context updates`);
}
@@ -406,13 +310,9 @@ async function fetchWindsurfContextFromWorker(
return false;
}
/**
 * Uninstall Windsurf hooks: removes claude-mem entries from hooks.json
*/
export function uninstallWindsurfHooks(): number {
console.log('\nUninstalling Claude-Mem Windsurf hooks...\n');
// Remove our entries from hooks.json (preserve other integrations)
if (existsSync(WINDSURF_HOOKS_JSON_PATH)) {
try {
removeClaudeMemHookEntries();
@@ -479,9 +379,6 @@ function removeWindsurfContextAndUnregister(workspaceRoot: string): void {
console.log('Restart Windsurf to apply changes.');
}
/**
* Check Windsurf hooks installation status
*/
export function checkWindsurfHooksStatus(): number {
console.log('\nClaude-Mem Windsurf Hooks Status\n');
@@ -510,7 +407,6 @@ export function checkWindsurfHooksStatus(): number {
}
}
// Check for context file in current workspace
const contextFile = path.join(process.cwd(), '.windsurf', 'rules', 'claude-mem-context.md');
if (existsSync(contextFile)) {
console.log(` Context: Active (current workspace)`);
@@ -526,9 +422,6 @@ export function checkWindsurfHooksStatus(): number {
return 0;
}
/**
* Handle windsurf subcommand for hooks installation
*/
export async function handleWindsurfCommand(subcommand: string, _args: string[]): Promise<number> {
switch (subcommand) {
case 'install':
@@ -1,12 +0,0 @@
/**
* Integrations module - IDE integrations (Cursor, Gemini CLI, OpenCode, Windsurf, etc.)
*/
export * from './types.js';
export * from './CursorHooksInstaller.js';
export * from './GeminiCliHooksInstaller.js';
export * from './OpenCodeInstaller.js';
export * from './WindsurfHooksInstaller.js';
export * from './OpenClawInstaller.js';
export * from './CodexCliInstaller.js';
export * from './McpIntegrations.js';
@@ -1,6 +1,3 @@
/**
* Integration Types - Shared types for IDE integrations
*/
export interface CursorMcpConfig {
mcpServers: {
@@ -13,7 +10,6 @@ export interface CursorMcpConfig {
}
export type CursorInstallTarget = 'project' | 'user' | 'enterprise';
export type Platform = 'windows' | 'unix';
export interface CursorHooksJson {
version: number;
@@ -3,12 +3,11 @@ import { PendingMessageStore, PersistentPendingMessage } from '../sqlite/Pending
import type { PendingMessageWithId } from '../worker-types.js';
import { logger } from '../../utils/logger.js';
-const IDLE_TIMEOUT_MS = 3 * 60 * 1000; // 3 minutes
+const IDLE_TIMEOUT_MS = 3 * 60 * 1000;
export interface CreateIteratorOptions {
sessionDbId: number;
signal: AbortSignal;
/** Called when idle timeout occurs - should trigger abort to kill subprocess */
onIdleTimeout?: () => void;
}
@@ -18,55 +17,36 @@ export class SessionQueueProcessor {
private events: EventEmitter
) {}
/**
* Create an async iterator that yields messages as they become available.
* Uses atomic claim-confirm to prevent duplicates.
* Messages are claimed (marked processing) and stay in DB until confirmProcessed().
* Self-heals stale processing messages before each claim.
* Waits for 'message' event when queue is empty.
*
* CRITICAL: Calls onIdleTimeout callback after 3 minutes of inactivity.
* The callback should trigger abortController.abort() to kill the SDK subprocess.
* Just returning from the iterator is NOT enough - the subprocess stays alive!
*/
async *createIterator(options: CreateIteratorOptions): AsyncIterableIterator<PendingMessageWithId> {
const { sessionDbId, signal, onIdleTimeout } = options;
let lastActivityTime = Date.now();
while (!signal.aborted) {
// Claim phase: atomically claim next pending message (marks as 'processing')
// Self-heals any stale processing messages before claiming
let persistentMessage: PersistentPendingMessage | null = null;
try {
persistentMessage = this.store.claimNextMessage(sessionDbId);
} catch (error) {
if (signal.aborted) return;
const normalizedError = error instanceof Error ? error : new Error(String(error));
-logger.error('QUEUE', 'Failed to claim next message', { sessionDbId }, normalizedError);
-await new Promise(resolve => setTimeout(resolve, 1000));
-continue;
+logger.error('QUEUE', 'Failed to claim next message; ending iterator', { sessionDbId }, normalizedError);
+return;
}
if (persistentMessage) {
// Reset activity time when we successfully yield a message
lastActivityTime = Date.now();
// Yield the message for processing (it's marked as 'processing' in DB)
yield this.toPendingMessageWithId(persistentMessage);
continue;
}
// Wait phase: queue empty - wait for wake-up event or timeout
try {
const idleTimedOut = await this.handleWaitPhase(signal, lastActivityTime, sessionDbId, onIdleTimeout);
if (idleTimedOut) return;
// Reset timer on spurious wakeup if not timed out
lastActivityTime = Date.now();
} catch (error) {
if (signal.aborted) return;
const normalizedError = error instanceof Error ? error : new Error(String(error));
-logger.error('QUEUE', 'Error waiting for message', { sessionDbId }, normalizedError);
-// Small backoff to prevent tight loop on error
-await new Promise(resolve => setTimeout(resolve, 1000));
+logger.error('QUEUE', 'Error waiting for message; ending iterator', { sessionDbId }, normalizedError);
+return;
}
}
}
@@ -80,10 +60,6 @@ export class SessionQueueProcessor {
};
}
/**
* Handle the wait phase: wait for a message or check idle timeout.
* @returns true if idle timeout was reached (caller should return/exit iterator)
*/
private async handleWaitPhase(
signal: AbortSignal,
lastActivityTime: number,
@@ -107,29 +83,23 @@ export class SessionQueueProcessor {
return false;
}
/**
* Wait for a message event or timeout.
* @param signal - AbortSignal to cancel waiting
* @param timeoutMs - Maximum time to wait before returning
* @returns true if a message was received, false if timeout occurred
*/
private waitForMessage(signal: AbortSignal, timeoutMs: number = IDLE_TIMEOUT_MS): Promise<boolean> {
return new Promise<boolean>((resolve) => {
let timeoutId: ReturnType<typeof setTimeout> | undefined;
const onMessage = () => {
cleanup();
-resolve(true); // Message received
+resolve(true);
};
const onAbort = () => {
cleanup();
-resolve(false); // Aborted, let loop check signal.aborted
+resolve(false);
};
const onTimeout = () => {
cleanup();
-resolve(false); // Timeout occurred
+resolve(false);
};
const cleanup = () => {
@@ -1,15 +1,7 @@
/**
* ErrorHandler - Centralized error handling for Express
*
* Provides error handling middleware and utilities for the server.
*/
import { Request, Response, NextFunction, ErrorRequestHandler } from 'express';
import { logger } from '../../utils/logger.js';
/**
* Standard error response format
*/
export interface ErrorResponse {
error: string;
message: string;
@@ -17,9 +9,6 @@ export interface ErrorResponse {
details?: unknown;
}
/**
* Application error with additional context
*/
export class AppError extends Error {
constructor(
message: string,
@@ -32,9 +21,6 @@ export class AppError extends Error {
}
}
/**
* Create an error response object
*/
export function createErrorResponse(
error: string,
message: string,
@@ -47,27 +33,20 @@ export function createErrorResponse(
return response;
}
/**
* Global error handler middleware
* Should be registered last in the middleware chain
*/
export const errorHandler: ErrorRequestHandler = (
err: Error | AppError,
req: Request,
res: Response,
_next: NextFunction
): void => {
// Determine status code
const statusCode = err instanceof AppError ? err.statusCode : 500;
// Log error
logger.error('HTTP', `Error handling ${req.method} ${req.path}`, {
statusCode,
error: err.message,
code: err instanceof AppError ? err.code : undefined
}, err);
// Build response
const response = createErrorResponse(
err.name || 'Error',
err.message,
@@ -75,13 +54,9 @@ export const errorHandler: ErrorRequestHandler = (
err instanceof AppError ? err.details : undefined
);
// Send response (don't call next, as we've handled the error)
res.status(statusCode).json(response);
};
/**
* Not found handler - for routes that don't exist
*/
export function notFoundHandler(req: Request, res: Response): void {
res.status(404).json(createErrorResponse(
'NotFound',
@@ -89,10 +64,6 @@ export function notFoundHandler(req: Request, res: Response): void {
));
}
/**
* Async wrapper to catch errors in async route handlers
* Automatically passes errors to Express error handler
*/
export function asyncHandler<T>(
fn: (req: Request, res: Response, next: NextFunction) => Promise<T>
): (req: Request, res: Response, next: NextFunction) => void {
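The asyncHandler body is elided by the hunk above. The conventional Express implementation of this wrapper, assumed here, simply forwards the rejected promise to next() so the global errorHandler sees it. Structural stand-in types replace the Express imports to keep the sketch self-contained:

```typescript
// Minimal structural stand-ins for Express' Request/Response/NextFunction types.
type Req = unknown;
type Res = unknown;
type Next = (err?: unknown) => void;

// Wrap an async route handler so rejections reach the error middleware via next().
function asyncHandler<T>(
  fn: (req: Req, res: Res, next: Next) => Promise<T>
): (req: Req, res: Res, next: Next) => void {
  return (req, res, next) => {
    fn(req, res, next).catch(next);
  };
}
```

Without a wrapper like this, a rejected promise in an async route never reaches Express' error chain and surfaces as an unhandled rejection instead of a 500 response.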
@@ -1,12 +1,4 @@
/**
* Server Middleware - Re-exports and enhances existing middleware
*
* This module provides a unified interface for server middleware.
* Re-exports from worker/http/middleware.ts to maintain backward compatibility
* while providing a cleaner import path for server setup.
*/
// Re-export all middleware from the existing location
export {
createMiddleware,
requireLocalhost,
@@ -1,13 +1,3 @@
/**
* Server - Express app setup and route registration
*
* Extracted from worker-service.ts monolith to provide centralized HTTP server management.
* Handles:
* - Express app creation and configuration
* - Middleware registration
* - Route registration (delegates to route handlers)
* - Core system endpoints (health, readiness, version, admin)
*/
import express, { Request, Response, Application } from 'express';
import http from 'http';
@@ -20,18 +10,8 @@ import { errorHandler, notFoundHandler } from './ErrorHandler.js';
import { getSupervisor } from '../../supervisor/index.js';
import { isPidAlive } from '../../supervisor/process-registry.js';
import { ENV_PREFIXES, ENV_EXACT_MATCHES } from '../../supervisor/env-sanitizer.js';
import { flushResponseThen } from './flushResponseThen.js';
/**
* Plan 06 Phase 6 instruction content (SKILL.md + ALLOWED_OPERATIONS .md
* files) is read once at module init and held in memory for the lifetime of
* the worker process. Process restart is the cache-invalidation event.
*
* `SKILL.md` is held as the full UTF-8 string so `extractInstructionSection`
* can slice topic windows on every request without re-reading the file.
* Per-operation files are cached as a `Map<operation, content>`. Files that
* are missing on disk simply omit from the map; the request handler returns
* 404 in that case (preserving legacy behaviour).
*/
const INSTRUCTIONS_BASE_DIR: string = path.resolve(__dirname, '../skills/mem-search');
const INSTRUCTIONS_OPERATIONS_DIR: string = path.join(INSTRUCTIONS_BASE_DIR, 'operations');
const INSTRUCTIONS_SKILL_PATH: string = path.join(INSTRUCTIONS_BASE_DIR, 'SKILL.md');
@@ -60,7 +40,6 @@ const cachedOperationContent: ReadonlyMap<string, string> = (() => {
try {
map.set(operation, fs.readFileSync(operationPath, 'utf-8'));
} catch (error: unknown) {
// Missing operation files are non-fatal — 404 is returned per request.
logger.debug('SYSTEM', 'Operation instruction file not present at boot', {
path: operationPath,
message: error instanceof Error ? error.message : String(error),
@@ -76,22 +55,15 @@ const cachedOperationContent: ReadonlyMap<string, string> = (() => {
return map;
})();
// Build-time injected version constant (set by esbuild define)
declare const __DEFAULT_PACKAGE_VERSION__: string;
const BUILT_IN_VERSION = typeof __DEFAULT_PACKAGE_VERSION__ !== 'undefined'
? __DEFAULT_PACKAGE_VERSION__
: 'development';
/**
* Interface for route handlers that can be registered with the server
*/
export interface RouteHandler {
setupRoutes(app: Application): void;
}
/**
* AI provider status for health endpoint
*/
export interface AiStatus {
provider: string;
authMethod: string;
@@ -102,28 +74,15 @@ export interface AiStatus {
} | null;
}
/**
* Options for initializing the server
*/
export interface ServerOptions {
/** Whether initialization is complete (for readiness check) */
getInitializationComplete: () => boolean;
/** Whether MCP is ready (for health/readiness info) */
getMcpReady: () => boolean;
/** Shutdown function for admin endpoints */
onShutdown: () => Promise<void>;
/** Restart function for admin endpoints */
onRestart: () => Promise<void>;
/** Filesystem path to the worker entry point */
workerPath: string;
/** Callback to get current AI provider status */
getAiStatus: () => AiStatus;
}
/**
* Express application and HTTP server wrapper
* Provides centralized setup for middleware and routes
*/
export class Server {
readonly app: Application;
private server: http.Server | null = null;
@@ -137,16 +96,10 @@ export class Server {
this.setupCoreRoutes();
}
/**
* Get the underlying HTTP server
*/
getHttpServer(): http.Server | null {
return this.server;
}
/**
* Start listening on the specified host and port
*/
async listen(port: number, host: string): Promise<void> {
return new Promise<void>((resolve, reject) => {
const server = http.createServer(this.app);
@@ -166,26 +119,19 @@ export class Server {
});
}
/**
* Close the HTTP server
*/
async close(): Promise<void> {
if (!this.server) return;
// Close all active connections
this.server.closeAllConnections();
// Give Windows time to close connections before closing server
if (process.platform === 'win32') {
await new Promise(r => setTimeout(r, 500));
}
// Close the server
await new Promise<void>((resolve, reject) => {
this.server!.close(err => err ? reject(err) : resolve());
});
// Extra delay on Windows to ensure port is fully released
if (process.platform === 'win32') {
await new Promise(r => setTimeout(r, 500));
}
@@ -194,38 +140,22 @@ export class Server {
logger.info('SYSTEM', 'HTTP server closed');
}
/**
* Register a route handler
*/
registerRoutes(handler: RouteHandler): void {
handler.setupRoutes(this.app);
}
/**
* Finalize route setup by adding error handlers
* Call this after all routes have been registered
*/
finalizeRoutes(): void {
// 404 handler for unmatched routes
this.app.use(notFoundHandler);
// Global error handler (must be last)
this.app.use(errorHandler);
}
/**
* Setup Express middleware
*/
private setupMiddleware(): void {
const middlewares = createMiddleware(summarizeRequestBody);
middlewares.forEach(mw => this.app.use(mw));
}
/**
* Setup core system routes (health, readiness, version, admin)
*/
private setupCoreRoutes(): void {
// Health check endpoint - always responds, even during initialization
this.app.get('/api/health', (_req: Request, res: Response) => {
res.status(200).json({
status: 'ok',
@@ -242,7 +172,6 @@ export class Server {
});
});
// Readiness check endpoint - returns 503 until full initialization completes
this.app.get('/api/readiness', (_req: Request, res: Response) => {
if (this.options.getInitializationComplete()) {
res.status(200).json({
@@ -257,18 +186,14 @@ export class Server {
}
});
// Version endpoint - returns the worker's built-in version
this.app.get('/api/version', (_req: Request, res: Response) => {
res.status(200).json({ version: BUILT_IN_VERSION });
});
// Instructions endpoint — Plan 06 Phase 6 — serves the cached SKILL.md /
// operations content loaded once at module init.
this.app.get('/api/instructions', (req: Request, res: Response) => {
const topic = (req.query.topic as string) || 'all';
const operation = req.query.operation as string | undefined;
// Validate topic
if (topic && !ALLOWED_TOPICS.includes(topic)) {
return res.status(400).json({ error: 'Invalid topic' });
}
@@ -294,65 +219,39 @@ export class Server {
res.json({ content: [{ type: 'text', text: sectionText }] });
});
// Admin endpoints for process management (localhost-only)
this.app.post('/api/admin/restart', requireLocalhost, async (_req: Request, res: Response) => {
// Handle Windows managed mode via IPC
const isWindowsManaged = process.platform === 'win32' &&
process.env.CLAUDE_MEM_MANAGED === 'true' &&
process.send;
if (isWindowsManaged) {
res.json({ status: 'restarting' });
logger.info('SYSTEM', 'Sending restart request to wrapper');
process.send!({ type: 'restart' });
} else {
// Unix or standalone Windows - handle restart ourselves
// The spawner (ensureWorkerStarted/restart command) handles spawning the new daemon.
// This process just needs to shut down and exit.
flushResponseThen(res, { status: 'restarting' }, () => this.options.onRestart());
}
});
this.app.post('/api/admin/shutdown', requireLocalhost, async (_req: Request, res: Response) => {
// Handle Windows managed mode via IPC
const isWindowsManaged = process.platform === 'win32' &&
process.env.CLAUDE_MEM_MANAGED === 'true' &&
process.send;
if (isWindowsManaged) {
res.json({ status: 'shutting_down' });
logger.info('SYSTEM', 'Sending shutdown request to wrapper');
process.send!({ type: 'shutdown' });
} else {
// Unix or standalone Windows - handle shutdown ourselves
// Flush the response, then run shutdown and exit.
// CRITICAL: the process must exit after shutdown completes (or fails).
// Without this, the daemon stays alive as a zombie — background tasks
// (backfill, reconnects) keep running and respawn chroma-mcp subprocesses.
flushResponseThen(res, { status: 'shutting_down' }, () => this.options.onShutdown());
}
});
// Doctor endpoint - diagnostic view of supervisor, processes, and health
this.app.get('/api/admin/doctor', requireLocalhost, (_req: Request, res: Response) => {
const supervisor = getSupervisor();
const registry = supervisor.getRegistry();
const allRecords = registry.getAll();
// Check each process liveness
const processes = allRecords.map(record => ({
id: record.id,
pid: record.pid,
@@ -361,15 +260,12 @@ export class Server {
startedAt: record.startedAt,
}));
// Check for dead processes still in registry
const deadProcessPids = processes.filter(p => p.status === 'dead').map(p => p.pid);
// Check if CLAUDECODE_* env vars are leaking into this process
const envClean = !Object.keys(process.env).some(key =>
ENV_EXACT_MATCHES.has(key) || ENV_PREFIXES.some(prefix => key.startsWith(prefix))
);
// Format uptime
const uptimeMs = Date.now() - this.startTime;
const uptimeSeconds = Math.floor(uptimeMs / 1000);
const hours = Math.floor(uptimeSeconds / 3600);
@@ -391,9 +287,6 @@ export class Server {
});
}
/**
* Extract a specific section from instruction content
*/
private extractInstructionSection(content: string, topic: string): string {
const sections: Record<string, string> = {
'workflow': this.extractBetween(content, '## The Workflow', '## Search Parameters'),
@@ -405,9 +298,6 @@ export class Server {
return sections[topic] || sections['all'];
}
/**
* Extract text between two markers
*/
private extractBetween(content: string, startMarker: string, endMarker: string): string {
const startIdx = content.indexOf(startMarker);
const endIdx = content.indexOf(endMarker);
@@ -1,4 +1,3 @@
// Allowed values for /api/instructions security
export const ALLOWED_OPERATIONS = [
'search',
'context',
@@ -0,0 +1,16 @@
import { Response } from 'express';
/**
 * Send a JSON payload, then run the cleanup action once the response has
 * fully flushed ('finish'), and exit the process even if the action throws.
 * Used by the admin restart/shutdown endpoints so the client receives a
 * reply before the daemon tears itself down.
 */
export function flushResponseThen(
res: Response,
payload: unknown,
action: () => void | Promise<void>
): void {
res.on('finish', async () => {
try {
await action();
} finally {
process.exit(0);
}
});
res.json(payload);
}
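The ordering this helper relies on can be sketched without Express: a stand-in response that emits `'finish'` after "sending", with the cleanup hook attached before the payload goes out. The `FakeResponse` class is invented for the sketch, and the `process.exit` step is deliberately omitted:

```typescript
import { EventEmitter } from "node:events";

// Stand-in for an Express Response: emits 'finish' once the payload is
// "flushed". (Real Express fires 'finish' when the socket write completes;
// here it is emitted synchronously, purely for the demo.)
class FakeResponse extends EventEmitter {
  sent: unknown = null;
  json(payload: unknown): void {
    this.sent = payload;
    this.emit("finish");
  }
}

const order: string[] = [];
const res = new FakeResponse();

// Mirror flushResponseThen: hook cleanup to 'finish', then send.
res.on("finish", () => order.push("cleanup"));
order.push("reply");
res.json({ status: "shutting_down" });

console.log(order.join(" -> ")); // reply -> cleanup: the client hears back first
```

The point is that the cleanup callback never races the response: it only runs after the reply has left the process.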
@@ -1,6 +1,3 @@
/**
* Server module - HTTP server, middleware, and error handling
*/
export * from './Server.js';
export * from './Middleware.js';
@@ -1,14 +1,3 @@
/**
* Code structure parser shells out to tree-sitter CLI for AST-based extraction.
*
* No native bindings. No WASM. Just the CLI binary + query patterns.
*
* Supported: JS, TS, Python, Go, Rust, Ruby, Java, C, C++,
* Kotlin, Swift, PHP, Elixir, Lua, Scala, Bash, Haskell, Zig,
* CSS, SCSS, TOML, YAML, SQL, Markdown
*
* by Copter Labs
*/
import { execFileSync } from "node:child_process";
import { writeFileSync, readFileSync, mkdtempSync, rmSync, existsSync } from "node:fs";
@@ -17,15 +6,10 @@ import { tmpdir } from "node:os";
import { createRequire } from "node:module";
import { logger } from "../../utils/logger.js";
// CJS-safe require for resolving external packages at runtime.
// In ESM: import.meta.url works. In CJS bundle (esbuild): __filename works.
// typeof check avoids ReferenceError in ESM where __filename doesn't exist.
const _require = typeof __filename !== 'undefined'
? createRequire(__filename)
: createRequire(import.meta.url);
// --- Types ---
export interface CodeSymbol {
name: string;
kind: "function" | "class" | "method" | "interface" | "type" | "const" | "variable" | "export" | "struct" | "enum" | "trait" | "impl" | "property" | "getter" | "setter" | "mixin" | "section" | "code" | "metadata" | "reference";
@@ -47,8 +31,6 @@ export interface FoldedFile {
foldedTokenEstimate: number;
}
// --- Language detection ---
const LANG_MAP: Record<string, string> = {
".js": "javascript",
".mjs": "javascript",
@@ -93,15 +75,6 @@ const LANG_MAP: Record<string, string> = {
".mdx": "markdown",
};
export function detectLanguage(filePath: string): string {
const ext = filePath.slice(filePath.lastIndexOf("."));
return LANG_MAP[ext] || "unknown";
}
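The extension lookup can be exercised in isolation. One quirk worth noting: for a dot-less path like `Makefile`, `lastIndexOf(".")` returns -1 and `slice(-1)` yields the final character, which simply falls through to "unknown". A reduced sketch with a two-entry map (the bundled LANG_MAP covers ~40 extensions):

```typescript
// Reduced copy of the bundled extension map, for illustration only.
const LANG_MAP: Record<string, string> = {
  ".ts": "typescript",
  ".py": "python",
};

function detectLanguage(filePath: string): string {
  const ext = filePath.slice(filePath.lastIndexOf("."));
  return LANG_MAP[ext] || "unknown";
}

console.log(detectLanguage("src/server/Server.ts")); // typescript
console.log(detectLanguage("Makefile"));             // unknown ("e" is not in the map)
```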
/**
* Detect language with fallback to user-configured grammar extensions.
* Bundled LANG_MAP takes priority.
*/
function detectLanguageWithUserGrammars(filePath: string, userConfig: UserGrammarConfig): string {
const ext = filePath.slice(filePath.lastIndexOf("."));
if (LANG_MAP[ext]) return LANG_MAP[ext];
@@ -109,20 +82,13 @@ function detectLanguageWithUserGrammars(filePath: string, userConfig: UserGramma
return "unknown";
}
/**
* Get the query key for a language, checking user config for custom queries.
*/
function getUserAwareQueryKey(language: string, userConfig: UserGrammarConfig): string {
// If user config has a specific query key for this language, use it
if (userConfig.languageToQueryKey[language]) {
return userConfig.languageToQueryKey[language];
}
// Otherwise fall back to the bundled query key mapping
return getQueryKey(language);
}
// --- User-installable grammars via .claude-mem.json ---
export interface UserGrammarEntry {
package: string;
extensions: string[];
@@ -130,11 +96,8 @@ export interface UserGrammarEntry {
}
export interface UserGrammarConfig {
/** language name → grammar entry */
grammars: Record<string, UserGrammarEntry>;
/** file extension → language name (for user-defined extensions only) */
extensionToLanguage: Record<string, string>;
/** language name → query content (custom .scm file content or "generic") */
languageToQueryKey: Record<string, string>;
}
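For illustration, a `.claude-mem.json` that registers a Nim grammar might load into this shape (the package name and query path are invented; `loadUserGrammars` below derives the `extensionToLanguage` and `languageToQueryKey` maps from the raw `grammars` entries):

```typescript
// Hypothetical parsed result, mirroring UserGrammarConfig's shape.
const exampleConfig = {
  grammars: {
    nim: { package: "tree-sitter-nim", extensions: [".nim"], query: "queries/nim.scm" },
  },
  // Derived: only extensions NOT already in the bundled LANG_MAP are mapped.
  extensionToLanguage: { ".nim": "nim" },
  // Derived: custom query content is stored under a "user_" key so it never
  // collides with built-in queries; without a query file, getQueryKey() applies.
  languageToQueryKey: { nim: "user_nim" },
};

console.log(exampleConfig.extensionToLanguage[".nim"]); // nim
```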
@@ -146,11 +109,6 @@ const EMPTY_USER_GRAMMAR_CONFIG: UserGrammarConfig = {
languageToQueryKey: {},
};
/**
* Load user grammar configuration from .claude-mem.json in a project root.
* Cached per project root. Returns empty config if file doesn't exist or is invalid.
* User entries do NOT override bundled grammars.
*/
export function loadUserGrammars(projectRoot: string): UserGrammarConfig {
if (userGrammarCache.has(projectRoot)) return userGrammarCache.get(projectRoot)!;
@@ -161,7 +119,6 @@ export function loadUserGrammars(projectRoot: string): UserGrammarConfig {
const content = readFileSync(configPath, "utf-8");
rawConfig = JSON.parse(content);
} catch {
// [ANTI-PATTERN IGNORED]: .claude-mem.json missing is the normal case for most projects
userGrammarCache.set(projectRoot, EMPTY_USER_GRAMMAR_CONFIG);
return EMPTY_USER_GRAMMAR_CONFIG;
}
@@ -179,7 +136,6 @@ export function loadUserGrammars(projectRoot: string): UserGrammarConfig {
};
for (const [language, entry] of Object.entries(grammarsRaw as Record<string, unknown>)) {
// Skip if this language is already bundled
if (GRAMMAR_PACKAGES[language]) continue;
if (!entry || typeof entry !== "object" || Array.isArray(entry)) continue;
@@ -189,7 +145,6 @@ export function loadUserGrammars(projectRoot: string): UserGrammarConfig {
const extensions = typedEntry.extensions;
const queryPath = typedEntry.query;
// Validate required fields
if (typeof pkg !== "string" || !Array.isArray(extensions)) continue;
if (!extensions.every((e: unknown) => typeof e === "string")) continue;
@@ -199,19 +154,16 @@ export function loadUserGrammars(projectRoot: string): UserGrammarConfig {
query: typeof queryPath === "string" ? queryPath : undefined,
};
// Map extensions to language (skip extensions already handled by bundled LANG_MAP)
for (const ext of extensions as string[]) {
if (!LANG_MAP[ext]) {
config.extensionToLanguage[ext] = language;
}
}
// Resolve query content
if (typeof queryPath === "string") {
const fullQueryPath = join(projectRoot, queryPath);
try {
const queryContent = readFileSync(fullQueryPath, "utf-8");
// Store with a unique key to avoid collisions with built-in queries
const queryKey = `user_${language}`;
QUERIES[queryKey] = queryContent;
config.languageToQueryKey[language] = queryKey;
@@ -228,8 +180,6 @@ export function loadUserGrammars(projectRoot: string): UserGrammarConfig {
return config;
}
// --- Grammar path resolution ---
const GRAMMAR_PACKAGES: Record<string, string> = {
javascript: "tree-sitter-javascript",
typescript: "tree-sitter-typescript/typescript",
@@ -258,9 +208,6 @@ const GRAMMAR_PACKAGES: Record<string, string> = {
markdown: "@tree-sitter-grammars/tree-sitter-markdown",
};
// Grammars where the parser source lives in a subdirectory of the npm package root,
// AND that subdirectory lacks its own package.json (so require.resolve won't find it).
// Maps language → subdirectory name under the package root.
const GRAMMAR_SUBDIR: Record<string, string> = {
markdown: "tree-sitter-markdown",
};
@@ -271,7 +218,6 @@ function resolveGrammarPath(language: string): string | null {
const subdir = GRAMMAR_SUBDIR[language];
if (subdir) {
// Package root has no sub-package.json — resolve root then append subdir
try {
const rootPkgPath = _require.resolve(pkg + "/package.json");
const resolved = join(dirname(rootPkgPath), subdir);
@@ -286,21 +232,14 @@ function resolveGrammarPath(language: string): string | null {
const packageJsonPath = _require.resolve(pkg + "/package.json");
return dirname(packageJsonPath);
} catch {
// [ANTI-PATTERN IGNORED]: grammar package not installed is expected for unsupported languages
return null;
}
}
/**
* Resolve grammar path with fallback to user-installed grammars.
* First tries bundled grammars, then falls back to the project's node_modules.
*/
export function resolveGrammarPathWithFallback(language: string, projectRoot?: string): string | null {
// Try bundled grammar first
const bundled = resolveGrammarPath(language);
if (bundled) return bundled;
// Fall back to user-installed grammar in project's node_modules
if (!projectRoot) return null;
const userConfig = loadUserGrammars(projectRoot);
@@ -311,7 +250,6 @@ export function resolveGrammarPathWithFallback(language: string, projectRoot?: s
const packageJsonPath = join(projectRoot, "node_modules", entry.package, "package.json");
if (existsSync(packageJsonPath)) {
const grammarDir = dirname(packageJsonPath);
// Verify it has a src/ directory (required by tree-sitter CLI)
if (existsSync(join(grammarDir, "src"))) return grammarDir;
}
} catch {
@@ -322,8 +260,6 @@ export function resolveGrammarPathWithFallback(language: string, projectRoot?: s
return null;
}
// --- Query patterns (declarative symbol extraction) ---
const QUERIES: Record<string, string> = {
jsts: `
(function_declaration name: (identifier) @name) @func
@@ -513,8 +449,6 @@ function getQueryKey(language: string): string {
}
}
// --- Temp file management ---
let queryTmpDir: string | null = null;
const queryFileCache = new Map<string, string>();
@@ -531,14 +465,11 @@ function getQueryFile(queryKey: string): string {
return filePath;
}
// --- CLI execution ---
let cachedBinPath: string | null = null;
function getTreeSitterBin(): string {
if (cachedBinPath) return cachedBinPath;
// Try direct binary from tree-sitter-cli package
try {
const pkgPath = _require.resolve("tree-sitter-cli/package.json");
const binPath = join(dirname(pkgPath), "tree-sitter");
@@ -550,7 +481,6 @@ function getTreeSitterBin(): string {
// [ANTI-PATTERN IGNORED]: tree-sitter-cli not in node_modules is expected; falls back to PATH
}
// Fallback: assume it's on PATH
cachedBinPath = "tree-sitter";
return cachedBinPath;
}
@@ -597,7 +527,6 @@ function parseMultiFileQueryOutput(output: string): Map<string, RawMatch[]> {
let currentMatch: RawMatch | null = null;
for (const line of output.split("\n")) {
// File header: a line that doesn't start with whitespace and isn't empty
if (line.length > 0 && !line.startsWith(" ") && !line.startsWith("\t")) {
currentFile = line.trim();
if (!fileMatches.has(currentFile)) {
@@ -634,8 +563,6 @@ function parseMultiFileQueryOutput(output: string): Map<string, RawMatch[]> {
return fileMatches;
}
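The file-header heuristic used by the parser above (any non-empty line with no leading whitespace starts a new file block, everything indented belongs to the current file) can be checked in isolation; the sample lines imitate tree-sitter CLI query output and are assumptions, not captured output:

```typescript
// Same predicate as the parser's file-header check.
function isFileHeader(line: string): boolean {
  return line.length > 0 && !line.startsWith(" ") && !line.startsWith("\t");
}

console.log(isFileHeader("src/parser.ts")); // true: starts a new file block
console.log(isFileHeader("  pattern: 0")); // false: indented match line
console.log(isFileHeader(""));             // false: blank line
```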
// --- Symbol building ---
const KIND_MAP: Record<string, CodeSymbol["kind"]> = {
func: "function",
const_func: "function",
@@ -733,7 +660,6 @@ function buildSymbols(matches: RawMatch[], lines: string[], language: string): {
const exportRanges: Array<{ startRow: number; endRow: number }> = [];
const containers: Array<{ sym: CodeSymbol; startRow: number; endRow: number }> = [];
// Collect exports and imports
for (const match of matches) {
for (const cap of match.captures) {
if (cap.tag === "exp") {
@@ -745,7 +671,6 @@ function buildSymbols(matches: RawMatch[], lines: string[], language: string): {
}
}
// Build symbols
for (const match of matches) {
const kindCapture = match.captures.find(c => KIND_MAP[c.tag]);
const nameCapture = match.captures.find(c => c.tag === "name");
@@ -756,7 +681,6 @@ function buildSymbols(matches: RawMatch[], lines: string[], language: string): {
const kind = KIND_MAP[kindCapture.tag];
const name = nameCapture?.text || "anonymous";
// Markdown-specific: extract heading level and build signature
let signature: string;
if (language === "markdown" && kind === "section") {
const headingLine = lines[startRow] || "";
@@ -795,8 +719,6 @@ function buildSymbols(matches: RawMatch[], lines: string[], language: string): {
symbols.push(sym);
}
// Markdown: deduplicate code_block matches. The catch-all `(fenced_code_block) @code_block`
// pattern and the language-specific pattern both match the same block. Keep the named one.
if (language === "markdown") {
const codeBlocksByRange = new Map<string, CodeSymbol>();
const duplicateCodeBlocks = new Set<CodeSymbol>();
@@ -805,7 +727,6 @@ function buildSymbols(matches: RawMatch[], lines: string[], language: string): {
const rangeKey = `${sym.lineStart}:${sym.lineEnd}`;
const existing = codeBlocksByRange.get(rangeKey);
if (existing) {
// Prefer the named version (has actual language tag vs "anonymous")
if (sym.name !== "anonymous") {
duplicateCodeBlocks.add(existing);
codeBlocksByRange.set(rangeKey, sym);
@@ -823,7 +744,6 @@ function buildSymbols(matches: RawMatch[], lines: string[], language: string): {
}
}
// Nest methods inside containers
const nested = new Set<CodeSymbol>();
for (const container of containers) {
for (const sym of symbols) {
@@ -839,8 +759,6 @@ function buildSymbols(matches: RawMatch[], lines: string[], language: string): {
return { symbols: symbols.filter(s => !nested.has(s)), imports };
}
// --- Main parse functions ---
export function parseFile(content: string, filePath: string, projectRoot?: string): FoldedFile {
const userConfig = projectRoot ? loadUserGrammars(projectRoot) : EMPTY_USER_GRAMMAR_CONFIG;
const language = detectLanguageWithUserGrammars(filePath, userConfig);
@@ -857,7 +775,6 @@ export function parseFile(content: string, filePath: string, projectRoot?: strin
const queryKey = getUserAwareQueryKey(language, userConfig);
const queryFile = getQueryFile(queryKey);
// Write content to temp file with correct extension for language detection
const ext = filePath.slice(filePath.lastIndexOf(".")) || ".txt";
const tmpDir = mkdtempSync(join(tmpdir(), "smart-src-"));
const tmpFile = join(tmpDir, `source${ext}`);
@@ -884,10 +801,6 @@ export function parseFile(content: string, filePath: string, projectRoot?: strin
}
}
/**
* Batch parse multiple on-disk files. Groups by language for one CLI call per language.
* Much faster than calling parseFile() per file (one process spawn per language vs per file).
*/
export function parseFilesBatch(
files: Array<{ absolutePath: string; relativePath: string; content: string }>,
projectRoot?: string
@@ -895,7 +808,6 @@ export function parseFilesBatch(
const results = new Map<string, FoldedFile>();
const userConfig = projectRoot ? loadUserGrammars(projectRoot) : EMPTY_USER_GRAMMAR_CONFIG;
// Group files by language (and thus by query + grammar)
const languageGroups = new Map<string, typeof files>();
for (const file of files) {
const language = detectLanguageWithUserGrammars(file.relativePath, userConfig);
@@ -906,7 +818,6 @@ export function parseFilesBatch(
for (const [language, groupFiles] of languageGroups) {
const grammarPath = resolveGrammarPathWithFallback(language, projectRoot);
if (!grammarPath) {
// No grammar — return empty results for these files
for (const file of groupFiles) {
const lines = file.content.split("\n");
results.set(file.relativePath, {
@@ -920,11 +831,9 @@ export function parseFilesBatch(
const queryKey = getUserAwareQueryKey(language, userConfig);
const queryFile = getQueryFile(queryKey);
// Run one batch query for all files of this language
const absolutePaths = groupFiles.map(f => f.absolutePath);
const batchResults = runBatchQuery(queryFile, absolutePaths, grammarPath);
// Build FoldedFile for each file using the batch results
for (const file of groupFiles) {
const lines = file.content.split("\n");
const matches = batchResults.get(file.absolutePath) || [];
@@ -948,8 +857,6 @@ export function parseFilesBatch(
return results;
}
// --- Formatting ---
export function formatFoldedView(file: FoldedFile): string {
if (file.language === "markdown") {
return formatMarkdownFoldedView(file);
@@ -980,14 +887,12 @@ export function formatFoldedView(file: FoldedFile): string {
function formatMarkdownFoldedView(file: FoldedFile): string {
const parts: string[] = [];
// Total width for the content column (before the line range)
const COL_WIDTH = 56;
parts.push(`📄 ${file.filePath} (${file.language}, ${file.totalLines} lines)`);
for (const sym of file.symbols) {
if (sym.kind === "section") {
// Extract heading level from the signature (count leading # characters)
const hashMatch = sym.signature.match(/^(#{1,6})\s/);
const level = hashMatch ? hashMatch[1].length : 1;
const indent = " ".repeat(level);
@@ -995,7 +900,6 @@ function formatMarkdownFoldedView(file: FoldedFile): string {
const content = `${indent}${sym.signature}`;
parts.push(`${content.padEnd(COL_WIDTH)}${lineRange}`);
} else if (sym.kind === "code") {
// Find containing heading level for indentation
const containingLevel = findContainingHeadingLevel(file.symbols, sym.lineStart);
const indent = " ".repeat(containingLevel + 1);
const lineRange = sym.lineStart === sym.lineEnd
@@ -1021,10 +925,6 @@ function formatMarkdownFoldedView(file: FoldedFile): string {
return parts.join("\n");
}
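The heading-depth extraction used above is just a regex over the stored section signature; pulled out on its own (helper name is ours):

```typescript
// Count leading '#' characters of a markdown heading signature;
// default to level 1 when the signature has no hashes.
function headingLevel(signature: string): number {
  const hashMatch = signature.match(/^(#{1,6})\s/);
  return hashMatch ? hashMatch[1].length : 1;
}

console.log(headingLevel("## The Workflow")); // 2
console.log(headingLevel("Intro"));           // 1 (no leading hashes)
```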
/**
* Find the heading level of the most recent section heading before the given line.
* Returns 0 if no heading precedes the line.
*/
function findContainingHeadingLevel(symbols: CodeSymbol[], lineStart: number): number {
let bestLevel = 0;
for (const sym of symbols) {
@@ -1082,8 +982,6 @@ function getSymbolIcon(kind: CodeSymbol["kind"]): string {
return icons[kind] || "·";
}
// --- Unfold ---
export function unfoldSymbol(content: string, filePath: string, symbolName: string): string | null {
const file = parseFile(content, filePath);
@@ -1103,13 +1001,11 @@ export function unfoldSymbol(content: string, filePath: string, symbolName: stri
const lines = content.split("\n");
// Markdown section unfold: return from heading to next heading of same or higher level
if (file.language === "markdown" && symbol.kind === "section") {
const hashMatch = symbol.signature.match(/^(#{1,6})\s/);
const level = hashMatch ? hashMatch[1].length : 1;
const start = symbol.lineStart;
// Find the next heading at same or higher (lower number) level
let end = lines.length - 1;
for (const sym of file.symbols) {
if (sym.kind === "section" && sym.lineStart > start) {
@@ -1117,7 +1013,6 @@ export function unfoldSymbol(content: string, filePath: string, symbolName: stri
const otherLevel = otherHashMatch ? otherHashMatch[1].length : 1;
if (otherLevel <= level) {
end = sym.lineStart - 1;
// Trim trailing blank lines
while (end > start && lines[end].trim() === "") end--;
break;
}
@@ -1128,7 +1023,6 @@ export function unfoldSymbol(content: string, filePath: string, symbolName: stri
return `<!-- 📍 ${filePath} L${start + 1}-${end + 1} -->\n${extracted}`;
}
// Include preceding comments/decorators
let start = symbol.lineStart;
for (let i = symbol.lineStart - 1; i >= 0; i--) {
const trimmed = lines[i].trim();
@@ -1,14 +1,3 @@
/**
* Search module finds code files and symbols matching a query.
*
* Two search modes:
* 1. Grep-style: find files/lines containing the query string
* 2. Structural: parse files and match against symbol names/signatures
*
* Both return folded views, not raw content.
*
* Uses batch parsing (one CLI call per language) for fast multi-file search.
*/
import { readFile, readdir, stat } from "node:fs/promises";
import { join, relative } from "node:path";
@@ -48,7 +37,7 @@ const IGNORE_DIRS = new Set([
".claude", ".smart-file-read",
]);
const MAX_FILE_SIZE = 512 * 1024; // 512KB — skip huge files
export interface SearchResult {
foldedFiles: FoldedFile[];
@@ -66,13 +55,9 @@ export interface SymbolMatch {
jsdoc?: string;
lineStart: number;
lineEnd: number;
matchReason: string; // why this matched
}
/**
* Walk a directory recursively, yielding file paths.
* extraExtensions: additional file extensions to include (from user grammar config).
*/
async function* walkDir(dir: string, rootDir: string, maxDepth: number = 20, extraExtensions?: Set<string>): AsyncGenerator<string> {
if (maxDepth <= 0) return;
@@ -81,7 +66,7 @@ async function* walkDir(dir: string, rootDir: string, maxDepth: number = 20, ext
entries = await readdir(dir, { withFileTypes: true });
} catch (error) {
logger.debug('WORKER', `walkDir: failed to read directory ${dir}`, undefined, error instanceof Error ? error : undefined);
return; // permission denied, etc.
}
for (const entry of entries) {
@@ -101,9 +86,6 @@ async function* walkDir(dir: string, rootDir: string, maxDepth: number = 20, ext
}
}
/**
* Read a file safely, skipping if too large or binary.
*/
async function safeReadFile(filePath: string): Promise<string | null> {
try {
const stats = await stat(filePath);
@@ -112,7 +94,6 @@ async function safeReadFile(filePath: string): Promise<string | null> {
const content = await readFile(filePath, "utf-8");
// Quick binary check — if first 1000 chars have null bytes, skip
if (content.slice(0, 1000).includes("\0")) return null;
return content;
@@ -122,13 +103,6 @@ async function safeReadFile(filePath: string): Promise<string | null> {
}
}
/**
* Search a codebase for symbols matching a query.
*
* Phase 1: Collect files and read content
* Phase 2: Batch parse all files (one CLI call per language)
* Phase 3: Match query against parsed symbols
*/
export async function searchCodebase(
rootDir: string,
query: string,
@@ -143,7 +117,6 @@ export async function searchCodebase(
const queryLower = query.toLowerCase();
const queryParts = queryLower.split(/[\s_\-./]+/).filter(p => p.length > 0);
// Load user grammar config for extra file extensions
const projectRoot = options.projectRoot || rootDir;
const userConfig = loadUserGrammars(projectRoot);
const extraExtensions = new Set<string>();
@@ -155,7 +128,6 @@ export async function searchCodebase(
}
}
// Phase 1: Collect files
const filesToParse: Array<{ absolutePath: string; relativePath: string; content: string }> = [];
for await (const filePath of walkDir(rootDir, rootDir, 20, extraExtensions.size > 0 ? extraExtensions : undefined)) {
@@ -174,10 +146,8 @@ export async function searchCodebase(
});
}
// Phase 2: Batch parse (one CLI call per language)
const parsedFiles = parseFilesBatch(filesToParse, projectRoot);
// Phase 3: Match query against symbols
const foldedFiles: FoldedFile[] = [];
const matchingSymbols: SymbolMatch[] = [];
let totalSymbolsFound = 0;
@@ -238,7 +208,6 @@ export async function searchCodebase(
}
}
// Sort by relevance and trim
matchingSymbols.sort((a, b) => {
const aScore = matchScore(a.symbolName.toLowerCase(), queryParts);
const bScore = matchScore(b.symbolName.toLowerCase(), queryParts);
@@ -260,19 +229,14 @@ export async function searchCodebase(
};
}
/**
* Score how well query parts match a string.
* Returns 0 for no match, higher for better matches.
*/
function matchScore(text: string, queryParts: string[]): number {
let score = 0;
for (const part of queryParts) {
if (text === part) {
score += 10; // exact match
} else if (text.includes(part)) {
score += 5; // substring match
} else {
// Fuzzy: check if all chars appear in order
let ti = 0;
let matched = 0;
for (const ch of part) {
@@ -283,7 +247,7 @@ function matchScore(text: string, queryParts: string[]): number {
}
}
if (matched === part.length) {
score += 1; // loose fuzzy match
}
}
}
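A self-contained replica of the scoring scheme makes the weights concrete: 10 for an exact part match, 5 for a substring, 1 for an in-order character match. The body of the fuzzy loop is elided by the hunk above, so the `indexOf`-based reconstruction here is a sketch inferred from the surrounding `ti`/`matched` variables:

```typescript
function matchScore(text: string, queryParts: string[]): number {
  let score = 0;
  for (const part of queryParts) {
    if (text === part) {
      score += 10; // exact match
    } else if (text.includes(part)) {
      score += 5; // substring match
    } else {
      // Fuzzy: do all chars of the part appear in order within the text?
      let ti = 0;
      let matched = 0;
      for (const ch of part) {
        const idx = text.indexOf(ch, ti);
        if (idx !== -1) {
          matched++;
          ti = idx + 1;
        }
      }
      if (matched === part.length) score += 1; // loose fuzzy match
    }
  }
  return score;
}

console.log(matchScore("parsefilesbatch", ["parse", "batch"])); // 5 + 5 = 10
console.log(matchScore("parse", ["parse"]));                    // exact: 10
console.log(matchScore("flushresponse", ["fls"]));              // in-order fuzzy: 1
```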
@@ -298,9 +262,6 @@ function countSymbols(file: FoldedFile): number {
return count;
}
/**
* Format search results for LLM consumption.
*/
export function formatSearchResults(result: SearchResult, query: string): string {
const parts: string[] = [];
@@ -314,7 +275,6 @@ export function formatSearchResults(result: SearchResult, query: string): string
return parts.join("\n");
}
// Show matching symbols first (compact)
parts.push("── Matching Symbols ──");
parts.push("");
for (const match of result.matchingSymbols) {
@@ -329,7 +289,6 @@ export function formatSearchResults(result: SearchResult, query: string): string
parts.push("");
}
// Show folded file views
parts.push("── Folded File Views ──");
parts.push("");
for (const file of result.foldedFiles) {
@@ -3,8 +3,7 @@ import { DATA_DIR, DB_PATH, ensureDir } from '../../shared/paths.js';
import { logger } from '../../utils/logger.js';
import { MigrationRunner } from './migrations/runner.js';
// SQLite configuration constants
const SQLITE_MMAP_SIZE_BYTES = 256 * 1024 * 1024; // 256MB
const SQLITE_CACHE_SIZE_PAGES = 10_000;
export interface Migration {
@@ -15,30 +14,16 @@ export interface Migration {
let dbInstance: Database | null = null;
/**
* ClaudeMemDatabase - New entry point for the sqlite module
*
* Replaces SessionStore as the database coordinator.
* Sets up bun:sqlite with optimized settings and runs all migrations.
*
* Usage:
* const db = new ClaudeMemDatabase(); // uses default DB_PATH
* const db = new ClaudeMemDatabase('/path/to/db.sqlite');
* const db = new ClaudeMemDatabase(':memory:'); // for tests
*/
export class ClaudeMemDatabase {
public db: Database;
constructor(dbPath: string = DB_PATH) {
// Ensure data directory exists (skip for in-memory databases)
if (dbPath !== ':memory:') {
ensureDir(DATA_DIR);
}
// Create database connection
this.db = new Database(dbPath, { create: true, readwrite: true });
// Apply optimized SQLite settings
this.db.run('PRAGMA journal_mode = WAL');
this.db.run('PRAGMA synchronous = NORMAL');
this.db.run('PRAGMA foreign_keys = ON');
@@ -46,23 +31,15 @@ export class ClaudeMemDatabase {
this.db.run(`PRAGMA mmap_size = ${SQLITE_MMAP_SIZE_BYTES}`);
this.db.run(`PRAGMA cache_size = ${SQLITE_CACHE_SIZE_PAGES}`);
// Run all migrations
const migrationRunner = new MigrationRunner(this.db);
migrationRunner.runAllMigrations();
}
/**
* Close the database connection
*/
close(): void {
this.db.close();
}
}
/**
* SQLite Database singleton with migration support and optimized settings
* @deprecated Use ClaudeMemDatabase instead for new code
*/
export class DatabaseManager {
private static instance: DatabaseManager;
private db: Database | null = null;
@@ -75,29 +52,20 @@ export class DatabaseManager {
return DatabaseManager.instance;
}
/**
* Register a migration to be run during initialization
*/
registerMigration(migration: Migration): void {
this.migrations.push(migration);
// Keep migrations sorted by version
this.migrations.sort((a, b) => a.version - b.version);
}
/**
* Initialize database connection with optimized settings
*/
async initialize(): Promise<Database> {
if (this.db) {
return this.db;
}
// Ensure the data directory exists
ensureDir(DATA_DIR);
this.db = new Database(DB_PATH, { create: true, readwrite: true });
// Apply optimized SQLite settings
this.db.run('PRAGMA journal_mode = WAL');
this.db.run('PRAGMA synchronous = NORMAL');
this.db.run('PRAGMA foreign_keys = ON');
@@ -105,19 +73,14 @@ export class DatabaseManager {
this.db.run(`PRAGMA mmap_size = ${SQLITE_MMAP_SIZE_BYTES}`);
this.db.run(`PRAGMA cache_size = ${SQLITE_CACHE_SIZE_PAGES}`);
// Initialize schema_versions table
this.initializeSchemaVersions();
// Run migrations
await this.runMigrations();
dbInstance = this.db;
return this.db;
}
/**
* Get the current database connection
*/
getConnection(): Database {
if (!this.db) {
throw new Error('Database not initialized. Call initialize() first.');
@@ -125,18 +88,12 @@ export class DatabaseManager {
return this.db;
}
/**
* Execute a function within a transaction
*/
withTransaction<T>(fn: (db: Database) => T): T {
const db = this.getConnection();
const transaction = db.transaction(fn);
return transaction(db);
}
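`withTransaction` leans on bun:sqlite's `db.transaction(fn)`, which returns a callable that wraps `fn` in BEGIN/COMMIT (ROLLBACK on throw). The shape of that contract, with a stub standing in for the real `Database` so the sketch runs anywhere:

```typescript
// Stub Database capturing only the pattern withTransaction depends on:
// transaction(fn) returns a wrapped callable that runs BEGIN / fn / COMMIT,
// rolling back if fn throws. (bun:sqlite provides this for real.)
class StubDb {
  log: string[] = [];
  run(sql: string): void { this.log.push(sql); }
  transaction<T>(fn: (db: StubDb) => T): (db: StubDb) => T {
    return (db: StubDb) => {
      db.run("BEGIN");
      try {
        const out = fn(db);
        db.run("COMMIT");
        return out;
      } catch (e) {
        db.run("ROLLBACK");
        throw e;
      }
    };
  }
}

// Same shape as DatabaseManager.withTransaction, minus the connection check.
function withTransaction<T>(db: StubDb, fn: (db: StubDb) => T): T {
  return db.transaction(fn)(db);
}

const db = new StubDb();
withTransaction(db, d => d.run("INSERT INTO t VALUES (1)"));
console.log(db.log.join(" | ")); // BEGIN | INSERT INTO t VALUES (1) | COMMIT
```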
/**
* Close the database connection
*/
close(): void {
if (this.db) {
this.db.close();
@@ -145,9 +102,6 @@ export class DatabaseManager {
}
}
/**
* Initialize the schema_versions table
*/
private initializeSchemaVersions(): void {
if (!this.db) return;
@@ -160,9 +114,6 @@ export class DatabaseManager {
`);
}
/**
* Run all pending migrations
*/
private async runMigrations(): Promise<void> {
if (!this.db) return;
@@ -188,9 +139,6 @@ export class DatabaseManager {
}
}
/**
* Get current schema version
*/
getCurrentVersion(): number {
if (!this.db) return 0;
@@ -201,9 +149,6 @@ export class DatabaseManager {
}
}
/**
* Get the global database instance (for compatibility)
*/
export function getDatabase(): Database {
if (!dbInstance) {
throw new Error('Database not initialized. Call DatabaseManager.getInstance().initialize() first.');
@@ -211,21 +156,15 @@ export function getDatabase(): Database {
return dbInstance;
}
/**
* Initialize and get database manager
*/
export async function initializeDatabase(): Promise<Database> {
const manager = DatabaseManager.getInstance();
return await manager.initialize();
}
// Re-export bun:sqlite Database type
export { Database };
// Re-export MigrationRunner for external use
export { MigrationRunner } from './migrations/runner.js';
// Re-export all module functions for convenient imports
export * from './Sessions.js';
export * from './Observations.js';
export * from './Summaries.js';
@@ -1,6 +1,3 @@
-/**
- * Import functions for bulk data import with duplicate checking
- */
import { logger } from '../../utils/logger.js';
export * from './import/bulk.js';
@@ -1,7 +1,3 @@
-/**
- * Observations module - named re-exports
- * Provides all observation-related database operations
- */
import { logger } from '../../utils/logger.js';
export * from './observations/types.js';
@@ -2,21 +2,6 @@ import { Database } from 'bun:sqlite';
import type { PendingMessage } from '../worker-types.js';
import { logger } from '../../utils/logger.js';
/**
* Provider for the set of currently-live worker PIDs.
*
* The self-healing claim query reclaims any 'processing' row whose
* worker_pid is NOT a live worker (crash recovery without a timer).
*
* Default: a single-worker process supplies just its own PID. Multi-worker
* deployments inject a callback backed by `supervisor/process-registry.ts`
* (`getSupervisor().getRegistry().getAll().filter(r => r.type === 'worker').map(r => r.pid)`).
*/
export type LiveWorkerPidsProvider = () => readonly number[];
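The reclaim rule this provider feeds can be stated as a pure predicate. This is a sketch, not the actual SQL; `isClaimable` and `Row` are hypothetical names.

```typescript
// A row is claimable iff it is 'pending', or it is 'processing' under a
// worker_pid that is absent from the live-pids set supplied by a
// LiveWorkerPidsProvider (i.e. its owner crashed).
type Status = 'pending' | 'processing';

interface Row { status: Status; worker_pid: number | null; }

function isClaimable(row: Row, livePids: readonly number[]): boolean {
  if (row.status === 'pending') return true;
  return row.worker_pid === null || !livePids.includes(row.worker_pid);
}

console.log(isClaimable({ status: 'pending', worker_pid: null }, [100]));   // true
console.log(isClaimable({ status: 'processing', worker_pid: 100 }, [100])); // false
console.log(isClaimable({ status: 'processing', worker_pid: 999 }, [100])); // true
```

The last case is the self-healing branch: the previous owner's PID is no longer live, so the row is eligible to be claimed again without any timer.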
/**
* Persistent pending message record from database
*/
export interface PersistentPendingMessage {
id: number;
session_db_id: number;
@@ -28,78 +13,22 @@ export interface PersistentPendingMessage {
cwd: string | null;
last_assistant_message: string | null;
prompt_number: number | null;
-status: 'pending' | 'processing' | 'processed' | 'failed';
-retry_count: number;
+status: 'pending' | 'processing';
created_at_epoch: number;
completed_at_epoch: number | null;
worker_pid: number | null;
// Claude Code subagent identity — NULL for main-session messages.
agent_type: string | null;
agent_id: string | null;
}
/**
* PendingMessageStore - Persistent work queue for SDK messages
*
* Messages are persisted before processing using a claim-confirm pattern.
* This simplifies the lifecycle and eliminates duplicate processing bugs.
*
* Lifecycle:
* 1. enqueue() - Message persisted with status 'pending'
* 2. claimNextMessage() - Atomically claims next pending message (marks as 'processing'
* and stamps the live worker's PID). Self-healing: reclaims any 'processing' row
* whose worker_pid is no longer alive (worker crash) in the same UPDATE.
* 3. confirmProcessed() - Deletes message after successful processing
*
* Self-healing semantics:
* A 'processing' row is reclaimable iff worker_pid IS NULL or worker_pid is
* not present in the live-pids list at claim time. No timer, no
* stale-cutoff timestamp: liveness is the truth.
*/
export class PendingMessageStore {
private db: Database;
private maxRetries: number;
private workerPid: number;
private getLiveWorkerPids: LiveWorkerPidsProvider;
/**
* @param db SQLite database
* @param maxRetries Per-message retry ceiling for transient SDK failures (default 3)
* @param workerPid PID of the worker that owns this store; stamped into worker_pid on claim.
* Defaults to process.pid so single-process deployments need no extra wiring.
* @param getLiveWorkerPids Provider for the set of all currently-live worker PIDs.
* Defaults to `[workerPid]`, i.e. only this worker is alive.
* Multi-worker deployments inject a supervisor-backed provider.
*/
constructor(
db: Database,
-maxRetries: number = 3,
-workerPid: number = process.pid,
-getLiveWorkerPids?: LiveWorkerPidsProvider
+private onMutate?: () => void
) {
this.db = db;
-this.maxRetries = maxRetries;
-this.workerPid = workerPid;
-this.getLiveWorkerPids = getLiveWorkerPids ?? (() => [this.workerPid]);
}
/**
* Enqueue a new message (persist before processing).
*
* Uses `INSERT OR IGNORE` so duplicate (content_session_id, tool_use_id)
* pairs collapse to a single row; the UNIQUE INDEX added in plan 01 phase 1
* is the authority on tool-use idempotency. Per principle 3 (UNIQUE
* constraint over dedup window), we don't time-gate duplicates.
*
* @returns The database ID of the persisted message, or 0 when the insert
* was suppressed by ON CONFLICT. Callers MUST guard with `id > 0`
* before threading the value into any subsequent SQL (e.g.
* `confirmProcessed`, `markFailed`, `processingMessageIds`);
* a zero id would silently target zero rows. The only two call
* sites today (`SessionManager.queueObservation` and
* `queueSummarize`) use the id purely for logging and both
* branch on `messageId === 0`.
*/
enqueue(sessionDbId: number, contentSessionId: string, message: PendingMessage): number {
const now = Date.now();
const stmt = this.db.prepare(`
@@ -107,9 +36,9 @@ export class PendingMessageStore {
session_db_id, content_session_id, tool_use_id, message_type,
tool_name, tool_input, tool_response, cwd,
last_assistant_message,
-prompt_number, status, retry_count, created_at_epoch,
+prompt_number, status, created_at_epoch,
agent_type, agent_id
-) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'pending', 0, ?, ?, ?)
+) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, 'pending', ?, ?, ?)
`);
const result = stmt.run(
@@ -128,180 +57,62 @@ export class PendingMessageStore {
message.agentId ?? null
);
this.onMutate?.();
return result.lastInsertRowid as number;
}
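The `id > 0` contract from the enqueue() doc comment can be emulated with a Map standing in for the UNIQUE (content_session_id, tool_use_id) index. All names below are hypothetical; the real path goes through INSERT OR IGNORE.

```typescript
// A suppressed duplicate yields id 0 (SQLite's lastInsertRowid is not
// updated when ON CONFLICT ignores the insert), so callers must branch
// on id > 0 before threading the value into any subsequent SQL.
const seen = new Map<string, number>();
let nextId = 1;

function enqueue(contentSessionId: string, toolUseId: string): number {
  const key = `${contentSessionId}:${toolUseId}`;
  if (seen.has(key)) return 0; // INSERT OR IGNORE suppressed the row
  const id = nextId++;
  seen.set(key, id);
  return id;
}

const first = enqueue('sess-1', 'tool-1');
const dup = enqueue('sess-1', 'tool-1');
console.log(first > 0, dup === 0); // true true
```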
/**
* Atomically claim the next message for `sessionDbId`.
*
* A row is claimable iff:
* - status = 'pending', OR
* - status = 'processing' AND worker_pid is not in the live-pids set
* (i.e. the previous owner crashed). This is the self-healing branch:
* liveness is checked at claim time, not by a background reaper.
*
* The claim stamps the live worker's PID and flips status to 'processing'
* in a single UPDATE WHERE id = (subquery).
*/
claimNextMessage(sessionDbId: number): PersistentPendingMessage | null {
-// Build a parameterized IN-list of live worker PIDs. We always include
-// this worker's PID so that an in-flight claim doesn't accidentally
-// self-reclaim a row we just stamped (the predicate is "NOT IN live").
-const livePids = this.getLivePidsIncludingSelf();
-const placeholders = livePids.map(() => '?').join(',');
const sql = `
UPDATE pending_messages
-SET status = 'processing',
-worker_pid = ?
+SET status = 'processing'
WHERE id = (
SELECT id FROM pending_messages
-WHERE session_db_id = ?
-AND (
-status = 'pending'
-OR (status = 'processing' AND (worker_pid IS NULL OR worker_pid NOT IN (${placeholders})))
-)
+WHERE session_db_id = ? AND status = 'pending'
ORDER BY id ASC
LIMIT 1
)
RETURNING *
`;
-const stmt = this.db.prepare(sql);
-const params: (number | string)[] = [this.workerPid, sessionDbId, ...livePids];
-const claimed = stmt.get(...params) as PersistentPendingMessage | null;
+const claimed = this.db.prepare(sql).get(sessionDbId) as PersistentPendingMessage | null;
if (claimed) {
-logger.info('QUEUE', `CLAIMED | sessionDbId=${sessionDbId} | messageId=${claimed.id} | type=${claimed.message_type} | workerPid=${this.workerPid}`, {
+logger.info('QUEUE', `CLAIMED | sessionDbId=${sessionDbId} | messageId=${claimed.id} | type=${claimed.message_type}`, {
sessionId: sessionDbId
});
}
this.onMutate?.();
return claimed;
}
private getLivePidsIncludingSelf(): number[] {
const pids = this.getLiveWorkerPids();
if (pids.includes(this.workerPid)) return [...pids];
return [...pids, this.workerPid];
}
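The removed claim path built its `NOT IN` list with one `?` placeholder per live PID, passing the PIDs as bind parameters rather than interpolating them into the SQL. As a standalone sketch (the helper name is hypothetical):

```typescript
// Build "col NOT IN (?,?,...)" with one placeholder per value, returning
// the values separately so they can be bound as parameters. Never
// string-interpolate values into SQL, even trusted integers like PIDs.
function buildNotInClause(
  column: string,
  values: readonly number[]
): { sql: string; params: number[] } {
  const placeholders = values.map(() => '?').join(',');
  return { sql: `${column} NOT IN (${placeholders})`, params: [...values] };
}

const { sql, params } = buildNotInClause('worker_pid', [100, 200]);
console.log(sql);                   // worker_pid NOT IN (?,?)
console.log(JSON.stringify(params)); // [100,200]
```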
/**
* Confirm a message was successfully processed - DELETE it from the queue.
* CRITICAL: Only call this AFTER the observation/summary has been stored to DB.
* This prevents message loss on generator crash.
*/
-confirmProcessed(messageId: number): void {
-const stmt = this.db.prepare('DELETE FROM pending_messages WHERE id = ?');
-const result = stmt.run(messageId);
-if (result.changes > 0) {
-logger.debug('QUEUE', `CONFIRMED | messageId=${messageId} | deleted from queue`);
clearPendingForSession(sessionDbId: number): number {
const stmt = this.db.prepare(`
DELETE FROM pending_messages WHERE session_db_id = ?
`);
const changes = stmt.run(sessionDbId).changes;
if (changes > 0) {
logger.info('QUEUE', `CLEARED | sessionDbId=${sessionDbId} | rowsDeleted=${changes}`, {
sessionId: sessionDbId
});
this.onMutate?.();
}
return changes;
}
/**
* Delete `status='failed'` rows older than `thresholdMs`. Called once at
* worker startup so `pending_messages` does not grow unbounded on long-
* running or high-failure-rate installations; `claimNextMessage`'s
* self-healing subquery scans this table, so bounded rows keep claim
* latency predictable. Not a reaper; one-shot, idempotent.
*/
clearFailedOlderThan(thresholdMs: number): number {
const cutoff = Date.now() - thresholdMs;
const stmt = this.db.prepare(`
DELETE FROM pending_messages
WHERE status = 'failed' AND COALESCE(failed_at_epoch, completed_at_epoch, 0) < ?
`);
return stmt.run(cutoff).changes;
}
/**
* Get all pending messages for session (ordered by creation time)
*/
getAllPending(sessionDbId: number): PersistentPendingMessage[] {
const stmt = this.db.prepare(`
SELECT * FROM pending_messages
WHERE session_db_id = ? AND status = 'pending'
ORDER BY id ASC
`);
return stmt.all(sessionDbId) as PersistentPendingMessage[];
}
/**
* Transition pending_messages rows to a terminal status (PATHFINDER-2026-04-22,
* Plan 06 Phase 9). One SQL UPDATE path, one place to add a new terminal status
* later, zero divergence between call sites.
*
* - `failed` (narrow form): only rows currently `status='processing'`.
* Used during error recovery when a session generator crashes and we want
* to mark its in-flight messages failed without touching rows that never
* left `pending`.
*
* - `abandoned` (wide form): rows in `('pending', 'processing')`.
* Used during session termination or completion drain so the session
* doesn't appear in `getSessionsWithPendingMessages` forever. Both forms
* write the row's `status` column to `'failed'`; `abandoned` is just the
* broader WHERE clause.
*
* Cites Principle 6 (one helper, N callers) and Principle 7 (the
* old per-status wrapper methods were deleted in the same PR).
*
* @param status `'failed'` (processing-only) or `'abandoned'` (pending+processing)
* @param filter `{ sessionDbId: number }` scope to one session's rows.
* Required: no unscoped path exists, to prevent accidental global drain.
* @returns Number of rows updated
*/
-transitionMessagesTo(
-status: 'failed' | 'abandoned',
-filter: { sessionDbId: number }
-): number {
-const now = Date.now();
-const statusClause = status === 'failed'
-? `status = 'processing'`
-: `status IN ('pending', 'processing')`;
+resetProcessingToPending(sessionDbId: number): number {
const stmt = this.db.prepare(`
UPDATE pending_messages
-SET status = 'failed', failed_at_epoch = ?
-WHERE session_db_id = ? AND ${statusClause}
+SET status = 'pending'
+WHERE session_db_id = ? AND status = 'processing'
`);
-return stmt.run(now, filter.sessionDbId).changes;
}
-/**
- * Mark message as failed (status: pending -> failed or back to pending for retry)
- * If retry_count < maxRetries, moves back to 'pending' for retry
- * Otherwise marks as 'failed' permanently
- */
-markFailed(messageId: number): void {
-const now = Date.now();
-// Get current retry count
-const msg = this.db.prepare('SELECT retry_count FROM pending_messages WHERE id = ?').get(messageId) as { retry_count: number } | undefined;
-if (!msg) return;
-if (msg.retry_count < this.maxRetries) {
-// Move back to pending for retry
-const stmt = this.db.prepare(`
-UPDATE pending_messages
-SET status = 'pending', retry_count = retry_count + 1, worker_pid = NULL
-WHERE id = ?
-`);
-stmt.run(messageId);
-} else {
-// Max retries exceeded, mark as permanently failed
-const stmt = this.db.prepare(`
-UPDATE pending_messages
-SET status = 'failed', completed_at_epoch = ?
-WHERE id = ?
-`);
-stmt.run(now, messageId);
const changes = stmt.run(sessionDbId).changes;
if (changes > 0) {
logger.info('QUEUE', `RESET_PROCESSING | sessionDbId=${sessionDbId} | rowsReset=${changes}`, {
sessionId: sessionDbId
});
this.onMutate?.();
}
return changes;
}
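The retry ladder described above (markFailed's pending/failed escalation) reduces to a small state function. This is a sketch assuming the default ceiling of 3; `Msg` and the function shape are illustrative, the real code issues UPDATEs.

```typescript
// Below the retry ceiling a failed message goes back to 'pending' with the
// counter bumped; at the ceiling it is parked as 'failed' permanently.
interface Msg {
  status: 'pending' | 'processing' | 'failed';
  retry_count: number;
}

function markFailed(msg: Msg, maxRetries: number = 3): Msg {
  if (msg.retry_count < maxRetries) {
    return { status: 'pending', retry_count: msg.retry_count + 1 };
  }
  return { status: 'failed', retry_count: msg.retry_count };
}

console.log(markFailed({ status: 'processing', retry_count: 0 })); // { status: 'pending', retry_count: 1 }
console.log(markFailed({ status: 'processing', retry_count: 3 })); // { status: 'failed', retry_count: 3 }
```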
/**
* Get count of pending messages for a session
*/
getPendingCount(sessionDbId: number): number {
const stmt = this.db.prepare(`
SELECT COUNT(*) as count FROM pending_messages
@@ -311,10 +122,6 @@ export class PendingMessageStore {
return result.count;
}
/**
* Peek at pending message types for a session (for tier routing).
* Returns list of { message_type, tool_name } without claiming.
*/
peekPendingTypes(sessionDbId: number): Array<{ message_type: string; tool_name: string | null }> {
const stmt = this.db.prepare(`
SELECT message_type, tool_name FROM pending_messages
@@ -324,51 +131,6 @@ export class PendingMessageStore {
return stmt.all(sessionDbId) as Array<{ message_type: string; tool_name: string | null }>;
}
/**
* Check if any session has work that could be claimed right now.
*
* Counts a row as work iff it is 'pending' or it is 'processing' under a
* worker_pid that is not currently alive (the same predicate the
* self-healing claim uses). No side effects: no UPDATE, no timer.
*/
hasAnyPendingWork(): boolean {
const livePids = this.getLivePidsIncludingSelf();
const placeholders = livePids.map(() => '?').join(',');
const stmt = this.db.prepare(`
SELECT COUNT(*) as count FROM pending_messages
WHERE status = 'pending'
OR (status = 'processing' AND (worker_pid IS NULL OR worker_pid NOT IN (${placeholders})))
`);
const result = stmt.get(...livePids) as { count: number };
return result.count > 0;
}
/**
* Get all session IDs that have pending messages (for recovery on startup)
*/
getSessionsWithPendingMessages(): number[] {
const stmt = this.db.prepare(`
SELECT DISTINCT session_db_id FROM pending_messages
WHERE status IN ('pending', 'processing')
`);
const results = stmt.all() as { session_db_id: number }[];
return results.map(r => r.session_db_id);
}
/**
* Get session info for a pending message (for recovery)
*/
getSessionInfoForMessage(messageId: number): { sessionDbId: number; contentSessionId: string } | null {
const stmt = this.db.prepare(`
SELECT session_db_id, content_session_id FROM pending_messages WHERE id = ?
`);
const result = stmt.get(messageId) as { session_db_id: number; content_session_id: string } | undefined;
return result ? { sessionDbId: result.session_db_id, contentSessionId: result.content_session_id } : null;
}
/**
* Convert a PersistentPendingMessage back to PendingMessage format
*/
toPendingMessage(persistent: PersistentPendingMessage): PendingMessage {
return {
type: persistent.message_type,
@@ -1,9 +1,3 @@
-/**
- * User prompts module - named re-exports
- *
- * Provides all user prompt database operations as standalone functions.
- * Each function takes `db: Database` as first parameter.
- */
import { logger } from '../../utils/logger.js';
export * from './prompts/types.js';
@@ -15,11 +15,6 @@ import {
UserPromptRow
} from './types.js';
/**
* Search interface for session-based memory
* Provides filter-only structured queries for sessions, observations, and user prompts
* Vector search is handled by ChromaDB - this class only supports filtering without query text
*/
export class SessionSearch {
private db: Database;
@@ -34,45 +29,21 @@ export class SessionSearch {
this.db.run('PRAGMA journal_mode = WAL');
}
// Cache FTS5 availability once at construction (avoids DDL probe on every query)
this._fts5Available = this.isFts5Available();
// Ensure FTS tables exist — may downgrade _fts5Available if creation fails
this.ensureFTSTables();
}
private _fts5Available: boolean;
/**
* Ensure FTS5 tables exist (backward compatibility only - no longer used for search)
*
* FTS5 tables are maintained for backward compatibility but not used for search.
* Vector search (Chroma) is now the primary search mechanism.
*
* Retention Rationale:
* - Prevents breaking existing installations with FTS5 tables
* - Allows graceful migration path for users
* - Tables maintained but search paths removed
* - Triggers still fire to keep tables synchronized
*
* FTS5 may be unavailable on some platforms (e.g., Bun on Windows #791).
* When unavailable, we skip FTS table creation; search falls back to
* ChromaDB (vector) and LIKE queries (structured filters) which are unaffected.
*
* TODO: Remove FTS5 infrastructure in future major version (v7.0.0)
*/
private ensureFTSTables(): void {
// Check if FTS tables already exist
const tables = this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name LIKE '%_fts'").all() as TableNameRow[];
const hasFTS = tables.some(t => t.name === 'observations_fts' || t.name === 'session_summaries_fts');
if (hasFTS) {
// Already migrated
return;
}
// Runtime check: verify FTS5 is available before attempting to create tables.
// bun:sqlite on Windows may not include the FTS5 extension (#791).
if (!this.isFts5Available()) {
logger.warn('DB', 'FTS5 not available on this platform — skipping FTS table creation (search uses ChromaDB)');
return;
@@ -84,33 +55,22 @@ export class SessionSearch {
this.createFTSTablesAndTriggers();
logger.info('DB', 'FTS5 tables created successfully');
} catch (error) {
// FTS5 creation failed at runtime despite probe succeeding — degrade gracefully
this._fts5Available = false;
logger.warn('DB', 'FTS5 table creation failed — search will use ChromaDB and LIKE queries', {}, error instanceof Error ? error : undefined);
}
}
/**
* Probe whether the FTS5 extension is available in the current SQLite build.
* Creates and immediately drops a temporary FTS5 table.
*/
private isFts5Available(): boolean {
try {
this.db.run('CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)');
this.db.run('DROP TABLE _fts5_probe');
return true;
} catch {
// [ANTI-PATTERN IGNORED]: FTS5 unavailability is an expected platform condition, not an error
return false;
}
}
/**
* Create FTS5 virtual tables and sync triggers for observations and session_summaries.
* Extracted from ensureFTSTables to keep try block small.
*/
private createFTSTablesAndTriggers(): void {
// Create observations_fts virtual table
this.db.run(`
CREATE VIRTUAL TABLE IF NOT EXISTS observations_fts USING fts5(
title,
@@ -124,14 +84,12 @@ export class SessionSearch {
);
`);
// Populate with existing data
this.db.run(`
INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
SELECT id, title, subtitle, narrative, text, facts, concepts
FROM observations;
`);
// Create triggers for observations
this.db.run(`
CREATE TRIGGER IF NOT EXISTS observations_ai AFTER INSERT ON observations BEGIN
INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
@@ -151,7 +109,6 @@ export class SessionSearch {
END;
`);
// Create session_summaries_fts virtual table
this.db.run(`
CREATE VIRTUAL TABLE IF NOT EXISTS session_summaries_fts USING fts5(
request,
@@ -165,14 +122,12 @@ export class SessionSearch {
);
`);
// Populate with existing data
this.db.run(`
INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
SELECT id, request, investigated, learned, completed, next_steps, notes
FROM session_summaries;
`);
// Create triggers for session_summaries
this.db.run(`
CREATE TRIGGER IF NOT EXISTS session_summaries_ai AFTER INSERT ON session_summaries BEGIN
INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
@@ -193,9 +148,6 @@ export class SessionSearch {
`);
}
/**
* Build WHERE clause for structured filters
*/
private buildFilterClause(
filters: SearchFilters,
params: any[],
@@ -203,13 +155,11 @@ export class SessionSearch {
): string {
const conditions: string[] = [];
// Project filter
if (filters.project) {
conditions.push(`${tableAlias}.project = ?`);
params.push(filters.project);
}
// Type filter (for observations only)
if (filters.type) {
if (Array.isArray(filters.type)) {
const placeholders = filters.type.map(() => '?').join(',');
@@ -221,7 +171,6 @@ export class SessionSearch {
}
}
// Date range filter
if (filters.dateRange) {
const { start, end } = filters.dateRange;
if (start) {
@@ -236,7 +185,6 @@ export class SessionSearch {
}
}
// Concepts filter (JSON array search)
if (filters.concepts) {
const concepts = Array.isArray(filters.concepts) ? filters.concepts : [filters.concepts];
const conceptConditions = concepts.map(() => {
@@ -248,7 +196,6 @@ export class SessionSearch {
}
}
// Files filter (JSON array search)
if (filters.files) {
const files = Array.isArray(filters.files) ? filters.files : [filters.files];
const fileConditions = files.map(() => {
@@ -268,9 +215,6 @@ export class SessionSearch {
return conditions.length > 0 ? conditions.join(' AND ') : '';
}
/**
* Build ORDER BY clause
*/
private buildOrderClause(orderBy: SearchOptions['orderBy'] = 'relevance', hasFTS: boolean = true, ftsTable: string = 'observations_fts'): string {
switch (orderBy) {
case 'relevance':
@@ -284,16 +228,10 @@ export class SessionSearch {
}
}
/**
* Search observations using filter-only direct SQLite query.
* Vector search is handled by ChromaDB - this only supports filtering without query text.
*/
searchObservations(query: string | undefined, options: SearchOptions = {}): ObservationSearchResult[] {
const params: any[] = [];
const { limit = 50, offset = 0, orderBy = 'relevance', ...filters } = options;
// FILTER-ONLY PATH: When no query text, query table directly
// This enables date filtering which Chroma cannot do (requires direct SQLite access)
if (!query) {
const filterClause = this.buildFilterClause(filters, params, 'o');
if (!filterClause) {
@@ -314,7 +252,6 @@ export class SessionSearch {
return this.db.prepare(sql).all(...params) as ObservationSearchResult[];
}
// FTS5 keyword fallback when ChromaDB is unavailable (#1913, #2048)
if (this._fts5Available) {
const filterClause = this.buildFilterClause(filters, params, 'o');
const orderClause = this.buildOrderClause(orderBy, true, 'observations_fts');
@@ -329,7 +266,6 @@ export class SessionSearch {
LIMIT ? OFFSET ?
`;
// Escape FTS5 special characters: wrap in quotes to treat as literal phrase
const escapedQuery = '"' + query.replace(/"/g, '""') + '"';
params.unshift(escapedQuery);
params.push(limit, offset);
@@ -337,7 +273,6 @@ export class SessionSearch {
try {
return this.db.prepare(sql).all(...params) as ObservationSearchResult[];
} catch (error) {
// Re-throw so callers can distinguish FTS failure from "no results"
logger.warn('DB', 'FTS5 observation search failed', {}, error instanceof Error ? error : undefined);
throw error;
}
@@ -347,15 +282,10 @@ export class SessionSearch {
return [];
}
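The FTS5 escaping used above wraps the query in double quotes and doubles any embedded quote, so MATCH treats the input as a literal phrase instead of FTS5 query syntax. Extracted as a standalone helper (the function name is illustrative):

```typescript
// Quote-wrap and double embedded quotes so characters like -, *, and "
// in user input cannot be parsed as FTS5 operators.
function escapeFts5Phrase(query: string): string {
  return '"' + query.replace(/"/g, '""') + '"';
}

console.log(escapeFts5Phrase('hello world'));  // "hello world"
console.log(escapeFts5Phrase('say "hi" now')); // "say ""hi"" now"
```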
/**
* Search session summaries using filter-only direct SQLite query.
* Vector search is handled by ChromaDB - this only supports filtering without query text.
*/
searchSessions(query: string | undefined, options: SearchOptions = {}): SessionSummarySearchResult[] {
const params: any[] = [];
const { limit = 50, offset = 0, orderBy = 'relevance', ...filters } = options;
// FILTER-ONLY PATH: When no query text, query session_summaries table directly
if (!query) {
const filterOptions = { ...filters };
delete filterOptions.type;
@@ -380,7 +310,6 @@ export class SessionSearch {
return this.db.prepare(sql).all(...params) as SessionSummarySearchResult[];
}
// FTS5 keyword fallback when ChromaDB is unavailable (#1913, #2048)
if (this._fts5Available) {
const filterOptions = { ...filters };
delete filterOptions.type;
@@ -402,7 +331,6 @@ export class SessionSearch {
LIMIT ? OFFSET ?
`;
// Escape FTS5 special characters: wrap in quotes to treat as literal phrase
const escapedQuery = '"' + query.replace(/"/g, '""') + '"';
params.unshift(escapedQuery);
params.push(limit, offset);
@@ -410,7 +338,6 @@ export class SessionSearch {
try {
return this.db.prepare(sql).all(...params) as SessionSummarySearchResult[];
} catch (error) {
// Re-throw so callers can distinguish FTS failure from "no results"
logger.warn('DB', 'FTS5 session search failed', {}, error instanceof Error ? error : undefined);
throw error;
}
@@ -420,14 +347,10 @@ export class SessionSearch {
return [];
}
/**
* Find observations by concept tag
*/
findByConcept(concept: string, options: SearchOptions = {}): ObservationSearchResult[] {
const params: any[] = [];
const { limit = 50, offset = 0, orderBy = 'date_desc', ...filters } = options;
// Add concept to filters
const conceptFilters = { ...filters, concepts: concept };
const filterClause = this.buildFilterClause(conceptFilters, params, 'o');
const orderClause = this.buildOrderClause(orderBy, false);
@@ -445,9 +368,6 @@ export class SessionSearch {
return this.db.prepare(sql).all(...params) as ObservationSearchResult[];
}
/**
* Check if an observation has any files that are direct children of the folder
*/
private hasDirectChildFile(obs: ObservationSearchResult, folderPath: string): boolean {
const checkFiles = (filesJson: string | null): boolean => {
if (!filesJson) return false;
@@ -465,9 +385,6 @@ export class SessionSearch {
return checkFiles(obs.files_modified) || checkFiles(obs.files_read);
}
/**
* Check if a session has any files that are direct children of the folder
*/
private hasDirectChildFileSession(session: SessionSummarySearchResult, folderPath: string): boolean {
const checkFiles = (filesJson: string | null): boolean => {
if (!filesJson) return false;
@@ -485,10 +402,6 @@ export class SessionSearch {
return checkFiles(session.files_read) || checkFiles(session.files_edited);
}
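The direct-child test used by the isFolder post-filter can be sketched as a pure path predicate. This is an assumption about the behavior, not the actual implementation (which parses JSON file lists); '/' separators are assumed.

```typescript
// A file counts as a direct child only if it sits immediately inside the
// folder: the folder path is a prefix, and the remainder contains no
// further '/' (i.e. no subfolder component).
function isDirectChild(filePath: string, folderPath: string): boolean {
  const prefix = folderPath.endsWith('/') ? folderPath : folderPath + '/';
  if (!filePath.startsWith(prefix)) return false;
  return !filePath.slice(prefix.length).includes('/');
}

console.log(isDirectChild('src/db/Database.ts', 'src/db'));          // true
console.log(isDirectChild('src/db/migrations/runner.ts', 'src/db')); // false
```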
/**
* Find observations and summaries by file path
* When isFolder=true, only returns results with files directly in the folder (not subfolders)
*/
findByFile(filePath: string, options: SearchOptions = {}): {
observations: ObservationSearchResult[];
sessions: SessionSummarySearchResult[];
@@ -496,10 +409,8 @@ export class SessionSearch {
const params: any[] = [];
const { limit = 50, offset = 0, orderBy = 'date_desc', isFolder = false, ...filters } = options;
// Query more results if we're filtering to direct children
const queryLimit = isFolder ? limit * 3 : limit;
// Add file to filters
const fileFilters = { ...filters, files: filePath };
const filterClause = this.buildFilterClause(fileFilters, params, 'o');
const orderClause = this.buildOrderClause(orderBy, false);
@@ -516,15 +427,13 @@ export class SessionSearch {
let observations = this.db.prepare(observationsSql).all(...params) as ObservationSearchResult[];
// Post-filter to direct children if isFolder mode
if (isFolder) {
observations = observations.filter(obs => this.hasDirectChildFile(obs, filePath)).slice(0, limit);
}
// For session summaries, search files_read and files_edited
const sessionParams: any[] = [];
const sessionFilters = { ...filters };
-delete sessionFilters.type; // Remove type filter for sessions
+delete sessionFilters.type;
const baseConditions: string[] = [];
if (sessionFilters.project) {
@@ -546,7 +455,6 @@ export class SessionSearch {
}
}
// File condition
baseConditions.push(`(
EXISTS (SELECT 1 FROM json_each(s.files_read) WHERE value LIKE ?)
OR EXISTS (SELECT 1 FROM json_each(s.files_edited) WHERE value LIKE ?)
@@ -565,7 +473,6 @@ export class SessionSearch {
let sessions = this.db.prepare(sessionsSql).all(...sessionParams) as SessionSummarySearchResult[];
// Post-filter to direct children if isFolder mode
if (isFolder) {
sessions = sessions.filter(s => this.hasDirectChildFileSession(s, filePath)).slice(0, limit);
}
@@ -573,9 +480,6 @@ export class SessionSearch {
return { observations, sessions };
}
/**
* Find observations by type
*/
findByType(
type: ObservationRow['type'] | ObservationRow['type'][],
options: SearchOptions = {}
@@ -583,7 +487,6 @@ export class SessionSearch {
const params: any[] = [];
const { limit = 50, offset = 0, orderBy = 'date_desc', ...filters } = options;
// Add type to filters
const typeFilters = { ...filters, type };
const filterClause = this.buildFilterClause(typeFilters, params, 'o');
const orderClause = this.buildOrderClause(orderBy, false);
@@ -601,15 +504,10 @@ export class SessionSearch {
return this.db.prepare(sql).all(...params) as ObservationSearchResult[];
}
/**
* Search user prompts using filter-only direct SQLite query.
* Vector search is handled by ChromaDB - this only supports filtering without query text.
*/
searchUserPrompts(query: string | undefined, options: SearchOptions = {}): UserPromptSearchResult[] {
const params: any[] = [];
const { limit = 20, offset = 0, orderBy = 'relevance', ...filters } = options;
// Build filter conditions (join with sdk_sessions for project filtering)
const baseConditions: string[] = [];
if (filters.project) {
baseConditions.push('s.project = ?');
@@ -630,7 +528,6 @@ export class SessionSearch {
}
}
// FILTER-ONLY PATH: When no query text, query user_prompts table directly
if (!query) {
if (baseConditions.length === 0) {
throw new AppError(SessionSearch.MISSING_SEARCH_INPUT_MESSAGE, 400, 'INVALID_SEARCH_REQUEST');
@@ -654,8 +551,6 @@ export class SessionSearch {
return this.db.prepare(sql).all(...params) as UserPromptSearchResult[];
}
// LIKE fallback for user prompts text search (no FTS table for this entity)
// Escape LIKE metacharacters so %, _, and \ in user input are treated as literals
const escapedQuery = query.replace(/[\\%_]/g, '\\$&');
baseConditions.push("up.prompt_text LIKE ? ESCAPE '\\'");
params.push(`%${escapedQuery}%`);
@@ -678,9 +573,6 @@ export class SessionSearch {
return this.db.prepare(sql).all(...params) as UserPromptSearchResult[];
}
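The LIKE-metacharacter escaping above backslash-escapes `\`, `%`, and `_` so user input matches literally under `ESCAPE '\'`. As a standalone helper (the name is illustrative):

```typescript
// $& in the replacement is the matched character, so each of \, %, _ is
// prefixed with a single backslash. Pair with: LIKE ? ESCAPE '\'
function escapeLike(input: string): string {
  return input.replace(/[\\%_]/g, '\\$&');
}

console.log(escapeLike('100%_done')); // 100\%\_done
console.log(escapeLike('plain'));     // plain
```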
/**
* Get all prompts for a session by content_session_id
*/
getUserPromptsBySession(contentSessionId: string): UserPromptRow[] {
const stmt = this.db.prepare(`
SELECT
@@ -698,9 +590,6 @@ export class SessionSearch {
return stmt.all(contentSessionId) as UserPromptRow[];
}
/**
* Close the database connection
*/
close(): void {
this.db.close();
}
File diff suppressed because it is too large.
@@ -1,10 +1,3 @@
-/**
- * Sessions module - re-exports all session-related functions
- *
- * Usage:
- * import { createSDKSession, getSessionById } from './Sessions.js';
- * const sessionId = createSDKSession(db, contentId, project, prompt);
- */
import { logger } from '../../utils/logger.js';
export * from './sessions/types.js';
@@ -1,6 +1,3 @@
-/**
- * Summaries module - Named re-exports for summary-related database operations
- */
import { logger } from '../../utils/logger.js';
export * from './summaries/types.js';
@@ -1,9 +1,3 @@
-/**
- * Timeline module re-exports
- * Provides time-based context queries for observations, sessions, and prompts
- *
- * grep-friendly: Timeline, getTimelineAroundTimestamp, getTimelineAroundObservation, getAllProjects
- */
import { logger } from '../../utils/logger.js';
export * from './timeline/queries.js';
@@ -1,6 +1,3 @@
-/**
- * Bulk import functions for importing data with duplicate checking
- */
import { Database } from 'bun:sqlite';
import { logger } from '../../../utils/logger.js';
@@ -10,10 +7,6 @@ export interface ImportResult {
id: number;
}
-/**
- * Import SDK session with duplicate checking
- * Duplicates are identified by content_session_id
- */
export function importSdkSession(
db: Database,
session: {
@@ -28,7 +21,6 @@ export function importSdkSession(
status: string;
}
): ImportResult {
-// Check if session already exists
const existing = db
.prepare('SELECT id FROM sdk_sessions WHERE content_session_id = ?')
.get(session.content_session_id) as { id: number } | undefined;
@@ -59,10 +51,6 @@ export function importSdkSession(
return { imported: true, id: result.lastInsertRowid as number };
}
/**
* Import session summary with duplicate checking
* Duplicates are identified by memory_session_id
*/
export function importSessionSummary(
db: Database,
summary: {
@@ -82,7 +70,6 @@ export function importSessionSummary(
created_at_epoch: number;
}
): ImportResult {
// Check if summary already exists for this session
const existing = db
.prepare('SELECT id FROM session_summaries WHERE memory_session_id = ?')
.get(summary.memory_session_id) as { id: number } | undefined;
@@ -119,10 +106,6 @@ export function importSessionSummary(
return { imported: true, id: result.lastInsertRowid as number };
}
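The import helpers above all share one check-then-insert shape: look up the natural key, return the existing id on a hit, otherwise insert and report the new rowid. A minimal sketch of that pattern, using Python's stdlib `sqlite3` for a self-contained illustration (the real code uses bun:sqlite, and the two-column schema here is simplified):

```python
import sqlite3

def import_row(db: sqlite3.Connection, session_id: str, status: str) -> tuple[bool, int]:
    # Duplicates are identified by content_session_id, mirroring importSdkSession
    row = db.execute(
        "SELECT id FROM sdk_sessions WHERE content_session_id = ?", (session_id,)
    ).fetchone()
    if row is not None:
        return (False, row[0])          # duplicate: report the existing id
    cur = db.execute(
        "INSERT INTO sdk_sessions (content_session_id, status) VALUES (?, ?)",
        (session_id, status),
    )
    return (True, cur.lastrowid)        # imported: report the new rowid

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE sdk_sessions ("
    "id INTEGER PRIMARY KEY AUTOINCREMENT, content_session_id TEXT, status TEXT)"
)
print(import_row(db, "abc", "active"))   # (True, 1)
print(import_row(db, "abc", "active"))   # (False, 1)
```

Note the check and insert are two statements, so concurrent writers could still race; the real importers run inside a single-writer worker, where this is safe.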
/**
* Import observation with duplicate checking
* Duplicates are identified by memory_session_id + title + created_at_epoch
*/
export function importObservation(
db: Database,
obs: {
@@ -145,7 +128,6 @@ export function importObservation(
agent_id?: string | null;
}
): ImportResult {
// Check if observation already exists
const existing = db
.prepare(
`
@@ -193,10 +175,6 @@ export function importObservation(
return { imported: true, id: result.lastInsertRowid as number };
}
/**
* Import user prompt with duplicate checking
* Duplicates are identified by content_session_id + prompt_number
*/
export function importUserPrompt(
db: Database,
prompt: {
@@ -207,7 +185,6 @@ export function importUserPrompt(
created_at_epoch: number;
}
): ImportResult {
// Check if prompt already exists
const existing = db
.prepare(
`
@@ -1,4 +1,3 @@
// Export main components
export {
ClaudeMemDatabase,
DatabaseManager,
@@ -7,23 +6,16 @@ export {
MigrationRunner
} from './Database.js';
// Export session store (CRUD operations for sessions, observations, summaries)
// @deprecated Use modular functions from Database.ts instead
export { SessionStore } from './SessionStore.js';
// Export session search (FTS5 and structured search)
export { SessionSearch } from './SessionSearch.js';
// Export types
export * from './types.js';
// Export migrations
export { migrations } from './migrations.js';
// Export transactions
export { storeObservations, storeObservationsAndMarkComplete } from './transactions.js';
// Re-export all modular functions for convenient access
export * from './Sessions.js';
export * from './Observations.js';
export * from './Summaries.js';
@@ -2,16 +2,11 @@ import { Database } from 'bun:sqlite';
import { Migration } from './Database.js';
import { logger } from '../../utils/logger.js';
// Re-export MigrationRunner for SessionStore migration extraction
export { MigrationRunner } from './migrations/runner.js';
/**
* Initial schema migration - creates all core tables
*/
export const migration001: Migration = {
version: 1,
up: (db: Database) => {
// Sessions table - core session tracking
db.run(`
CREATE TABLE IF NOT EXISTS sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -32,7 +27,6 @@ export const migration001: Migration = {
CREATE INDEX IF NOT EXISTS idx_sessions_project_created ON sessions(project, created_at_epoch DESC);
`);
// Memories table - compressed memory chunks
db.run(`
CREATE TABLE IF NOT EXISTS memories (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -56,7 +50,6 @@ export const migration001: Migration = {
CREATE INDEX IF NOT EXISTS idx_memories_origin ON memories(origin);
`);
// Overviews table - session summaries (one per project)
db.run(`
CREATE TABLE IF NOT EXISTS overviews (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -76,7 +69,6 @@ export const migration001: Migration = {
CREATE UNIQUE INDEX IF NOT EXISTS idx_overviews_project_latest ON overviews(project, created_at_epoch DESC);
`);
// Diagnostics table - system health and debug info
db.run(`
CREATE TABLE IF NOT EXISTS diagnostics (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -96,7 +88,6 @@ export const migration001: Migration = {
CREATE INDEX IF NOT EXISTS idx_diagnostics_created ON diagnostics(created_at_epoch DESC);
`);
// Transcript events table - raw conversation events
db.run(`
CREATE TABLE IF NOT EXISTS transcript_events (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -131,13 +122,9 @@ export const migration001: Migration = {
}
};
/**
* Migration 002 - Add hierarchical memory fields (v2 format)
*/
export const migration002: Migration = {
version: 2,
up: (db: Database) => {
// Add new columns for hierarchical memory structure
db.run(`
ALTER TABLE memories ADD COLUMN title TEXT;
ALTER TABLE memories ADD COLUMN subtitle TEXT;
@@ -146,7 +133,6 @@ export const migration002: Migration = {
ALTER TABLE memories ADD COLUMN files_touched TEXT;
`);
// Create indexes for the new fields to improve search performance
db.run(`
CREATE INDEX IF NOT EXISTS idx_memories_title ON memories(title);
CREATE INDEX IF NOT EXISTS idx_memories_concepts ON memories(concepts);
@@ -156,21 +142,14 @@ export const migration002: Migration = {
},
down: (_db: Database) => {
// Note: SQLite doesn't support DROP COLUMN in all versions
// In production, we'd need to recreate the table without these columns
// For now, we'll just log a warning
console.log('⚠️ Warning: SQLite ALTER TABLE DROP COLUMN not fully supported');
console.log('⚠️ To rollback, manually recreate the memories table');
}
};
/**
* Migration 003 - Add streaming_sessions table for real-time session tracking
*/
export const migration003: Migration = {
version: 3,
up: (db: Database) => {
// Streaming sessions table - tracks active SDK compression sessions
db.run(`
CREATE TABLE IF NOT EXISTS streaming_sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -206,14 +185,9 @@ export const migration003: Migration = {
}
};
/**
* Migration 004 - Add SDK agent architecture tables
* Implements the refactor plan for hook-driven memory with SDK agent synthesis
*/
export const migration004: Migration = {
version: 4,
up: (db: Database) => {
// SDK sessions table - tracks SDK streaming sessions
db.run(`
CREATE TABLE IF NOT EXISTS sdk_sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -235,7 +209,6 @@ export const migration004: Migration = {
CREATE INDEX IF NOT EXISTS idx_sdk_sessions_started ON sdk_sessions(started_at_epoch DESC);
`);
// Observation queue table - tracks pending observations for SDK processing
db.run(`
CREATE TABLE IF NOT EXISTS observation_queue (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -253,7 +226,6 @@ export const migration004: Migration = {
CREATE INDEX IF NOT EXISTS idx_observation_queue_pending ON observation_queue(memory_session_id, processed_at_epoch);
`);
// Observations table - stores extracted observations (what SDK decides is important)
db.run(`
CREATE TABLE IF NOT EXISTS observations (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -272,7 +244,6 @@ export const migration004: Migration = {
CREATE INDEX IF NOT EXISTS idx_observations_created ON observations(created_at_epoch DESC);
`);
// Session summaries table - stores structured session summaries
db.run(`
CREATE TABLE IF NOT EXISTS session_summaries (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -309,27 +280,17 @@ export const migration004: Migration = {
}
};
/**
* Migration 005 - Remove orphaned tables
* Drops streaming_sessions (superseded by sdk_sessions)
* Drops observation_queue (superseded by Unix socket communication)
*/
export const migration005: Migration = {
version: 5,
up: (db: Database) => {
// Drop streaming_sessions - superseded by sdk_sessions in migration004
// This table was from v2 architecture and is no longer used
db.run(`DROP TABLE IF EXISTS streaming_sessions`);
// Drop observation_queue - superseded by Unix socket communication
// Worker now uses sockets instead of database polling for observations
db.run(`DROP TABLE IF EXISTS observation_queue`);
console.log('✅ Dropped orphaned tables: streaming_sessions, observation_queue');
},
down: (db: Database) => {
// Recreate tables if needed (though they should never be used)
db.run(`
CREATE TABLE IF NOT EXISTS streaming_sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -366,15 +327,9 @@ export const migration005: Migration = {
}
};
/**
* Migration 006 - Add FTS5 full-text search tables
* Creates virtual tables for fast text search on observations and session_summaries
*/
export const migration006: Migration = {
version: 6,
up: (db: Database) => {
// FTS5 may be unavailable on some platforms (e.g., Bun on Windows #791).
// Probe before creating tables — search falls back to ChromaDB when unavailable.
try {
db.run('CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)');
db.run('DROP TABLE _fts5_probe');
@@ -383,9 +338,6 @@ export const migration006: Migration = {
return;
}
// FTS5 virtual table for observations
// Note: This assumes the hierarchical fields (title, subtitle, etc.) already exist
// from the inline migrations in SessionStore constructor
db.run(`
CREATE VIRTUAL TABLE IF NOT EXISTS observations_fts USING fts5(
title,
@@ -399,14 +351,12 @@ export const migration006: Migration = {
);
`);
// Populate FTS table with existing data
db.run(`
INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
SELECT id, title, subtitle, narrative, text, facts, concepts
FROM observations;
`);
// Triggers to keep observations_fts in sync
db.run(`
CREATE TRIGGER IF NOT EXISTS observations_ai AFTER INSERT ON observations BEGIN
INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
@@ -426,7 +376,6 @@ export const migration006: Migration = {
END;
`);
// FTS5 virtual table for session_summaries
db.run(`
CREATE VIRTUAL TABLE IF NOT EXISTS session_summaries_fts USING fts5(
request,
@@ -440,14 +389,12 @@ export const migration006: Migration = {
);
`);
// Populate FTS table with existing data
db.run(`
INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
SELECT id, request, investigated, learned, completed, next_steps, notes
FROM session_summaries;
`);
// Triggers to keep session_summaries_fts in sync
db.run(`
CREATE TRIGGER IF NOT EXISTS session_summaries_ai AFTER INSERT ON session_summaries BEGIN
INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
@@ -485,40 +432,22 @@ export const migration006: Migration = {
}
};
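Migration 006's FTS5 probe boils down to: try to create a throwaway virtual table, and treat failure as "FTS5 not compiled in". A sketch of the probe in Python's stdlib `sqlite3` (the real code uses bun:sqlite; `notes_fts` is a hypothetical table for illustration):

```python
import sqlite3

def fts5_available(db: sqlite3.Connection) -> bool:
    """Probe FTS5 by creating and dropping a throwaway virtual table."""
    try:
        db.execute("CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)")
        db.execute("DROP TABLE _fts5_probe")
        return True
    except sqlite3.OperationalError:
        return False   # e.g. Bun on Windows (#791): caller falls back to ChromaDB

db = sqlite3.connect(":memory:")
if fts5_available(db):
    db.execute("CREATE VIRTUAL TABLE notes_fts USING fts5(title, body)")
    db.execute("INSERT INTO notes_fts (title, body) VALUES ('probe', 'fts5 works')")
    hits = db.execute("SELECT title FROM notes_fts WHERE notes_fts MATCH 'works'").fetchall()
    print(hits)   # [('probe',)] when FTS5 is compiled in
```

Probing up front keeps every later `CREATE VIRTUAL TABLE ... USING fts5` behind a single decision point instead of wrapping each one in its own try/catch.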
/**
* Migration 007 - Add discovery_tokens column for ROI metrics
* Tracks token cost of discovering/creating each observation and summary
*/
export const migration007: Migration = {
version: 7,
up: (db: Database) => {
// Add discovery_tokens to observations table
db.run(`ALTER TABLE observations ADD COLUMN discovery_tokens INTEGER DEFAULT 0`);
// Add discovery_tokens to session_summaries table
db.run(`ALTER TABLE session_summaries ADD COLUMN discovery_tokens INTEGER DEFAULT 0`);
console.log('✅ Added discovery_tokens columns for ROI tracking');
},
down: (db: Database) => {
// Note: SQLite doesn't support DROP COLUMN in all versions
// In production, would need to recreate tables without these columns
console.log('⚠️ Warning: SQLite ALTER TABLE DROP COLUMN not fully supported');
console.log('⚠️ To rollback, manually recreate the observations and session_summaries tables');
}
};
/**
* Migration 008: Observation feedback table for tracking observation usage
*
* Tracks how observations are used (semantic injection hits, search access,
* explicit retrieval). Foundation for future Thompson Sampling optimization.
*/
export const migration008: Migration = {
version: 25,
up: (db: Database) => {
@@ -542,18 +471,6 @@ export const migration008: Migration = {
}
};
/**
* Migration 009: Add missing columns to observations table
*
* The generated_by_model column tracks which model generated each observation
* (required for model selection optimization via Thompson Sampling).
* The relevance_count column tracks how many times an observation was reused
* (incremented by the feedback recording pipeline).
*
* Both columns may already exist in databases created by the compiled binary
* (v10.6.3) but are missing from the migration source. This migration
* conditionally adds them.
*/
export const migration009: Migration = {
version: 26,
up: (db: Database) => {
@@ -573,14 +490,6 @@ export const migration009: Migration = {
}
};
/**
* Migration 010: Label observations (and their queue rows) with the subagent identity.
*
* Claude Code hooks that fire inside a subagent carry agent_id and agent_type on the
* stdin payload. These flow hook → worker → pending_messages → SDK → storage so that
* observation rows can be attributed to the originating subagent. Main-session rows
* keep NULL for both columns.
*/
export const migration010: Migration = {
version: 27,
up: (db: Database) => {
@@ -600,8 +509,6 @@ export const migration010: Migration = {
db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_type ON observations(agent_type)');
db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_id ON observations(agent_id)');
// Also thread the same fields through the pending_messages queue so the label
// survives worker restarts between enqueue and SDK-agent processing.
const pendingColumns = db.prepare('PRAGMA table_info(pending_messages)').all() as Array<{ name: string }>;
if (pendingColumns.length > 0) {
const pendingHasAgentType = pendingColumns.some(c => c.name === 'agent_type');
@@ -628,9 +535,6 @@ export const migration010: Migration = {
}
};
/**
* All migrations in order
*/
export const migrations: Migration[] = [
migration001,
migration002,
@@ -8,17 +8,9 @@ import {
} from '../../../types/database.js';
import { DEFAULT_PLATFORM_SOURCE } from '../../../shared/platform-source.js';
/**
* MigrationRunner handles all database schema migrations
* Extracted from SessionStore to separate concerns
*/
export class MigrationRunner {
constructor(private db: Database) {}
/**
* Run all migrations in order
* This is the only public method - all migrations are internal
*/
runAllMigrations(): void {
this.initializeSchema();
this.ensureWorkerPortColumn();
@@ -41,18 +33,10 @@ export class MigrationRunner {
this.rebuildPendingMessagesForSelfHealingClaim();
this.addObservationsUniqueContentHashIndex();
this.addObservationsMetadataColumn();
this.dropDeadPendingMessagesColumns();
}
/**
* Initialize database schema (migration004)
*
* ALWAYS creates core tables using CREATE TABLE IF NOT EXISTS — safe to run
* regardless of schema_versions state. This fixes issue #979 where the old
* DatabaseManager migration system (versions 1-7) shared the schema_versions
* table, causing maxApplied > 0 and skipping core table creation entirely.
*/
private initializeSchema(): void {
// Create schema_versions table if it doesn't exist
this.db.run(`
CREATE TABLE IF NOT EXISTS schema_versions (
id INTEGER PRIMARY KEY,
@@ -61,7 +45,6 @@ export class MigrationRunner {
)
`);
// Always create core tables — IF NOT EXISTS makes this idempotent
this.db.run(`
CREATE TABLE IF NOT EXISTS sdk_sessions (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -121,18 +104,10 @@ export class MigrationRunner {
CREATE INDEX IF NOT EXISTS idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
`);
// Record migration004 as applied (OR IGNORE handles re-runs safely)
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(4, new Date().toISOString());
}
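The `INSERT OR IGNORE` into schema_versions is what makes recording a migration idempotent: a re-run hits the uniqueness on the version and silently becomes a no-op instead of throwing. A sketch in Python's stdlib `sqlite3` (simplified schema with `version` as the primary key; the real table and bun:sqlite calls differ in detail):

```python
import sqlite3
from datetime import datetime, timezone

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE schema_versions (version INTEGER PRIMARY KEY, applied_at TEXT)")

def record_version(v: int) -> None:
    # OR IGNORE turns a duplicate-version insert into a no-op
    db.execute(
        "INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)",
        (v, datetime.now(timezone.utc).isoformat()),
    )

record_version(4)
record_version(4)   # safe to call again on re-run
print(db.execute("SELECT COUNT(*) FROM schema_versions").fetchone()[0])   # 1
```

This is why the runner can call every migration method unconditionally on startup: each records itself with OR IGNORE, so already-applied versions cost one ignored insert.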
/**
* Ensure worker_port column exists (migration 5)
*
* NOTE: Version 5 conflicts with old DatabaseManager migration005 (which drops orphaned tables).
* We check actual column state rather than relying solely on version tracking.
*/
private ensureWorkerPortColumn(): void {
// Check actual column existence — don't rely on version tracking alone (issue #979)
const tableInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
const hasWorkerPort = tableInfo.some(col => col.name === 'worker_port');
@@ -141,19 +116,10 @@ export class MigrationRunner {
logger.debug('DB', 'Added worker_port column to sdk_sessions table');
}
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(5, new Date().toISOString());
}
/**
* Ensure prompt tracking columns exist (migration 6)
*
* NOTE: Version 6 conflicts with old DatabaseManager migration006 (which creates FTS5 tables).
* We check actual column state rather than relying solely on version tracking.
*/
private ensurePromptTrackingColumns(): void {
// Check actual column existence — don't rely on version tracking alone (issue #979)
// Check sdk_sessions for prompt_counter
const sessionsInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
const hasPromptCounter = sessionsInfo.some(col => col.name === 'prompt_counter');
@@ -162,7 +128,6 @@ export class MigrationRunner {
logger.debug('DB', 'Added prompt_counter column to sdk_sessions table');
}
// Check observations for prompt_number
const observationsInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const obsHasPromptNumber = observationsInfo.some(col => col.name === 'prompt_number');
@@ -171,7 +136,6 @@ export class MigrationRunner {
logger.debug('DB', 'Added prompt_number column to observations table');
}
// Check session_summaries for prompt_number
const summariesInfo = this.db.query('PRAGMA table_info(session_summaries)').all() as TableColumnInfo[];
const sumHasPromptNumber = summariesInfo.some(col => col.name === 'prompt_number');
@@ -180,36 +144,24 @@ export class MigrationRunner {
logger.debug('DB', 'Added prompt_number column to session_summaries table');
}
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(6, new Date().toISOString());
}
/**
* Remove UNIQUE constraint from session_summaries.memory_session_id (migration 7)
*
* NOTE: Version 7 conflicts with old DatabaseManager migration007 (which adds discovery_tokens).
* We check actual constraint state rather than relying solely on version tracking.
*/
private removeSessionSummariesUniqueConstraint(): void {
// Check actual constraint state — don't rely on version tracking alone (issue #979)
const summariesIndexes = this.db.query('PRAGMA index_list(session_summaries)').all() as IndexInfo[];
const hasUniqueConstraint = summariesIndexes.some(idx => idx.unique === 1 && idx.origin !== 'pk');
if (!hasUniqueConstraint) {
// Already migrated (no constraint exists)
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(7, new Date().toISOString());
return;
}
logger.debug('DB', 'Removing UNIQUE constraint from session_summaries.memory_session_id');
// Begin transaction
this.db.run('BEGIN TRANSACTION');
// Clean up leftover temp table from a previously-crashed run
this.db.run('DROP TABLE IF EXISTS session_summaries_new');
// Create new table without UNIQUE constraint
this.db.run(`
CREATE TABLE session_summaries_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -230,7 +182,6 @@ export class MigrationRunner {
)
`);
// Copy data from old table
this.db.run(`
INSERT INTO session_summaries_new
SELECT id, memory_session_id, project, request, investigated, learned,
@@ -239,49 +190,37 @@ export class MigrationRunner {
FROM session_summaries
`);
// Drop old table
this.db.run('DROP TABLE session_summaries');
// Rename new table
this.db.run('ALTER TABLE session_summaries_new RENAME TO session_summaries');
// Recreate indexes
this.db.run(`
CREATE INDEX idx_session_summaries_sdk_session ON session_summaries(memory_session_id);
CREATE INDEX idx_session_summaries_project ON session_summaries(project);
CREATE INDEX idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
`);
// Commit transaction
this.db.run('COMMIT');
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(7, new Date().toISOString());
logger.debug('DB', 'Successfully removed UNIQUE constraint from session_summaries.memory_session_id');
}
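SQLite has no ALTER TABLE for dropping a constraint, so the method above uses the standard rebuild dance: create a `_new` table without the constraint, copy rows, drop the old table, rename, recreate indexes, all in one transaction. A condensed sketch of the same dance in Python's stdlib `sqlite3` (table `s` and its columns are hypothetical stand-ins for session_summaries):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE s (id INTEGER PRIMARY KEY, memory_session_id TEXT UNIQUE, request TEXT)")
db.execute("INSERT INTO s (memory_session_id, request) VALUES ('m1', 'first')")

# Rebuild the table without the UNIQUE constraint:
db.executescript("""
    BEGIN;
    DROP TABLE IF EXISTS s_new;   -- clean up a leftover from a previously-crashed run
    CREATE TABLE s_new (id INTEGER PRIMARY KEY, memory_session_id TEXT, request TEXT);
    INSERT INTO s_new SELECT id, memory_session_id, request FROM s;
    DROP TABLE s;
    ALTER TABLE s_new RENAME TO s;
    CREATE INDEX idx_s_session ON s(memory_session_id);
    COMMIT;
""")

# The same memory_session_id can now appear more than once:
db.execute("INSERT INTO s (memory_session_id, request) VALUES ('m1', 'second')")
print(db.execute("SELECT COUNT(*) FROM s WHERE memory_session_id = 'm1'").fetchone()[0])   # 2
```

The `DROP TABLE IF EXISTS s_new` up front matters: if a prior run crashed between creating the temp table and committing, the rebuild can still proceed.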
/**
* Add hierarchical fields to observations table (migration 8)
*/
private addObservationHierarchicalFields(): void {
// Check if migration already applied
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(8) as SchemaVersion | undefined;
if (applied) return;
// Check if new fields already exist
const tableInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const hasTitle = tableInfo.some(col => col.name === 'title');
if (hasTitle) {
// Already migrated
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(8, new Date().toISOString());
return;
}
logger.debug('DB', 'Adding hierarchical fields to observations table');
// Add new columns
this.db.run(`
ALTER TABLE observations ADD COLUMN title TEXT;
ALTER TABLE observations ADD COLUMN subtitle TEXT;
@@ -292,40 +231,29 @@ export class MigrationRunner {
ALTER TABLE observations ADD COLUMN files_modified TEXT;
`);
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(8, new Date().toISOString());
logger.debug('DB', 'Successfully added hierarchical fields to observations table');
}
/**
* Make observations.text nullable (migration 9)
* The text field is deprecated in favor of structured fields (title, subtitle, narrative, etc.)
*/
private makeObservationsTextNullable(): void {
// Check if migration already applied
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(9) as SchemaVersion | undefined;
if (applied) return;
// Check if text column is already nullable
const tableInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const textColumn = tableInfo.find(col => col.name === 'text');
if (!textColumn || textColumn.notnull === 0) {
// Already migrated or text column doesn't exist
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(9, new Date().toISOString());
return;
}
logger.debug('DB', 'Making observations.text nullable');
// Begin transaction
this.db.run('BEGIN TRANSACTION');
// Clean up leftover temp table from a previously-crashed run
this.db.run('DROP TABLE IF EXISTS observations_new');
// Create new table with text as nullable
this.db.run(`
CREATE TABLE observations_new (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -347,7 +275,6 @@ export class MigrationRunner {
)
`);
// Copy data from old table (all existing columns)
this.db.run(`
INSERT INTO observations_new
SELECT id, memory_session_id, project, text, type, title, subtitle, facts,
@@ -356,13 +283,10 @@ export class MigrationRunner {
FROM observations
`);
// Drop old table
this.db.run('DROP TABLE observations');
// Rename new table
this.db.run('ALTER TABLE observations_new RENAME TO observations');
// Recreate indexes
this.db.run(`
CREATE INDEX idx_observations_sdk_session ON observations(memory_session_id);
CREATE INDEX idx_observations_project ON observations(project);
@@ -370,37 +294,27 @@ export class MigrationRunner {
CREATE INDEX idx_observations_created ON observations(created_at_epoch DESC);
`);
// Commit transaction
this.db.run('COMMIT');
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(9, new Date().toISOString());
logger.debug('DB', 'Successfully made observations.text nullable');
}
/**
* Create user_prompts table with FTS5 support (migration 10)
*/
private createUserPromptsTable(): void {
// Check if migration already applied
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(10) as SchemaVersion | undefined;
if (applied) return;
// Check if table already exists
const tableInfo = this.db.query('PRAGMA table_info(user_prompts)').all() as TableColumnInfo[];
if (tableInfo.length > 0) {
// Already migrated
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(10, new Date().toISOString());
return;
}
logger.debug('DB', 'Creating user_prompts table with FTS5 support');
// Begin transaction
this.db.run('BEGIN TRANSACTION');
// Create main table (using content_session_id since memory_session_id is set asynchronously by worker)
this.db.run(`
CREATE TABLE user_prompts (
id INTEGER PRIMARY KEY AUTOINCREMENT,
@@ -418,27 +332,19 @@ export class MigrationRunner {
CREATE INDEX idx_user_prompts_lookup ON user_prompts(content_session_id, prompt_number);
`);
// Create FTS5 virtual table — skip if FTS5 is unavailable (e.g., Bun on Windows #791).
// The user_prompts table itself is still created; only FTS indexing is skipped.
try {
this.createUserPromptsFTS();
} catch (ftsError) {
logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, ftsError instanceof Error ? ftsError : new Error(String(ftsError)));
}
// Commit transaction
this.db.run('COMMIT');
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(10, new Date().toISOString());
logger.debug('DB', 'Successfully created user_prompts table');
}
/**
* Create FTS5 virtual table and sync triggers for user_prompts.
* Extracted from createUserPromptsTable to keep try block small.
*/
private createUserPromptsFTS(): void {
this.db.run(`
CREATE VIRTUAL TABLE user_prompts_fts USING fts5(
@@ -468,17 +374,10 @@ export class MigrationRunner {
`);
}
/**
* Ensure discovery_tokens column exists (migration 11)
* CRITICAL: This migration was incorrectly using version 7 (which was already taken by removeSessionSummariesUniqueConstraint)
* The duplicate version number may have caused migration tracking issues in some databases
*/
private ensureDiscoveryTokensColumn(): void {
// Check if migration already applied to avoid unnecessary re-runs
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(11) as SchemaVersion | undefined;
if (applied) return;
// Check if discovery_tokens column exists in observations table
const observationsInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const obsHasDiscoveryTokens = observationsInfo.some(col => col.name === 'discovery_tokens');
@@ -487,7 +386,6 @@ export class MigrationRunner {
logger.debug('DB', 'Added discovery_tokens column to observations table');
}
// Check if discovery_tokens column exists in session_summaries table
const summariesInfo = this.db.query('PRAGMA table_info(session_summaries)').all() as TableColumnInfo[];
const sumHasDiscoveryTokens = summariesInfo.some(col => col.name === 'discovery_tokens');
@@ -496,21 +394,13 @@ export class MigrationRunner {
logger.debug('DB', 'Added discovery_tokens column to session_summaries table');
}
// Record migration only after successful column verification/addition
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(11, new Date().toISOString());
}
/**
* Create pending_messages table for persistent work queue (migration 16)
* Messages are persisted before processing and deleted after success.
* Enables recovery from SDK hangs and worker crashes.
*/
private createPendingMessagesTable(): void {
// Check if migration already applied
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(16) as SchemaVersion | undefined;
if (applied) return;
// Check if table already exists
const tables = this.db.query("SELECT name FROM sqlite_master WHERE type='table' AND name='pending_messages'").all() as TableNameRow[];
if (tables.length > 0) {
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(16, new Date().toISOString());
@@ -549,14 +439,6 @@ export class MigrationRunner {
logger.debug('DB', 'pending_messages table created successfully');
}
/**
* Rename session ID columns for semantic clarity (migration 17)
* - claude_session_id -> content_session_id (user's observed session)
* - sdk_session_id -> memory_session_id (memory agent's session for resume)
*
* IDEMPOTENT: Checks each table individually before renaming.
* This handles databases in any intermediate state (partial migration, fresh install, etc.)
*/
private renameSessionIdColumns(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(17) as SchemaVersion | undefined;
if (applied) return;
@@ -565,46 +447,36 @@ export class MigrationRunner {
let renamesPerformed = 0;
// Helper to safely rename a column if it exists
const safeRenameColumn = (table: string, oldCol: string, newCol: string): boolean => {
const tableInfo = this.db.query(`PRAGMA table_info(${table})`).all() as TableColumnInfo[];
const hasOldCol = tableInfo.some(col => col.name === oldCol);
const hasNewCol = tableInfo.some(col => col.name === newCol);
if (hasNewCol) {
// Already renamed, nothing to do
return false;
}
if (hasOldCol) {
// SQLite 3.25+ supports ALTER TABLE RENAME COLUMN
this.db.run(`ALTER TABLE ${table} RENAME COLUMN ${oldCol} TO ${newCol}`);
logger.debug('DB', `Renamed ${table}.${oldCol} to ${newCol}`);
return true;
}
// Neither column exists - table might not exist or has different schema
logger.warn('DB', `Column ${oldCol} not found in ${table}, skipping rename`);
return false;
};
// Rename in sdk_sessions table
if (safeRenameColumn('sdk_sessions', 'claude_session_id', 'content_session_id')) renamesPerformed++;
if (safeRenameColumn('sdk_sessions', 'sdk_session_id', 'memory_session_id')) renamesPerformed++;
// Rename in pending_messages table
if (safeRenameColumn('pending_messages', 'claude_session_id', 'content_session_id')) renamesPerformed++;
// Rename in observations table
if (safeRenameColumn('observations', 'sdk_session_id', 'memory_session_id')) renamesPerformed++;
// Rename in session_summaries table
if (safeRenameColumn('session_summaries', 'sdk_session_id', 'memory_session_id')) renamesPerformed++;
// Rename in user_prompts table
if (safeRenameColumn('user_prompts', 'claude_session_id', 'content_session_id')) renamesPerformed++;
// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(17, new Date().toISOString());
if (renamesPerformed > 0) {
@@ -614,10 +486,6 @@ export class MigrationRunner {
}
}
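The `safeRenameColumn` helper above inspects `PRAGMA table_info` rather than trusting version tracking, so the rename is a no-op whatever intermediate state the database is in. A sketch of the same guard in Python's stdlib `sqlite3`, assuming SQLite 3.25+ for RENAME COLUMN (the real helper runs under bun:sqlite):

```python
import sqlite3

def safe_rename_column(db: sqlite3.Connection, table: str, old: str, new: str) -> bool:
    # row[1] of PRAGMA table_info is the column name
    cols = {row[1] for row in db.execute(f"PRAGMA table_info({table})")}
    if new in cols:
        return False   # already renamed on a previous run
    if old in cols:
        db.execute(f"ALTER TABLE {table} RENAME COLUMN {old} TO {new}")  # SQLite 3.25+
        return True
    return False       # neither column exists: table missing or different schema

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE t (claude_session_id TEXT)")
print(safe_rename_column(db, "t", "claude_session_id", "content_session_id"))   # True
print(safe_rename_column(db, "t", "claude_session_id", "content_session_id"))   # False (idempotent)
```

Checking the new column first is the key ordering: it makes a re-run after a partial migration indistinguishable from a fresh, fully-migrated database.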
/**
* Add failed_at_epoch column to pending_messages (migration 20)
* Used by transitionMessagesTo() for error recovery tracking
*/
private addFailedAtEpochColumn(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(20) as SchemaVersion | undefined;
if (applied) return;
@@ -633,22 +501,12 @@ export class MigrationRunner {
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(20, new Date().toISOString());
}
/**
* Add ON UPDATE CASCADE to FK constraints on observations and session_summaries (migration 21)
*
* Both tables have FK(memory_session_id) -> sdk_sessions(memory_session_id) with ON DELETE CASCADE
* but missing ON UPDATE CASCADE. This causes FK constraint violations when code updates
* sdk_sessions.memory_session_id while child rows still reference the old value.
*
* SQLite doesn't support ALTER TABLE for FK changes, so we recreate both tables.
*/
private addOnUpdateCascadeToForeignKeys(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(21) as SchemaVersion | undefined;
if (applied) return;
logger.debug('DB', 'Adding ON UPDATE CASCADE to FK constraints on observations and session_summaries');
// PRAGMA foreign_keys must be set outside a transaction
this.db.run('PRAGMA foreign_keys = OFF');
this.db.run('BEGIN TRANSACTION');
@@ -671,17 +529,11 @@ export class MigrationRunner {
}
}
/**
* Recreate observations table with ON UPDATE CASCADE FK constraint.
* Called within a transaction by addOnUpdateCascadeToForeignKeys.
*/
private recreateObservationsWithUpdateCascade(): void {
// Drop FTS triggers first (they reference the observations table)
this.db.run('DROP TRIGGER IF EXISTS observations_ai');
this.db.run('DROP TRIGGER IF EXISTS observations_ad');
this.db.run('DROP TRIGGER IF EXISTS observations_au');
// Clean up leftover temp table from a previously-crashed run
this.db.run('DROP TABLE IF EXISTS observations_new');
this.db.run(`
@@ -724,7 +576,6 @@ export class MigrationRunner {
CREATE INDEX idx_observations_created ON observations(created_at_epoch DESC);
`);
// Recreate FTS triggers only if observations_fts exists
const hasFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='observations_fts'").all() as { name: string }[]).length > 0;
if (hasFTS) {
this.db.run(`
@@ -748,12 +599,7 @@ export class MigrationRunner {
}
}
/**
* Recreate session_summaries table with ON UPDATE CASCADE FK constraint.
* Called within a transaction by addOnUpdateCascadeToForeignKeys.
*/
private recreateSessionSummariesWithUpdateCascade(): void {
// Clean up leftover temp table from a previously-crashed run
this.db.run('DROP TABLE IF EXISTS session_summaries_new');
this.db.run(`
@@ -785,7 +631,6 @@ export class MigrationRunner {
FROM session_summaries
`);
// Drop session_summaries FTS triggers before dropping the table
this.db.run('DROP TRIGGER IF EXISTS session_summaries_ai');
this.db.run('DROP TRIGGER IF EXISTS session_summaries_ad');
this.db.run('DROP TRIGGER IF EXISTS session_summaries_au');
@@ -799,7 +644,6 @@ export class MigrationRunner {
CREATE INDEX idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
`);
// Recreate session_summaries FTS triggers if FTS table exists
const hasSummariesFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='session_summaries_fts'").all() as { name: string }[]).length > 0;
if (hasSummariesFTS) {
this.db.run(`
@@ -823,38 +667,23 @@ export class MigrationRunner {
}
}
/**
* Add content_hash column to observations for deduplication (migration 22)
* Prevents duplicate observations from being stored when the same content is processed multiple times.
* Backfills existing rows with unique random hashes so they don't block new inserts.
*/
private addObservationContentHashColumn(): void {
// Check actual schema first — cross-machine DB sync can leave schema_versions
// claiming this migration ran while the column is actually missing (e.g. migration 21
// recreated the table without content_hash on the synced machine).
const tableInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const hasColumn = tableInfo.some(col => col.name === 'content_hash');
if (hasColumn) {
// Column exists — just ensure version record is present
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(22, new Date().toISOString());
return;
}
this.db.run('ALTER TABLE observations ADD COLUMN content_hash TEXT');
// Backfill existing rows with unique random hashes
this.db.run("UPDATE observations SET content_hash = substr(hex(randomblob(8)), 1, 16) WHERE content_hash IS NULL");
// Index for fast dedup lookups
this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_content_hash ON observations(content_hash, created_at_epoch)');
logger.debug('DB', 'Added content_hash column to observations table with backfill and index');
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(22, new Date().toISOString());
}
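The PRAGMA-first guard above generalizes into a small decision rule. The sketch below (helper name illustrative, not part of MigrationRunner) captures it:

```typescript
type Action = 'skip' | 'record-version-only' | 'run-migration';

// Trust the actual schema over schema_versions: cross-machine DB sync can
// leave the version table claiming a migration ran while the column is
// actually missing. Illustrative helper, not the MigrationRunner API.
function decideColumnMigration(versionRecorded: boolean, columnPresent: boolean): Action {
  if (columnPresent) {
    // Column already there: at most backfill the version record.
    return versionRecorded ? 'skip' : 'record-version-only';
  }
  // Column missing: run the migration regardless of what schema_versions says.
  return 'run-migration';
}
```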
/**
* Add custom_title column to sdk_sessions for agent attribution (migration 23)
* Allows callers (e.g. Maestro agents) to label sessions with a human-readable name.
*/
private addSessionCustomTitleColumn(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(23) as SchemaVersion | undefined;
if (applied) return;
@@ -870,11 +699,6 @@ export class MigrationRunner {
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(23, new Date().toISOString());
}
/**
* Create observation_feedback table for tracking observation usage signals.
* Foundation for tier routing optimization and future Thompson Sampling.
* Schema version 24.
*/
private createObservationFeedbackTable(): void {
const applied = this.db.query('SELECT 1 FROM schema_versions WHERE version = 24').get();
if (applied) return;
@@ -897,9 +721,6 @@ export class MigrationRunner {
logger.debug('DB', 'Created observation_feedback table for usage tracking');
}
/**
* Add platform_source column to sdk_sessions for Claude/Codex isolation (migration 25)
*/
private addSessionPlatformSourceColumn(): void {
const tableInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
const hasColumn = tableInfo.some(col => col.name === 'platform_source');
@@ -927,13 +748,6 @@ export class MigrationRunner {
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(25, new Date().toISOString());
}
/**
* Ensure merged_into_project columns + indices exist on observations and session_summaries.
*
* Self-idempotent via a PRAGMA table_info guard; does NOT bump schema_versions.
* Supports merged-worktree adoption: a nullable pointer that lets a worktree's rows
* be surfaced under the parent project's observation list without data movement.
*/
private ensureMergedIntoProjectColumns(): void {
const obsCols = this.db
.query('PRAGMA table_info(observations)')
@@ -956,16 +770,6 @@ export class MigrationRunner {
);
}
/**
* Add agent_type and agent_id columns to observations and pending_messages (migration 27).
*
* Labels observation rows with the originating Claude Code subagent identity so
* downstream queries can distinguish main-session work from subagent work.
* Main-session rows keep NULL for both columns.
*
* Also threads the same columns through pending_messages so the label survives
* between enqueue (hook) and SDK-agent processing (which re-inserts into observations).
*/
private addObservationSubagentColumns(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(27) as SchemaVersion | undefined;
@@ -1003,50 +807,26 @@ export class MigrationRunner {
}
}
/**
* Rebuild pending_messages for self-healing claim (migration 28).
*
* PATHFINDER-2026-04-22 Plan 01 Phase 2.
*
* - Drops the legacy stale-reset epoch column (was the input to the
* 60-s stale-reset; replaced by worker-PID liveness at claim time).
* - Adds `worker_pid INTEGER` (set by claimNextMessage to the live
* worker's PID; rows whose worker_pid is no longer alive are
* immediately reclaimable).
* - Adds `tool_use_id TEXT` so ingestion-time pairing of tool_use with
* tool_result can be DB-backed instead of an in-memory Map
* (Plan 03 dependency).
* - Dedupes any existing rows that share (content_session_id,
* tool_use_id), then creates a partial UNIQUE index.
*
* Follows the table-rebuild precedent at runner.ts:691 (migration 21):
* disable FKs, BEGIN, recreate, INSERT-SELECT, RENAME, COMMIT, re-enable.
*/
private rebuildPendingMessagesForSelfHealingClaim(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(28) as SchemaVersion | undefined;
if (applied) return;
const pendingExists = (this.db.query("SELECT name FROM sqlite_master WHERE type='table' AND name='pending_messages'").all() as TableNameRow[]).length > 0;
if (!pendingExists) {
// pending_messages table never created on this DB — nothing to rebuild.
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(28, new Date().toISOString());
return;
}
logger.debug('DB', 'Rebuilding pending_messages for self-healing claim (migration 28)');
// PRAGMA foreign_keys must be set outside a transaction.
this.db.run('PRAGMA foreign_keys = OFF');
this.db.run('BEGIN TRANSACTION');
try {
// Source columns may include legacy fields. We build the SELECT explicitly
// using only columns we know are present in the source after migration 27.
const sourceCols = this.db.query('PRAGMA table_info(pending_messages)').all() as TableColumnInfo[];
const colNames = new Set(sourceCols.map(c => c.name));
const has = (name: string) => colNames.has(name);
// Clean up leftover temp from a previously-crashed run.
this.db.run('DROP TABLE IF EXISTS pending_messages_new');
this.db.run(`
@@ -1076,10 +856,6 @@ export class MigrationRunner {
)
`);
// INSERT-SELECT — note that the legacy stale-reset epoch column is
// intentionally omitted. Any 'processing' row is left with worker_pid =
// NULL so that a self-healing claim picks it up immediately on next
// worker boot.
this.db.run(`
INSERT INTO pending_messages_new (
id, session_db_id, content_session_id, tool_use_id, message_type,
@@ -1120,8 +896,6 @@ export class MigrationRunner {
this.db.run('CREATE INDEX IF NOT EXISTS idx_pending_messages_claude_session ON pending_messages(content_session_id)');
this.db.run('CREATE INDEX IF NOT EXISTS idx_pending_messages_worker_pid ON pending_messages(worker_pid)');
// Dedup any pre-existing duplicate (content_session_id, tool_use_id) pairs
// before adding the UNIQUE index. Keep the lowest id (oldest) per pair.
this.db.run(`
DELETE FROM pending_messages
WHERE tool_use_id IS NOT NULL
@@ -1153,34 +927,20 @@ export class MigrationRunner {
}
}
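The worker-PID liveness test this migration enables can be sketched as below. `isPidAlive` is a hypothetical helper; real claim code may prefer to treat EPERM (process exists but signalling is not permitted) as alive rather than dead:

```typescript
// A pending_messages row whose worker_pid no longer maps to a live process
// is immediately reclaimable. Signal 0 performs an existence check without
// actually sending a signal.
function isPidAlive(pid: number | null): boolean {
  if (pid === null) return false; // unclaimed row
  try {
    process.kill(pid, 0);
    return true;
  } catch {
    return false; // ESRCH (no such process); EPERM also lands here
  }
}
```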
/**
* Add UNIQUE(memory_session_id, content_hash) on observations (migration 29).
*
* PATHFINDER-2026-04-22 Plan 01 Phase 2 + Phase 4.
*
* - Dedupes existing rows that share (memory_session_id, content_hash),
* keeping the lowest id (oldest) per pair.
* - Creates a UNIQUE index that lets writers use
* INSERT ON CONFLICT(memory_session_id, content_hash) DO NOTHING
* in place of the legacy dedup window scan.
*/
private addObservationsUniqueContentHashIndex(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(29) as SchemaVersion | undefined;
if (applied) return;
// Need both columns to exist.
const obsCols = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const hasMem = obsCols.some(c => c.name === 'memory_session_id');
const hasHash = obsCols.some(c => c.name === 'content_hash');
if (!hasMem || !hasHash) {
// Nothing to do; record so we don't keep retrying.
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(29, new Date().toISOString());
return;
}
this.db.run('BEGIN TRANSACTION');
try {
// Dedup before adding the UNIQUE index — keep the lowest id per pair.
this.db.run(`
DELETE FROM observations
WHERE id NOT IN (
@@ -1206,17 +966,6 @@ export class MigrationRunner {
}
}
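An in-memory analogue of the dedup-then-index step: keep the lowest id (oldest row) per (memory_session_id, content_hash) pair, as the SQL DELETE above does. A sketch, not the migration code itself:

```typescript
type Row = { id: number; memorySessionId: string; contentHash: string };

// Keep the lowest id per (memorySessionId, contentHash) pair; drop the rest.
// Mirrors DELETE ... WHERE id NOT IN (SELECT MIN(id) ... GROUP BY ...).
function dedupeKeepOldest(rows: Row[]): Row[] {
  const keepers = new Map<string, Row>();
  for (const row of rows) {
    const key = `${row.memorySessionId}\u0000${row.contentHash}`;
    const existing = keepers.get(key);
    if (!existing || row.id < existing.id) keepers.set(key, row);
  }
  return [...keepers.values()].sort((a, b) => a.id - b.id);
}
```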
/**
* Add metadata TEXT column to observations (migration 30).
*
* Backward-compatible: nullable, no default. Holds JSON-encoded arbitrary
* metadata supplied by callers of POST /api/memory/save (#2116). Without
* this column, the route's Zod `.passthrough()` accepted unknown fields
* but the INSERT silently dropped them, a quiet contract violation.
*
* Idempotent via PRAGMA table_info guard so cross-machine DB sync that
* leaves schema_versions ahead of actual schema still self-heals.
*/
private addObservationsMetadataColumn(): void {
const cols = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const hasColumn = cols.some(c => c.name === 'metadata');
@@ -1228,4 +977,29 @@ export class MigrationRunner {
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(30, new Date().toISOString());
}
/**
 * Drop dead columns from pending_messages (migration 31).
 * Purges rows outside 'pending'/'processing' first, then drops each legacy
 * column individually so a partial failure can be retried on the next run.
 */
private dropDeadPendingMessagesColumns(): void {
const cols = this.db.query('PRAGMA table_info(pending_messages)').all() as TableColumnInfo[];
const colNames = new Set(cols.map(c => c.name));
const deadColumns = ['retry_count', 'failed_at_epoch', 'completed_at_epoch', 'worker_pid'];
const toDrop = deadColumns.filter(name => colNames.has(name));
if (toDrop.length === 0) {
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(31, new Date().toISOString());
return;
}
this.db.run(`DELETE FROM pending_messages WHERE status NOT IN ('pending', 'processing')`);
for (const colName of toDrop) {
try {
this.db.run(`ALTER TABLE pending_messages DROP COLUMN ${colName}`);
logger.debug('DB', `Dropped dead column ${colName} from pending_messages`);
} catch (error) {
logger.warn('DB', `Failed to drop column ${colName} from pending_messages`, {}, error instanceof Error ? error : new Error(String(error)));
}
}
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(31, new Date().toISOString());
}
}
@@ -1,31 +1,18 @@
/**
* Session file retrieval functions
* Extracted from SessionStore.ts for modular organization
*/
import { Database } from 'bun:sqlite';
import { logger } from '../../../utils/logger.js';
import type { SessionFilesResult } from './types.js';
/**
* Safely parse a JSON array string from the DB.
* Handles legacy bare-path strings (e.g. "/foo/bar.ts") by wrapping them
* in an array instead of crashing with a SyntaxError (fix for #1359).
*/
export function parseFileList(value: string | null | undefined): string[] {
if (!value) return [];
try {
const parsed = JSON.parse(value);
return Array.isArray(parsed) ? parsed : [String(parsed)];
} catch {
// [ANTI-PATTERN IGNORED]: legacy bare-path strings are expected input, not errors
return [value];
}
}
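The edge cases are easiest to see spelled out. This is a self-contained copy of parseFileList (duplicated only so the example runs standalone):

```typescript
function parseFileList(value: string | null | undefined): string[] {
  if (!value) return [];
  try {
    const parsed = JSON.parse(value);
    // Non-array JSON (e.g. a bare number) is coerced into a one-element array.
    return Array.isArray(parsed) ? parsed : [String(parsed)];
  } catch {
    // Legacy bare-path string such as "/foo/bar.ts" (#1359).
    return [value];
  }
}
```

So a well-formed JSON array round-trips, a legacy bare path is wrapped rather than crashing JSON.parse, and null/empty input yields an empty list.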
/**
* Get aggregated files from all observations for a session
*/
export function getFilesForSession(
db: Database,
memorySessionId: string
@@ -45,10 +32,8 @@ export function getFilesForSession(
const filesModifiedSet = new Set<string>();
for (const row of rows) {
// Parse files_read
parseFileList(row.files_read).forEach(f => filesReadSet.add(f));
// Parse files_modified
parseFileList(row.files_modified).forEach(f => filesModifiedSet.add(f));
}
@@ -1,16 +1,9 @@
/**
* Observation retrieval functions
* Extracted from SessionStore.ts for modular organization
*/
import { Database } from 'bun:sqlite';
import { logger } from '../../../utils/logger.js';
import type { ObservationRecord } from '../../../types/database.js';
import type { GetObservationsByIdsOptions, ObservationSessionRow } from './types.js';
/**
* Get a single observation by ID
*/
export function getObservationById(db: Database, id: number): ObservationRecord | null {
const stmt = db.prepare(`
SELECT *
@@ -21,9 +14,6 @@ export function getObservationById(db: Database, id: number): ObservationRecord
return stmt.get(id) as ObservationRecord | undefined || null;
}
/**
* Get observations by array of IDs with ordering and limit
*/
export function getObservationsByIds(
db: Database,
ids: number[],
@@ -35,18 +25,15 @@ export function getObservationsByIds(
const orderClause = orderBy === 'date_asc' ? 'ASC' : 'DESC';
const limitClause = limit ? `LIMIT ${limit}` : '';
// Build placeholders for IN clause
const placeholders = ids.map(() => '?').join(',');
const params: any[] = [...ids];
const additionalConditions: string[] = [];
// Apply project filter
if (project) {
additionalConditions.push('project = ?');
params.push(project);
}
// Apply type filter
if (type) {
if (Array.isArray(type)) {
const typePlaceholders = type.map(() => '?').join(',');
@@ -58,7 +45,6 @@ export function getObservationsByIds(
}
}
// Apply concepts filter
if (concepts) {
const conceptsList = Array.isArray(concepts) ? concepts : [concepts];
const conceptConditions = conceptsList.map(() =>
@@ -68,7 +54,6 @@ export function getObservationsByIds(
additionalConditions.push(`(${conceptConditions.join(' OR ')})`);
}
// Apply files filter
if (files) {
const filesList = Array.isArray(files) ? files : [files];
const fileConditions = filesList.map(() => {
@@ -95,9 +80,6 @@ export function getObservationsByIds(
return stmt.all(...params) as ObservationRecord[];
}
/**
* Get observations for a specific session
*/
export function getObservationsForSession(
db: Database,
memorySessionId: string
@@ -112,10 +94,6 @@ export function getObservationsForSession(
return stmt.all(memorySessionId) as ObservationSessionRow[];
}
/**
* Get observations associated with a given file path, scoped to specific projects.
* Matches on the full file path (not just basename) to avoid cross-project collisions.
*/
export function getObservationsByFilePath(
db: Database,
filePath: string,
@@ -1,15 +1,8 @@
/**
* Recent observation retrieval functions
* Extracted from SessionStore.ts for modular organization
*/
import { Database } from 'bun:sqlite';
import { logger } from '../../../utils/logger.js';
import type { RecentObservationRow, AllRecentObservationRow } from './types.js';
/**
* Get recent observations for a project
*/
export function getRecentObservations(
db: Database,
project: string,
@@ -26,9 +19,6 @@ export function getRecentObservations(
return stmt.all(project, limit) as RecentObservationRow[];
}
/**
* Get recent observations across all projects (for web UI)
*/
export function getAllRecentObservations(
db: Database,
limit: number = 100
@@ -42,3 +32,15 @@ export function getAllRecentObservations(
return stmt.all(limit) as AllRecentObservationRow[];
}
export function getFirstObservationCreatedAt(db: Database): string | null {
const stmt = db.prepare(`
SELECT created_at
FROM observations
ORDER BY created_at_epoch ASC
LIMIT 1
`);
const row = stmt.get() as { created_at: string } | undefined;
return row ? row.created_at : null;
}
@@ -1,7 +1,3 @@
/**
* Store observation function
* Extracted from SessionStore.ts for modular organization
*/
import { createHash } from 'crypto';
import { Database } from 'bun:sqlite';
@@ -9,12 +5,6 @@ import { logger } from '../../../utils/logger.js';
import { getProjectContext } from '../../../utils/project-name.js';
import type { ObservationInput, StoreObservationResult } from './types.js';
/**
* Compute a short content hash for deduplication.
* Uses (memory_session_id, title, narrative) as the semantic identity of an observation.
* Subagent fields (agent_type, agent_id) are intentionally excluded so the same work
* described once by a subagent and once by its parent deduplicates across contexts.
*/
export function computeObservationContentHash(
memorySessionId: string,
title: string | null,
@@ -26,15 +16,6 @@ export function computeObservationContentHash(
.slice(0, 16);
}
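The hash body is elided in the hunk above, but its shape is a short stable digest over the observation's semantic identity. A hedged sketch, assuming sha256 and a null-byte field separator (the real delimiter and any normalization are not shown in the diff):

```typescript
import { createHash } from 'node:crypto';

// 16-hex-char hash over (memory_session_id, title, narrative). Subagent
// fields are deliberately excluded, per the doc comment above. The join
// delimiter here is an assumption, not the codebase's actual choice.
function sketchContentHash(
  memorySessionId: string,
  title: string | null,
  narrative: string | null
): string {
  return createHash('sha256')
    .update([memorySessionId, title ?? '', narrative ?? ''].join('\u0000'))
    .digest('hex')
    .slice(0, 16);
}
```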
/**
* Store an observation (from SDK parsing).
*
* Assumes session already exists (created by hook). Deduplication is enforced
* by the database via UNIQUE(memory_session_id, content_hash) (Plan 01 Phase 4):
* INSERT ON CONFLICT DO NOTHING absorbs duplicates silently. The returned id
* is the existing row's id when a conflict occurred, otherwise the freshly
* inserted row.
*/
export function storeObservation(
db: Database,
memorySessionId: string,
@@ -44,11 +25,9 @@ export function storeObservation(
discoveryTokens: number = 0,
overrideTimestampEpoch?: number
): StoreObservationResult {
// Use override timestamp if provided (for processing backlog messages with original timestamps)
const timestampEpoch = overrideTimestampEpoch ?? Date.now();
const timestampIso = new Date(timestampEpoch).toISOString();
// Guard against empty project string (race condition where project isn't set yet)
const resolvedProject = project || getProjectContext(process.cwd()).primary;
const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
@@ -86,13 +65,11 @@ export function storeObservation(
return { id: inserted.id, createdAtEpoch: inserted.created_at_epoch };
}
// Conflict — fetch the existing row's id for the (memory_session_id, content_hash) pair.
const existing = db.prepare(
'SELECT id, created_at_epoch FROM observations WHERE memory_session_id = ? AND content_hash = ?'
).get(memorySessionId, contentHash) as { id: number; created_at_epoch: number } | null;
if (!existing) {
// Unreachable in practice (UNIQUE conflict implies existing row), but be explicit.
throw new Error(
`storeObservation: ON CONFLICT fired but no row exists for (memory_session_id=${memorySessionId}, content_hash=${contentHash})`
);
@@ -1,12 +1,5 @@
/**
* Type definitions for observation operations
* Extracted from SessionStore.ts for modular organization
*/
import { logger } from '../../../utils/logger.js';
/**
* Input type for storeObservation function
*/
export interface ObservationInput {
type: string;
title: string | null;
@@ -16,22 +9,15 @@ export interface ObservationInput {
concepts: string[];
files_read: string[];
files_modified: string[];
// Claude Code subagent identity — NULL for main-session rows.
agent_type?: string | null;
agent_id?: string | null;
}
/**
* Result from storing an observation
*/
export interface StoreObservationResult {
id: number;
createdAtEpoch: number;
}
/**
* Options for getObservationsByIds
*/
export interface GetObservationsByIdsOptions {
orderBy?: 'date_desc' | 'date_asc';
limit?: number;
@@ -41,17 +27,11 @@ export interface GetObservationsByIdsOptions {
files?: string | string[];
}
/**
* Result type for getFilesForSession
*/
export interface SessionFilesResult {
filesRead: string[];
filesModified: string[];
}
/**
* Simple observation row for getObservationsForSession
*/
export interface ObservationSessionRow {
title: string;
subtitle: string;
@@ -59,9 +39,6 @@ export interface ObservationSessionRow {
prompt_number: number | null;
}
/**
* Recent observation row type
*/
export interface RecentObservationRow {
type: string;
text: string;
@@ -69,9 +46,6 @@ export interface RecentObservationRow {
created_at: string;
}
/**
* Full recent observation row (for web UI)
*/
export interface AllRecentObservationRow {
id: number;
type: string;
@@ -1,16 +1,9 @@
/**
* User prompt retrieval operations
*/
import type { Database } from 'bun:sqlite';
import { logger } from '../../../utils/logger.js';
import type { UserPromptRecord, LatestPromptResult } from '../../../types/database.js';
import type { RecentUserPromptResult, PromptWithProject, GetPromptsByIdsOptions } from './types.js';
/**
* Get user prompt by session ID and prompt number
* @returns The prompt text, or null if not found
*/
export function getUserPrompt(
db: Database,
contentSessionId: string,
@@ -27,10 +20,6 @@ export function getUserPrompt(
return result?.prompt_text ?? null;
}
/**
* Get current prompt number by counting user_prompts for this session
* Replaces the prompt_counter column which is no longer maintained
*/
export function getPromptNumberFromUserPrompts(db: Database, contentSessionId: string): number {
const result = db.prepare(`
SELECT COUNT(*) as count FROM user_prompts WHERE content_session_id = ?
@@ -38,10 +27,6 @@ export function getPromptNumberFromUserPrompts(db: Database, contentSessionId: s
return result.count;
}
/**
* Get latest user prompt with session info for a Claude session
* Used for syncing prompts to Chroma during session initialization
*/
export function getLatestUserPrompt(
db: Database,
contentSessionId: string
@@ -61,9 +46,6 @@ export function getLatestUserPrompt(
return stmt.get(contentSessionId) as LatestPromptResult | undefined;
}
/**
* Get recent user prompts across all sessions (for web UI)
*/
export function getAllRecentUserPrompts(
db: Database,
limit: number = 100
@@ -86,9 +68,6 @@ export function getAllRecentUserPrompts(
return stmt.all(limit) as RecentUserPromptResult[];
}
/**
* Get a single user prompt by ID
*/
export function getPromptById(db: Database, id: number): PromptWithProject | null {
const stmt = db.prepare(`
SELECT
@@ -108,9 +87,6 @@ export function getPromptById(db: Database, id: number): PromptWithProject | nul
return (stmt.get(id) as PromptWithProject | undefined) || null;
}
/**
* Get multiple user prompts by IDs
*/
export function getPromptsByIds(db: Database, ids: number[]): PromptWithProject[] {
if (ids.length === 0) return [];
@@ -133,10 +109,6 @@ export function getPromptsByIds(db: Database, ids: number[]): PromptWithProject[
return stmt.all(...ids) as PromptWithProject[];
}
/**
* Get user prompts by IDs (for hybrid Chroma search)
* Returns prompts in specified temporal order with optional project filter
*/
export function getUserPromptsByIds(
db: Database,
ids: number[],
@@ -1,14 +1,7 @@
/**
* User prompt storage operations
*/
import type { Database } from 'bun:sqlite';
import { logger } from '../../../utils/logger.js';
/**
* Save a user prompt to the database
* @returns The inserted row ID
*/
export function saveUserPrompt(
db: Database,
contentSessionId: string,

Some files were not shown because too many files have changed in this diff.