UX redesign: installer + provider rename + /learn-codebase + welcome card + SessionStart hint (#2255)

* feat(ux): claude-mem UX improvements with installer enhancements

Squashed PR #2156 commits for clean rebase onto main:
- feat(installer): add provider selection, model prompt, worker auto-start
- refactor: rename *Agent provider classes to *Provider
- feat: add /learn-codebase skill and viewer welcome card
- feat(worker): inject welcome hint when project has zero observations
- fix(pr-2156): address greptile review comments
- fix(pr-2156): address coderabbit review comments
- fix(pr-2156): persist CLAUDE_MEM_PROVIDER for non-claude in non-TTY mode
- fix(pr-2156): file-backed settings reads in installer + env-first SKILL doc

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* build: rebuild plugin artifacts after rebase onto v12.4.7

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(skills): strip claude-mem internals from learn-codebase

The learn-codebase skill, install next-step copy, WelcomeCard, and
welcome-hint previously walked the primary agent through worker endpoints
and synthetic observation payloads. The PostToolUse hook already captures
every Read/Edit the agent makes — the agent should have no awareness that
the memory layer exists. Collapse the skill to one instruction ("read every
source file in full") and rephrase touchpoints to describe only what the
user observes (Claude reading files), not what happens behind the scenes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): preflight version mismatch + settings-aware port resolution

Two related fixes for build-and-sync's worker restart step:

1. Read CLAUDE_MEM_WORKER_PORT from ~/.claude-mem/settings.json the same
   way the worker does, instead of computing the default port from the
   uid alone. Previously, users with a custom port saw a misleading
   "Worker not running" message because the restart POST hit the wrong
   port and got ECONNREFUSED. (A port-resolution sketch follows this list.)

2. Add a preflight check that aborts the sync when the running worker's
   reported version does not match the version we are about to build.
   Claude Code's plugin loader pins the worker to a specific cache
   version per session, so syncing into a newer cache directory has no
   effect until the user runs `claude plugin update thedotmack/claude-mem`
   to bump the pin. The preflight surfaces this explicitly with the exact
   command to run; --force bypasses it for intentional cases.
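
For reference, a minimal sketch of the settings-first port lookup; the
settings path and the 37700 + uid % 100 fallback come from elsewhere in this
PR, while the function name and validation details are illustrative:

```ts
// Hypothetical sketch of settings-first worker-port resolution.
import { existsSync, readFileSync } from 'fs';
import { join } from 'path';
import { homedir, userInfo } from 'os';

function resolveWorkerPort(): number {
  const settingsPath = join(homedir(), '.claude-mem', 'settings.json');
  if (existsSync(settingsPath)) {
    try {
      const settings = JSON.parse(readFileSync(settingsPath, 'utf-8'));
      const fromSettings = Number(settings.CLAUDE_MEM_WORKER_PORT);
      if (Number.isInteger(fromSettings) && fromSettings > 0 && fromSettings <= 65535) {
        return fromSettings;
      }
    } catch {
      // Unreadable settings file: fall through to the uid-derived default.
    }
  }
  // Per-user default (uid is -1 on Windows; this sketch ignores that case).
  return 37700 + (userInfo().uid % 100);
}
```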

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(learn-codebase): note sed for partial reads of large files

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide

Removed prose comments from all tracked source. Preserved directives
(@ts-ignore, eslint-disable, biome-ignore, prettier-ignore, triple-slash
references, webpack magic, shebangs). Deleted two tests that asserted
on comment text rather than runtime behavior.

Net: 401 files, -14,587 / +389 lines, -10.4% bytes.

Verified: typecheck passes, build passes, test count unchanged from
baseline (22 pre-existing fails, all unrelated).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(installer): move runtime setup into npx, eliminate hook dead air

Smart-install ran 3 times during a fresh install — the worst run was silent,
fired by Claude Code's Setup hook after `claude plugin install`, producing
~30s of dead air that made the plugin look hung.

This change makes `npx claude-mem install` the single place heavy work
happens, with a visible spinner. Hooks become runtime-only.

- New `src/npx-cli/install/setup-runtime.ts` module: ensureBun, ensureUv,
  installPluginDependencies, read/writeInstallMarker, isInstallCurrent.
  Marker schema preserved exactly ({version, bun, uv, installedAt}) so
  ContextBuilder and BranchManager readers keep working; a marker sketch
  follows this list.
- `npx claude-mem install`: ungated copy/register/enable for every IDE,
  inserts a "Setting up runtime" task with an honest "first install can take
  ~30s" spinner. The claude-code shell-out to `claude plugin install` is
  removed — npx already populated everything Claude reads.
- New `npx claude-mem repair` command for post-`claude plugin update`
  recovery; it force-reinstalls the runtime.
- Setup hook now runs `plugin/scripts/version-check.js` (29ms wall) instead
  of smart-install. Mismatch prints "run: npx claude-mem repair" on stderr.
  Always exits 0 (non-blocking, per CLAUDE.md exit-code strategy).
- SessionStart loses the smart-install entry; 2 hooks remain (worker start,
  context fetch).
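
A sketch of the marker contract described above; the field names match the
schema, while the read/compare logic is an assumption:

```ts
// Install-marker sketch: the schema fields are from this commit, the
// semantics of isInstallCurrent are a plausible reading, not the real code.
import { existsSync, readFileSync } from 'fs';

interface InstallMarker {
  version: string;      // plugin version that performed runtime setup
  bun: string;          // bun version recorded at install time
  uv: string;           // uv version recorded at install time
  installedAt: string;  // ISO timestamp
}

function readInstallMarker(markerPath: string): InstallMarker | null {
  if (!existsSync(markerPath)) return null;
  try {
    return JSON.parse(readFileSync(markerPath, 'utf-8')) as InstallMarker;
  } catch {
    return null; // a corrupt marker is treated as "not installed"
  }
}

function isInstallCurrent(markerPath: string, pluginVersion: string): boolean {
  const marker = readInstallMarker(markerPath);
  return marker !== null && marker.version === pluginVersion;
}
```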

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(installer): delete smart-install sources, retarget tests

- Delete scripts/smart-install.js + plugin/scripts/smart-install.js (both
  are source files kept in sync manually; both must go).
- Delete tests/smart-install.test.ts (covered surface is gone).
- tests/plugin-scripts-line-endings: drop smart-install.js entry.
- tests/infrastructure/plugin-distribution: retarget two assertions at
  version-check.js (the new Setup hook script).
- New tests/setup-runtime.test.ts: 9 tests covering marker read/write,
  isInstallCurrent semantics. Marker schema invariant verified.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(installer): describe npx-driven setup + version-check Setup hook

Sweep public docs and architecture notes to reflect the new flow:
npx installer does Bun/uv setup with a visible spinner; Setup hook runs
sub-100ms version-check.js; users hit `npx claude-mem repair` after a
`claude plugin update`.

- docs/architecture-overview.md: hook lifecycle table + npx flow paragraph
- docs/public/configuration.mdx: tree + hook config example
- docs/public/development.mdx: build output line
- docs/public/hooks-architecture.mdx: full rewrite of pre-hook section,
  timing table, performance table
- docs/public/architecture/{overview,hooks,worker-service}.mdx: tree
  comments, JSON config example, Bun requirement section

docs/reports/* untouched (historical incident reports).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): mergeSettings writes via USER_SETTINGS_PATH

Greptile P1 (#2156): `settingsFilePath()` only resolved
`process.env.CLAUDE_MEM_DATA_DIR`, while `getSetting()` reads via
`USER_SETTINGS_PATH` which `resolveDataDir()` populates from BOTH the env
var AND a `CLAUDE_MEM_DATA_DIR` entry persisted in
`~/.claude-mem/settings.json`. Result: a user with the data dir saved in
settings.json but not exported in their shell would have provider/model
settings silently written to `~/.claude-mem/settings.json` while
`getSetting()` read from `/custom/path/settings.json` — read/write split.

Drop `settingsFilePath()` and the now-unused `homedir` import; reuse the
already-imported `USER_SETTINGS_PATH` constant.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(cli): parse --provider, --model, --no-auto-start install flags

Greptile P1 (#2156): InstallOptions has fields `provider`, `model`,
`noAutoStart`, but the install case in the npx-cli switch only parsed
`--ide`. The other three flags were silently dropped — `npx claude-mem
install --provider gemini` was a no-op.

Extract a `parseInstallOptions(argv)` helper, share it between the bare
`npx claude-mem` and `npx claude-mem install` paths, and validate
`--provider` against the allowed set. Update help text accordingly.
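
A minimal sketch of the shared parser, assuming the flags named above; the
concrete InstallOptions shape and the allowed provider set are illustrative:

```ts
// Hypothetical shape of the shared install-flag parser.
const ALLOWED_PROVIDERS: readonly string[] = ['claude', 'gemini', 'openrouter'];

interface InstallOptions {
  ide?: string;
  provider?: string;
  model?: string;
  noAutoStart?: boolean;
}

function parseInstallOptions(argv: string[]): InstallOptions {
  const options: InstallOptions = {};
  for (let i = 0; i < argv.length; i++) {
    switch (argv[i]) {
      case '--ide':
        options.ide = argv[++i];
        break;
      case '--provider': {
        const value = argv[++i];
        if (!value || !ALLOWED_PROVIDERS.includes(value)) {
          throw new Error(`Invalid --provider "${value}". Allowed: ${ALLOWED_PROVIDERS.join(', ')}`);
        }
        options.provider = value;
        break;
      }
      case '--model':
        options.model = argv[++i];
        break;
      case '--no-auto-start':
        options.noAutoStart = true;
        break;
    }
  }
  return options;
}
```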

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): pipe runtime-setup output, always show IDE multiselect

Two issues caught in a docker test of the installer:

1. The bun.sh installer, uv installer, and `bun install` were using
   stdio: 'inherit', dumping their stdout/stderr through clack's spinner
   region — visible as raw "downloading uv 0.11.8…" / "Checked 58
   installs across 38 packages…" text streaming under the spinner. Switch
   to stdio: 'pipe' and surface captured stderr only on failure, via a
   shared describeExecError() helper that includes stdout when stderr is
   empty (sketched after this list). Spinner stays clean on the happy path.

2. promptForIDESelection() silently picked claude-code when no IDEs were
   detected, never showing the user the multiselect. On a fresh machine
   with no IDEs present yet (e.g. our docker test container), the user
   never got to choose. Now: always show the full IDE list when
   interactive; mark detected ones with [detected] hints and pre-select
   them; show a warn line if zero are detected explaining they should pick
   what they plan to use. Non-TTY callers still get the silent
   claude-code default at the call site (unchanged).
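
A sketch of the error-description helper referenced in item 1, assuming the
exec result exposes captured stdout/stderr strings:

```ts
// Shared failure formatter: prefer stderr, fall back to stdout when stderr
// came back empty, and always name the command and exit status.
interface ExecResult {
  status: number | null;
  stdout: string;
  stderr: string;
}

function describeExecError(command: string, result: ExecResult): string {
  const stderr = result.stderr.trim();
  const stdout = result.stdout.trim();
  const detail = stderr.length > 0 ? stderr : stdout;
  const exit = result.status === null ? 'terminated by signal' : `exit code ${result.status}`;
  return detail.length > 0
    ? `${command} failed (${exit}):\n${detail}`
    : `${command} failed (${exit}) with no output`;
}
```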

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): skip marketplace work for claude-code-only, offer to install Claude Code

Two related UX fixes from a docker test:

**Delay between "Saved Claude model=…" and "Plugin files copied OK"**

After dropping the needsManualInstall gate, every install was unconditionally
running `copyPluginToMarketplace` (which copied the entire root node_modules
tree — thousands of files, dozens of seconds) and `runNpmInstallInMarketplace`
(npm install --production) even when only claude-code was selected. Neither
is needed for claude-code: that path uses the plugin cache dir + the
installed_plugins.json + enabledPlugins flag, all of which we already write.

- Drop `node_modules` from `copyPluginToMarketplace`'s allowed-entries list;
  the dependency-install task populates it on the destination side anyway.
- Re-introduce `needsMarketplace = selectedIDEs.some(id => id !== 'claude-code')`
  scoped *only* to `copyPluginToMarketplace`, `runNpmInstallInMarketplace`,
  and the pre-install `shutdownWorkerAndWait` (also pointless for claude-code-
  only flows since we're not overwriting the worker's running cache dir
  source). All other tasks (cache copy, register, enable, runtime setup) stay
  unconditional.

**Claude Code missing → silent install of an IDE that isn't there**

When the user picked claude-code on a machine without it (e.g. a fresh
container), the install completed but `claude` was unavailable and the only
hint was a generic warn line. Replace with an explicit pre-flight prompt:

  Claude Code is not installed. Claude-mem works best in Claude Code, but
  also works with the IDEs below.
  ? Install Claude Code now?
    ◆ Yes — install Claude Code (recommended)
    ◯ No — pick another IDE below
    ◯ Cancel installation

If the user picks "Yes", run `curl -fsSL https://claude.ai/install.sh | bash`
(or the PowerShell equivalent on Windows), then re-detect IDEs and proceed
with claude-code pre-selected. If the install fails or the user picks "No",
the multiselect still appears with claude-code visible (just unmarked
[detected]), so they can opt in or pick another IDE.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): detect Claude Code via `claude` CLI, not ~/.claude dir

The directory `~/.claude` can exist (e.g. mounted in Docker, or created
by tooling) without Claude Code actually being installed. Detect the
`claude` command in PATH instead so the installer correctly offers to
install Claude Code when missing.

* docs(learn-codebase): add reviewer note explaining the cost tradeoff

The skill intentionally reads every file in full to build a cognitive
cache that pays off across the rest of the project. Add a brief note
so reviewers (human or bot) understand the tradeoff before flagging
the unbounded read as a cost issue.

* fix: address Greptile P1 feedback on welcome hint and learn-codebase

- SearchRoutes: skip welcome hint when caller passes ?full=true so
  explicit full-context requests aren't intercepted by the hint.
- learn-codebase: replace `sed` instruction with the Read tool's
  offset/limit parameters, since Bash is gated in Claude Code by
  default.

* feat(install): ASCII-animated logo splash on interactive install

Plays a ~1s bloom animation of the claude-mem sunburst logomark when
the installer starts in an interactive terminal — geometrically rendered
via 12 ray curves around a center disc, in the brand orange. The
wordmark and tagline are typed on alongside the final frame.

Auto-skipped on non-TTY, in CI, when NO_COLOR or CLAUDE_MEM_NO_BANNER
is set, or when the terminal is too narrow.

Inspired by ghostty +boo.

* feat(banner): replace rotation frames with angular-sector bloom generator

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): replace rotation frames with angular-sector bloom generator

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): three-act choreography renderer with radial gradient and diff redraw

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): update preview script to support small/medium/hero tier selection

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(docker): add COLORTERM=truecolor to test-installer sandbox

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): auto-apply PATH for Claude Code with spinner UX

The Claude Code install.sh prints a Setup notes block telling users to
manually edit "your shell config file" to add ~/.local/bin to PATH —
which left fresh installs unable to launch claude from the command line.

After a successful install, detect ~/.local/bin/claude on disk and, if
the dir is missing from PATH, append the right export line to .zshrc /
.bash_profile / .bashrc / fish config (idempotent, marked with a
comment). Also updates process.env.PATH for the current install run.
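
A minimal sketch of the idempotent append, assuming a marker comment is used
to detect earlier runs; the rc-file selection logic is simplified away:

```ts
// Append the export once per rc file, guarded by a marker comment, and patch
// PATH for the current process so `claude` resolves during this install run.
import { appendFileSync, existsSync, readFileSync } from 'fs';
import { join } from 'path';
import { homedir } from 'os';

const MARKER = '# added by claude-mem installer';

function ensureLocalBinOnPath(rcFile: string): void {
  const localBin = join(homedir(), '.local', 'bin');
  if ((process.env.PATH ?? '').split(':').includes(localBin)) return;

  const exportLine = `export PATH="$HOME/.local/bin:$PATH" ${MARKER}\n`;
  const existing = existsSync(rcFile) ? readFileSync(rcFile, 'utf-8') : '';
  if (!existing.includes(MARKER)) {
    appendFileSync(rcFile, `\n${exportLine}`);
  }
  process.env.PATH = `${localBin}:${process.env.PATH ?? ''}`;
}
```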

Wraps the curl|bash install in a clack spinner (interactive only) so the
~4 minute native-build download doesn't look frozen — output is captured
silently and dumped on failure for debuggability. Non-interactive mode
keeps inherited stdio for CI logs.

Verified end-to-end in the test-installer docker sandbox: spinner
animates, .bashrc gets the export, fresh login shell resolves claude.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): video-frame ASCII renderer with three-act choreography

Generator switched from a single Jimp-rendered logo to pre-extracted
video frames concatenated with \x01 separators and gzip-deflated, ported
from ghostty's boo wire format. Renderer rewritten around three acts
(ignite → stagger bloom → text reveal + breathe) with adaptive sizing,
radial gradient, and diff-based redraw.
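
A hedged sketch of the decode path, assuming the payload is base64 plus zlib
compression with frames joined by \x01; the actual encoding in
banner-frames.ts may differ:

```ts
// Decompress the committed payload and split it back into per-frame strings.
import { gunzipSync, inflateSync } from 'zlib';

function decodeFrames(payloadBase64: string): string[] {
  const compressed = Buffer.from(payloadBase64, 'base64');
  let raw: Buffer;
  try {
    raw = gunzipSync(compressed);
  } catch {
    raw = inflateSync(compressed); // tolerate either gzip or raw deflate
  }
  return raw.toString('utf-8').split('\x01');
}
```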

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(onboarding): unify install / SessionStart / viewer around one first-success moment

Three surfaces now point at the same north-star moment — open the viewer, do
anything in Claude Code, watch an observation appear within seconds — with the
same verbatim timing and privacy lines, and a single canonical "how it works"
explainer instead of three diverging copies.

- Canonical explainer at src/services/worker/onboarding-explainer.md served via
  GET /api/onboarding/explainer; mirrored into plugin/skills/how-it-works/SKILL.md
- SessionStart welcome hint rewritten as third-person status (no imperatives
  Claude tries to execute), pinned with a default-value regression test
- Post-install Next Steps reframed as "two paths": passive default + optional
  /learn-codebase front-load; drops /mem-search and /knowledge-agent from this
  surface; adds verbatim timing + privacy lines and /how-it-works link
- /api/stats response gains firstObservationAt for the viewer stat row
- Viewer WelcomeCard branches on observationCount === 0: empty state shows live
  worker-connection dot + "waiting for activity"; has-data state shows
  observations · projects · since [date] and two example prompts. v2 dismiss key
- jimp added to package.json to fix pre-existing banner-frame build break

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(banner): play unconditionally; only honor CLAUDE_MEM_NO_BANNER

The 128-col / TTY / CI / NO_COLOR gates silently swallowed the banner in
narrower terminals, CI logs, and any non-TTY pipe — including Docker runs
where -it should preserve the experience but column width was the wrong
gate. Remove the implicit gates; keep the explicit opt-out only.

If a frame wraps in a narrow terminal, that's better than the banner
not playing at all.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* revert(banner): restore 15:33 gating logic per user request

Reverts eb6fc157. Restores isBannerEnabled to the state at commit
8e448015 (2026-04-30 15:33): TTY check, !CI, !NO_COLOR, !CLAUDE_MEM_NO_BANNER,
and cols >= BANNER.width.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): wrap remaining slow steps with spinners

Each IDE installer (Cursor, Gemini CLI, OpenCode, Windsurf, OpenClaw,
Codex CLI, MCP integrations) now runs inside a clack task spinner with
per-step progress messages instead of silent dynamic-import + cpSync.
Pre-overwrite worker shutdown (up to 10s) and the post-install health
probe (up to 3s) also get spinners.

Internal console.log/error/warn from each IDE installer is buffered
during the spinner; if the install fails, captured output is replayed
afterward via log.warn so users can see what broke.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(review): observation count + IDE pre-selection regressions

WelcomeCard's "no observations yet" empty state was triggered when a
project filter narrowed the feed to zero rows, even with thousands of
observations elsewhere. Source the count from global stats.database
to match firstObservationAt's scope.

Restore initialValues: [] in the IDE multiselect — pre-selecting every
detected IDE was the exact regression #2106 was filed for.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): trichotomy worker state + cache fallback for script path

ensureWorkerStarted now returns 'ready' | 'warming' | 'dead' instead of
boolean. The spawned-but-still-warming case (common in Docker cold
starts and slow first-time inits) was being misreported as 'did not
start', which contradicted the next-steps panel saying 'still starting
up'. Install task message and Next Steps headline now agree on the
actual state.
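
A sketch of the three-state probe, assuming the /api/health endpoint
mentioned later in this PR; the spawn helper and timings are illustrative:

```ts
// 'ready' when the health probe answers, 'warming' when the process was
// spawned but never answered inside the window, 'dead' when spawn failed.
type WorkerState = 'ready' | 'warming' | 'dead';

declare function spawnWorkerIfNeeded(): Promise<boolean>; // assumed helper

async function ensureWorkerStarted(port: number): Promise<WorkerState> {
  const spawned = await spawnWorkerIfNeeded();
  if (!spawned) return 'dead';

  const deadline = Date.now() + 3000;
  while (Date.now() < deadline) {
    try {
      const res = await fetch(`http://localhost:${port}/api/health`);
      if (res.ok) return 'ready';
    } catch {
      // not listening yet; keep polling until the deadline
    }
    await new Promise((resolve) => setTimeout(resolve, 250));
  }
  return 'warming';
}
```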

Also fixes the actual root cause of 'Worker did not start' on
claude-code-only installs: the worker script path was hardcoded to the
marketplace dir, which is left empty when no non-claude-code IDE is
selected. Now falls back to pluginCacheDirectory(version) when the
marketplace copy isn't present.

Verified end-to-end in docker/claude-mem with --ide claude-code,
--ide cursor, and a fresh container — install task and headline
agree on 'Worker ready at http://localhost:<port>' in all cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: align CLAUDE.md and public docs with current code

Sweep across CLAUDE.md and 10 high-traffic docs/public/ MDX files to
remove point-in-time references and align with the actual current
shape of the codebase. Highlights:

- Hardcoded port 37777 → per-user formula (37700 + uid % 100) on the
  front-door pages (introduction, installation, configuration,
  architecture/overview, architecture/worker-service, troubleshooting,
  hooks-architecture, platform-integration).
- Default model 'sonnet' → 'claude-haiku-4-5-20251001' (matches
  SettingsDefaultsManager).
- Node 18 → 20 (matches package.json engines).
- Lifecycle hook count corrected (5 events).
- Removed the nonexistent 'Smart Install' component and pre-built
  directory tree referencing files that no longer exist
  (context-hook.ts, save-hook.ts, cleanup-hook.ts, etc.); replaced
  with the real worker dispatcher shape.
- Removed CLAUDE.md '#2101' issue tag (kept the design rationale).
- Replaced obsolete hooks.json example with a description of the real
  bun-runner.js / worker-service.cjs hook event shape.

Lower-traffic doc pages still hardcode 37777 — left for a separate
global pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): land strip-comments around real parsers (postcss, remark, parse5)

Each language gets a real parser to locate comments, then we splice ranges
out of the original source. The library never re-serializes the file —
re-serializing is how remark-stringify produced 243 reformat-noise diffs in
the first attempt, versus the 21 real strip targets here.

  JS/TS/JSX  -> ts.createSourceFile + getLeadingCommentRanges
  CSS/SCSS   -> postcss.parse + walkComments + node.source offsets
  MD/MDX     -> remark-parse (+ remark-mdx) + AST html / mdx-expression nodes
  HTML       -> parse5 with sourceCodeLocationInfo
  shell/py   -> kept hand-rolled hash stripper (no library worth the dep)

Preserves: shebangs, @ts-* directives, eslint-disable, biome-ignore,
prettier-ignore, triple-slash refs, webpack magic, /*! license keep,
@strip-comments-keep file marker. JS/TS handler runs a parse-roundtrip
check and refuses to write if syntax errors increased (catches the
worker-utils.ts breakage class from the 2026-04-29 attempt).
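
A minimal sketch of the JS/TS side, using the compiler API calls named above
to collect comment ranges and splice them out without re-printing the AST;
directive preservation and the roundtrip check are omitted here:

```ts
// Collect every comment range off the original text, dedupe, then splice.
import * as ts from 'typescript';

function findCommentRanges(sourceText: string): Array<{ pos: number; end: number }> {
  const sourceFile = ts.createSourceFile('input.ts', sourceText, ts.ScriptTarget.Latest, true);
  const seen = new Set<string>();
  const ranges: Array<{ pos: number; end: number }> = [];

  const collect = (comments: ts.CommentRange[] | undefined) => {
    for (const c of comments ?? []) {
      const key = `${c.pos}:${c.end}`;
      if (!seen.has(key)) {
        seen.add(key);
        ranges.push({ pos: c.pos, end: c.end });
      }
    }
  };

  const visit = (node: ts.Node) => {
    collect(ts.getLeadingCommentRanges(sourceText, node.pos));
    collect(ts.getTrailingCommentRanges(sourceText, node.end));
    ts.forEachChild(node, visit);
  };
  visit(sourceFile);

  return ranges.sort((a, b) => a.pos - b.pos);
}

function stripRanges(sourceText: string, ranges: Array<{ pos: number; end: number }>): string {
  let out = '';
  let cursor = 0;
  for (const { pos, end } of ranges) {
    out += sourceText.slice(cursor, pos);
    cursor = end;
  }
  return out + sourceText.slice(cursor);
}
```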

npm scripts:
  strip-comments         (apply)
  strip-comments:check   (CI-style, exits non-zero if changes needed)
  strip-comments:dry-run (list, no writes)

Verified --check on this repo: 21 changes, -4.0% bytes, no parse-error
regressions, no reformat-suspect false positives.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide via parser-backed tool

21 files changed, -17,550 bytes (-4.0%) of narrative comments removed
across .ts / .tsx / .js / .mjs and the .gitignore. JS/TS comments stripped
via ts.createSourceFile + getLeadingCommentRanges — same canonical lexer,
same behavior as the 2026-04-29 strip, no reformat noise.

Preexisting baseline (unchanged):
  typecheck: 16 errors at HEAD, 16 errors after strip (line numbers shift,
             no new error classes — verified via diff of sorted error lists)
  build:     fails at HEAD with CrushHooksInstaller.js unresolved import
             (preexisting, unrelated to this strip)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): drop Crush integration references after extract

The Crush integration was extracted to its own branch on May 1, but the
import at install.ts:280 (and the case block + ide-detection entry +
McpIntegrations config + npx-cli help text) still referenced the now-
removed CrushHooksInstaller.js, breaking the build.

Removes:
- case 'crush' block in install.ts
- crush entry in ide-detection.ts
- CRUSH_CONFIG and registration in McpIntegrations.ts
- 'crush' from the IDE Identifiers help line in index.ts

Rebuilds worker-service.cjs to match.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(banner): mark generated banner-frames.ts with @strip-comments-keep

Without this, every build/strip cycle ping-pongs five lines of doc
comments in and out of the auto-generated output. The keep-marker tells
strip-comments.ts to skip the file entirely.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): drop banner-frame regen from build script

generate-banner-frames.mjs requires PNG frames in /tmp/cmem-banner-frames
that only exist after the maintainer runs ffmpeg locally on the source
video. CI has neither the video nor the frames, so the build broke on
Windows. The output (src/npx-cli/banner-frames.ts) is committed, so the
regen is a one-shot dev step — not a build step. Run the script directly
when the video changes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): unstick the spinner — kill claim-self-lock, wake on fail, auto-broadcast

Three surgical changes that cure the stuck-spinner bug at the source.

Phase 1.1 (L9): claimNextMessage no longer self-excludes its own worker_pid.
A single UPDATE-RETURNING grabs the oldest pending row by id. Removes the
LiveWorkerPidsProvider plumbing that was never injected — Supervisor enforces
single-worker via PID file, so the multi-worker SQL was defending against a
configuration the project does not support.
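
A sketch of the single-statement claim via bun:sqlite; column names beyond
id/status are assumptions:

```ts
// One UPDATE ... RETURNING grabs the oldest pending row atomically. No
// worker_pid self-exclusion: the Supervisor already enforces a single
// worker via its PID file.
import { Database } from 'bun:sqlite';

interface PendingRow {
  id: number;
  session_id: string;
  payload: string;
  status: string;
}

function claimNextMessage(db: Database, sessionId: string): PendingRow | null {
  const row = db
    .query(
      `UPDATE pending_messages
          SET status = 'processing'
        WHERE id = (
          SELECT id FROM pending_messages
           WHERE session_id = ? AND status = 'pending'
           ORDER BY id
           LIMIT 1
        )
      RETURNING id, session_id, payload, status`
    )
    .get(sessionId) as PendingRow | null;
  return row ?? null;
}
```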

Phase 1.2 (L19): SessionManager.markMessageFailed wraps PendingMessageStore.markFailed
and emits 'message' on the per-session EventEmitter. The iterator's waitForMessage
now wakes immediately on re-pend instead of parking for 3 minutes. ResponseProcessor
and SessionRoutes routed through the new wrapper.

Phase 1.3 (L24): PendingMessageStore takes an optional onMutate callback fired
from every mutator (enqueue, claimNextMessage, confirmProcessed, markFailed,
transitionMessagesTo, clearFailedOlderThan). SessionManager wires it; WorkerService
passes broadcastProcessingStatus. Ten manual broadcast calls deleted across
SessionCleanupHelper, SessionEventBroadcaster, SessionRoutes, DataRoutes, and
worker-service. Caller discipline becomes structurally impossible to forget.
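
A sketch of the onMutate wiring; method bodies and the SQL are placeholders:

```ts
// Every mutator ends by calling notifyMutation(), so the processing-status
// broadcast can never be forgotten at a call site again.
import { Database } from 'bun:sqlite';

class PendingMessageStore {
  constructor(private db: Database, private onMutate?: () => void) {}

  private notifyMutation(): void {
    this.onMutate?.();
  }

  enqueue(sessionId: string, payload: string): void {
    this.db
      .query(`INSERT INTO pending_messages (session_id, payload, status) VALUES (?, ?, 'pending')`)
      .run(sessionId, payload);
    this.notifyMutation();
  }

  // claimNextMessage, confirmProcessed, markFailed, transitionMessagesTo and
  // clearFailedOlderThan end with the same notifyMutation() call.
}

// Assumed wiring, per the description above:
// new PendingMessageStore(db, () => workerService.broadcastProcessingStatus());
```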

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): delete dead code — legacy routes, processPendingQueues, decorative guards

Pure deletions. Phase 2 of kill-the-asshole-gates.

- Legacy /sessions/:sessionDbId/* routes (handleSessionInit, handleObservations,
  handleSummarize, handleSessionStatus, handleSessionDelete, handleSessionComplete)
  bypassed all five ingest gates and were a parallel write path. Folded the
  initializeSession + broadcastNewPrompt + syncUserPrompt + ensureGeneratorRunning
  + broadcastSessionStarted work into the canonical /api/sessions/init handler so
  the hook makes one round trip instead of two.
- processPendingQueues (~104 lines, zero callers) — replaced in Phase 6 by a
  one-statement startup sweep.
- spawnInProgress Map and crashRecoveryScheduled Set — decorative dedupe over
  generatorPromise and stillExists checks that already provide the real safety.
- STALE_GENERATOR_THRESHOLD_MS — pre-empted live generators and raced with the
  finally block; the 3min idle timeout already kills zombies.
- MAX_SESSION_WALL_CLOCK_MS — ran a SELECT on every observation to enforce 24h.
  Runaway-spend protection lives in the API key, not in claude-mem.
- Missing-id 400 in shared.ts ingestObservation — Zod already enforces min(1)
  on contentSessionId and toolName at the route schema.
- SessionCompletionHandler import + completionHandler field on SessionRoutes
  (orphaned after handler deletions).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): SQL-backed getTotalQueueDepth — single source of truth

Was: iterate this.sessions.values() and sum getPendingCount per session.
Now: SELECT COUNT(*) FROM pending_messages WHERE status IN ('pending','processing').

The in-memory sessions Map drifted from the DB rows whenever a generator exited
without confirm/fail, leading to false-positive isProcessing in the UI. Phase 1.3's
auto-broadcast fires on every mutation, but it broadcast a stale Map count.
Reading from the DB makes the UI's spinner state match what the queue actually holds.
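
The same query as a bun:sqlite helper, for reference:

```ts
// Depth comes from the table the iterator actually claims from, not from
// the in-memory sessions Map.
import { Database } from 'bun:sqlite';

function getTotalQueueDepth(db: Database): number {
  const row = db
    .query(
      `SELECT COUNT(*) AS depth
         FROM pending_messages
        WHERE status IN ('pending', 'processing')`
    )
    .get() as { depth: number } | null;
  return row?.depth ?? 0;
}
```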

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): typed abortReason replaces wasAborted boolean

Was: a boolean wasAborted that lumped every abort together. The finally block
branched on !wasAborted, so any abort skipped restart — including idle aborts
with pending work, which is exactly the case where we DO want to restart.

Now: ActiveSession.abortReason is a typed enum 'idle' | 'shutdown' | 'overflow'
| 'restart-guard'. The finally block consumes the reason and only skips restart
for 'shutdown' and 'restart-guard'. Idle and overflow aborts fall through, so
if pending work exists they trigger restart correctly.

Dropped 'stale' and 'wall-clock' from the union — Phase 2 deleted those paths.
Natural-completion abort (post-success) intentionally has no reason; it's not
gating restart logic.
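
A sketch of the typed reason and the restart decision it drives; ActiveSession
is reduced to the one field this change touches:

```ts
type AbortReason = 'idle' | 'shutdown' | 'overflow' | 'restart-guard';

interface ActiveSession {
  abortReason?: AbortReason; // unset on natural completion
}

function shouldAttemptRestart(session: ActiveSession, pendingCount: number): boolean {
  const reason = session.abortReason;
  session.abortReason = undefined; // consume the reason, as the finally block does

  // Only deliberate stops skip the restart; idle/overflow aborts (and natural
  // completion) fall through so pending work can respawn the generator.
  if (reason === 'shutdown' || reason === 'restart-guard') return false;
  return pendingCount > 0;
}
```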

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): unify the two generator-exit finally blocks

Was: worker-service.ts:startSessionProcessor and SessionRoutes:ensureGeneratorRunning
each had their own ~70-line finally block with divergent restart-guard handling.
The worker-service path called terminateSession on RestartGuard trip and orphaned
pending rows (the L16 bug); the SessionRoutes path drained them. Two places to
update when rules changed.

Now: handleGeneratorExit in src/services/worker/session/GeneratorExitHandler.ts
owns the contract:
  1. Always kill the SDK subprocess if alive.
  2. Always drain processingMessageIds via sessionManager.markMessageFailed
     (which wakes the iterator — Phase 1.2).
  3. shutdown / restart-guard reasons: drain pending rows via
     transitionMessagesTo('failed'), finalize, remove from Map. Fixes L16.
  4. pendingCount=0: finalize normally and remove from Map.
  5. pendingCount>0: backoff respawn via per-session respawnTimer (no global Set;
     Phase 2.4 deleted that). RestartGuard trip drains to 'abandoned'.

Both finally blocks are now ~10-line wrappers that translate local state into the
canonical abortReason and delegate. Restored completionHandler injection into
SessionRoutes (was dropped in Phase 2 cleanup; needed by the unified helper for
finalizeSession).

Behavior change: SessionRoutes' previous "keep idle session in memory" was
deliberately replaced by the plan's "remove from Map on natural completion" —
next observation reinitializes via getMessageIterator → initializeSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(worker): startup orphan sweep — reset 'processing' rows at boot

When the worker dies (crash, kill, restart), any pending_messages rows it left
in 'processing' state are by definition orphans (the only worker is dead).
Single SQL UPDATE at boot resets them to 'pending' so the iterator can claim
them again. Replaces the deleted processPendingQueues function (Phase 2.2).
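
The sweep itself is a single statement; a sketch, assuming the status column
used elsewhere in this PR:

```ts
// Boot-time orphan sweep: runs once after the DB opens and before HTTP
// requests are released, never on a timer.
import { Database } from 'bun:sqlite';

function sweepOrphanedProcessing(db: Database): void {
  // Any row still 'processing' at boot belonged to a worker that is now dead
  // (single-worker model), so it is safe to hand it back to the queue.
  db.run(`UPDATE pending_messages SET status = 'pending' WHERE status = 'processing'`);
}
```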

Runs in initializeBackground after dbManager.initialize() and before the
initializationComplete middleware releases blocked HTTP requests, so no
in-flight request can race the sweep. NOT on a periodic timer — after boot,
every 'processing' row has a live consumer and a periodic sweep would race.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): simplify enqueue catch, replace memorySessionId throw with re-pend

7.1: queueObservation's catch was logging two ERROR-level messages and rethrowing.
The rethrow is correct (FK violations / disk full / schema drift should crash
loudly), but the verbose ERROR logging pretended the error was recoverable.
Reduced to one INFO line + rethrow.

7.2: ResponseProcessor's memorySessionId guard was throwing if the SDK hadn't
included session_id on the first user-yield, terminal-failing the entire batch.
Now warns and re-pends in-flight messages via sessionManager.markMessageFailed
(which wakes the iterator — Phase 1.2). The next iteration tries again with
memorySessionId hopefully captured.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): mirror builds to installed-version cache for hot reload

When package.json bumps past Claude Code's installed pin, sync-marketplace
wrote new code to cache/<buildVersion>/ but the worker loaded from
cache/<installedVersion>/, so worker:restart reloaded the same old code.

Replace the exit-on-mismatch preflight with a mirror step: when versions
differ, also rsync plugin/ into cache/<installedVersion>/ so worker:restart
hot-reloads new code without a Claude Code session restart. The
build-version cache still gets written for the eventual
`claude plugin update`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: delete dead barrel files and orphan utilities

- src/sdk/index.ts (re-exports parser+prompts; nothing imported the barrel)
- src/services/Context.ts (re-exports ./context/index.js; no importers)
- src/services/integrations/index.ts (no importers)
- src/services/worker/Search.ts (3-line barrel of ./search/index.js)
- src/services/infrastructure/index.ts: drop CleanupV12_4_3 re-export
- src/utils/error-messages.ts (getWorkerRestartInstructions never imported)
- src/types/transcript.ts (170 LoC of types, zero importers)
- src/npx-cli/_preview.ts (banner dev preview, no script wires it)

Build + tests still pass; observations still flowing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(parser): drop unused detectLanguage

Only the user-grammar-aware variant detectLanguageWithUserGrammars()
is actually called.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(types): drop unused SdkSessionRecord + ObservationWithContext

Both interfaces in src/types/database.ts had zero importers anywhere
in src or tests.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): drop unused getDetectedIDEs + claudeMemDataDirectory

getDetectedIDEs has no callers — install.ts uses detectInstalledIDEs
directly. claudeMemDataDirectory has no callers either.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ProcessManager): drop dead orphan-reaper + signal-handler helpers

Each had zero callers in src/ or tests/:
  - cleanupOrphanedProcesses + enumerateOrphanedProcesses
  - ORPHAN_PROCESS_PATTERNS + ORPHAN_MAX_AGE_MINUTES
  - forceKillProcess
  - waitForProcessesExit
  - createSignalHandler
  - resetWorkerRuntimePathCache

The orphan reaper was retired in PATHFINDER Plan 02 ("OS process groups
replace hand-rolled reapers", commit 94d592f2) — these were the leftover
pieces. shutdown.ts uses the supervisor's own kill-pgid path instead.

parseElapsedTime kept (covered by tests/infrastructure/process-manager.test.ts).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete 11 unreferenced DX/forensic scripts

None of these are referenced by package.json npm scripts or docs/.
All last touched on Apr 29 only as part of the comment-stripping
pass — the feature code itself is older and orphaned:

  analyze-transformations-smart.js
  debug-transcript-structure.ts
  dump-transcript-readable.ts
  endless-mode-token-calculator.js
  extract-prompts-to-yaml.cjs
  extract-rich-context-examples.ts
  find-silent-failures.sh
  fix-all-timestamps.ts
  format-transcript-context.ts
  test-transcript-parser.ts
  transcript-to-markdown.ts

These are standalone tools — runtime behavior unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete unused extraction/ and types/ subdirs

- scripts/extraction/{extract-all-xml.py, filter-actual-xml.py, README.md}
  point at ~/Scripts/claude-mem/ — the user's pre-relocation path that no
  longer exists. Zero references in package.json, src/, or tests/.
- scripts/types/export.ts duplicates ObservationRecord etc. and has no
  importers (CodexCliInstaller imports transcripts/types, not this).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(BranchManager): drop dead getInstalledPluginPath

OpenCodeInstaller has its own (used) getInstalledPluginPath; the
BranchManager copy never had any external callers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ChromaSyncState): unexport DocKind (used internally only)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(gemini): drop stale earliestPendingTimestamp / processingMessageIds

Both fields were removed from ActiveSession in earlier queue-engine
cleanup. Tests had been silently keeping them because the mock sessions
use 'as any' to bypass strict typing, so the dead fields rode along
without complaint.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 3 unused module-level constants

- src/npx-cli/banner.ts: CURSOR_HOME, CLEAR_DOWN (banner uses
  CLEAR_SCREEN which combines clear-down + cursor-home into a single
  CSI sequence; the standalone constants were leftovers).
- src/services/worker/BranchManager.ts: DEFAULT_SHELL_TIMEOUT_MS
  (BranchManager only uses GIT_COMMAND_TIMEOUT_MS / NPM_INSTALL_TIMEOUT_MS).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(opencode-plugin): drop dead workerPost helper

Only the fire-and-forget variant (workerPostFireAndForget) is actually
called. workerPost was the await-result version with no remaining caller.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 8 truly-unused interface fields

Verified each by grepping for `.field`, `"field"`, `'field'`, and
`field:` patterns across src/ + tests/ + plugin/scripts. Where the
only remaining usage was the assignment site, removed the assignments too.

- GitHubStarsData: watchers_count, forks_count (only stargazers_count read)
- TableColumnInfo: dflt_value (PRAGMA returns it but no caller reads it)
- IndexInfo: seq (PRAGMA returns it but no caller reads it)
- ObservationRecord: source_files (legacy field, no readers)
- HookResult.hookSpecificOutput: permissionDecisionReason
- WatchTarget: rescanIntervalMs (set in config, never read)
- ShutdownResult: confirmedStopped (write-only — assigned but no
  reader; updated all 3 return sites to drop it)
- ModePrompts: language_instruction (multilingual support never wired)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): reuse InstallOptions type instead of inline duplicate

parseInstallOptions had its return type written out inline as an
anonymous duplicate of InstallOptions. Use the canonical type
(import type — zero bundle cost).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(integrations): drop unused Platform type alias

The detectPlatform() function that returned this type was deleted earlier
in the branch (along with getScriptExtension that consumed it). The type
itself outlived its consumer; only string literals "Platform:" survive in
console.log diagnostics, which don't reference the alias.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): broadcast processing_status when summarize is queued

broadcastSummarizeQueued was an empty no-op even though
handleSummarizeByClaudeId calls it after enqueueing. The PendingMessageStore
onMutate callback already fires broadcastProcessingStatus on enqueue, but
calling it explicitly from broadcastSummarizeQueued ensures the spinner
turns on the moment a summary is requested, even if the onMutate chain
has a timing race.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): keep spinner on while summary generates

ClaudeProvider's SDK can pull multiple synthetic prompts (e.g.
observation + summarize) before producing responses. Each pull pushed
an ID to session.processingMessageIds. When the SDK's first
observation response came back, ResponseProcessor.confirmProcessed
deleted ALL pending message rows — including the still-in-flight
summary — so getTotalQueueDepth dropped to 0 and the spinner turned
off, even though the summary took another ~22s to actually generate.

Tag each in-flight message with its type ({id, type}) so the response
processor can pop only the FIFO message of the matching type
(observation vs summarize). The summary row stays in 'processing'
until its own response arrives, keeping the spinner lit through the
entire summary window.

Also updates Gemini/OpenRouter providers and GeneratorExitHandler for
the new shape.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): clear summary from queue on any SDK response

Switch ResponseProcessor from type-aware FIFO matching to strict FIFO
popping (each SDK response → 1 in-flight message consumed). This way
the summary always clears when the SDK responds, even when the
response is unparseable or the summary doesn't actually generate
content — preventing stuck spinner / queue-depth-stuck-at-1.

Spinner behavior is preserved: messages enqueued after the summary
keep the queue depth elevated, and only when the SDK has responded
to every prompt does the queue drain to zero.

Also: when the consumed message is a 'summarize' and parsing fails,
treat it as best-effort and confirmProcessed (no retry) — summaries
that can't be parsed shouldn't keep retrying.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): redesign welcome card and remove source filters

The first-start welcome card now explains the three feed card types
(observation/summary/prompt) with color-coded badges, points users at
the gear icon for settings and the project dropdown for filtering, and
plugs /mem-search for recall — replacing the old two-line "ask:" prompts.

Source filter tabs (Claude/Codex/etc.) are removed from the header.
Filtering by AI provider was nonsense from a user POV; the project
dropdown is the only header filter now. Source tracking is also
stripped from useSSE, usePagination, App state, and CSS.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): keep welcome card in feed column, swap rows for 3 squares

Two visible problems in the previous design: the card stretched
edge-to-edge while feed cards sit in a centered 650px column, and
the body was a stack of long horizontal rows that scanned line-by-line.

Both fixed: Feed now accepts a pinnedTop slot so the welcome card
renders inside the same .feed-content column as observation cards.
Body is now a 3-column grid of square feature blocks — Live feed,
Tune it, Recall it — each with a custom inline SVG illustration
(stacked cards with color-coded stripes, gear+sliders, magnifier
over cards). Old text-row sections (welcome-card-types,
welcome-card-tips, welcome-card-section, welcome-card-tip-icon)
are removed. Squares stack to one column under 600px.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): convert welcome card to glassy modal with stylized logo

Card now opens as a centered modal with a frosted/glass backdrop
(blur + saturate) so it doubles as a proper help dialog when reopened
from the header's question-mark button. Removed the observation count,
project count, and "since" date — those don't make sense for a
first-launch surface and felt out of place in a help context.

Header art swapped from the small webp logomark to the new
high-resolution sun/sunburst PNG (claude-mem-logo-stylized.png),
shipped as a checked-in asset in src/ui and plugin/ui.

Bigger throughout: 28px h2, 16px tagline, 88px illustrations,
26px feature padding, 1:1 aspect-ratio squares. Backdrop click and
Esc both close. Mobile collapses the grid to one column and drops
the aspect-ratio constraint.

Reverted the unused pinnedTop slot on Feed.tsx since the welcome
card is now a true overlay rather than an in-feed pinned card.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): make welcome modal actually glassy

Previous version had a 55%-opacity black backdrop that almost fully
blocked the underlying UI — the "glass" was just a dark plate.

Now the backdrop is fully transparent (no darkening at all), the
panel itself drops to 55% bg-card opacity with its existing
backdrop-filter blur(28px) saturate(170%), and the feature squares
drop to 35% bg-tertiary so they layer as glass-on-glass over the
already-blurred panel. The header and feed below now read clearly
through the modal's frosted blur.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): bulletproof square features via padding-bottom + clamp() fluid type

Squares were rendering taller than wide because aspect-ratio is treated
as a minimum — content can push the box past 1:1. Switched to the
classic padding-bottom: 100% trick: percentage padding resolves against
the parent's width, so the box is ALWAYS W × W regardless of content.
Inner content sits in an absolutely-positioned flex column that can't
push the shell taller.

Whole modal is now desktop-first and fluid via clamp() — no media-query
stair-steps for type, padding, gaps, border-radius, illustration size,
or modal width. Single mobile breakpoint at <600px collapses the grid
to one column and reverts the padding-bottom trick so each feature can
grow to natural content height.

Tightened the three feature descriptions so they fit comfortably inside
the square at the desktop size.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* style(viewer): 15% black overlay + heavier modal shadow for elevation

Backdrop goes from transparent to rgba(0,0,0,0.15) — just enough
darkening to push the modal visually forward without burying the
underlying UI. Modal shadow stacked: 40px/120px ambient + 16px/48px
contact, both deeper, plus the existing inset 1px highlight.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): clear pending_messages queue on build-and-sync

Rewrites scripts/clear-failed-queue.ts to talk directly to SQLite via
bun:sqlite — the previous HTTP endpoints (/api/pending-queue/*) were
removed during the queue engine rewrite, so the script was orphaned.
Wires `npm run queue:clear` into `build-and-sync` so each rebuild
starts with a clean queue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): collapse parser to binary valid/invalid + clearPendingForSession model

- Parser: { valid: true, observations, summary } | { valid: false } — drops kind/skipped enum dispatch (sketched below)
- ResponseProcessor: two branches only (parseable → store + clearPendingForSession; else → no-op)
- Drop processingMessageIds + per-message claim/confirm/markFailed lifecycle across 3 providers
- PendingMessageStore: 226 → 140 lines; remove markFailed/transitionMessagesTo/confirmProcessed/clearFailedOlderThan/getAllPending (peekPendingTypes kept)
- Schema migration v31+v32: drop retry_count, failed_at_epoch, completed_at_epoch, worker_pid columns
- SessionQueueProcessor: delete two 1s recovery sleeps (let iterator end on error)
- Server.ts/SettingsRoutes.ts: replace four magic-number setTimeout exit-flush patterns with flushResponseThen helper
- GeneratorExitHandler: 183 → 117 lines (drain in-flight loop gone)

Net: -181 lines. No more silent data loss via maxRetries=3.
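
A sketch of the collapsed contract and the two-branch processor it enables;
the Observation shape and store surface are placeholders, not the real types:

```ts
interface Observation {
  text: string;
}

type ParseResult =
  | { valid: true; observations: Observation[]; summary: string | null }
  | { valid: false };

interface Store {
  saveObservations(observations: Observation[], summary: string | null): void;
  clearPendingForSession(sessionId: string): void;
}

// Parseable responses are stored and the session's queue is cleared;
// anything else is a no-op.
function processAgentResponse(result: ParseResult, sessionId: string, store: Store): void {
  if (!result.valid) return;
  store.saveObservations(result.observations, result.summary);
  store.clearPendingForSession(sessionId);
}
```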

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): address review comments batch 1

- install.ts: needsMarketplace true when claude-code selected (P1, was no-op)
- install.ts: throw on invalid --model so CLI exits non-zero
- install.ts: skip worker health checks + adapt next-step copy when --no-auto-start
- install.ts: repair regenerates plugin cache when missing
- index.ts: readFlag rejects missing/flag-shaped values
- index.ts: route flag-first invocations (e.g. `--provider claude`) to install
- banner.ts: fail-open if frame payload decode throws
- SearchRoutes.ts: 5s TTL cache for settings reads on hot hook path (P2)
- detect-error-handling-antipatterns.ts: trailing-brace strip whitespace-tolerant
- investigate-timestamps.ts: compute Dec 2025 epochs at runtime (was Dec 2024)
- regenerate-claude-md.ts: include workingDir in fallback walker so root is covered
- sync-marketplace.cjs: parseWorkerPort validates 1..65535 before http.request
- sync-to-marketplace.sh: resolve SOURCE_DIR from script location, not cwd
- Dockerfile.test-installer: bash --login sources .bashrc via .bash_profile
- docs/configuration.mdx: drop nonexistent .worker.port file refs, use settings.json
- docs/architecture-overview.md: dynamic port + queue model after parser collapse
- docs/architecture/worker-service.mdx: dynamic port example + drop port-file claim
- docs/platform-integration.mdx: WORKER_BASE_URL pattern, drop hardcoded 37777
- install/public/install.sh: Node 20 floor (was 18) to match docs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): reset claimed messages to pending on early-return paths

ResponseProcessor returns early in two cases:
- parser invalid (unparseable response)
- memorySessionId not yet captured

Both paths previously left the just-claimed message in `status='processing'`,
which counts toward `getPendingCount`. The generator-exit handler then sees
`pendingCount > 0` and respawns the generator, looping until the restart
guard trips and `clearPendingForSession` deletes the message — silent data
loss.

Calling `resetProcessingToPending` on these paths lets the next generator
pass re-claim the message and try again, instead of burning the restart
budget on no-op respawns.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): swebench fallback row + troubleshooting port path

- evals/swebench/run-batch.py: append fallback prediction row when
  orchestrator future raises, preserving "never drop an instance" guarantee
- docs/troubleshooting.mdx: drop nonexistent .worker.port / worker.port file
  references; use settings.json + /api/health for port discovery

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): memoize per-project observation count for welcome-hint hot path

handleContextInject runs on every PostToolUse hook (after every Read/Edit).
The welcome-hint block ran a COUNT(*) on observations for every call once
CLAUDE_MEM_WELCOME_HINT_ENABLED was true. Observation counts are
monotonically increasing — once a project has any observations it always
will — so cache the positive result in a Set and skip the COUNT(*) on
subsequent requests.
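
A sketch of the positive-result memoization; the table and column names are
assumptions:

```ts
// Counts only ever grow, so a project that has observations once will keep
// having them; negative results are intentionally not cached.
import { Database } from 'bun:sqlite';

const projectsWithObservations = new Set<string>();

function projectHasObservations(db: Database, project: string): boolean {
  if (projectsWithObservations.has(project)) return true;

  const row = db
    .query(`SELECT COUNT(*) AS n FROM observations WHERE project = ?`)
    .get(project) as { n: number } | null;

  const hasAny = (row?.n ?? 0) > 0;
  if (hasAny) projectsWithObservations.add(project);
  return hasAny;
}
```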

Combined with the 5s settings TTL added earlier, the steady-state cost on
the hook hot path drops to a Set lookup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): use clearProcessingForSession on AI-success path

clearPendingForSession deletes ALL rows for the session. On the success
path of processAgentResponse, that's wrong: messages that arrived as
'pending' during the (1-5s) AI response latency get deleted along with
the 'processing' row we just consumed. In a hook burst (three quick
PostToolUse hooks), B and C land while A is in flight; A's success then
nukes B and C — silent data loss.

Add a status-scoped clearProcessingForSession to PendingMessageStore +
SessionManager, and use it in ResponseProcessor's success path. The
unconditional clearPendingForSession remains correct in
GeneratorExitHandler for hard-stop / restart-guard-trip paths.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Revert "fix(pr-2255): use clearProcessingForSession on AI-success path"

This reverts commit a08995299c30cbad36bddc3e5bddda7af8604b35.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Author: Alex Newman
Committed: 2026-05-02 16:05:56 -07:00 (via GitHub)
Commit: 9e2973059a (parent: 28b40c05f2)
452 changed files with 6189 additions and 21059 deletions
@@ -1,9 +1,3 @@
/**
* Tests for readLastLines() — tail-read function for /api/logs endpoint (#1203)
*
* Verifies that log files are read from the end without loading the entire
* file into memory, preventing OOM on large log files.
*/
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { writeFileSync, mkdirSync, rmSync, existsSync } from 'fs';
@@ -73,7 +67,6 @@ describe('readLastLines (#1203 OOM fix)', () => {
});
it('should work with lines larger than initial chunk size', () => {
// Create a file where lines are long enough to exceed the 64KB initial chunk
const longLine = 'X'.repeat(10000);
const lines = Array.from({ length: 20 }, (_, i) => `${i}:${longLine}`);
writeFileSync(testFile, lines.join('\n') + '\n', 'utf-8');
@@ -91,7 +84,6 @@ describe('readLastLines (#1203 OOM fix)', () => {
writeFileSync(testFile, lines.join('\n') + '\n', 'utf-8');
const result = readLastLines(testFile, 100);
// When file fits in one chunk, totalEstimate should be exact
expect(result.totalEstimate).toBe(5);
});
@@ -105,22 +97,17 @@ describe('readLastLines (#1203 OOM fix)', () => {
writeFileSync(testFile, '\n\n\n', 'utf-8');
const result = readLastLines(testFile, 2);
const resultLines = result.lines.split('\n');
// The last two "lines" before trailing newline are empty strings
expect(resultLines.length).toBe(2);
});
it('should not load entire large file for small tail request', () => {
// This test verifies the core fix: a file with many lines should
// not be fully loaded when only a few lines are requested.
// We create a file larger than the initial 64KB chunk.
const line = 'A'.repeat(100) + '\n'; // ~101 bytes per line
const lineCount = 1000; // ~101KB total
const line = 'A'.repeat(100) + '\n';
const lineCount = 1000;
writeFileSync(testFile, line.repeat(lineCount), 'utf-8');
const result = readLastLines(testFile, 5);
const resultLines = result.lines.split('\n');
expect(resultLines.length).toBe(5);
// Each returned line should be our repeated 'A' pattern
for (const l of resultLines) {
expect(l).toBe('A'.repeat(100));
}
@@ -3,10 +3,6 @@ import { EventEmitter } from 'events';
import { SessionQueueProcessor, CreateIteratorOptions } from '../../../src/services/queue/SessionQueueProcessor.js';
import type { PendingMessageStore, PersistentPendingMessage } from '../../../src/services/sqlite/PendingMessageStore.js';
/**
* Mock PendingMessageStore that returns null (empty queue) by default.
* Individual tests can override claimNextMessage behavior.
*/
function createMockStore(): PendingMessageStore {
return {
claimNextMessage: mock(() => null),
@@ -22,9 +18,6 @@ function createMockStore(): PendingMessageStore {
} as unknown as PendingMessageStore;
}
/**
* Create a mock PersistentPendingMessage for testing
*/
function createMockMessage(overrides: Partial<PersistentPendingMessage> = {}): PersistentPendingMessage {
return {
id: 1,
@@ -60,20 +53,15 @@ describe('SessionQueueProcessor', () => {
});
afterEach(() => {
// Ensure abort controller is triggered to clean up any pending iterators
abortController.abort();
// Remove all listeners to prevent memory leaks
events.removeAllListeners();
});
describe('createIterator', () => {
describe('idle timeout behavior', () => {
it('should exit after idle timeout when no messages arrive', async () => {
// Use a very short timeout for testing (50ms)
const SHORT_TIMEOUT_MS = 50;
// Mock the private waitForMessage to use short timeout
// We'll test with real timing but short durations
const onIdleTimeout = mock(() => {});
const options: CreateIteratorOptions = {
@@ -84,33 +72,21 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
// Store returns null (empty queue), so iterator waits for message event
// With no messages arriving, it should eventually timeout
const startTime = Date.now();
const results: any[] = [];
// We need to trigger the timeout scenario
// The iterator uses IDLE_TIMEOUT_MS (3 minutes) which is too long for tests
// Instead, we'll test the abort path and verify callback behavior
// Abort after a short delay to simulate timeout-like behavior
setTimeout(() => abortController.abort(), 100);
for await (const message of iterator) {
results.push(message);
}
// Iterator should exit cleanly when aborted
expect(results).toHaveLength(0);
});
it('should invoke onIdleTimeout callback when idle timeout occurs', async () => {
// This test verifies the callback mechanism works
// We can't easily test the full 3-minute timeout, so we verify the wiring
const onIdleTimeout = mock(() => {
// Callback should trigger abort in real usage
abortController.abort();
});
@@ -120,11 +96,8 @@ describe('SessionQueueProcessor', () => {
onIdleTimeout
};
// To test this properly, we'd need to mock the internal waitForMessage
// For now, verify that abort signal exits cleanly
const iterator = processor.createIterator(options);
// Simulate external abort (which is what onIdleTimeout should do)
setTimeout(() => abortController.abort(), 50);
const results: any[] = [];
@@ -139,7 +112,6 @@ describe('SessionQueueProcessor', () => {
const onIdleTimeout = mock(() => abortController.abort());
let callCount = 0;
// Return a message on first call, then null
(store.claimNextMessage as any) = mock(() => {
callCount++;
if (callCount === 1) {
@@ -157,21 +129,15 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
const results: any[] = [];
// First message should be yielded
// Then queue is empty, wait for more
// Abort after receiving first message
setTimeout(() => abortController.abort(), 100);
for await (const message of iterator) {
results.push(message);
}
// Should have received exactly one message
expect(results).toHaveLength(1);
expect(results[0]._persistentId).toBe(1);
// Store's claimNextMessage should have been called at least once
// (the first call returns the message; a second, null-returning call may or may not land before the abort)
expect(callCount).toBeGreaterThanOrEqual(1);
});
});
@@ -188,7 +154,6 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
// Abort immediately
abortController.abort();
const results: any[] = [];
@@ -196,16 +161,13 @@ describe('SessionQueueProcessor', () => {
results.push(message);
}
// Should exit with no messages
expect(results).toHaveLength(0);
// onIdleTimeout should NOT be called when abort signal is used
expect(onIdleTimeout).not.toHaveBeenCalled();
});
it('should take precedence over timeout when both could fire', async () => {
const onIdleTimeout = mock(() => {});
// Return null to trigger wait
(store.claimNextMessage as any) = mock(() => null);
const options: CreateIteratorOptions = {
@@ -216,7 +178,6 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
// Abort very quickly - before any timeout could fire
setTimeout(() => abortController.abort(), 10);
const results: any[] = [];
@@ -224,9 +185,7 @@ describe('SessionQueueProcessor', () => {
results.push(message);
}
// Should have exited cleanly
expect(results).toHaveLength(0);
// onIdleTimeout should NOT have been called
expect(onIdleTimeout).not.toHaveBeenCalled();
});
});
@@ -239,19 +198,13 @@ describe('SessionQueueProcessor', () => {
createMockMessage({ id: 2 })
];
// First call: return null (queue empty)
// After message event: return message
// Then return null again
(store.claimNextMessage as any) = mock(() => {
callCount++;
if (callCount === 1) {
// First check - queue empty, will wait
return null;
} else if (callCount === 2) {
// After wake-up - return message
return mockMessages[0];
} else if (callCount === 3) {
// Second check after message processed - empty again
return null;
}
return null;
@@ -265,17 +218,14 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
const results: any[] = [];
// Emit message event after a short delay to wake up the iterator
setTimeout(() => events.emit('message'), 50);
// Abort after collecting results
setTimeout(() => abortController.abort(), 150);
for await (const message of iterator) {
results.push(message);
}
// Should have received at least one message
expect(results.length).toBeGreaterThanOrEqual(1);
if (results.length > 0) {
expect(results[0]._persistentId).toBe(1);
@@ -292,26 +242,20 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
// Get initial listener count
const initialListenerCount = events.listenerCount('message');
// Abort to trigger cleanup
abortController.abort();
// Consume the iterator
const results: any[] = [];
for await (const message of iterator) {
results.push(message);
}
// After the iterator completes, the listener count should not have grown by more than one
// (cleanup happens inside waitForMessage, which may never be invoked on this path)
const finalListenerCount = events.listenerCount('message');
expect(finalListenerCount).toBeLessThanOrEqual(initialListenerCount + 1);
});
it('should clean up event listeners when message received', async () => {
// Return a message immediately
(store.claimNextMessage as any) = mock(() => createMockMessage({ id: 1 }));
const options: CreateIteratorOptions = {
@@ -321,20 +265,16 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
// Get first message
const firstResult = await iterator.next();
expect(firstResult.done).toBe(false);
expect(firstResult.value._persistentId).toBe(1);
// Now abort and complete iteration
abortController.abort();
// Drain remaining
for await (const _ of iterator) {
// Should not get here since we aborted
}
// Verify at most one listener remains (allowing for abort/cleanup timing)
const finalListenerCount = events.listenerCount('message');
expect(finalListenerCount).toBeLessThanOrEqual(1);
});
@@ -363,15 +303,13 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
const results: any[] = [];
// Abort after giving time for retry
setTimeout(() => abortController.abort(), 1500);
for await (const message of iterator) {
results.push(message);
break; // Exit after first message
}
// Should have recovered and received message after error
expect(results).toHaveLength(1);
expect(callCount).toBeGreaterThanOrEqual(2);
});
@@ -388,7 +326,6 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
// Abort during the backoff period
setTimeout(() => abortController.abort(), 100);
const results: any[] = [];
@@ -396,7 +333,6 @@ describe('SessionQueueProcessor', () => {
results.push(message);
}
// Should exit cleanly with no messages
expect(results).toHaveLength(0);
});
});
@@ -423,7 +359,6 @@ describe('SessionQueueProcessor', () => {
const iterator = processor.createIterator(options);
const result = await iterator.next();
// Abort to clean up
abortController.abort();
expect(result.done).toBe(false);
@@ -33,12 +33,8 @@ describe('PendingMessageStore - Self-Healing claimNextMessage', () => {
return store.enqueue(sessionDbId, CONTENT_SESSION_ID, message);
}
/**
* Helper to simulate a stuck processing message by directly updating the DB
* to set started_processing_at_epoch to a time in the past (>60s ago)
*/
function makeMessageStaleProcessing(messageId: number): void {
const staleTimestamp = Date.now() - 120_000; // 2 minutes ago (well past 60s threshold)
db.run(
`UPDATE pending_messages SET status = 'processing', started_processing_at_epoch = ? WHERE id = ?`,
[staleTimestamp, messageId]
@@ -46,64 +42,51 @@ describe('PendingMessageStore - Self-Healing claimNextMessage', () => {
}
test('stuck processing messages are recovered on next claim', () => {
// Enqueue a message and make it stuck in processing
const msgId = enqueueMessage();
makeMessageStaleProcessing(msgId);
// Verify it's stuck (status = processing)
const beforeClaim = db.query('SELECT status FROM pending_messages WHERE id = ?').get(msgId) as { status: string };
expect(beforeClaim.status).toBe('processing');
// claimNextMessage should self-heal: reset the stuck message, then claim it
const claimed = store.claimNextMessage(sessionDbId);
expect(claimed).not.toBeNull();
expect(claimed!.id).toBe(msgId);
// It should now be in 'processing' status again (freshly claimed)
const afterClaim = db.query('SELECT status FROM pending_messages WHERE id = ?').get(msgId) as { status: string };
expect(afterClaim.status).toBe('processing');
});
test('actively processing messages are NOT recovered', () => {
// Enqueue two messages
const activeId = enqueueMessage();
const pendingId = enqueueMessage();
// Make the first one actively processing (recent timestamp, NOT stale)
const recentTimestamp = Date.now() - 5_000; // 5 seconds ago (well within 60s threshold)
db.run(
`UPDATE pending_messages SET status = 'processing', started_processing_at_epoch = ? WHERE id = ?`,
[recentTimestamp, activeId]
);
// claimNextMessage should NOT reset the active one — should claim the pending one instead
const claimed = store.claimNextMessage(sessionDbId);
expect(claimed).not.toBeNull();
expect(claimed!.id).toBe(pendingId);
// The active message should still be processing
const activeMsg = db.query('SELECT status FROM pending_messages WHERE id = ?').get(activeId) as { status: string };
expect(activeMsg.status).toBe('processing');
});
test('recovery and claim is atomic within single call', () => {
// Enqueue three messages
const stuckId = enqueueMessage();
const pendingId1 = enqueueMessage();
const pendingId2 = enqueueMessage();
// Make the first one stuck
makeMessageStaleProcessing(stuckId);
// Single claimNextMessage should reset stuck AND claim oldest pending (which is the reset stuck one)
const claimed = store.claimNextMessage(sessionDbId);
expect(claimed).not.toBeNull();
// The stuck message was reset to pending, and being oldest, it gets claimed
expect(claimed!.id).toBe(stuckId);
// The other two should still be pending
const msg1 = db.query('SELECT status FROM pending_messages WHERE id = ?').get(pendingId1) as { status: string };
const msg2 = db.query('SELECT status FROM pending_messages WHERE id = ?').get(pendingId2) as { status: string };
expect(msg1.status).toBe('pending');
@@ -116,14 +99,11 @@ describe('PendingMessageStore - Self-Healing claimNextMessage', () => {
});
test('self-healing only affects the specified session', () => {
// Create a second session
const session2Id = createSDKSession(db, 'other-session', 'test-project', 'Test');
// Enqueue and make stuck in session 1
const stuckInSession1 = enqueueMessage();
makeMessageStaleProcessing(stuckInSession1);
// Enqueue in session 2
const msg: PendingMessage = {
type: 'observation',
tool_name: 'TestTool',
@@ -134,12 +114,10 @@ describe('PendingMessageStore - Self-Healing claimNextMessage', () => {
const session2MsgId = store.enqueue(session2Id, 'other-session', msg);
makeMessageStaleProcessing(session2MsgId);
// Claim for session 2 — should only heal session 2's stuck message
const claimed = store.claimNextMessage(session2Id);
expect(claimed).not.toBeNull();
expect(claimed!.id).toBe(session2MsgId);
// Session 1's stuck message should still be stuck (not healed by session 2's claim)
const session1Msg = db.query('SELECT status FROM pending_messages WHERE id = ?').get(stuckInSession1) as { status: string };
expect(session1Msg.status).toBe('processing');
});
@@ -1,12 +1,3 @@
/**
* Regression test for #2153: ChromaSearchStrategy passes orderBy='relevance'
* to SessionStore.getObservationsByIds expecting Chroma's vector ranking
* (caller-provided ID order) to be preserved. The old code coerced
* 'relevance' to undefined, which then defaulted to 'date_desc' inside
* SessionStore, destroying the semantic ranking.
*
* Mock Justification: NONE - real SQLite ':memory:' covers SQL + ordering.
*/
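/*
 * Illustrative sketch only — not SessionStore's actual implementation. It shows the
 * property under test: with orderBy 'relevance', rows fetched by id are returned in
 * the caller-provided id order rather than any date-based ORDER BY. The row shape
 * and helper name here are assumptions made for the example.
 */
function sortRowsByCallerOrder<T extends { id: number }>(rows: T[], callerOrder: number[]): T[] {
  const rank = new Map(callerOrder.map((id, index) => [id, index] as [number, number]));
  return [...rows].sort(
    (a, b) => (rank.get(a.id) ?? Number.MAX_SAFE_INTEGER) - (rank.get(b.id) ?? Number.MAX_SAFE_INTEGER)
  );
}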
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { SessionStore } from '../../../src/services/sqlite/SessionStore.js';
@@ -25,9 +16,6 @@ describe('SessionStore.*ByIds — orderBy: "relevance" preserves caller ID order
const sdkId = store.createSDKSession('content-relevance', 'p', 'prompt');
store.updateMemorySessionId(sdkId, 'session-relevance');
// Insert 5 observations with strictly increasing created_at_epoch so that
// a date_desc default would reverse the natural insertion order. The test
// proves that caller-provided ID order, not date order, is honored.
const baseTs = 1_700_000_000_000;
const inserted: number[] = [];
for (let i = 0; i < 5; i++) {
@@ -52,8 +40,6 @@ describe('SessionStore.*ByIds — orderBy: "relevance" preserves caller ID order
inserted.push(result.observationIds[0]);
}
// Reverse the IDs — semantic ranking from Chroma would not match
// chronological order.
const callerOrder = [...inserted].reverse();
const results = store.getObservationsByIds(callerOrder, { orderBy: 'relevance' });
@@ -87,8 +73,7 @@ describe('SessionStore.*ByIds — orderBy: "relevance" preserves caller ID order
inserted.push(result.observationIds[0]);
}
const callerOrder = [...inserted].reverse(); // [newest ... oldest]
// Default order is date_desc -> newest first regardless of input order.
const results = store.getObservationsByIds(callerOrder);
expect(results.map(r => r.id)).toEqual([...inserted].reverse());
});
@@ -1,13 +1,3 @@
/**
* Tests for MigrationRunner idempotency and schema initialization (#979)
*
* Mock Justification: NONE (0% mock code)
* - Uses real SQLite with ':memory:' — tests actual migration SQL
* - Validates idempotency by running migrations multiple times
* - Covers the version-conflict scenario from issue #979
*
* Value: Prevents regression where old DatabaseManager migrations mask core table creation
*/
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { Database } from 'bun:sqlite';
import { MigrationRunner } from '../../../src/services/sqlite/migrations/runner.js';
@@ -121,21 +111,20 @@ describe('MigrationRunner', () => {
runner.runAllMigrations();
const versions = getSchemaVersions(db);
// Core set of expected versions
expect(versions).toContain(4); // initializeSchema
expect(versions).toContain(5); // worker_port
expect(versions).toContain(6); // prompt tracking
expect(versions).toContain(7); // remove unique constraint
expect(versions).toContain(8); // hierarchical fields
expect(versions).toContain(9); // text nullable
expect(versions).toContain(10); // user_prompts
expect(versions).toContain(11); // discovery_tokens
expect(versions).toContain(16); // pending_messages
expect(versions).toContain(17); // rename columns
expect(versions).toContain(20); // failed_at_epoch
expect(versions).toContain(21); // ON UPDATE CASCADE
expect(versions).toContain(22); // content_hash
expect(versions).toContain(30); // observations.metadata
});
});
@@ -143,10 +132,8 @@ describe('MigrationRunner', () => {
it('should succeed when run twice on the same database', () => {
const runner = new MigrationRunner(db);
// First run
runner.runAllMigrations();
// Second run — must not throw
expect(() => runner.runAllMigrations()).not.toThrow();
});
@@ -206,8 +193,6 @@ describe('MigrationRunner', () => {
describe('issue #979 — old DatabaseManager version conflict', () => {
it('should create core tables even when old migration versions 1-7 are in schema_versions', () => {
// Simulate the old DatabaseManager having applied its migrations 1-7
// (which are completely different operations with the same version numbers)
db.run(`
CREATE TABLE IF NOT EXISTS schema_versions (
id INTEGER PRIMARY KEY,
@@ -221,7 +206,6 @@ describe('MigrationRunner', () => {
db.prepare('INSERT INTO schema_versions (version, applied_at) VALUES (?, ?)').run(v, now);
}
// Now run MigrationRunner — core tables MUST still be created
const runner = new MigrationRunner(db);
runner.runAllMigrations();
@@ -234,9 +218,6 @@ describe('MigrationRunner', () => {
});
it('should handle version 5 conflict (old=drop tables, new=add column) correctly', () => {
// Old migration 5 drops streaming_sessions/observation_queue
// New migration 5 adds worker_port column to sdk_sessions
// With old version 5 already recorded, MigrationRunner must still add the column
db.run(`
CREATE TABLE IF NOT EXISTS schema_versions (
id INTEGER PRIMARY KEY,
@@ -249,7 +230,6 @@ describe('MigrationRunner', () => {
const runner = new MigrationRunner(db);
runner.runAllMigrations();
// sdk_sessions should exist and have worker_port (added by later migrations even if v5 is skipped)
const columns = getColumns(db, 'sdk_sessions');
const columnNames = columns.map(c => c.name);
expect(columnNames).toContain('content_session_id');
@@ -261,7 +241,6 @@ describe('MigrationRunner', () => {
const runner = new MigrationRunner(db);
runner.runAllMigrations();
// Simulate a leftover temp table from a crash
db.run(`
CREATE TABLE session_summaries_new (
id INTEGER PRIMARY KEY,
@@ -269,10 +248,8 @@ describe('MigrationRunner', () => {
)
`);
// Remove version 7 so migration tries to re-run
db.prepare('DELETE FROM schema_versions WHERE version = 7').run();
// Re-run should handle the leftover table gracefully
expect(() => runner.runAllMigrations()).not.toThrow();
});
@@ -280,7 +257,6 @@ describe('MigrationRunner', () => {
const runner = new MigrationRunner(db);
runner.runAllMigrations();
// Simulate a leftover temp table from a crash
db.run(`
CREATE TABLE observations_new (
id INTEGER PRIMARY KEY,
@@ -288,10 +264,8 @@ describe('MigrationRunner', () => {
)
`);
// Remove version 9 so migration tries to re-run
db.prepare('DELETE FROM schema_versions WHERE version = 9').run();
// Re-run should handle the leftover table gracefully
expect(() => runner.runAllMigrations()).not.toThrow();
});
});
@@ -327,7 +301,6 @@ describe('MigrationRunner', () => {
const runner = new MigrationRunner(db);
runner.runAllMigrations();
// Insert test data
const now = new Date().toISOString();
const epoch = Date.now();
@@ -346,7 +319,6 @@ describe('MigrationRunner', () => {
VALUES (?, ?, ?, ?, ?)
`).run('test-memory-1', 'test-project', 'test request', now, epoch);
// Run migrations again — data should survive
runner.runAllMigrations();
const sessions = db.prepare('SELECT COUNT(*) as count FROM sdk_sessions').get() as { count: number };
@@ -1,21 +1,3 @@
/**
* Tests for storeObservation subagent labeling (agent_type, agent_id).
*
* Validates:
* 1. Rows carry agent_type / agent_id when set on ObservationInput.
* 2. Omitted subagent fields store as NULL (main-session rows).
* 3. Dedup is intentionally UNAFFECTED by agent_type — the content hash
* covers (memory_session_id, title, narrative) only, so two observations
* with the same semantic identity but different originating subagents
* dedup to the same row. This preserves stable observation identity
* across main-session and subagent contexts and is the documented
* intended behavior per Phase 4 anti-pattern guard in the plan.
*
* Sources:
* - Store: src/services/sqlite/observations/store.ts
* - Types: src/services/sqlite/observations/types.ts
* - Test pattern: tests/sqlite/observations.test.ts
*/
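/*
 * Illustrative sketch only — not the real store.ts hashing code. It captures the
 * documented dedup contract: the content hash covers (memory_session_id, title,
 * narrative) and deliberately excludes agent_type / agent_id, so a subagent
 * re-reporting the same observation dedups to the existing row. The delimiter and
 * hash algorithm are assumptions made for the example.
 */
import { createHash } from 'node:crypto';

function observationContentHash(memorySessionId: string, title: string, narrative: string): string {
  return createHash('sha256')
    .update([memorySessionId, title, narrative].join('\u0000'))
    .digest('hex');
}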
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { ClaudeMemDatabase } from '../../../../src/services/sqlite/Database.js';
import { storeObservation } from '../../../../src/services/sqlite/Observations.js';
@@ -82,7 +64,6 @@ describe('storeObservation — subagent labeling', () => {
it('stores NULL for agent_type and agent_id when fields are omitted (main-session row)', () => {
const memorySessionId = createSessionWithMemoryId('content-main-1', 'mem-main-1');
const input = createObservationInput();
// input has no agent_type / agent_id
const result = storeObservation(db, memorySessionId, 'test-project', input);
@@ -113,11 +94,6 @@ describe('storeObservation — subagent labeling', () => {
});
it('dedup is NOT affected by agent fields — second insert with different agent_type returns existing id', () => {
// INTENDED BEHAVIOR (per plan Phase 4 anti-pattern guard):
// The content hash covers (memory_session_id, title, narrative) only.
// Two observations with identical title + narrative but different
// agent_type must dedup to the same row so observation identity is
// stable across main-session and subagent contexts.
const memorySessionId = createSessionWithMemoryId('content-dedup-1', 'mem-dedup-1');
const first = storeObservation(
@@ -144,7 +120,6 @@ describe('storeObservation — subagent labeling', () => {
})
);
// Second insert is deduped → same id, no new row, original agent fields preserved.
expect(second.id).toBe(first.id);
const rowCount = db
@@ -1,9 +1,3 @@
/**
* Tests for parseFileList (fix for #1359)
*
* Validates safe JSON array parsing for files_read/files_modified DB columns
* that may contain legacy bare path strings instead of JSON arrays.
*/
import { describe, it, expect } from 'bun:test';
import { parseFileList } from '../../../src/services/sqlite/observations/files.js';
@@ -1,15 +1,3 @@
/**
* Tests for malformed schema repair in Database.ts
*
* Mock Justification: NONE (0% mock code)
* - Uses real SQLite with temp file — tests actual schema repair logic
* - Uses Python sqlite3 to simulate cross-version schema corruption
* (bun:sqlite doesn't allow writable_schema modifications)
* - Covers the cross-machine sync scenario from issue #1307
*
* Value: Prevents the silent 503 failure loop when a DB is synced between
* machines running different claude-mem versions
*/
import { describe, it, expect } from 'bun:test';
import { Database } from 'bun:sqlite';
import { ClaudeMemDatabase } from '../../../src/services/sqlite/Database.js';
@@ -39,11 +27,6 @@ function hasPython(): boolean {
}
}
/**
* Use Python's sqlite3 to corrupt a DB by removing the content_hash column
* from the observations table definition while leaving the index intact.
* This simulates what happens when a DB from a newer version is synced.
*/
function corruptDbViaPython(dbPath: string): void {
const script = join(tmpdir(), `corrupt-${Date.now()}.py`);
writeFileSync(script, `
@@ -74,7 +57,6 @@ describe('Schema repair on malformed database', () => {
const dbPath = tempDbPath();
try {
// Step 1: Create a valid database with all migrations
const db = new Database(dbPath, { create: true, readwrite: true });
db.run('PRAGMA journal_mode = WAL');
db.run('PRAGMA foreign_keys = ON');
@@ -82,19 +64,15 @@ describe('Schema repair on malformed database', () => {
const runner = new MigrationRunner(db);
runner.runAllMigrations();
// Verify content_hash column and index exist
const hasContentHash = db.prepare('PRAGMA table_info(observations)').all()
.some((col: any) => col.name === 'content_hash');
expect(hasContentHash).toBe(true);
// Checkpoint WAL so all data is in the main file
db.run('PRAGMA wal_checkpoint(TRUNCATE)');
db.close();
// Step 2: Corrupt the DB
corruptDbViaPython(dbPath);
// Step 3: Verify the DB is actually corrupted
const corruptDb = new Database(dbPath, { readwrite: true });
let threw = false;
try {
@@ -107,22 +85,18 @@ describe('Schema repair on malformed database', () => {
corruptDb.close();
expect(threw).toBe(true);
// Step 4: Open via ClaudeMemDatabase — it should auto-repair
const repaired = new ClaudeMemDatabase(dbPath);
// Verify the DB is functional
const tables = repaired.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%' ORDER BY name")
.all() as { name: string }[];
const tableNames = tables.map(t => t.name);
expect(tableNames).toContain('observations');
expect(tableNames).toContain('sdk_sessions');
// Verify the index was recreated by the migration runner
const indexes = repaired.db.prepare("SELECT name FROM sqlite_master WHERE type='index' AND name='idx_observations_content_hash'")
.all() as { name: string }[];
expect(indexes.length).toBe(1);
// Verify the content_hash column was re-added by the migration
const columns = repaired.db.prepare('PRAGMA table_info(observations)').all() as { name: string }[];
expect(columns.some(c => c.name === 'content_hash')).toBe(true);
@@ -154,9 +128,6 @@ describe('Schema repair on malformed database', () => {
const dbPath = tempDbPath();
const scriptPath = join(tmpdir(), `corrupt-nosv-${Date.now()}.py`);
try {
// Build a minimal DB with only a malformed observations table and orphaned index
// — no schema_versions table. This simulates a partially-initialized DB that was
// synced before migrations ever ran.
writeFileSync(scriptPath, `
import sqlite3, sys
c = sqlite3.connect(sys.argv[1])
@@ -175,7 +146,6 @@ c.close()
`);
execFileSync('python3', [scriptPath, dbPath], { timeout: 10000 });
// Verify it's corrupted
const corruptDb = new Database(dbPath, { readwrite: true });
let threw = false;
try {
@@ -187,7 +157,6 @@ c.close()
corruptDb.close();
expect(threw).toBe(true);
// ClaudeMemDatabase must repair and fully initialize despite missing schema_versions
const repaired = new ClaudeMemDatabase(dbPath);
const tables = repaired.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name NOT LIKE 'sqlite_%' ORDER BY name")
.all() as { name: string }[];
@@ -210,7 +179,6 @@ c.close()
const dbPath = tempDbPath();
try {
// Step 1: Create a fully migrated DB and insert a session + observation
const db = new Database(dbPath, { create: true, readwrite: true });
db.run('PRAGMA journal_mode = WAL');
db.run('PRAGMA foreign_keys = ON');
@@ -233,13 +201,10 @@ c.close()
db.run('PRAGMA wal_checkpoint(TRUNCATE)');
db.close();
// Step 2: Corrupt the DB
corruptDbViaPython(dbPath);
// Step 3: Repair via ClaudeMemDatabase
const repaired = new ClaudeMemDatabase(dbPath);
// Data must survive the repair + re-migration
const sessions = repaired.db.prepare('SELECT COUNT(*) as count FROM sdk_sessions').get() as { count: number };
const observations = repaired.db.prepare('SELECT COUNT(*) as count FROM observations').get() as { count: number };
expect(sessions.count).toBe(1);
@@ -1,15 +1,6 @@
import { describe, expect, test } from 'bun:test';
import { isDirectChild, normalizePath } from '../../../src/shared/path-utils.js';
/**
* Tests for path matching logic, specifically the isDirectChild() algorithm
* Covers fix for issue #794: Path format mismatch causes folder CLAUDE.md files to show "No recent activity"
*
* These tests validate the shared path-utils module which is used by:
* - SessionSearch.ts (runtime folder CLAUDE.md generation)
* - regenerate-claude-md.ts (CLI regeneration tool)
*/
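/*
 * Illustrative sketch only — the real implementation lives in src/shared/path-utils.
 * It demonstrates the matching rule these tests exercise: a file is a direct child
 * of a folder when the folder path ends with the file's parent directory on whole
 * path-segment boundaries, which tolerates the absolute-folder / relative-file
 * mismatch from #794 and rejects partial matches such as "api" vs "api-v2".
 */
function isDirectChildSketch(filePath: string, folderPath: string): boolean {
  const fileSegments = filePath.split('/').filter(Boolean);
  if (fileSegments.length < 2) return false; // a root-level file has no parent directory to compare
  const parentSegments = fileSegments.slice(0, -1);
  const folderSegments = folderPath.split('/').filter(Boolean);
  if (folderSegments.length < parentSegments.length) return false;
  const tail = folderSegments.slice(folderSegments.length - parentSegments.length);
  return tail.every((segment, i) => segment === parentSegments[i]);
}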
describe('isDirectChild path matching', () => {
describe('same path format', () => {
test('returns true for direct child with relative paths', () => {
@@ -35,7 +26,6 @@ describe('isDirectChild path matching', () => {
describe('mixed path formats (absolute folder, relative file) - fixes #794', () => {
test('returns true when absolute folder ends with relative file directory', () => {
// This is the exact bug case from #794
expect(isDirectChild('app/api/router.py', '/Users/dev/project/app/api')).toBe(true);
});
@@ -89,12 +79,10 @@ describe('isDirectChild path matching', () => {
});
test('prevents false positive from partial segment match', () => {
// "api" folder should not match "api-v2" folder
expect(isDirectChild('app/api-v2/router.py', '/Users/dev/project/app/api')).toBe(false);
});
test('handles similar folder names correctly', () => {
// "components" should not match "components-old"
expect(isDirectChild('src/components-old/Button.tsx', '/project/src/components')).toBe(false);
});
});
@@ -1,9 +1,3 @@
/**
* Tests for SessionStore.markSessionCompleted (fix for #1532)
*
* Mock Justification: NONE (0% mock code)
* - Uses real SQLite with ':memory:' - tests actual SQL and schema
*/
import { describe, it, expect, beforeEach, afterEach } from 'bun:test';
import { SessionStore } from '../../../src/services/sqlite/SessionStore.js';
@@ -1,19 +1,8 @@
import { describe, it, expect, beforeEach, mock, spyOn } from 'bun:test';
/**
* Tests for Issue #1099: Stale AbortController queue stall prevention
*
* Validates that:
* 1. ActiveSession tracks lastGeneratorActivity timestamp
* 2. deleteSession uses a 30s timeout to prevent indefinite stalls
* 3. Stale generators (>30s no activity) are detected and aborted
* 4. processAgentResponse updates lastGeneratorActivity
*/
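/*
 * Hedged sketch of the guard these tests describe — the shape is inferred from the
 * assertions below, not copied from the worker source. Staleness is a timestamp
 * comparison against a 30s threshold, and session teardown races the generator
 * promise against AbortSignal.timeout so a hung generator cannot stall deletion.
 */
const STALE_GENERATOR_THRESHOLD_MS = 30_000;

function isGeneratorStale(lastGeneratorActivity: number, now: number = Date.now()): boolean {
  return now - lastGeneratorActivity > STALE_GENERATOR_THRESHOLD_MS;
}

async function awaitGeneratorWithTimeout(generatorPromise: Promise<void>, timeoutMs: number): Promise<'completed' | 'timed-out'> {
  const timeout = new Promise<'timed-out'>(resolve => {
    AbortSignal.timeout(timeoutMs).addEventListener('abort', () => resolve('timed-out'), { once: true });
  });
  return Promise.race([generatorPromise.then(() => 'completed' as const), timeout]);
}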
describe('Stale AbortController Guard (#1099)', () => {
describe('ActiveSession.lastGeneratorActivity', () => {
it('should be defined in ActiveSession type', () => {
// Verify the type includes lastGeneratorActivity
const session = {
sessionDbId: 1,
contentSessionId: 'test',
@@ -49,13 +38,13 @@ describe('Stale AbortController Guard (#1099)', () => {
const STALE_THRESHOLD_MS = 30_000;
it('should detect generator as stale when no activity for >30s', () => {
const lastActivity = Date.now() - 31_000; // 31 seconds ago
const timeSinceActivity = Date.now() - lastActivity;
expect(timeSinceActivity).toBeGreaterThan(STALE_THRESHOLD_MS);
});
it('should NOT detect generator as stale when activity within 30s', () => {
const lastActivity = Date.now() - 5_000; // 5 seconds ago
const timeSinceActivity = Date.now() - lastActivity;
expect(timeSinceActivity).toBeLessThan(STALE_THRESHOLD_MS);
});
@@ -67,13 +56,11 @@ describe('Stale AbortController Guard (#1099)', () => {
generatorPromise: Promise.resolve() as Promise<void> | null,
};
// Simulate stale recovery: abort, reset, restart
session.abortController.abort();
session.generatorPromise = null;
session.abortController = new AbortController();
session.lastGeneratorActivity = Date.now();
// After reset, should no longer be stale
const timeSinceActivity = Date.now() - session.lastGeneratorActivity;
expect(timeSinceActivity).toBeLessThan(STALE_THRESHOLD_MS);
expect(session.abortController.signal.aborted).toBe(false);
@@ -83,19 +70,17 @@ describe('Stale AbortController Guard (#1099)', () => {
describe('AbortSignal.timeout for deleteSession', () => {
it('should resolve timeout signal after specified ms', async () => {
const start = Date.now();
const timeoutMs = 50; // Use short timeout for test
await new Promise<void>(resolve => {
AbortSignal.timeout(timeoutMs).addEventListener('abort', () => resolve(), { once: true });
});
const elapsed = Date.now() - start;
// Allow some margin for timing
expect(elapsed).toBeGreaterThanOrEqual(timeoutMs - 10);
});
it('should race generator promise against timeout', async () => {
// Simulate a hung generator (never resolves)
const hungGenerator = new Promise<void>(() => {});
const timeoutMs = 50;
@@ -110,7 +95,6 @@ describe('Stale AbortController Guard (#1099)', () => {
});
it('should prefer generator completion over timeout when fast', async () => {
// Simulate a generator that resolves quickly
const fastGenerator = Promise.resolve('generator');
const timeoutMs = 5000;
@@ -3,38 +3,20 @@ import os from 'os';
import { readFileSync } from 'fs';
import { join } from 'path';
/**
* Regression test for issue #1297.
*
* When the worker spawns chroma-mcp via StdioClientTransport, if the CWD is
* the project directory and that directory contains a .env.local file with
* non-chroma env vars, pydantic-settings crashes with "Extra inputs are not
* permitted". The fix is to set `cwd: os.homedir()` so pydantic never reads
* the project's env files.
*/
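/*
 * Hedged sketch of the construction the assertions below look for. Only the
 * `cwd: os.homedir()` option is what the tests pin down; the command and args
 * shown here are placeholders, not the manager's real spawn invocation.
 */
import os from 'os';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'chroma-mcp',
  args: [],
  cwd: os.homedir(), // keeps pydantic-settings from reading the project's .env.local
});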
const CHROMA_MCP_MANAGER_PATH = join(
import.meta.dir, '..', '..', '..', 'src', 'services', 'sync', 'ChromaMcpManager.ts'
);
describe('ChromaMcpManager: cwd isolation from project .env files (#1297)', () => {
it('StdioClientTransport is constructed with cwd set to homedir', () => {
// Source-level assertion: verify the fix is present in the source.
// ChromaMcpManager uses StdioClientTransport (from @modelcontextprotocol/sdk),
// which we cannot easily import in a unit test without spawning a real process.
// A source inspection is the appropriate guardrail here.
const source = readFileSync(CHROMA_MCP_MANAGER_PATH, 'utf-8');
// The StdioClientTransport constructor call must include `cwd: os.homedir()`
// (or equivalent) so that pydantic-settings in chroma-mcp does not read
// .env.local from the project directory.
expect(source).toContain('cwd: os.homedir()');
});
it('the cwd property appears inside the StdioClientTransport constructor call', () => {
const source = readFileSync(CHROMA_MCP_MANAGER_PATH, 'utf-8');
// Locate the StdioClientTransport constructor block and verify cwd is in it.
const transportBlockMatch = source.match(
/new StdioClientTransport\(\s*\{([\s\S]*?)\}\s*\)/
);
@@ -47,7 +29,6 @@ describe('ChromaMcpManager: cwd isolation from project .env files (#1297)', () =
it('os module is imported (required for os.homedir())', () => {
const source = readFileSync(CHROMA_MCP_MANAGER_PATH, 'utf-8');
// os is already imported in the original file — confirm it's still there
expect(source).toMatch(/import os from ['"]os['"]/);
});
});
@@ -1,24 +1,11 @@
/**
* Regression tests for ChromaMcpManager SSL flag handling (PR #1286)
*
* Validates that buildCommandArgs() always emits the correct `--ssl` flag
* based on CLAUDE_MEM_CHROMA_SSL, and omits it entirely in local mode.
*
* Strategy: mock StdioClientTransport to capture the spawned args without
* actually launching a subprocess, then inspect the captured args array.
*/
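/*
 * Hedged sketch of the behavior under test — not the real buildCommandArgs(). It
 * encodes only what the tests assert: remote mode always emits an explicit
 * `--ssl <value>` pair, local mode emits no --ssl flag at all. How the boolean is
 * resolved from CLAUDE_MEM_CHROMA_SSL is left to the real implementation.
 */
function sslArgsSketch(mode: 'local' | 'remote', sslEnabled: boolean): string[] {
  if (mode === 'local') return [];
  return ['--ssl', String(sslEnabled)];
}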
import { describe, it, expect, beforeEach, mock } from 'bun:test';
// ── Mutable settings closure (updated per test) ────────────────────────
let currentSettings: Record<string, string> = {};
// ── Mock modules BEFORE importing the module under test ────────────────
// Capture the args passed to StdioClientTransport constructor
let capturedTransportOpts: { command: string; args: string[] } | null = null;
mock.module('@modelcontextprotocol/sdk/client/stdio.js', () => ({
StdioClientTransport: class FakeTransport {
// Required: ChromaMcpManager assigns transport.onclose after connect()
onclose: (() => void) | null = null;
constructor(opts: { command: string; args: string[] }) {
capturedTransportOpts = { command: opts.command, args: opts.args };
@@ -60,10 +47,8 @@ mock.module('../../../src/utils/logger.js', () => ({
},
}));
// ── Now import the module under test ───────────────────────────────────
import { ChromaMcpManager } from '../../../src/services/sync/ChromaMcpManager.js';
// ── Helpers ────────────────────────────────────────────────────────────
async function assertSslFlag(sslSetting: string | undefined, expectedValue: string) {
currentSettings = { CLAUDE_MEM_CHROMA_MODE: 'remote' };
if (sslSetting !== undefined) currentSettings.CLAUDE_MEM_CHROMA_SSL = sslSetting;
@@ -78,7 +63,6 @@ async function assertSslFlag(sslSetting: string | undefined, expectedValue: stri
let mgr: ChromaMcpManager;
// ── Test suite ─────────────────────────────────────────────────────────
describe('ChromaMcpManager SSL flag regression (#1286)', () => {
beforeEach(async () => {
await ChromaMcpManager.reset();
@@ -2,18 +2,6 @@ import { describe, it, expect } from 'bun:test';
import { readFileSync } from 'fs';
import { join } from 'path';
/**
* Source-inspection tests for Issue #1447: Worker startup race condition
*
* When the MCP server and SessionStart hook both spawn a daemon concurrently,
* one daemon loses the port bind race (EADDRINUSE / Bun's "port in use" error).
* The loser should detect this, verify the winner is healthy, and exit cleanly
* instead of logging an ERROR that clutters the user's session start output.
*
* These are source-inspection tests because the race is non-deterministic and
* requires a real concurrent multi-process scenario to reproduce reliably.
*/
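/*
 * Hedged sketch of the guard the source assertions below target. The helper name
 * (waitForHealth) and the error-matching details are assumptions beyond what the
 * assertions pin down; only the overall shape — async catch handler, health check
 * before exiting, info-level log on the clean-exit path — is what the tests verify.
 */
async function handlePortRace(
  error: unknown,
  port: number,
  waitForHealth: (port: number, timeoutMs: number) => Promise<boolean>,
  logInfo: (scope: string, message: string) => void,
): Promise<'exit-clean' | 'rethrow'> {
  const isPortConflict = error instanceof Error && /EADDRINUSE|port .* in use/i.test(error.message);
  if (isPortConflict && await waitForHealth(port, 5_000)) {
    logInfo('SYSTEM', 'Duplicate daemon exiting; a healthy worker already owns the port');
    return 'exit-clean';
  }
  return 'rethrow';
}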
const WORKER_SERVICE_PATH = join(import.meta.dir, '../../src/services/worker-service.ts');
const source = readFileSync(WORKER_SERVICE_PATH, 'utf-8');
@@ -27,18 +15,14 @@ describe('Worker daemon port-race guard (#1447)', () => {
});
it('calls waitForHealth before exiting on a port conflict', () => {
// The guard must verify the winner is actually healthy before exiting,
// otherwise a non-worker process on the port would suppress a real error.
expect(source).toContain('isPortConflict && await waitForHealth(port,');
});
it('uses async catch handler to allow awaiting waitForHealth', () => {
// The .catch() must be async so it can await the health check.
expect(source).toContain('worker.start().catch(async (error) =>');
});
it('logs info (not error) when cleanly exiting after port race', () => {
// Must not call logger.failure() / logger.error() on the clean exit path.
expect(source).toContain("logger.info('SYSTEM', 'Duplicate daemon exiting");
});
});
@@ -1,22 +1,8 @@
/**
* Tests for worker-spawner.ts validation guards.
*
* These tests cover the entry-point defensive guards in `ensureWorkerStarted`
* (empty workerScriptPath, non-existent workerScriptPath). The deeper spawn
* lifecycle (PID file cleanup, health checks, daemon spawn, readiness wait)
* is not unit-tested here because it requires injectable I/O and a broader
* refactor — see PR #1645 review feedback discussion.
*/
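/*
 * Hedged sketch of the entry-point guards these tests cover — the real
 * ensureWorkerStarted does more (PID files, health checks, daemon spawn); only the
 * two path-validation short-circuits are sketched here.
 */
import { existsSync } from 'fs';

function workerScriptPathIsValid(workerScriptPath: string): boolean {
  if (!workerScriptPath) return false; // empty string: nothing to spawn
  return existsSync(workerScriptPath); // non-existent path: refuse to spawn
}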
import { describe, it, expect } from 'bun:test';
import { ensureWorkerStarted } from '../../src/services/worker-spawner.js';
describe('ensureWorkerStarted validation guards', () => {
// The port arguments here are arbitrary — both tests short-circuit on the
// workerScriptPath validation guards before any network/health-check I/O,
// so the port is never actually bound or contacted. Picked from an unlikely
// range to prevent confusion if a future test ever does run real health
// checks against these ports.
it('returns false when workerScriptPath is empty string', async () => {
const result = await ensureWorkerStarted(39001, '');