UX redesign: installer + provider rename + /learn-codebase + welcome card + SessionStart hint (#2255)

* feat(ux): claude-mem UX improvements with installer enhancements

Squashed PR #2156 commits for clean rebase onto main:
- feat(installer): add provider selection, model prompt, worker auto-start
- refactor: rename *Agent provider classes to *Provider
- feat: add /learn-codebase skill and viewer welcome card
- feat(worker): inject welcome hint when project has zero observations
- fix(pr-2156): address greptile review comments
- fix(pr-2156): address coderabbit review comments
- fix(pr-2156): persist CLAUDE_MEM_PROVIDER for non-claude in non-TTY mode
- fix(pr-2156): file-backed settings reads in installer + env-first SKILL doc

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* build: rebuild plugin artifacts after rebase onto v12.4.7

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(skills): strip claude-mem internals from learn-codebase

The learn-codebase skill, install next-step copy, WelcomeCard, and
welcome-hint previously walked the primary agent through worker endpoints
and synthetic observation payloads. The PostToolUse hook already captures
every Read/Edit the agent makes — the agent should have no awareness that
the memory layer exists. Collapse the skill to one instruction ("read every
source file in full") and rephrase touchpoints to describe only what the
user observes (Claude reading files), not what happens behind the scenes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): preflight version mismatch + settings-aware port resolution

Two related fixes for build-and-sync's worker restart step:

1. Read CLAUDE_MEM_WORKER_PORT from ~/.claude-mem/settings.json the same
   way the worker does, instead of computing the default port from the
   uid alone. Previously, users with a custom port saw a misleading
   "Worker not running" message because the restart POST hit the wrong
   port and got ECONNREFUSED.

2. Add a preflight check that aborts the sync when the running worker's
   reported version does not match the version we are about to build.
   Claude Code's plugin loader pins the worker to a specific cache
   version per session, so syncing into a newer cache directory has no
   effect until the user runs `claude plugin update thedotmack/claude-mem`
   to bump the pin. The preflight surfaces this explicitly with the exact
   command to run; --force bypasses it for intentional cases.
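
The settings-aware port resolution can be sketched as a small pure helper. The `37700 + uid % 100` default formula is the per-user formula documented elsewhere in this PR; the function name and settings shape here are illustrative, not the actual build-and-sync code.

```typescript
// Illustrative sketch: resolve the worker port the way the worker itself does,
// preferring a CLAUDE_MEM_WORKER_PORT entry from ~/.claude-mem/settings.json
// over the per-user default formula (37700 + uid % 100).
interface MemSettings {
  CLAUDE_MEM_WORKER_PORT?: string;
}

function resolveWorkerPort(settings: MemSettings, uid: number): number {
  const fromSettings = Number(settings.CLAUDE_MEM_WORKER_PORT);
  if (Number.isInteger(fromSettings) && fromSettings > 0) return fromSettings;
  return 37700 + (uid % 100); // default used when no override is saved
}
```

Computing the default from the uid alone is exactly what produced the ECONNREFUSED for users with a saved override.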

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(learn-codebase): note sed for partial reads of large files

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide

Removed prose comments from all tracked source. Preserved directives
(@ts-ignore, eslint-disable, biome-ignore, prettier-ignore, triple-slash
references, webpack magic, shebangs). Deleted two tests that asserted
on comment text rather than runtime behavior.

Net: 401 files, -14,587 / +389 lines, -10.4% bytes.

Verified: typecheck passes, build passes, test count unchanged from
baseline (22 pre-existing fails, all unrelated).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(installer): move runtime setup into npx, eliminate hook dead air

Smart-install ran 3 times during a fresh install — the worst run was silent,
fired by Claude Code's Setup hook after `claude plugin install`, producing
~30s of dead air that looked like the plugin was hung.

This change makes `npx claude-mem install` the single place heavy work
happens, with a visible spinner. Hooks become runtime-only.

- New `src/npx-cli/install/setup-runtime.ts` module: ensureBun, ensureUv,
  installPluginDependencies, read/writeInstallMarker, isInstallCurrent.
  Marker schema preserved exactly ({version, bun, uv, installedAt}) so
  ContextBuilder and BranchManager readers keep working.
- `npx claude-mem install`: ungated copy/register/enable for every IDE,
  inserts a "Setting up runtime" task with honest "first install can take
  ~30s" spinner. The claude-code shell-out to `claude plugin install` is
  removed — npx already populated everything Claude reads.
- New `npx claude-mem repair` command for post-`claude plugin update`
  recovery; it force-reinstalls the runtime.

- Setup hook now runs `plugin/scripts/version-check.js` (29ms wall) instead
  of smart-install. Mismatch prints "run: npx claude-mem repair" on stderr.
  Always exits 0 (non-blocking, per CLAUDE.md exit-code strategy).
- SessionStart loses the smart-install entry; 2 hooks remain (worker start,
  context fetch).
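
The marker currency check can be sketched as follows. The `{version, bun, uv, installedAt}` schema comes from the bullet above; the exact comparison logic is an assumption for illustration, not the shipped `isInstallCurrent`.

```typescript
// Sketch of the install marker and its currency check, assuming the marker
// records which runtime pieces finished installing.
interface InstallMarker {
  version: string;     // plugin version the runtime was set up for
  bun: boolean;        // Bun installed successfully
  uv: boolean;         // uv installed successfully
  installedAt: string; // ISO timestamp
}

function isInstallCurrent(marker: InstallMarker | null, pluginVersion: string): boolean {
  // A missing marker, a version mismatch, or a half-finished runtime all
  // mean the heavy setup work must run again.
  return marker !== null && marker.version === pluginVersion && marker.bun && marker.uv;
}
```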

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(installer): delete smart-install sources, retarget tests

- Delete scripts/smart-install.js + plugin/scripts/smart-install.js (both
  are source files kept in sync manually; both must go).
- Delete tests/smart-install.test.ts (covered surface is gone).
- tests/plugin-scripts-line-endings: drop smart-install.js entry.
- tests/infrastructure/plugin-distribution: retarget two assertions at
  version-check.js (the new Setup hook script).
- New tests/setup-runtime.test.ts: 9 tests covering marker read/write,
  isInstallCurrent semantics. Marker schema invariant verified.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(installer): describe npx-driven setup + version-check Setup hook

Sweep public docs and architecture notes to reflect the new flow:
npx installer does Bun/uv setup with a visible spinner; Setup hook runs
sub-100ms version-check.js; users hit `npx claude-mem repair` after a
`claude plugin update`.

- docs/architecture-overview.md: hook lifecycle table + npx flow paragraph
- docs/public/configuration.mdx: tree + hook config example
- docs/public/development.mdx: build output line
- docs/public/hooks-architecture.mdx: full rewrite of pre-hook section,
  timing table, performance table
- docs/public/architecture/{overview,hooks,worker-service}.mdx: tree
  comments, JSON config example, Bun requirement section

docs/reports/* untouched (historical incident reports).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): mergeSettings writes via USER_SETTINGS_PATH

Greptile P1 (#2156): `settingsFilePath()` only resolved
`process.env.CLAUDE_MEM_DATA_DIR`, while `getSetting()` reads via
`USER_SETTINGS_PATH` which `resolveDataDir()` populates from BOTH the env
var AND a `CLAUDE_MEM_DATA_DIR` entry persisted in
`~/.claude-mem/settings.json`. Result: a user with the data dir saved in
settings.json but not exported in their shell would have provider/model
settings silently written to `~/.claude-mem/settings.json` while
`getSetting()` read from `/custom/path/settings.json` — read/write split.

Drop `settingsFilePath()` and the now-unused `homedir` import; reuse the
already-imported `USER_SETTINGS_PATH` constant.
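
The precedence that `resolveDataDir()` applies (and that `settingsFilePath()` missed) can be sketched as a pure function. The names mirror the commit; the body is an assumption for illustration.

```typescript
// Illustrative data-dir resolution: the env var wins, else the
// CLAUDE_MEM_DATA_DIR entry persisted in settings.json, else the default.
function resolveDataDir(
  env: { CLAUDE_MEM_DATA_DIR?: string },
  persistedSettings: { CLAUDE_MEM_DATA_DIR?: string },
  homeDir: string
): string {
  return (
    env.CLAUDE_MEM_DATA_DIR ??
    persistedSettings.CLAUDE_MEM_DATA_DIR ??
    `${homeDir}/.claude-mem`
  );
}
// Routing reads AND writes through the same resolver is what prevents the
// read/write split described above.
```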

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(cli): parse --provider, --model, --no-auto-start install flags

Greptile P1 (#2156): InstallOptions has fields `provider`, `model`,
`noAutoStart`, but the install case in the npx-cli switch only parsed
`--ide`. The other three flags were silently dropped — `npx claude-mem
install --provider gemini` was a no-op.

Extract a `parseInstallOptions(argv)` helper, share it between the bare
`npx claude-mem` and `npx claude-mem install` paths, and validate
`--provider` against the allowed set. Update help text accordingly.
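
A minimal sketch of the shared helper. The flag names come from the commit; the allowed provider set shown is illustrative (only claude and gemini are confirmed in this PR).

```typescript
// Sketch of parseInstallOptions(argv), shared by the bare and explicit
// install paths. ALLOWED_PROVIDERS is an assumed subset for illustration.
interface InstallOptions {
  ide?: string;
  provider?: string;
  model?: string;
  noAutoStart: boolean;
}

const ALLOWED_PROVIDERS = new Set(["claude", "gemini"]);

function parseInstallOptions(argv: string[]): InstallOptions {
  const opts: InstallOptions = { noAutoStart: false };
  for (let i = 0; i < argv.length; i++) {
    const arg = argv[i];
    if (arg === "--ide") opts.ide = argv[++i];
    else if (arg === "--model") opts.model = argv[++i];
    else if (arg === "--no-auto-start") opts.noAutoStart = true;
    else if (arg === "--provider") {
      const value = argv[++i];
      if (!value || !ALLOWED_PROVIDERS.has(value)) {
        throw new Error(`--provider must be one of: ${[...ALLOWED_PROVIDERS].join(", ")}`);
      }
      opts.provider = value;
    }
  }
  return opts;
}
```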

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): pipe runtime-setup output, always show IDE multiselect

Two issues caught in a docker test of the installer:

1. The bun.sh installer, uv installer, and `bun install` were using
   stdio: 'inherit', dumping their stdout/stderr through clack's spinner
   region — visible as raw "downloading uv 0.11.8…" / "Checked 58
   installs across 38 packages…" text streaming under the spinner. Switch
   to stdio: 'pipe' and surface captured stderr only on failure (via a
   shared describeExecError() helper that includes stdout when stderr is
   empty). Spinner stays clean on the happy path.

2. promptForIDESelection() silently picked claude-code when no IDEs were
   detected, never showing the user the multiselect. On a fresh machine
   with no IDEs present yet (e.g. our docker test container), the user
   never got to choose. Now: always show the full IDE list when
   interactive; mark detected ones with [detected] hints and pre-select
   them; show a warn line if zero are detected explaining they should pick
   what they plan to use. Non-TTY callers still get the silent
   claude-code default at the call site (unchanged).
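
The error-surfacing rule from item 1 can be sketched like this. The real `describeExecError()` signature is not shown in the commit, so this is illustrative.

```typescript
// Sketch of describeExecError(): prefer captured stderr, fall back to stdout
// when stderr is empty, so piped spawns stay debuggable on failure while the
// spinner stays clean on the happy path.
function describeExecError(result: { stderr: string; stdout: string }): string {
  const err = result.stderr.trim();
  if (err.length > 0) return err;
  const out = result.stdout.trim();
  return out.length > 0 ? out : "(no output captured)";
}
```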

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): skip marketplace work for claude-code-only, offer to install Claude Code

Two related UX fixes from a docker test:

**Delay between "Saved Claude model=…" and "Plugin files copied OK"**

After dropping the needsManualInstall gate, every install was unconditionally
running `copyPluginToMarketplace` (which copied the entire root node_modules
tree — thousands of files, dozens of seconds) and `runNpmInstallInMarketplace`
(npm install --production) even when only claude-code was selected. Neither
is needed for claude-code: that path uses the plugin cache dir + the
installed_plugins.json + enabledPlugins flag, all of which we already write.

- Drop `node_modules` from `copyPluginToMarketplace`'s allowed-entries list;
  the dependency-install task populates it on the destination side anyway.
- Re-introduce `needsMarketplace = selectedIDEs.some(id => id !== 'claude-code')`
  scoped *only* to `copyPluginToMarketplace`, `runNpmInstallInMarketplace`,
  and the pre-install `shutdownWorkerAndWait` (also pointless for claude-code-
  only flows since we're not overwriting the worker's running cache dir
  source). All other tasks (cache copy, register, enable, runtime setup) stay
  unconditional.

**Claude Code missing → silent install of an IDE that isn't there**

When the user picked claude-code on a machine without it (e.g. a fresh
container), the install completed but `claude` was unavailable and the only
hint was a generic warn line. Replace with an explicit pre-flight prompt:

  Claude Code is not installed. Claude-mem works best in Claude Code, but
  also works with the IDEs below.
  ? Install Claude Code now?
    ◆ Yes — install Claude Code (recommended)
    ◯ No — pick another IDE below
    ◯ Cancel installation

If the user picks "Yes", run `curl -fsSL https://claude.ai/install.sh | bash`
(or the PowerShell equivalent on Windows), then re-detect IDEs and proceed
with claude-code pre-selected. If the install fails or the user picks "No",
the multiselect still appears with claude-code visible (just unmarked
[detected]), so they can opt in or pick another IDE.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): detect Claude Code via `claude` CLI, not ~/.claude dir

The directory `~/.claude` can exist (e.g. mounted in Docker, or created
by tooling) without Claude Code actually being installed. Detect the
`claude` command in PATH instead so the installer correctly offers to
install Claude Code when missing.
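
The PATH-based check can be sketched as follows; the injected `exists` predicate stands in for an `fs.existsSync`-style call so the lookup rule is testable. This mirrors the described behavior, not the actual detection code.

```typescript
// Illustrative detection: resolve `claude` the way a shell would, by walking
// PATH entries, instead of trusting that ~/.claude exists on disk.
function isCommandOnPath(
  command: string,
  pathVar: string,
  exists: (candidate: string) => boolean
): boolean {
  return pathVar
    .split(":")
    .filter((dir) => dir.length > 0)
    .some((dir) => exists(`${dir}/${command}`));
}
```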

* docs(learn-codebase): add reviewer note explaining the cost tradeoff

The skill intentionally reads every file in full to build a cognitive
cache that pays off across the rest of the project. Add a brief note
so reviewers (human or bot) understand the tradeoff before flagging
the unbounded read as a cost issue.

* fix: address Greptile P1 feedback on welcome hint and learn-codebase

- SearchRoutes: skip welcome hint when caller passes ?full=true so
  explicit full-context requests aren't intercepted by the hint.
- learn-codebase: replace `sed` instruction with the Read tool's
  offset/limit parameters, since Bash is gated in Claude Code by
  default.

* feat(install): ASCII-animated logo splash on interactive install

Plays a ~1s bloom animation of the claude-mem sunburst logomark when
the installer starts in an interactive terminal — geometrically rendered
via 12 ray curves around a center disc, in the brand orange. The
wordmark and tagline type on alongside the final frame.

Auto-skipped on non-TTY, in CI, when NO_COLOR or CLAUDE_MEM_NO_BANNER
is set, or when the terminal is too narrow.

Inspired by ghostty +boo.

* feat(banner): replace rotation frames with angular-sector bloom generator

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): three-act choreography renderer with radial gradient and diff redraw

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): update preview script to support small/medium/hero tier selection

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(docker): add COLORTERM=truecolor to test-installer sandbox

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): auto-apply PATH for Claude Code with spinner UX

The Claude Code install.sh prints a Setup notes block telling users to
manually edit "your shell config file" to add ~/.local/bin to PATH —
which left fresh installs unable to launch claude from the command line.

After a successful install, detect ~/.local/bin/claude on disk and, if
the dir is missing from PATH, append the right export line to .zshrc /
.bash_profile / .bashrc / fish config (idempotent, marked with a
comment). Also updates process.env.PATH for the current install run.
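
The idempotent append can be sketched as a string transform. The marker text and helper name are illustrative, not the installer's actual constants.

```typescript
// Sketch of the idempotent rc-file edit: a marker comment makes re-runs
// no-ops, so repeated installs never stack duplicate export lines.
const MARKER = "# added by claude-mem installer";

function ensurePathExport(rcContents: string, exportLine: string): string {
  if (rcContents.includes(MARKER)) return rcContents; // already applied
  const sep = rcContents === "" || rcContents.endsWith("\n") ? "" : "\n";
  return `${rcContents}${sep}${MARKER}\n${exportLine}\n`;
}
```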

Wraps the curl|bash install in a clack spinner (interactive only) so the
~4 minute native-build download doesn't look frozen — output is captured
silently and dumped on failure for debuggability. Non-interactive mode
keeps inherited stdio for CI logs.

Verified end-to-end in the test-installer docker sandbox: spinner
animates, .bashrc gets the export, fresh login shell resolves claude.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): video-frame ASCII renderer with three-act choreography

Generator switched from a single Jimp-rendered logo to pre-extracted
video frames concatenated with \x01 separators and gzip-deflated, ported
from ghostty's boo wire format. Renderer rewritten around three acts
(ignite → stagger bloom → text reveal + breathe) with adaptive sizing,
radial gradient, and diff-based redraw.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(onboarding): unify install / SessionStart / viewer around one first-success moment

Three surfaces now point at the same north-star moment — open the viewer, do
anything in Claude Code, watch an observation appear within seconds — with the
same verbatim timing and privacy lines, and a single canonical "how it works"
explainer instead of three diverging copies.

- Canonical explainer at src/services/worker/onboarding-explainer.md served via
  GET /api/onboarding/explainer; mirrored into plugin/skills/how-it-works/SKILL.md
- SessionStart welcome hint rewritten as third-person status (no imperatives
  Claude tries to execute), pinned with a default-value regression test
- Post-install Next Steps reframed as "two paths": passive default + optional
  /learn-codebase front-load; drops /mem-search and /knowledge-agent from this
  surface; adds verbatim timing + privacy lines and /how-it-works link
- /api/stats response gains firstObservationAt for the viewer stat row
- Viewer WelcomeCard branches on observationCount === 0: empty state shows live
  worker-connection dot + "waiting for activity"; has-data state shows
  observations · projects · since [date] and two example prompts. v2 dismiss key
- jimp added to package.json to fix pre-existing banner-frame build break

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(banner): play unconditionally; only honor CLAUDE_MEM_NO_BANNER

The 128-col / TTY / CI / NO_COLOR gates silently swallowed the banner in
narrower terminals, CI logs, and any non-TTY pipe — including Docker runs
where -it should preserve the experience but column width was the wrong
gate. Remove the implicit gates; keep the explicit opt-out only.

If a frame wraps in a narrow terminal, that's better than the banner
not playing at all.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* revert(banner): restore 15:33 gating logic per user request

Reverts eb6fc157. Restores isBannerEnabled to the state at commit
8e448015 (2026-04-30 15:33): TTY check, !CI, !NO_COLOR, !CLAUDE_MEM_NO_BANNER,
and cols >= BANNER.width.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): wrap remaining slow steps with spinners

Each IDE installer (Cursor, Gemini CLI, OpenCode, Windsurf, OpenClaw,
Codex CLI, MCP integrations) now runs inside a clack task spinner with
per-step progress messages instead of silent dynamic-import + cpSync.
Pre-overwrite worker shutdown (up to 10s) and the post-install health
probe (up to 3s) also get spinners.

Internal console.log/error/warn from each IDE installer is buffered
during the spinner; if the install fails, captured output is replayed
afterward via log.warn so users can see what broke.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(review): observation count + IDE pre-selection regressions

WelcomeCard's "no observations yet" empty state was triggered when a
project filter narrowed the feed to zero rows, even with thousands of
observations elsewhere. Source the count from global stats.database
to match firstObservationAt's scope.

Restore initialValues: [] in the IDE multiselect — pre-selecting every
detected IDE was the exact regression #2106 was filed for.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): trichotomy worker state + cache fallback for script path

ensureWorkerStarted now returns 'ready' | 'warming' | 'dead' instead of
boolean. The spawned-but-still-warming case (common in Docker cold
starts and slow first-time inits) was being misreported as 'did not
start', which contradicted the next-steps panel saying 'still starting
up'. Install task message and Next Steps headline now agree on the
actual state.
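
The trichotomy can be sketched as a tiny classifier. The three state names come from this commit; how each is detected (spawn success plus a health probe) is an assumption for illustration.

```typescript
// Sketch of the ready/warming/dead worker state, replacing the old boolean.
type WorkerState = "ready" | "warming" | "dead";

function classifyWorkerState(spawned: boolean, healthProbeOk: boolean): WorkerState {
  if (!spawned) return "dead";        // process never came up
  if (healthProbeOk) return "ready";  // already answering HTTP
  return "warming";                   // spawned but still initializing
}
```

Collapsing `warming` into "did not start" was the misreport this fix removes.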

Also fixes the actual root cause of 'Worker did not start' on
claude-code-only installs: the worker script path was hardcoded to the
marketplace dir, which is left empty when no non-claude-code IDE is
selected. Now falls back to pluginCacheDirectory(version) when the
marketplace copy isn't present.

Verified end-to-end in docker/claude-mem with --ide claude-code,
--ide cursor, and a fresh container — install task and headline
agree on 'Worker ready at http://localhost:<port>' in all cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: align CLAUDE.md and public docs with current code

Sweep across CLAUDE.md and 10 high-traffic docs/public/ MDX files to
remove point-in-time references and align with the actual current
shape of the codebase. Highlights:

- Hardcoded port 37777 → per-user formula (37700 + uid % 100) on the
  front-door pages (introduction, installation, configuration,
  architecture/overview, architecture/worker-service, troubleshooting,
  hooks-architecture, platform-integration).
- Default model 'sonnet' → 'claude-haiku-4-5-20251001' (matches
  SettingsDefaultsManager).
- Node 18 → 20 (matches package.json engines).
- Lifecycle hook count corrected (5 events).
- Removed the nonexistent 'Smart Install' component and pre-built
  directory tree referencing files that no longer exist
  (context-hook.ts, save-hook.ts, cleanup-hook.ts, etc.); replaced
  with the real worker dispatcher shape.
- Removed CLAUDE.md '#2101' issue tag (kept the design rationale).
- Replaced obsolete hooks.json example with a description of the real
  bun-runner.js / worker-service.cjs hook event shape.

Lower-traffic doc pages still hardcode 37777 — left for a separate
global pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): land strip-comments around real parsers (postcss, remark, parse5)

Each language gets a real parser to locate comments, then we splice ranges
out of the original source. The library never serializes — that's how
remark-stringify produced 243 reformat-noise diffs in the first attempt
versus the 21 real strip targets here.

  JS/TS/JSX  -> ts.createSourceFile + getLeadingCommentRanges
  CSS/SCSS   -> postcss.parse + walkComments + node.source offsets
  MD/MDX     -> remark-parse (+ remark-mdx) + AST html / mdx-expression nodes
  HTML       -> parse5 with sourceCodeLocationInfo
  shell/py   -> kept hand-rolled hash stripper (no library worth the dep)

Preserves: shebangs, @ts-* directives, eslint-disable, biome-ignore,
prettier-ignore, triple-slash refs, webpack magic, /*! license keep,
@strip-comments-keep file marker. JS/TS handler runs a parse-roundtrip
check and refuses to write if syntax errors increased (catches the
worker-utils.ts breakage class from the 2026-04-29 attempt).
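
The keep-list can be sketched as a predicate over comment text. The directive list mirrors the paragraph above; the exact patterns in strip-comments.ts may differ.

```typescript
// Illustrative keep-list: a comment survives stripping when it carries a
// directive rather than prose.
const KEEP_PATTERNS: RegExp[] = [
  /@ts-(ignore|expect-error|nocheck|check)/,
  /eslint-disable/,
  /biome-ignore/,
  /prettier-ignore/,
  /^\/\/\/\s*<reference/,   // triple-slash reference
  /webpack[A-Z]\w+:/,       // webpack magic comments
  /^\/\*!/,                 // /*! license keep
  /@strip-comments-keep/,   // whole-file opt-out marker
];

function shouldKeepComment(text: string): boolean {
  return KEEP_PATTERNS.some((pattern) => pattern.test(text));
}
```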

npm scripts:
  strip-comments         (apply)
  strip-comments:check   (CI-style, exits non-zero if changes needed)
  strip-comments:dry-run (list, no writes)

Verified --check on this repo: 21 changes, -4.0% bytes, no parse-error
regressions, no reformat-suspect false positives.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide via parser-backed tool

21 files changed, -17,550 bytes (-4.0%) of narrative comments removed
across .ts / .tsx / .js / .mjs and the .gitignore. JS/TS comments stripped
via ts.createSourceFile + getLeadingCommentRanges — same canonical lexer,
same behavior as the 2026-04-29 strip, no reformat noise.

Preexisting baseline (unchanged):
  typecheck: 16 errors at HEAD, 16 errors after strip (line numbers shift,
             no new error classes — verified via diff of sorted error lists)
  build:     fails at HEAD with CrushHooksInstaller.js unresolved import
             (preexisting, unrelated to this strip)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): drop Crush integration references after extract

The Crush integration was extracted to its own branch on May 1, but the
import at install.ts:280 (and the case block + ide-detection entry +
McpIntegrations config + npx-cli help text) still referenced the now-
removed CrushHooksInstaller.js, breaking the build.

Removes:
- case 'crush' block in install.ts
- crush entry in ide-detection.ts
- CRUSH_CONFIG and registration in McpIntegrations.ts
- 'crush' from the IDE Identifiers help line in index.ts

Rebuilds worker-service.cjs to match.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(banner): mark generated banner-frames.ts with @strip-comments-keep

Without this, every build/strip cycle ping-pongs five lines of doc
comments in and out of the auto-generated output. The keep-marker tells
strip-comments.ts to skip the file entirely.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): drop banner-frame regen from build script

generate-banner-frames.mjs requires PNG frames in /tmp/cmem-banner-frames
that only exist after the maintainer runs ffmpeg locally on the source
video. CI has neither the video nor the frames, so the build broke on
Windows. The output (src/npx-cli/banner-frames.ts) is committed, so the
regen is a one-shot dev step — not a build step. Run the script directly
when the video changes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): unstick the spinner — kill claim-self-lock, wake on fail, auto-broadcast

Three surgical changes that cure the stuck-spinner bug at the source.

Phase 1.1 (L9): claimNextMessage no longer self-excludes its own worker_pid.
A single UPDATE-RETURNING grabs the oldest pending row by id. Removes the
LiveWorkerPidsProvider plumbing that was never injected — Supervisor enforces
single-worker via PID file, so the multi-worker SQL was defending against a
configuration the project does not support.

Phase 1.2 (L19): SessionManager.markMessageFailed wraps PendingMessageStore.markFailed
and emits 'message' on the per-session EventEmitter. The iterator's waitForMessage
now wakes immediately on re-pend instead of parking for 3 minutes. ResponseProcessor
and SessionRoutes routed through the new wrapper.

Phase 1.3 (L24): PendingMessageStore takes an optional onMutate callback fired
from every mutator (enqueue, claimNextMessage, confirmProcessed, markFailed,
transitionMessagesTo, clearFailedOlderThan). SessionManager wires it; WorkerService
passes broadcastProcessingStatus. Ten manual broadcast calls deleted across
SessionCleanupHelper, SessionEventBroadcaster, SessionRoutes, DataRoutes, and
worker-service. Forgetting the broadcast is now structurally impossible.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): delete dead code — legacy routes, processPendingQueues, decorative guards

Pure deletions. Phase 2 of kill-the-asshole-gates.

- Legacy /sessions/:sessionDbId/* routes (handleSessionInit, handleObservations,
  handleSummarize, handleSessionStatus, handleSessionDelete, handleSessionComplete)
  bypassed all five ingest gates and were a parallel write path. Folded the
  initializeSession + broadcastNewPrompt + syncUserPrompt + ensureGeneratorRunning
  + broadcastSessionStarted work into the canonical /api/sessions/init handler so
  the hook makes one round trip instead of two.
- processPendingQueues (~104 lines, zero callers) — replaced in Phase 6 by a
  one-statement startup sweep.
- spawnInProgress Map and crashRecoveryScheduled Set — decorative dedupe over
  generatorPromise and stillExists checks that already provide the real safety.
- STALE_GENERATOR_THRESHOLD_MS — pre-empted live generators and raced with the
  finally block; the 3min idle timeout already kills zombies.
- MAX_SESSION_WALL_CLOCK_MS — ran a SELECT on every observation to enforce 24h.
  Runaway-spend protection lives in the API key, not in claude-mem.
- Missing-id 400 in shared.ts ingestObservation — Zod already enforces min(1)
  on contentSessionId and toolName at the route schema.
- SessionCompletionHandler import + completionHandler field on SessionRoutes
  (orphaned after handler deletions).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): SQL-backed getTotalQueueDepth — single source of truth

Was: iterate this.sessions.values() and sum getPendingCount per session.
Now: SELECT COUNT(*) FROM pending_messages WHERE status IN ('pending','processing').

The in-memory sessions Map drifted from the DB rows whenever a generator exited
without confirm/fail, leading to false-positive isProcessing in the UI. Phase 1.3's
auto-broadcast fires on every mutation, but it broadcast a stale Map count.
Reading from the DB makes the UI's spinner state match what the queue actually holds.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): typed abortReason replaces wasAborted boolean

Was: a boolean wasAborted that lumped every abort together. The finally block
branched on !wasAborted, so any abort skipped restart — including idle aborts
with pending work, which is exactly the case where we DO want to restart.

Now: ActiveSession.abortReason is a typed enum 'idle' | 'shutdown' | 'overflow'
| 'restart-guard'. The finally block consumes the reason and only skips restart
for 'shutdown' and 'restart-guard'. Idle and overflow aborts fall through, so
if pending work exists they trigger restart correctly.

Dropped 'stale' and 'wall-clock' from the union — Phase 2 deleted those paths.
Natural-completion abort (post-success) intentionally has no reason; it's not
gating restart logic.
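
The restart decision can be sketched as a standalone function. The reason union comes from this commit; packaging the finally-block branch this way is illustrative.

```typescript
// Sketch of the restart rule the finally block applies after a generator exit.
type AbortReason = "idle" | "shutdown" | "overflow" | "restart-guard";

function shouldRestart(reason: AbortReason | undefined, pendingCount: number): boolean {
  if (reason === "shutdown" || reason === "restart-guard") return false;
  // idle/overflow aborts (and natural completion, which has no reason)
  // restart only when pending work remains.
  return pendingCount > 0;
}
```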

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): unify the two generator-exit finally blocks

Was: worker-service.ts:startSessionProcessor and SessionRoutes:ensureGeneratorRunning
each had their own ~70-line finally block with divergent restart-guard handling.
The worker-service path called terminateSession on RestartGuard trip and orphaned
pending rows (the L16 bug); the SessionRoutes path drained them. Two places to
update when rules changed.

Now: handleGeneratorExit in src/services/worker/session/GeneratorExitHandler.ts
owns the contract:
  1. Always kill the SDK subprocess if alive.
  2. Always drain processingMessageIds via sessionManager.markMessageFailed
     (which wakes the iterator — Phase 1.2).
  3. shutdown / restart-guard reasons: drain pending rows via
     transitionMessagesTo('failed'), finalize, remove from Map. Fixes L16.
  4. pendingCount=0: finalize normally and remove from Map.
  5. pendingCount>0: backoff respawn via per-session respawnTimer (no global Set;
     Phase 2.4 deleted that). RestartGuard trip drains to 'abandoned'.

Both finally blocks are now ~10-line wrappers that translate local state into the
canonical abortReason and delegate. Restored completionHandler injection into
SessionRoutes (was dropped in Phase 2 cleanup; needed by the unified helper for
finalizeSession).

Behavior change: SessionRoutes' previous "keep idle session in memory" was
deliberately replaced by the plan's "remove from Map on natural completion" —
next observation reinitializes via getMessageIterator → initializeSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(worker): startup orphan sweep — reset 'processing' rows at boot

When the worker dies (crash, kill, restart), any pending_messages rows it left
in 'processing' state are by definition orphans (the only worker is dead).
Single SQL UPDATE at boot resets them to 'pending' so the iterator can claim
them again. Replaces the deleted processPendingQueues function (Phase 2.2).

Runs in initializeBackground after dbManager.initialize() and before the
initializationComplete middleware releases blocked HTTP requests, so no
in-flight request can race the sweep. NOT on a periodic timer — after boot,
every 'processing' row has a live consumer and a periodic sweep would race.
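
The sweep can be sketched in memory; the real version is one SQL UPDATE, and this pure-logic equivalent only shows the invariant it relies on.

```typescript
// In-memory sketch of the boot-time orphan sweep: with a single worker,
// every row still 'processing' at boot has no live consumer, so it is
// flipped back to 'pending' for the iterator to reclaim.
interface QueueRow {
  id: number;
  status: "pending" | "processing" | "done" | "failed";
}

function sweepOrphans(rows: QueueRow[]): number {
  let reset = 0;
  for (const row of rows) {
    if (row.status === "processing") {
      row.status = "pending";
      reset++;
    }
  }
  return reset; // number of orphaned rows requeued
}
```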

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): simplify enqueue catch, replace memorySessionId throw with re-pend

7.1: queueObservation's catch was logging two ERROR-level messages and rethrowing.
The rethrow is correct (FK violations / disk full / schema drift should crash
loudly), but the verbose ERROR logging pretended the error was recoverable.
Reduced to one INFO line + rethrow.

7.2: ResponseProcessor's memorySessionId guard was throwing if the SDK hadn't
included session_id on the first user-yield, terminal-failing the entire batch.
Now warns and re-pends in-flight messages via sessionManager.markMessageFailed
(which wakes the iterator — Phase 1.2). The next iteration tries again with
memorySessionId hopefully captured.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): mirror builds to installed-version cache for hot reload

When package.json bumps past Claude Code's installed pin, sync-marketplace
wrote new code to cache/<buildVersion>/ but the worker loaded from
cache/<installedVersion>/, so worker:restart reloaded the same old code.

Replace the exit-on-mismatch preflight with a mirror step: when versions
differ, also rsync plugin/ into cache/<installedVersion>/ so worker:restart
hot-reloads new code without a Claude Code session restart. The
build-version cache still gets written for the eventual
`claude plugin update`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: delete dead barrel files and orphan utilities

- src/sdk/index.ts (re-exports parser+prompts; nothing imported the barrel)
- src/services/Context.ts (re-exports ./context/index.js; no importers)
- src/services/integrations/index.ts (no importers)
- src/services/worker/Search.ts (3-line barrel of ./search/index.js)
- src/services/infrastructure/index.ts: drop CleanupV12_4_3 re-export
- src/utils/error-messages.ts (getWorkerRestartInstructions never imported)
- src/types/transcript.ts (170 LoC of types, zero importers)
- src/npx-cli/_preview.ts (banner dev preview, no script wires it)

Build + tests still pass; observations still flowing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(parser): drop unused detectLanguage

Only the user-grammar-aware variant detectLanguageWithUserGrammars()
is actually called.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(types): drop unused SdkSessionRecord + ObservationWithContext

Both interfaces in src/types/database.ts had zero importers anywhere
in src or tests.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): drop unused getDetectedIDEs + claudeMemDataDirectory

getDetectedIDEs has no callers — install.ts uses detectInstalledIDEs
directly. claudeMemDataDirectory has no callers either.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ProcessManager): drop dead orphan-reaper + signal-handler helpers

Each had zero callers in src/ or tests/:
  - cleanupOrphanedProcesses + enumerateOrphanedProcesses
  - ORPHAN_PROCESS_PATTERNS + ORPHAN_MAX_AGE_MINUTES
  - forceKillProcess
  - waitForProcessesExit
  - createSignalHandler
  - resetWorkerRuntimePathCache

The orphan reaper was retired in PATHFINDER Plan 02 ("OS process groups
replace hand-rolled reapers", commit 94d592f2) — these were the leftover
pieces. shutdown.ts uses the supervisor's own kill-pgid path instead.

parseElapsedTime kept (covered by tests/infrastructure/process-manager.test.ts).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete 11 unreferenced DX/forensic scripts

None of these are referenced by package.json npm scripts or docs/.
All last touched on Apr 29 only as part of the comment-stripping
pass — the feature code itself is older and orphaned:

  analyze-transformations-smart.js
  debug-transcript-structure.ts
  dump-transcript-readable.ts
  endless-mode-token-calculator.js
  extract-prompts-to-yaml.cjs
  extract-rich-context-examples.ts
  find-silent-failures.sh
  fix-all-timestamps.ts
  format-transcript-context.ts
  test-transcript-parser.ts
  transcript-to-markdown.ts

These are standalone tools — runtime behavior unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete unused extraction/ and types/ subdirs

- scripts/extraction/{extract-all-xml.py, filter-actual-xml.py, README.md}
  point at ~/Scripts/claude-mem/ — the user's pre-relocation path that no
  longer exists. Zero references in package.json, src/, or tests/.
- scripts/types/export.ts duplicates ObservationRecord etc. and has no
  importers (CodexCliInstaller imports transcripts/types, not this).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(BranchManager): drop dead getInstalledPluginPath

OpenCodeInstaller has its own (used) getInstalledPluginPath; the
BranchManager copy never had any external callers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ChromaSyncState): unexport DocKind (used internally only)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(gemini): drop stale earliestPendingTimestamp / processingMessageIds

Both fields were removed from ActiveSession in earlier queue-engine
cleanup. Tests had been silently keeping them because the mock sessions
use 'as any' to bypass strict typing, so the dead fields rode along
without complaint.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 3 unused module-level constants

- src/npx-cli/banner.ts: CURSOR_HOME, CLEAR_DOWN (banner uses
  CLEAR_SCREEN which combines clear-down + cursor-home into a single
  CSI sequence; the standalone constants were leftovers).
- src/services/worker/BranchManager.ts: DEFAULT_SHELL_TIMEOUT_MS
  (BranchManager only uses GIT_COMMAND_TIMEOUT_MS / NPM_INSTALL_TIMEOUT_MS).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(opencode-plugin): drop dead workerPost helper

Only the fire-and-forget variant (workerPostFireAndForget) is actually
called. workerPost was the await-result version with no remaining caller.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 8 truly-unused interface fields

Verified each by grepping for `.field`, `"field"`, `'field'`, and
`field:` patterns across src/ + tests/ + plugin/scripts. Where the
only remaining usage was the assignment site, removed the assignments too.

- GitHubStarsData: watchers_count, forks_count (only stargazers_count read)
- TableColumnInfo: dflt_value (PRAGMA returns it but no caller reads it)
- IndexInfo: seq (PRAGMA returns it but no caller reads it)
- ObservationRecord: source_files (legacy field, no readers)
- HookResult.hookSpecificOutput: permissionDecisionReason
- WatchTarget: rescanIntervalMs (set in config, never read)
- ShutdownResult: confirmedStopped (write-only — assigned but no
  reader; updated all 3 return sites to drop it)
- ModePrompts: language_instruction (multilingual support never wired)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): reuse InstallOptions type instead of inline duplicate

parseInstallOptions had its return type written out inline as an
anonymous duplicate of InstallOptions. Use the canonical type
(import type — zero bundle cost).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(integrations): drop unused Platform type alias

The detectPlatform() function that returned this type was deleted earlier
in the branch (along with getScriptExtension that consumed it). The type
itself outlived its consumer; only string literals "Platform:" survive in
console.log diagnostics, which don't reference the alias.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): broadcast processing_status when summarize is queued

broadcastSummarizeQueued was an empty no-op even though
handleSummarizeByClaudeId calls it after enqueueing. The PendingMessageStore
onMutate callback already fires broadcastProcessingStatus on enqueue, but
calling it explicitly from broadcastSummarizeQueued ensures the spinner
turns on the moment a summary is requested, even if the onMutate chain
has a timing race.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): keep spinner on while summary generates

ClaudeProvider's SDK can pull multiple synthetic prompts (e.g.
observation + summarize) before producing responses. Each pull pushed
an ID to session.processingMessageIds. When the SDK's first
observation response came back, ResponseProcessor.confirmProcessed
deleted ALL pending message rows — including the still-in-flight
summary — so getTotalQueueDepth dropped to 0 and the spinner turned
off, even though the summary took another ~22s to actually generate.

Tag each in-flight message with its type ({id, type}) so the response
processor can pop only the FIFO message of the matching type
(observation vs summarize). The summary row stays in 'processing'
until its own response arrives, keeping the spinner lit through the
entire summary window.

Also updates Gemini/OpenRouter providers and GeneratorExitHandler for
the new shape.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): clear summary from queue on any SDK response

Switch ResponseProcessor from type-aware FIFO matching to strict FIFO
popping (each SDK response → 1 in-flight message consumed). This way
the summary always clears when the SDK responds, even when the
response is unparseable or the summary doesn't actually generate
content — preventing stuck spinner / queue-depth-stuck-at-1.

Spinner behavior is preserved: messages enqueued after the summary
keep the queue depth elevated, and only when the SDK has responded
to every prompt does the queue drain to zero.

Also: when the consumed message is a 'summarize' and parsing fails,
treat it as best-effort and confirmProcessed (no retry) — summaries
that can't be parsed shouldn't keep retrying.
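
The strict FIFO rule can be sketched like this (the {id, type} shape follows the commits above; the function name and return shape are made up):

```typescript
type InFlight = { id: number; type: "observation" | "summarize" };

// Each SDK response consumes exactly one in-flight message, oldest first.
// Unparseable summaries are best-effort: consumed, never retried.
function consumeOne(
  inFlight: InFlight[],
  parsed: boolean
): { msg?: InFlight; retry: boolean } {
  const msg = inFlight.shift(); // strict FIFO, no type matching
  if (!msg) return { retry: false };
  if (!parsed && msg.type === "summarize") return { msg, retry: false };
  return { msg, retry: !parsed };
}
```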

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): redesign welcome card and remove source filters

The first-start welcome card now explains the three feed card types
(observation/summary/prompt) with color-coded badges, points users at
the gear icon for settings and the project dropdown for filtering, and
plugs /mem-search for recall — replacing the old two-line "ask:" prompts.

Source filter tabs (Claude/Codex/etc.) are removed from the header.
Filtering by AI provider was nonsense from a user POV; the project
dropdown is the only header filter now. Source tracking is also
stripped from useSSE, usePagination, App state, and CSS.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): keep welcome card in feed column, swap rows for 3 squares

Two visible problems in the previous design: the card stretched
edge-to-edge while feed cards sit in a centered 650px column, and
the body was a stack of long horizontal rows that scanned line-by-line.

Both fixed: Feed now accepts a pinnedTop slot so the welcome card
renders inside the same .feed-content column as observation cards.
Body is now a 3-column grid of square feature blocks — Live feed,
Tune it, Recall it — each with a custom inline SVG illustration
(stacked cards with color-coded stripes, gear+sliders, magnifier
over cards). Old text-row sections (welcome-card-types,
welcome-card-tips, welcome-card-section, welcome-card-tip-icon)
are removed. Squares stack to one column under 600px.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): convert welcome card to glassy modal with stylized logo

Card now opens as a centered modal with a frosted/glass backdrop
(blur + saturate) so it doubles as a proper help dialog when reopened
from the header's question-mark button. Removed the observation count,
project count, and "since" date — those don't make sense for a
first-launch surface and felt out of place in a help context.

Header art swapped from the small webp logomark to the new
high-resolution sun/sunburst PNG (claude-mem-logo-stylized.png),
shipped as a checked-in asset in src/ui and plugin/ui.

Bigger throughout: 28px h2, 16px tagline, 88px illustrations,
26px feature padding, 1:1 aspect-ratio squares. Backdrop click and
Esc both close. Mobile collapses the grid to one column and drops
the aspect-ratio constraint.

Reverted the unused pinnedTop slot on Feed.tsx since the welcome
card is now a true overlay rather than an in-feed pinned card.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): make welcome modal actually glassy

Previous version had a 55%-opacity black backdrop that almost fully
blocked the underlying UI — the "glass" was just a dark plate.

Now the backdrop is fully transparent (no darkening at all), the
panel itself drops to 55% bg-card opacity with its existing
backdrop-filter blur(28px) saturate(170%), and the feature squares
drop to 35% bg-tertiary so they layer as glass-on-glass over the
already-blurred panel. The header and feed below now read clearly
through the modal's frosted blur.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): bulletproof square features via padding-bottom + clamp() fluid type

Squares were rendering taller than wide because aspect-ratio is treated
as a minimum — content can push the box past 1:1. Switched to the
classic padding-bottom: 100% trick: percentage padding resolves against
the parent's width, so the box is ALWAYS W × W regardless of content.
Inner content sits in an absolutely-positioned flex column that can't
push the shell taller.

Whole modal is now desktop-first and fluid via clamp() — no media-query
stair-steps for type, padding, gaps, border-radius, illustration size,
or modal width. Single mobile breakpoint at <600px collapses the grid
to one column and reverts the padding-bottom trick so each feature can
grow to natural content height.

Tightened the three feature descriptions so they fit comfortably inside
the square at the desktop size.
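
The technique, as a minimal CSS sketch (class names are illustrative, not the viewer's actual selectors):

```css
/* Percentage padding resolves against the PARENT's width, so the shell
   is always width x width no matter how much content it holds. */
.feature-square {
  position: relative;
  padding-bottom: 100%; /* height = width */
}
.feature-square > .feature-inner {
  position: absolute;
  inset: 0; /* content can't push the shell taller */
  display: flex;
  flex-direction: column;
  /* fluid type via clamp(), no media-query stair-steps */
  font-size: clamp(0.8rem, 1.4vw, 1rem);
}
@media (max-width: 600px) {
  .feature-square { padding-bottom: 0; } /* natural content height on mobile */
}
```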

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* style(viewer): 15% black overlay + heavier modal shadow for elevation

Backdrop goes from transparent to rgba(0,0,0,0.15) — just enough
darkening to push the modal visually forward without burying the
underlying UI. Modal shadow stacked: 40px/120px ambient + 16px/48px
contact, both deeper, plus the existing inset 1px highlight.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): clear pending_messages queue on build-and-sync

Rewrites scripts/clear-failed-queue.ts to talk directly to SQLite via
bun:sqlite — the previous HTTP endpoints (/api/pending-queue/*) were
removed during the queue engine rewrite, so the script was orphaned.
Wires `npm run queue:clear` into `build-and-sync` so each rebuild
starts with a clean queue.
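
The direct-SQLite approach amounts to one statement. A sketch against an injected handle (the real script uses bun:sqlite on the claude-mem database; the table name comes from the commit, the rest is illustrative):

```typescript
// Talk to SQLite directly: the old /api/pending-queue/* HTTP endpoints
// were removed in the queue engine rewrite, so there is no worker
// round-trip to make.
interface Db {
  run(sql: string): void;
}

function clearPendingQueue(db: Db): string {
  const sql = "DELETE FROM pending_messages";
  db.run(sql); // each rebuild starts with a clean queue
  return sql;
}
```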

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): collapse parser to binary valid/invalid + clearPendingForSession model

- Parser: { valid: true, observations, summary } | { valid: false } — drops kind/skipped enum dispatch
- ResponseProcessor: two branches only (parseable → store + clearPendingForSession; else → no-op)
- Drop processingMessageIds + per-message claim/confirm/markFailed lifecycle across 3 providers
- PendingMessageStore: 226 → 140 lines; remove markFailed/transitionMessagesTo/confirmProcessed/clearFailedOlderThan/getAllPending (peekPendingTypes is kept)
- Schema migration v31+v32: drop retry_count, failed_at_epoch, completed_at_epoch, worker_pid columns
- SessionQueueProcessor: delete two 1s recovery sleeps (let iterator end on error)
- Server.ts/SettingsRoutes.ts: replace four magic-number setTimeout exit-flush patterns with flushResponseThen helper
- GeneratorExitHandler: 183 → 117 lines (drain in-flight loop gone)

Net: -181 lines. No more silent data loss via maxRetries=3.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): address review comments batch 1

- install.ts: needsMarketplace true when claude-code selected (P1, was no-op)
- install.ts: throw on invalid --model so CLI exits non-zero
- install.ts: skip worker health checks + adapt next-step copy when --no-auto-start
- install.ts: repair regenerates plugin cache when missing
- index.ts: readFlag rejects missing/flag-shaped values
- index.ts: route flag-first invocations (e.g. `--provider claude`) to install
- banner.ts: fail-open if frame payload decode throws
- SearchRoutes.ts: 5s TTL cache for settings reads on hot hook path (P2)
- detect-error-handling-antipatterns.ts: trailing-brace strip whitespace-tolerant
- investigate-timestamps.ts: compute Dec 2025 epochs at runtime (was Dec 2024)
- regenerate-claude-md.ts: include workingDir in fallback walker so root is covered
- sync-marketplace.cjs: parseWorkerPort validates 1..65535 before http.request
- sync-to-marketplace.sh: resolve SOURCE_DIR from script location, not cwd
- Dockerfile.test-installer: bash --login sources .bashrc via .bash_profile
- docs/configuration.mdx: drop nonexistent .worker.port file refs, use settings.json
- docs/architecture-overview.md: dynamic port + queue model after parser collapse
- docs/architecture/worker-service.mdx: dynamic port example + drop port-file claim
- docs/platform-integration.mdx: WORKER_BASE_URL pattern, drop hardcoded 37777
- install/public/install.sh: Node 20 floor (was 18) to match docs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): reset claimed messages to pending on early-return paths

ResponseProcessor returns early in two cases:
- parser invalid (unparseable response)
- memorySessionId not yet captured

Both paths previously left the just-claimed message in `status='processing'`,
which counts toward `getPendingCount`. The generator-exit handler then sees
`pendingCount > 0` and respawns the generator, looping until the restart
guard trips and `clearPendingForSession` deletes the message — silent data
loss.

Calling `resetProcessingToPending` on these paths lets the next generator
pass re-claim the message and try again, instead of burning the restart
budget on no-op respawns.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): swebench fallback row + troubleshooting port path

- evals/swebench/run-batch.py: append fallback prediction row when
  orchestrator future raises, preserving "never drop an instance" guarantee
- docs/troubleshooting.mdx: drop nonexistent .worker.port / worker.port file
  references; use settings.json + /api/health for port discovery

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): memoize per-project observation count for welcome-hint hot path

handleContextInject runs on every PostToolUse hook (after every Read/Edit).
The welcome-hint block ran a COUNT(*) on observations for every call once
CLAUDE_MEM_WELCOME_HINT_ENABLED was true. Observation counts are
monotonically increasing — once a project has any observations it always
will — so cache the positive result in a Set and skip the COUNT(*) on
subsequent requests.

Combined with the 5s settings TTL added earlier, the steady-state cost on
the hook hot path drops to a Set lookup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): use clearProcessingForSession on AI-success path

clearPendingForSession deletes ALL rows for the session. On the success
path of processAgentResponse, that's wrong: messages that arrived as
'pending' during the (1-5s) AI response latency get deleted along with
the 'processing' row we just consumed. In a hook burst (three quick
PostToolUse hooks), B and C land while A is in flight; A's success then
nukes B and C — silent data loss.

Add a status-scoped clearProcessingForSession to PendingMessageStore +
SessionManager, and use it in ResponseProcessor's success path. The
unconditional clearPendingForSession remains correct in
GeneratorExitHandler for hard-stop / restart-guard-trip paths.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Revert "fix(pr-2255): use clearProcessingForSession on AI-success path"

This reverts commit a08995299c30cbad36bddc3e5bddda7af8604b35.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Commit 9e2973059a (parent 28b40c05f2) by Alex Newman, 2026-05-02 16:05:56 -07:00, committed via GitHub.
452 changed files with 6189 additions and 21059 deletions
@@ -1,35 +1,18 @@
# Dockerfile.e2e — End-to-end test: install claude-mem plugin on a real OpenClaw instance
# Simulates the complete plugin installation flow a user would follow.
#
# Usage:
# docker build -f Dockerfile.e2e -t openclaw-e2e-test . && docker run --rm openclaw-e2e-test
#
# Interactive (for human testing):
# docker run --rm -it openclaw-e2e-test /bin/bash
FROM ghcr.io/openclaw/openclaw:main
USER root
# Install curl for health checks in e2e-verify.sh, and TypeScript for building
RUN apt-get update && apt-get install -y --no-install-recommends curl && rm -rf /var/lib/apt/lists/*
RUN npm install -g typescript@5
# Create staging directory for the plugin source
WORKDIR /tmp/claude-mem-plugin
# Copy plugin source files
COPY package.json tsconfig.json openclaw.plugin.json ./
COPY src/ ./src/
# Build the plugin (TypeScript → JavaScript)
# NODE_ENV=production is set in the base image; override to install devDependencies
RUN NODE_ENV=development npm install && npx tsc
# Create the installable plugin package:
# OpenClaw `plugins install` expects package.json with openclaw.extensions field.
# The package name must match the plugin ID in openclaw.plugin.json (claude-mem).
# Only include the main plugin entry point, not test/mock files.
RUN mkdir -p /tmp/claude-mem-installable/dist && \
cp dist/index.js /tmp/claude-mem-installable/dist/ && \
cp dist/index.d.ts /tmp/claude-mem-installable/dist/ 2>/dev/null || true && \
@@ -45,25 +28,19 @@ RUN mkdir -p /tmp/claude-mem-installable/dist && \
require('fs').writeFileSync('/tmp/claude-mem-installable/package.json', JSON.stringify(pkg, null, 2)); \
"
# Switch back to app directory and node user for installation
WORKDIR /app
USER node
# Create the OpenClaw config directory
RUN mkdir -p /home/node/.openclaw
# Install the plugin using OpenClaw's official CLI
RUN node openclaw.mjs plugins install /tmp/claude-mem-installable
# Enable the plugin
RUN node openclaw.mjs plugins enable claude-mem
# Copy the e2e verification script and mock worker
COPY --chown=node:node e2e-verify.sh /app/e2e-verify.sh
USER root
RUN chmod +x /app/e2e-verify.sh && \
cp /tmp/claude-mem-plugin/dist/mock-worker.js /app/mock-worker.js
USER node
# Default: run the automated verification
CMD ["/bin/bash", "/app/e2e-verify.sh"]
@@ -1,13 +1,4 @@
#!/usr/bin/env bash
# e2e-verify.sh — Automated E2E verification for claude-mem plugin on OpenClaw
#
# This script verifies the complete plugin installation and operation flow:
# 1. Plugin is installed and visible in OpenClaw
# 2. Plugin loads correctly when gateway starts
# 3. Mock worker SSE stream is consumed by the plugin
# 4. Observations are received and formatted
#
# Exit 0 = all checks passed, Exit 1 = failure
set -euo pipefail
@@ -32,11 +23,8 @@ section() {
echo "=== $1 ==="
}
# ─── Phase 1: Plugin Discovery ───
section "Phase 1: Plugin Discovery"
# Check plugin is listed
PLUGIN_LIST=$(node /app/openclaw.mjs plugins list 2>&1)
if echo "$PLUGIN_LIST" | grep -q "claude-mem"; then
pass "Plugin appears in 'plugins list'"
@@ -45,7 +33,6 @@ else
echo "$PLUGIN_LIST"
fi
# Check plugin info
PLUGIN_INFO=$(node /app/openclaw.mjs plugins info claude-mem 2>&1 || true)
if echo "$PLUGIN_INFO" | grep -qi "claude-mem"; then
pass "Plugin info shows claude-mem details"
@@ -54,11 +41,9 @@ else
echo "$PLUGIN_INFO"
fi
# Check plugin is enabled
if echo "$PLUGIN_LIST" | grep -A1 "claude-mem" | grep -qi "enabled\|loaded"; then
pass "Plugin is enabled"
else
# Try to check via info
if echo "$PLUGIN_INFO" | grep -qi "enabled\|loaded"; then
pass "Plugin is enabled (via info)"
else
@@ -67,7 +52,6 @@ else
fi
fi
# Check plugin doctor reports no issues
DOCTOR_OUT=$(node /app/openclaw.mjs plugins doctor 2>&1 || true)
if echo "$DOCTOR_OUT" | grep -qi "no.*issue\|0 issue"; then
pass "Plugin doctor reports no issues"
@@ -76,17 +60,12 @@ else
echo "$DOCTOR_OUT"
fi
# ─── Phase 2: Plugin Files ───
section "Phase 2: Plugin Files"
# Check extension directory exists
EXTENSIONS_DIR="/home/node/.openclaw/extensions/openclaw-plugin"
if [ ! -d "$EXTENSIONS_DIR" ]; then
# Try alternative naming
EXTENSIONS_DIR="/home/node/.openclaw/extensions/claude-mem"
if [ ! -d "$EXTENSIONS_DIR" ]; then
# Search for it
FOUND_DIR=$(find /home/node/.openclaw/extensions/ -name "openclaw.plugin.json" -exec dirname {} \; 2>/dev/null | head -1 || true)
if [ -n "$FOUND_DIR" ]; then
EXTENSIONS_DIR="$FOUND_DIR"
@@ -101,7 +80,6 @@ else
ls -la /home/node/.openclaw/extensions/ 2>/dev/null || echo " (extensions dir not found)"
fi
# Check key files exist
for FILE in "openclaw.plugin.json" "dist/index.js" "package.json"; do
if [ -f "$EXTENSIONS_DIR/$FILE" ]; then
pass "File exists: $FILE"
@@ -110,16 +88,12 @@ for FILE in "openclaw.plugin.json" "dist/index.js" "package.json"; do
fi
done
# ─── Phase 3: Mock Worker + Plugin Integration ───
section "Phase 3: Mock Worker + Plugin Integration"
# Start mock worker in background
echo " Starting mock claude-mem worker..."
node /app/mock-worker.js &
MOCK_PID=$!
# Wait for mock worker to be ready
for i in $(seq 1 10); do
if curl -sf http://localhost:37777/health > /dev/null 2>&1; then
break
@@ -134,7 +108,6 @@ else
kill $MOCK_PID 2>/dev/null || true
fi
# Test SSE stream connectivity (curl with max-time to capture initial SSE frame)
SSE_TEST=$(curl -s --max-time 2 http://localhost:37777/stream 2>/dev/null || true)
if echo "$SSE_TEST" | grep -q "connected"; then
pass "SSE stream returns connected event"
@@ -143,13 +116,8 @@ else
echo " Got: $(echo "$SSE_TEST" | head -5)"
fi
# ─── Phase 4: Gateway + Plugin Load ───
section "Phase 4: Gateway Startup with Plugin"
# Create a minimal config that enables the plugin with the mock worker.
# The memory slot must be set to "claude-mem" to match what `plugins install` configured.
# Gateway auth is disabled via token for headless testing.
mkdir -p /home/node/.openclaw
cat > /home/node/.openclaw/openclaw.json << 'EOFCONFIG'
{
@@ -183,16 +151,13 @@ EOFCONFIG
pass "OpenClaw config written with plugin enabled"
# Start gateway in background and capture output
GATEWAY_LOG="/tmp/gateway.log"
echo " Starting OpenClaw gateway (timeout 15s)..."
OPENCLAW_GATEWAY_TOKEN=e2e-test-token timeout 15 node /app/openclaw.mjs gateway --allow-unconfigured --verbose --token e2e-test-token > "$GATEWAY_LOG" 2>&1 &
GATEWAY_PID=$!
# Give the gateway time to start and load plugins
sleep 5
# Check if gateway started
if kill -0 $GATEWAY_PID 2>/dev/null; then
pass "Gateway process is running"
else
@@ -201,7 +166,6 @@ else
cat "$GATEWAY_LOG" 2>/dev/null | tail -30
fi
# Check gateway log for plugin load messages
if grep -qi "claude-mem" "$GATEWAY_LOG" 2>/dev/null; then
pass "Gateway log mentions claude-mem plugin"
else
@@ -210,29 +174,24 @@ else
tail -20 "$GATEWAY_LOG" 2>/dev/null
fi
# Check for plugin loaded message
if grep -q "plugin loaded" "$GATEWAY_LOG" 2>/dev/null || grep -q "v1.0.0" "$GATEWAY_LOG" 2>/dev/null; then
pass "Plugin load message found in gateway log"
else
fail "Plugin load message not found"
fi
# Check for observation feed messages
if grep -qi "observation feed" "$GATEWAY_LOG" 2>/dev/null; then
pass "Observation feed activity in gateway log"
else
fail "No observation feed activity detected"
fi
# Check for SSE connection to mock worker
if grep -qi "connected.*SSE\|SSE.*stream\|connecting.*SSE" "$GATEWAY_LOG" 2>/dev/null; then
pass "SSE connection activity detected"
else
fail "No SSE connection activity in log"
fi
# ─── Cleanup ───
section "Cleanup"
kill $GATEWAY_PID 2>/dev/null || true
kill $MOCK_PID 2>/dev/null || true
@@ -240,8 +199,6 @@ wait $GATEWAY_PID 2>/dev/null || true
wait $MOCK_PID 2>/dev/null || true
echo " Processes stopped."
# ─── Summary ───
echo ""
echo "==============================="
echo " E2E Test Results"
@@ -1,27 +1,9 @@
#!/usr/bin/env bash
set -euo pipefail
# claude-mem OpenClaw Plugin Installer
# Installs the claude-mem persistent memory plugin for OpenClaw gateways.
#
# Usage:
# curl -fsSL https://install.cmem.ai/openclaw.sh | bash
# # Or with options:
# curl -fsSL https://install.cmem.ai/openclaw.sh | bash -s -- --provider=gemini --api-key=YOUR_KEY
# # Direct execution:
# bash install.sh [--non-interactive] [--upgrade] [--provider=claude|gemini|openrouter] [--api-key=KEY]
###############################################################################
# Constants
###############################################################################
readonly MIN_BUN_VERSION="1.1.14"
readonly INSTALLER_VERSION="1.0.0"
###############################################################################
# Argument parsing
###############################################################################
NON_INTERACTIVE=""
CLI_PROVIDER=""
CLI_API_KEY=""
@@ -68,37 +50,23 @@ while [[ $# -gt 0 ]]; do
esac
done
###############################################################################
# TTY detection — ensure interactive prompts work under curl | bash
# When piped, stdin reads from curl's output, not the terminal.
# We open /dev/tty on fd 3 and read interactive input from there.
###############################################################################
TTY_FD=0
setup_tty() {
if [[ -t 0 ]]; then
# stdin IS a terminal — use it directly
TTY_FD=0
elif [[ "$NON_INTERACTIVE" == "true" ]]; then
# In non-interactive mode, do not require /dev/tty
TTY_FD=0
elif [[ -r /dev/tty ]]; then
# stdin is piped (curl | bash) but /dev/tty is available and readable
exec 3</dev/tty
TTY_FD=3
else
# No terminal available at all
echo "Error: No terminal available for interactive prompts." >&2
echo "Use --non-interactive or run directly: bash install.sh" >&2
exit 1
fi
}
###############################################################################
# Color utilities — auto-detect terminal color support
###############################################################################
if [[ -t 1 ]] && [[ "${TERM:-}" != "dumb" ]]; then
readonly COLOR_RED='\033[0;31m'
readonly COLOR_GREEN='\033[0;32m'
@@ -132,17 +100,10 @@ prompt_user() {
echo -en "${COLOR_CYAN}?${COLOR_RESET} $* "
}
# Read a line from the terminal (works even when stdin is piped from curl)
# Callers always pass -r via $@; shellcheck can't see through the delegation
read_tty() {
# shellcheck disable=SC2162
read "$@" <&"$TTY_FD"
}
###############################################################################
# Global cleanup trap — removes temp directories on unexpected exit
###############################################################################
CLEANUP_DIRS=()
register_cleanup_dir() {
@@ -166,10 +127,6 @@ cleanup_on_exit() {
trap cleanup_on_exit EXIT
###############################################################################
# Prerequisite checks
###############################################################################
check_git() {
if command -v git &>/dev/null; then
return 0
@@ -196,24 +153,17 @@ check_git() {
exit 1
}
###############################################################################
# Port conflict detection — check if port 37777 is already in use
###############################################################################
check_port_37777() {
local port_in_use=""
# Try lsof first (macOS/Linux)
if command -v lsof &>/dev/null; then
if lsof -i :37777 -sTCP:LISTEN &>/dev/null; then
port_in_use="true"
fi
# Fallback to ss (Linux)
elif command -v ss &>/dev/null; then
if ss -tlnp 2>/dev/null | grep -q ':37777 '; then
port_in_use="true"
fi
# Fallback to curl probe
elif command -v curl &>/dev/null; then
local response
response="$(curl -s -o /dev/null -w "%{http_code}" "http://127.0.0.1:37777/api/health" 2>/dev/null)" || true
@@ -223,36 +173,23 @@ check_port_37777() {
fi
if [[ "$port_in_use" == "true" ]]; then
return 0 # port IS in use
return 0
fi
return 1 # port is free
return 1
}
###############################################################################
# Upgrade detection — check if claude-mem is already installed
###############################################################################
is_claude_mem_installed() {
# Check if the plugin directory exists with the worker script
if find_claude_mem_install_dir 2>/dev/null; then
return 0
fi
return 1
}
###############################################################################
# JSON manipulation helper — jq with python3/node fallback
# Usage: ensure_jq_or_fallback <json_file> <jq_filter> [jq_args...]
# For simple read operations, returns the result on stdout.
# For write operations, updates the file in-place.
###############################################################################
ensure_jq_or_fallback() {
local json_file="$1"
shift
local jq_filter="$1"
shift
# remaining args are passed as jq --arg pairs
if command -v jq &>/dev/null; then
local tmp_file
@@ -262,29 +199,16 @@ ensure_jq_or_fallback() {
fi
if command -v python3 &>/dev/null; then
# For complex jq filters, fall back to node instead
# Python is used only for simple operations
:
fi
# Fallback to node (always available — it's a dependency)
# This is a passthrough; callers that need node-specific logic
# should use node -e directly. This function is for jq compatibility.
warn "jq not found — using node for JSON manipulation"
return 1
}
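When jq is absent, callers fall through to node using the same env-var file-update pattern seen throughout the installer. An illustrative sketch (the file path and `plugins.allow` key are example values, not a claim about any specific config):

```shell
# Illustrative node fallback for an in-place JSON update, mirroring the
# INSTALLER_CONFIG_FILE env-var pattern used by the installer's node -e calls.
cfg="$(mktemp)"
printf '{"plugins":{"allow":[]}}\n' > "$cfg"
INSTALLER_CONFIG_FILE="$cfg" node -e "
const fs = require('fs');
const p = process.env.INSTALLER_CONFIG_FILE;
const config = JSON.parse(fs.readFileSync(p, 'utf8'));
// De-duplicate while appending, so re-runs are idempotent.
config.plugins.allow = Array.from(new Set([...config.plugins.allow, 'claude-mem']));
fs.writeFileSync(p, JSON.stringify(config, null, 2));
"
grep -o 'claude-mem' "$cfg"
```

Passing the path through the environment (rather than interpolating it into the JS source) is what keeps shell metacharacters in paths from becoming code injection.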
###############################################################################
# Parse /api/health JSON response — extract worker metadata into globals
# Uses jq → python3 → node fallback chain (matching installer conventions)
# Sets: WORKER_VERSION, WORKER_AI_PROVIDER, WORKER_AI_AUTH_METHOD,
# WORKER_INITIALIZED, WORKER_REPORTED_PID, WORKER_UPTIME
###############################################################################
parse_health_json() {
local raw_json="$1"
# Reset all health globals before parsing
WORKER_VERSION=""
WORKER_AI_PROVIDER=""
WORKER_AI_AUTH_METHOD=""
@@ -296,7 +220,6 @@ parse_health_json() {
return 0
fi
# Try jq first (fastest, most reliable)
if command -v jq &>/dev/null; then
WORKER_VERSION="$(echo "$raw_json" | jq -r '.version // empty' 2>/dev/null)" || true
WORKER_AI_PROVIDER="$(echo "$raw_json" | jq -r '.ai.provider // empty' 2>/dev/null)" || true
@@ -307,7 +230,6 @@ parse_health_json() {
return 0
fi
# Try python3 fallback
if command -v python3 &>/dev/null; then
local parsed
parsed="$(INSTALLER_HEALTH_JSON="$raw_json" python3 -c "
@@ -337,7 +259,6 @@ except Exception:
WORKER_INITIALIZED="${health_fields[3]:-}"
WORKER_REPORTED_PID="${health_fields[4]:-}"
WORKER_UPTIME="${health_fields[5]:-}"
# Normalize python's None/empty representations
[[ "$WORKER_VERSION" == "None" ]] && WORKER_VERSION=""
[[ "$WORKER_AI_PROVIDER" == "None" ]] && WORKER_AI_PROVIDER=""
[[ "$WORKER_AI_AUTH_METHOD" == "None" ]] && WORKER_AI_AUTH_METHOD=""
@@ -348,7 +269,6 @@ except Exception:
return 0
fi
# Fallback to node (always available — it's a dependency)
local parsed
parsed="$(INSTALLER_HEALTH_JSON="$raw_json" node -e "
try {
@@ -380,10 +300,6 @@ except Exception:
fi
}
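The python3 leg of the fallback chain can be exercised against a sample payload. The field values below are made up for illustration, not real worker output:

```shell
# Sample /api/health payload — values are illustrative only.
raw='{"version":"12.4.7","ai":{"provider":"claude","authMethod":"oauth"},"initialized":true}'
WORKER_VERSION="$(INSTALLER_HEALTH_JSON="$raw" python3 -c "
import json, os
try:
    data = json.loads(os.environ.get('INSTALLER_HEALTH_JSON', ''))
    print(data.get('version', ''))
except Exception:
    print('')
")"
echo "$WORKER_VERSION"
```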
###############################################################################
# Format uptime from milliseconds to human-readable (e.g., "2m 15s", "1h 23m")
###############################################################################
format_uptime_ms() {
local ms="$1"
local secs=$((ms / 1000))
@@ -396,10 +312,6 @@ format_uptime_ms() {
fi
}
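The arithmetic elided by the hunk above follows the shape the doc comment suggests. A self-contained sketch consistent with the "2m 15s" / "1h 23m" examples (not necessarily the installer's exact bucketing):

```shell
# Hedged reconstruction: bucket into hours/minutes, minutes/seconds, or seconds.
format_uptime_ms() {
  local ms="$1"
  local secs=$((ms / 1000))
  if (( secs >= 3600 )); then
    printf '%dh %dm\n' $((secs / 3600)) $(((secs % 3600) / 60))
  elif (( secs >= 60 )); then
    printf '%dm %ds\n' $((secs / 60)) $((secs % 60))
  else
    printf '%ds\n' "$secs"
  fi
}

format_uptime_ms 135000   # 135s → 2m 15s
format_uptime_ms 4980000  # 4980s → 1h 23m
```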
###############################################################################
# Banner
###############################################################################
print_banner() {
echo -e "${COLOR_MAGENTA}${COLOR_BOLD}"
cat << 'BANNER'
@@ -413,10 +325,6 @@ BANNER
echo ""
}
###############################################################################
# Platform detection
###############################################################################
PLATFORM=""
IS_WSL=""
@@ -448,10 +356,6 @@ detect_platform() {
info "Detected platform: ${PLATFORM}${IS_WSL:+ (WSL)}"
}
###############################################################################
# Version comparison — returns 0 if $1 >= $2
###############################################################################
version_gte() {
local v1="$1" v2="$2"
local -a parts1 parts2
@@ -467,21 +371,14 @@ version_gte() {
return 0
}
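Most of the comparison loop is elided by the hunk above. A hedged sketch of a three-component numeric comparison (plain `x.y.z` versions only; pre-release suffixes like `-rc1` would need extra handling):

```shell
# Sketch: returns 0 when v1 >= v2, comparing major.minor.patch numerically.
version_gte() {
  local v1="$1" v2="$2"
  local IFS='.'
  local -a parts1=($v1) parts2=($v2)
  local i
  for i in 0 1 2; do
    local a="${parts1[i]:-0}" b="${parts2[i]:-0}"
    # 10# forces base-10 so components like "08" are not parsed as octal.
    (( 10#$a > 10#$b )) && return 0
    (( 10#$a < 10#$b )) && return 1
  done
  return 0 # all components equal
}

version_gte 1.10.0 1.9.9 && echo "1.10.0 >= 1.9.9"
```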
###############################################################################
# Bun detection and installation
# Translated from plugin/scripts/smart-install.js patterns
###############################################################################
BUN_PATH=""
find_bun_path() {
# Try PATH first
if command -v bun &>/dev/null; then
BUN_PATH="$(command -v bun)"
return 0
fi
# Check common installation paths (handles fresh installs before PATH reload)
local -a bun_paths=(
"${HOME}/.bun/bin/bun"
"/usr/local/bin/bun"
@@ -504,7 +401,6 @@ check_bun() {
return 1
fi
# Verify minimum version
local bun_version
bun_version="$("$BUN_PATH" --version 2>/dev/null)" || return 1
@@ -529,7 +425,6 @@ install_bun() {
exit 1
fi
# Re-detect after install (installer may have placed it in ~/.bun/bin)
if ! find_bun_path; then
error "Bun installation completed but binary not found in expected locations"
error "Please restart your terminal and re-run this installer."
@@ -541,21 +436,14 @@ install_bun() {
success "Bun ${bun_version} installed at ${BUN_PATH}"
}
###############################################################################
# uv detection and installation
# Translated from plugin/scripts/smart-install.js patterns
###############################################################################
UV_PATH=""
find_uv_path() {
# Try PATH first
if command -v uv &>/dev/null; then
UV_PATH="$(command -v uv)"
return 0
fi
# Check common installation paths (handles fresh installs before PATH reload)
local -a uv_paths=(
"${HOME}/.local/bin/uv"
"${HOME}/.cargo/bin/uv"
@@ -597,7 +485,6 @@ install_uv() {
exit 1
fi
# Re-detect after install
if ! find_uv_path; then
error "uv installation completed but binary not found in expected locations"
error "Please restart your terminal and re-run this installer."
@@ -609,14 +496,9 @@ install_uv() {
success "uv ${uv_version} installed at ${UV_PATH}"
}
###############################################################################
# OpenClaw gateway detection
###############################################################################
OPENCLAW_PATH=""
find_openclaw() {
# Try PATH first — check both "openclaw" and "openclaw.mjs" binary names
for bin_name in openclaw openclaw.mjs; do
if command -v "$bin_name" &>/dev/null; then
OPENCLAW_PATH="$(command -v "$bin_name")"
@@ -624,7 +506,6 @@ find_openclaw() {
fi
done
# Check common installation paths
local -a openclaw_paths=(
"${HOME}/.openclaw/openclaw.mjs"
"/usr/local/bin/openclaw.mjs"
@@ -634,7 +515,6 @@ find_openclaw() {
"${HOME}/.npm-global/bin/openclaw"
)
# Also check for node_modules in common project locations
if [[ -n "${NODE_PATH:-}" ]]; then
openclaw_paths+=("${NODE_PATH}/openclaw/openclaw.mjs")
fi
@@ -667,7 +547,6 @@ check_openclaw() {
success "OpenClaw gateway found at ${OPENCLAW_PATH}"
}
# Run openclaw command — uses node for .mjs files, direct execution otherwise
run_openclaw() {
if [[ "$OPENCLAW_PATH" == *.mjs ]]; then
node "$OPENCLAW_PATH" "$@"
@@ -676,17 +555,10 @@ run_openclaw() {
fi
}
###############################################################################
# Plugin installation — clone, build, install, enable
# Flow based on openclaw/Dockerfile.e2e
###############################################################################
CLAUDE_MEM_REPO="https://github.com/thedotmack/claude-mem.git"
CLAUDE_MEM_BRANCH="${CLI_BRANCH:-main}"
PLUGIN_FRESHLY_INSTALLED=""
# Resolve the target extension directory.
# Priority: existing installPath from config > plugins.load.paths > default.
resolve_extension_dir() {
local oc_config="${HOME}/.openclaw/openclaw.json"
if [[ -f "$oc_config" ]] && command -v node &>/dev/null; then
@@ -722,12 +594,10 @@ resolve_extension_dir() {
CLAUDE_MEM_EXTENSION_DIR=""
install_plugin() {
# Check for git before attempting clone
check_git
CLAUDE_MEM_EXTENSION_DIR="$(resolve_extension_dir)"
# Remove existing plugin installation to allow clean re-install
local existing_plugin_dir="$CLAUDE_MEM_EXTENSION_DIR"
if [[ -d "$existing_plugin_dir" ]]; then
info "Removing existing claude-mem plugin at ${existing_plugin_dir}..."
@@ -747,7 +617,6 @@ install_plugin() {
local plugin_src="${build_dir}/claude-mem/openclaw"
# Build the TypeScript plugin
info "Building TypeScript plugin..."
if ! (cd "$plugin_src" && NODE_ENV=development npm install --ignore-scripts 2>&1 && npx tsc 2>&1); then
error "Failed to build the claude-mem OpenClaw plugin"
@@ -755,7 +624,6 @@ install_plugin() {
exit 1
fi
# Create minimal installable package (matches Dockerfile.e2e pattern)
local installable_dir="${build_dir}/claude-mem-installable"
mkdir -p "${installable_dir}/dist"
@@ -763,7 +631,6 @@ install_plugin() {
cp "${plugin_src}/dist/index.d.ts" "${installable_dir}/dist/" 2>/dev/null || true
cp "${plugin_src}/openclaw.plugin.json" "${installable_dir}/"
# Generate the installable package.json with openclaw.extensions field
INSTALLER_PACKAGE_DIR="$installable_dir" node -e "
const pkg = {
name: 'claude-mem',
@@ -775,11 +642,6 @@ install_plugin() {
require('fs').writeFileSync(process.env.INSTALLER_PACKAGE_DIR + '/package.json', JSON.stringify(pkg, null, 2));
"
# Clean up stale claude-mem plugin entry before installing.
# If the config references claude-mem but the plugin isn't installed,
# OpenClaw's config validator blocks ALL CLI commands (including plugins install).
# We temporarily remove the entry and save the config so `plugins install` can run,
# then `plugins install` + `plugins enable` will re-create it properly.
local oc_config="${HOME}/.openclaw/openclaw.json"
local saved_plugin_config=""
if [[ -f "$oc_config" ]]; then
@@ -808,7 +670,6 @@ install_plugin() {
" 2>/dev/null) || true
fi
# Install the plugin using OpenClaw's CLI
info "Installing claude-mem plugin into OpenClaw..."
if ! run_openclaw plugins install "$installable_dir" 2>&1; then
error "Failed to install claude-mem plugin"
@@ -816,7 +677,6 @@ install_plugin() {
exit 1
fi
# Enable the plugin
info "Enabling claude-mem plugin..."
if ! run_openclaw plugins enable claude-mem 2>&1; then
error "Failed to enable claude-mem plugin"
@@ -824,9 +684,6 @@ install_plugin() {
exit 1
fi
# Ensure claude-mem is present in plugins.allow after successful install+enable.
# Some OpenClaw environments require explicit allowlisting for local plugins.
# This write is guaranteed: if config doesn't exist, configure_memory_slot() will create it.
if [[ -f "$oc_config" ]]; then
if ! INSTALLER_CONFIG_FILE="$oc_config" node -e "
const fs = require('fs');
@@ -845,10 +702,7 @@ install_plugin() {
warn "Failed to write plugins.allow — claude-mem may need manual allowlisting"
fi
else
# Config doesn't exist yet; configure_memory_slot() will create it with plugins.allow
# We'll add claude-mem to the allowlist in a follow-up step after config is materialized
info "OpenClaw config not yet materialized; will ensure allowlist in post-install"
# Force config materialization by running a harmless OpenClaw command
if run_openclaw status --json >/dev/null 2>&1 && [[ -f "$oc_config" ]]; then
if ! INSTALLER_CONFIG_FILE="$oc_config" node -e "
const fs = require('fs');
@@ -867,8 +721,6 @@ install_plugin() {
fi
fi
# Restore saved plugin config (workerPort, syncMemoryFile, observationFeed, etc.)
# from any pre-existing installation that was temporarily removed above.
if [[ -n "$saved_plugin_config" && "$saved_plugin_config" != "{}" ]]; then
info "Restoring previous plugin configuration..."
INSTALLER_CONFIG_FILE="$oc_config" INSTALLER_SAVED_CONFIG="$saved_plugin_config" node -e "
@@ -885,23 +737,14 @@ install_plugin() {
success "claude-mem plugin installed and enabled"
# ── Copy core plugin files (worker, hooks, scripts) to extension directory ──
# The OpenClaw extension only contains the gateway hook (dist/index.js).
# The actual worker service and Claude Code hooks live in the plugin/ directory
# of the main repo. We copy them so find_claude_mem_install_dir() can locate
# the worker-service.cjs and the worker runs the updated version.
local extension_dir="$CLAUDE_MEM_EXTENSION_DIR"
local repo_root="${build_dir}/claude-mem"
if [[ -d "$extension_dir" && -d "${repo_root}/plugin" ]]; then
info "Copying core plugin files to ${extension_dir}..."
# Copy plugin/ directory (worker service, hooks, scripts, skills, UI)
cp -R "${repo_root}/plugin" "${extension_dir}/"
# Merge the canonical version from root package.json into the existing
# extension package.json, preserving the openclaw.extensions field that
# plugin discovery requires.
local root_version
root_version="$(node -e "console.log(require('${repo_root}/package.json').version)")"
node -e "
@@ -920,11 +763,6 @@ install_plugin() {
PLUGIN_FRESHLY_INSTALLED="true"
}
###############################################################################
# Memory slot configuration
# Sets plugins.slots.memory = "claude-mem" in ~/.openclaw/openclaw.json
###############################################################################
configure_memory_slot() {
local config_dir="${HOME}/.openclaw"
local config_file="${config_dir}/openclaw.json"
@@ -932,7 +770,6 @@ configure_memory_slot() {
mkdir -p "$config_dir"
if [[ ! -f "$config_file" ]]; then
# No config file exists — create one with the memory slot
info "Creating OpenClaw configuration with claude-mem memory slot..."
INSTALLER_CONFIG_FILE="$config_file" node -e "
const config = {
@@ -955,10 +792,8 @@ configure_memory_slot() {
return 0
fi
# Config file exists — update it to set the memory slot
info "Updating OpenClaw configuration to use claude-mem memory slot..."
# Use node for reliable JSON manipulation
INSTALLER_CONFIG_FILE="$config_file" node -e "
const fs = require('fs');
const configPath = process.env.INSTALLER_CONFIG_FILE;
@@ -998,11 +833,6 @@ configure_memory_slot() {
success "Memory slot set to claude-mem in ${config_file}"
}
###############################################################################
# AI Provider setup — interactive provider selection
# Reads defaults from SettingsDefaultsManager.ts (single source of truth)
###############################################################################
AI_PROVIDER=""
AI_PROVIDER_API_KEY=""
@@ -1026,7 +856,6 @@ setup_ai_provider() {
info "AI Provider Configuration"
echo ""
# Handle --provider flag (pre-selected via CLI)
if [[ -n "$CLI_PROVIDER" ]]; then
case "$CLI_PROVIDER" in
claude)
@@ -1060,7 +889,6 @@ setup_ai_provider() {
return 0
fi
# Handle non-interactive mode (no --provider flag)
if [[ "$NON_INTERACTIVE" == "true" ]]; then
info "Non-interactive mode: defaulting to Claude Max Plan (no API key needed)"
AI_PROVIDER="claude"
@@ -1124,19 +952,12 @@ setup_ai_provider() {
done
}
###############################################################################
# Write settings.json — creates ~/.claude-mem/settings.json with all defaults
# Schema: flat key-value (not nested { env: {...} })
# Defaults sourced from SettingsDefaultsManager.ts
###############################################################################
write_settings() {
local settings_dir="${HOME}/.claude-mem"
local settings_file="${settings_dir}/settings.json"
mkdir -p "$settings_dir"
# Pass provider and API key via environment variables to avoid shell-to-JS injection
INSTALLER_AI_PROVIDER="$AI_PROVIDER" \
INSTALLER_AI_API_KEY="$AI_PROVIDER_API_KEY" \
INSTALLER_SETTINGS_FILE="$settings_file" \
@@ -1226,11 +1047,6 @@ write_settings() {
success "Settings written to ${settings_file}"
}
###############################################################################
# Locate the installed claude-mem plugin directory
# Checks common OpenClaw and Claude Code plugin install paths
###############################################################################
CLAUDE_MEM_INSTALL_DIR=""
find_claude_mem_install_dir() {
@@ -1250,7 +1066,6 @@ find_claude_mem_install_dir() {
fi
done
# Fallback: search for the worker script under common plugin roots
local -a roots=(
"${HOME}/.openclaw"
"${HOME}/.claude/plugins"
@@ -1260,7 +1075,6 @@ find_claude_mem_install_dir() {
local found
found="$(find "$root" -name "worker-service.cjs" -path "*/plugin/scripts/*" 2>/dev/null | head -n 1)" || true
if [[ -n "$found" ]]; then
# Strip /plugin/scripts/worker-service.cjs to get the install dir
CLAUDE_MEM_INSTALL_DIR="${found%/plugin/scripts/worker-service.cjs}"
return 0
fi
@@ -1271,11 +1085,6 @@ find_claude_mem_install_dir() {
return 1
}
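The `${found%...}` expansion used above strips the known trailing path to recover the install root. A standalone check with an illustrative path:

```shell
# Suffix-strip via %-expansion: remove the fixed worker-script tail
# to recover the plugin install directory. Path is illustrative.
found="$HOME/.openclaw/extensions/claude-mem/plugin/scripts/worker-service.cjs"
install_dir="${found%/plugin/scripts/worker-service.cjs}"
echo "$install_dir"
```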
###############################################################################
# Worker service startup
# Starts the claude-mem worker using bun in the background
###############################################################################
WORKER_PID=""
WORKER_VERSION=""
WORKER_AI_PROVIDER=""
@@ -1305,7 +1114,6 @@ start_worker() {
mkdir -p "$log_dir"
# Ensure bun path is available
if [[ -z "$BUN_PATH" ]]; then
if ! find_bun_path; then
error "Bun not found — cannot start worker service"
@@ -1313,12 +1121,10 @@ start_worker() {
fi
fi
# Start worker in background with nohup
CLAUDE_MEM_WORKER_PORT=37777 nohup "$BUN_PATH" "$worker_script" \
>> "$log_file" 2>&1 &
WORKER_PID=$!
# Write PID file for future management
local pid_file="${HOME}/.claude-mem/worker.pid"
mkdir -p "${HOME}/.claude-mem"
INSTALLER_PID_FILE="$pid_file" INSTALLER_WORKER_PID="$WORKER_PID" node -e "
@@ -1335,13 +1141,6 @@ start_worker() {
info "Logs: ${log_file}"
}
###############################################################################
# Health verification — two-stage: health (alive) then readiness (initialized)
# Stage 1: Poll /api/health for HTTP 200 (worker process is running)
# Stage 2: Poll /api/readiness for HTTP 200 (worker is fully initialized)
# Total budget: 30 attempts (30 seconds) shared across both stages
###############################################################################
verify_health() {
local max_attempts=30
local attempt=1
@@ -1351,7 +1150,6 @@ verify_health() {
info "Verifying worker health..."
# ── Stage 1: Wait for /api/health to return HTTP 200 (worker is alive) ──
while (( attempt <= max_attempts )); do
local http_status
http_status="$(curl -s -o /dev/null -w "%{http_code}" "$health_url" 2>/dev/null)" || true
@@ -1359,7 +1157,6 @@ verify_health() {
if [[ "$http_status" == "200" ]]; then
health_alive=true
# Fetch the full health response body and parse metadata
local body
body="$(curl -s "$health_url" 2>/dev/null)" || true
parse_health_json "$body"
@@ -1374,7 +1171,6 @@ verify_health() {
attempt=$((attempt + 1))
done
# If health never responded, the worker is not running at all
if [[ "$health_alive" != "true" ]]; then
warn "Worker health check timed out after ${max_attempts} attempts"
warn "The worker may still be starting up. Check status with:"
@@ -1383,7 +1179,6 @@ verify_health() {
return 1
fi
# ── Stage 2: Wait for /api/readiness to return HTTP 200 (fully initialized) ──
attempt=$((attempt + 1))
while (( attempt <= max_attempts )); do
local readiness_status
@@ -1399,17 +1194,12 @@ verify_health() {
attempt=$((attempt + 1))
done
# Readiness timed out but health is OK — worker is running, just not fully initialized yet
warn "Worker is running but initialization is still in progress"
warn "This is normal on first run — the worker will finish initializing in the background."
warn "Check readiness with: curl http://127.0.0.1:37777/api/readiness"
return 0
}
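The two stages above share one bounded-retry shape: poll a probe once per second until it succeeds or the budget runs out. A generic sketch (a `curl -s -o /dev/null` health probe would slot in as `"$@"`):

```shell
# Generic bounded-retry poller in the shape of verify_health's loops.
poll_until() {
  local max_attempts="$1"; shift
  local attempt=1
  while (( attempt <= max_attempts )); do
    if "$@"; then
      return 0
    fi
    sleep 1
    attempt=$((attempt + 1))
  done
  return 1
}

poll_until 3 true && echo "probe succeeded"
```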
###############################################################################
# Observation feed setup — optional interactive channel configuration
###############################################################################
FEED_CHANNEL=""
FEED_TARGET_ID=""
FEED_CONFIGURED=false
@@ -1531,10 +1321,6 @@ setup_observation_feed() {
FEED_CONFIGURED=true
}
###############################################################################
# Write observation feed config into ~/.openclaw/openclaw.json
###############################################################################
write_observation_feed_config() {
if [[ "$FEED_CONFIGURED" != "true" ]]; then
return 0
@@ -1550,7 +1336,6 @@ write_observation_feed_config() {
info "Writing observation feed configuration..."
# Use jq if available, fall back to python3, then node for JSON manipulation
if command -v jq &>/dev/null; then
local tmp_file
tmp_file="$(mktemp)"
@@ -1592,7 +1377,6 @@ with open(config_path, 'w') as f:
json.dump(config, f, indent=2)
"
else
# Fallback to node (always available since it's a dependency)
INSTALLER_FEED_CHANNEL="$FEED_CHANNEL" \
INSTALLER_FEED_TARGET_ID="$FEED_TARGET_ID" \
INSTALLER_CONFIG_FILE="$config_file" \
@@ -1638,10 +1422,6 @@ with open(config_path, 'w') as f:
info "the feed is connected."
}
###############################################################################
# Completion summary
###############################################################################
print_completion_summary() {
local provider_display=""
case "$AI_PROVIDER" in
@@ -1661,7 +1441,6 @@ print_completion_summary() {
echo -e " ${COLOR_GREEN}${COLOR_RESET} Dependencies installed (Bun, uv)"
echo -e " ${COLOR_GREEN}${COLOR_RESET} OpenClaw gateway detected"
# Show installed version from health data if available
if [[ -n "$WORKER_VERSION" ]]; then
echo -e " ${COLOR_GREEN}${COLOR_RESET} claude-mem v${COLOR_BOLD}${WORKER_VERSION}${COLOR_RESET} installed and running"
else
@@ -1670,7 +1449,6 @@ print_completion_summary() {
echo -e " ${COLOR_GREEN}${COLOR_RESET} Memory slot configured"
# Show AI provider with auth method from health data if available
if [[ -n "$WORKER_AI_AUTH_METHOD" ]]; then
echo -e " ${COLOR_GREEN}${COLOR_RESET} AI provider: ${COLOR_BOLD}${WORKER_AI_PROVIDER} (${WORKER_AI_AUTH_METHOD})${COLOR_RESET}"
else
@@ -1689,7 +1467,6 @@ print_completion_summary() {
echo -e " ${COLOR_YELLOW}${COLOR_RESET} Worker may not be running — check logs at ~/.claude-mem/logs/"
fi
# Show initialization warning if worker is alive but not yet initialized
if [[ "$WORKER_INITIALIZED" != "true" ]] && { [[ -n "$WORKER_REPORTED_PID" ]] || { [[ -n "$WORKER_PID" ]] && kill -0 "$WORKER_PID" 2>/dev/null; }; }; then
echo -e " ${COLOR_YELLOW}${COLOR_RESET} Worker is starting but still initializing (this is normal on first run)"
fi
@@ -1717,16 +1494,11 @@ print_completion_summary() {
echo ""
}
###############################################################################
# Main
###############################################################################
main() {
setup_tty
print_banner
detect_platform
# --- Step 1: Dependencies ---
echo ""
info "${COLOR_BOLD}[1/8]${COLOR_RESET} Checking dependencies..."
echo ""
@@ -1742,12 +1514,10 @@ main() {
echo ""
success "All dependencies satisfied"
# --- Step 2: OpenClaw gateway ---
echo ""
info "${COLOR_BOLD}[2/8]${COLOR_RESET} Locating OpenClaw gateway..."
check_openclaw
# --- Step 3: Plugin installation (skip if upgrading and already installed) ---
echo ""
info "${COLOR_BOLD}[3/8]${COLOR_RESET} Installing claude-mem plugin..."
@@ -1758,22 +1528,18 @@ main() {
install_plugin
fi
# --- Step 4: Memory slot configuration ---
echo ""
info "${COLOR_BOLD}[4/8]${COLOR_RESET} Configuring memory slot..."
configure_memory_slot
# --- Step 5: AI provider setup ---
echo ""
info "${COLOR_BOLD}[5/8]${COLOR_RESET} AI provider setup..."
setup_ai_provider
# --- Step 6: Write settings ---
echo ""
info "${COLOR_BOLD}[6/8]${COLOR_RESET} Writing settings..."
write_settings
# --- Step 7: Start worker and verify ---
echo ""
info "${COLOR_BOLD}[7/8]${COLOR_RESET} Starting worker service..."
@@ -1781,8 +1547,6 @@ main() {
warn "Port 37777 is already in use (worker may already be running)"
info "Checking if the existing service is healthy..."
if verify_health; then
# verify_health already called parse_health_json — WORKER_* globals are set.
# Determine the expected version from the installed plugin's package.json.
local expected_version=""
if [[ -n "$CLAUDE_MEM_INSTALL_DIR" ]] || find_claude_mem_install_dir; then
expected_version="$(INSTALLER_PKG="${CLAUDE_MEM_INSTALL_DIR}/package.json" node -e "
@@ -1793,8 +1557,6 @@ main() {
local needs_restart=""
# If we just installed fresh plugin files, always restart the worker
# to pick up the new version — even if the old worker was healthy.
if [[ "$PLUGIN_FRESHLY_INSTALLED" == "true" ]]; then
if [[ -n "$WORKER_VERSION" && -n "$expected_version" && "$WORKER_VERSION" != "$expected_version" ]]; then
info "Upgrading worker from v${WORKER_VERSION} to v${expected_version}..."
@@ -1804,32 +1566,26 @@ main() {
needs_restart="true"
fi
# Check if worker version is outdated compared to installed version
if [[ "$needs_restart" != "true" && -n "$WORKER_VERSION" && -n "$expected_version" && "$WORKER_VERSION" != "$expected_version" ]]; then
info "Upgrading worker from v${WORKER_VERSION} to v${expected_version}..."
needs_restart="true"
fi
# Check if AI provider doesn't match current configuration
if [[ "$needs_restart" != "true" && -n "$WORKER_AI_PROVIDER" && -n "$AI_PROVIDER" && "$WORKER_AI_PROVIDER" != "$AI_PROVIDER" ]]; then
warn "Worker is using ${WORKER_AI_PROVIDER} but you configured ${AI_PROVIDER} — restarting to apply"
needs_restart="true"
fi
# Restart worker if needed: kill old process, start fresh
if [[ "$needs_restart" == "true" ]]; then
info "Stopping existing worker..."
# Try graceful shutdown via API first, fall back to SIGTERM
curl -s -X POST "http://127.0.0.1:37777/api/admin/shutdown" >/dev/null 2>&1 || true
sleep 2
# If still running, send SIGTERM to known PID
if check_port_37777; then
if [[ -n "$WORKER_REPORTED_PID" ]]; then
kill "$WORKER_REPORTED_PID" 2>/dev/null || true
sleep 1
fi
# Check PID file as fallback
local pid_file="${HOME}/.claude-mem/worker.pid"
if [[ -f "$pid_file" ]]; then
local file_pid
@@ -1844,14 +1600,12 @@ main() {
fi
fi
# Start fresh worker
if start_worker; then
verify_health || true
else
warn "Worker restart failed — you can start it manually later"
fi
else
# No restart needed — show healthy status
local uptime_display=""
if [[ -n "$WORKER_UPTIME" && "$WORKER_UPTIME" =~ ^[0-9]+$ && "$WORKER_UPTIME" != "0" ]]; then
uptime_display="$(format_uptime_ms "$WORKER_UPTIME")"
@@ -1888,13 +1642,11 @@ main() {
fi
fi
# --- Step 8: Observation feed setup (optional) ---
echo ""
info "${COLOR_BOLD}[8/8]${COLOR_RESET} Observation feed setup..."
setup_observation_feed
write_observation_feed_config
# --- Completion ---
print_completion_summary
}
@@ -241,7 +241,6 @@ describe("Observation I/O event handlers", () => {
body: parsedBody,
});
// Handle different endpoints
if (req.url === "/api/health") {
res.writeHead(200, { "Content-Type": "application/json" });
res.end(JSON.stringify({ status: "ok" }));
@@ -311,7 +310,6 @@ describe("Observation I/O event handlers", () => {
sessionId: "test-session-1",
}, { sessionKey: "agent-1" });
// Wait for HTTP request
await new Promise((resolve) => setTimeout(resolve, 100));
const initRequest = receivedRequests.find((r) => r.url === "/api/sessions/init");
@@ -358,11 +356,9 @@ describe("Observation I/O event handlers", () => {
const { api, fireEvent } = createMockApi({ workerPort });
claudeMemPlugin(api);
// Establish contentSessionId via session_start
await fireEvent("session_start", { sessionId: "s1" }, { sessionKey: "test-agent" });
await new Promise((resolve) => setTimeout(resolve, 100));
// Fire tool result event
await fireEvent("tool_result_persist", {
toolName: "Read",
params: { file_path: "/src/index.ts" },
@@ -420,11 +416,9 @@ describe("Observation I/O event handlers", () => {
const { api, fireEvent } = createMockApi({ workerPort });
claudeMemPlugin(api);
// Establish session
await fireEvent("session_start", { sessionId: "s1" }, { sessionKey: "summarize-test" });
await new Promise((resolve) => setTimeout(resolve, 100));
// Fire agent end
await fireEvent("agent_end", {
messages: [
{ role: "user", content: "help me" },
@@ -817,12 +811,10 @@ describe("SSE stream integration", () => {
await getService().start({});
// Wait for connection
await new Promise((resolve) => setTimeout(resolve, 200));
assert.ok(logs.some((l) => l.includes("Connecting to SSE stream")));
// Send an SSE event
const observation = {
type: "new_observation",
observation: {
@@ -841,7 +833,6 @@ describe("SSE stream integration", () => {
res.write(`data: ${JSON.stringify(observation)}\n\n`);
}
// Wait for processing
await new Promise((resolve) => setTimeout(resolve, 200));
assert.equal(sentMessages.length, 1);
@@ -863,7 +854,6 @@ describe("SSE stream integration", () => {
await getService().start({});
await new Promise((resolve) => setTimeout(resolve, 200));
// Send non-observation events
for (const res of serverResponses) {
res.write(`data: ${JSON.stringify({ type: "processing_status", isProcessing: true })}\n\n`);
res.write(`data: ${JSON.stringify({ type: "session_started", sessionId: "abc" })}\n\n`);
@@ -974,8 +964,6 @@ describe("SSE stream integration", () => {
});
describe("circuit breaker", () => {
// Reset circuit breaker state before each test by firing gateway_start.
// The circuit is module-level state, so tests would otherwise bleed into each other.
beforeEach(async () => {
const { api, fireEvent } = createMockApi({ workerPort: 59999 });
claudeMemPlugin(api);
@@ -985,17 +973,12 @@ describe("circuit breaker", () => {
it("opens after threshold failures and stops further requests", async () => {
const { api, logs, fireEvent } = createMockApi({ workerPort: 59999 });
claudeMemPlugin(api);
// Reset circuit inside the test body to guard against timers from preceding
// tests (e.g. completionDelayMs timers) that may fire between beforeEach and here.
await fireEvent("gateway_start", {}, {});
// Fire threshold+1 calls so the circuit is open by the end of the loop
// regardless of whether a concurrent timer fires at the exact boundary.
for (let i = 0; i < 4; i++) {
await fireEvent("before_agent_start", { prompt: "hello" }, { sessionKey: `cb-open-${i}` });
}
// Circuit is now OPEN. Subsequent calls must be silently dropped.
const logCountBeforeDrop = logs.length;
await fireEvent("before_agent_start", { prompt: "hello" }, { sessionKey: "cb-drop" });
const noisyDropLogs = logs.slice(logCountBeforeDrop).filter(
@@ -1010,18 +993,14 @@ describe("circuit breaker", () => {
await fireEvent("gateway_start", {}, {});
const logsAfterReset = logs.length;
// Fire exactly threshold (3) calls
for (let i = 0; i < 3; i++) {
await fireEvent("before_agent_start", { prompt: "hello" }, { sessionKey: `cb-log-${i}` });
}
const newLogs = logs.slice(logsAfterReset);
// At least some failures should have been logged (circuit was active)
assert.ok(newLogs.length > 0, "threshold calls should produce log output");
// Exactly one disabling warning should appear
const disablingLogs = newLogs.filter((l) => l.includes("disabling requests"));
assert.equal(disablingLogs.length, 1, "should emit exactly one disabling warning when circuit opens");
// The last call (the threshold-crossing one) should NOT log an individual failure
const failureLogs = newLogs.filter((l) => l.includes("failed:"));
assert.ok(failureLogs.length < 3, "threshold-crossing call should not log an individual failure");
});
@@ -1031,12 +1010,10 @@ describe("circuit breaker", () => {
claudeMemPlugin(api);
await fireEvent("gateway_start", {}, {});
// Open the circuit by firing threshold+1 calls
for (let i = 0; i < 4; i++) {
await fireEvent("before_agent_start", { prompt: "hello" }, { sessionKey: `cb-reset-${i}` });
}
// Confirm circuit is open (call is silently dropped)
const logCountWhileOpen = logs.length;
await fireEvent("before_agent_start", { prompt: "hello" }, { sessionKey: "cb-while-open" });
assert.equal(
@@ -1045,10 +1022,8 @@ describe("circuit breaker", () => {
"call while circuit is open should be silently dropped"
);
// gateway_start resets the circuit
await fireEvent("gateway_start", {}, {});
// Next call should attempt to connect again (not silently drop)
const logCountAfterReset = logs.length;
await fireEvent("before_agent_start", { prompt: "hello" }, { sessionKey: "cb-after-reset" });
const newLogs = logs.slice(logCountAfterReset);
@@ -1059,26 +1034,18 @@ describe("circuit breaker", () => {
});
it("HALF_OPEN allows only a single probe — non-2xx keeps circuit open, 2xx closes it", async () => {
// ---- Phase 1: open the circuit via network failures (unreachable port) ----
// Reset circuit state first
const resetMock = createMockApi({ workerPort: 59999 });
claudeMemPlugin(resetMock.api);
await resetMock.fireEvent("gateway_start", {}, {});
// Drive 4 failures to ensure circuit is OPEN
for (let i = 0; i < 4; i++) {
await resetMock.fireEvent("before_agent_start", { prompt: "probe-test" }, { sessionKey: `probe-phase1-${i}` });
}
// ---- Phase 2: advance clock so cooldown has elapsed ----
// _circuitOpenedAt was set during Phase 1 using the real Date.now().
// Advancing Date.now by 31s means the next circuitAllow call sees the cooldown elapsed.
const realDateNow = Date.now.bind(Date);
Date.now = () => realDateNow() + 31_000;
try {
// ---- Phase 3: non-2xx probe — circuit should stay OPEN ----
// Start a server that returns 500 for all requests
let serverA: Server | null = null;
const portA: number = await new Promise((resolve) => {
serverA = createServer((_req: IncomingMessage, res: ServerResponse) => {
@@ -1091,31 +1058,19 @@ describe("circuit breaker", () => {
});
});
// Reuse the same module-level circuit state — just change the worker port.
// Create a new mock api instance pointed at server A (500 responder).
const mockA = createMockApi({ workerPort: portA });
claudeMemPlugin(mockA.api);
// Do NOT fire gateway_start here — we want the OPEN circuit state from Phase 1.
// The circuit is OPEN but the mocked clock says cooldown elapsed.
// The next call should: transition to HALF_OPEN, set _halfOpenProbeInFlight=true,
// send the probe to server A (which returns 500), then call circuitOnFailure
// and re-open the circuit.
const logCountAtProbe = mockA.logs.length;
await mockA.fireEvent("before_agent_start", { prompt: "probe" }, { sessionKey: "probe-call-non2xx" });
await new Promise((resolve) => setTimeout(resolve, 100));
const probeALogs = mockA.logs.slice(logCountAtProbe);
// After a 500 response, circuitOnFailure is called which logs "disabling requests"
// (because state was HALF_OPEN) and logger.warn logs the 500 status.
assert.ok(
probeALogs.some((l) => l.includes("disabling") || l.includes("returned 500") || l.includes("Worker POST")),
"non-2xx probe should keep circuit open (expected disabling or 500 status log)"
);
// Verify probe flag resets: a second call with cooldown elapsed should be allowed as a new probe
// (i.e., _halfOpenProbeInFlight was cleared by circuitOnFailure).
// But without advancing time further the circuit is OPEN again — so calls are dropped.
const logCountAfterFailedProbe = mockA.logs.length;
await mockA.fireEvent("before_agent_start", { prompt: "probe" }, { sessionKey: "probe-concurrent" });
await new Promise((resolve) => setTimeout(resolve, 100));
@@ -1126,22 +1081,14 @@ describe("circuit breaker", () => {
serverA!.close();
// ---- Phase 4: 2xx probe — circuit should close ----
// Re-open the circuit with fresh failures, then probe with a 200-returning server.
// Reset circuit state first.
const resetMock2 = createMockApi({ workerPort: 59999 });
claudeMemPlugin(resetMock2.api);
await resetMock2.fireEvent("gateway_start", {}, {});
// Drive failures (still using mocked Date.now, but _circuitOpenedAt will be set to
// the mocked time, so cooldown is NOT elapsed yet from the mocked perspective).
// We need to temporarily restore real Date.now while opening the circuit, then
// re-mock it for the probe.
Date.now = realDateNow;
for (let i = 0; i < 4; i++) {
await resetMock2.fireEvent("before_agent_start", { prompt: "probe-test" }, { sessionKey: `probe-phase4-${i}` });
}
// Re-advance the clock past cooldown
Date.now = () => realDateNow() + 31_000;
let serverB: Server | null = null;
@@ -1158,7 +1105,6 @@ describe("circuit breaker", () => {
const mockB = createMockApi({ workerPort: portB });
claudeMemPlugin(mockB.api);
// Do NOT fire gateway_start — reuse OPEN circuit state from resetMock2.
const logCountBeforeSuccessProbe = mockB.logs.length;
await mockB.fireEvent("before_agent_start", { prompt: "probe" }, { sessionKey: "probe-call-2xx" });
@@ -1,9 +1,4 @@
// No file-system imports needed — context is injected via system prompt hook,
// not by writing to MEMORY.md.
// Minimal type declarations for the OpenClaw Plugin SDK.
// These match the real OpenClawPluginApi provided by the gateway at runtime.
// See: https://docs.openclaw.ai/plugin
interface PluginLogger {
debug?: (message: string) => void;
@@ -30,7 +25,6 @@ interface PluginCommandContext {
type PluginCommandResult = string | { text: string } | { text: string; format?: string };
// OpenClaw event types for agent lifecycle
interface BeforeAgentStartEvent {
prompt?: string;
}
@@ -136,10 +130,6 @@ interface OpenClawPluginApi {
};
}
// ============================================================================
// SSE Observation Feed Types
// ============================================================================
interface ObservationSSEPayload {
id: number;
memory_session_id: string;
@@ -166,10 +156,6 @@ interface SSENewObservationEvent {
type ConnectionState = "disconnected" | "connected" | "reconnecting";
// ============================================================================
// Plugin Configuration
// ============================================================================
interface FeedEmojiConfig {
primary?: string;
claudeCode?: string;
@@ -193,16 +179,10 @@ interface ClaudeMemPluginConfig {
};
}
// ============================================================================
// Constants
// ============================================================================
const MAX_SSE_BUFFER_SIZE = 1024 * 1024;
const DEFAULT_WORKER_PORT = 37777;
const DEFAULT_WORKER_HOST = "127.0.0.1";
// Emoji pool for deterministic auto-assignment to unknown agents.
// Uses a hash of the agentId to pick a consistent emoji — no persistent state needed.
const EMOJI_POOL = [
"🔧","📐","🔍","💻","🧪","🐛","🛡️","☁️","📦","🎯",
"🔮","⚡","🌊","🎨","📊","🚀","🔬","🏗️","📝","🎭",
@@ -216,7 +196,6 @@ function poolEmojiForAgent(agentId: string): string {
return EMOJI_POOL[Math.abs(hash) % EMOJI_POOL.length];
}
// Default emoji values — overridden by user config via observationFeed.emojis
const DEFAULT_PRIMARY_EMOJI = "🦞";
const DEFAULT_CLAUDE_CODE_EMOJI = "⌨️";
const DEFAULT_CLAUDE_CODE_LABEL = "Claude Code Session";
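The hash body of `poolEmojiForAgent` is elided by this hunk; only the final pool lookup is shown. A minimal standalone sketch of the same stable-pick property, assuming a simple 31-based rolling string hash (the real hash may differ):

```typescript
// Deterministic emoji assignment: hash the agentId, index into a fixed pool.
// No persistent state — the same id always maps to the same emoji.
const EMOJI_POOL = ["🔧", "📐", "🔍", "💻", "🧪"];

function pickEmoji(agentId: string): string {
  let hash = 0;
  for (let i = 0; i < agentId.length; i++) {
    hash = (hash * 31 + agentId.charCodeAt(i)) | 0; // keep it in 32-bit range
  }
  return EMOJI_POOL[Math.abs(hash) % EMOJI_POOL.length];
}
```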
@@ -233,19 +212,15 @@ function buildGetSourceLabel(
return function getSourceLabel(project: string | null | undefined): string {
if (!project) return fallback;
// OpenClaw agent projects are formatted as "openclaw-<agentId>"
if (project.startsWith("openclaw-")) {
const agentId = project.slice("openclaw-".length);
if (!agentId) return `${primary} openclaw`;
const emoji = pinnedAgents[agentId] || poolEmojiForAgent(agentId);
return `${emoji} ${agentId}`;
}
// OpenClaw project without agent suffix
if (project === "openclaw") {
return `${primary} openclaw`;
}
// Everything else is a Claude Code session. Keep the project identifier
// visible so concurrent sessions can be distinguished in the feed.
const trimmedLabel = claudeCodeLabel.trim();
if (!trimmedLabel) {
return `${claudeCode} ${project}`;
@@ -254,24 +229,12 @@ function buildGetSourceLabel(
};
}
// ============================================================================
// Worker HTTP Client
// ============================================================================
let _workerHost = DEFAULT_WORKER_HOST;
function workerBaseUrl(port: number): string {
return `http://${_workerHost}:${port}`;
}
// ============================================================================
// Worker Circuit Breaker
// ============================================================================
// Prevents CPU-spinning retry loops when the worker is unreachable.
// After CIRCUIT_BREAKER_THRESHOLD consecutive network errors, the circuit
// opens and all worker calls are silently dropped for CIRCUIT_BREAKER_COOLDOWN_MS.
// After the cooldown, one probe attempt is allowed to check if the worker recovered.
const CIRCUIT_BREAKER_THRESHOLD = 3;
const CIRCUIT_BREAKER_COOLDOWN_MS = 30_000;
@@ -294,7 +257,6 @@ function circuitAllow(logger: PluginLogger): boolean {
}
return false;
}
// HALF_OPEN: allow one probe through
if (_halfOpenProbeInFlight) return false;
_halfOpenProbeInFlight = true;
return true;
@@ -429,10 +391,6 @@ async function workerGetJson(
}
}
// ============================================================================
// SSE Observation Feed
// ============================================================================
function formatObservationMessage(
observation: ObservationSSEPayload,
getSourceLabel: (project: string | null | undefined) => string,
@@ -446,8 +404,6 @@ function formatObservationMessage(
return message;
}
// Explicit mapping from channel name to [runtime namespace key, send function name].
// These match the PluginRuntime.channel structure in the OpenClaw SDK.
const CHANNEL_SEND_MAP: Record<string, { namespace: string; functionName: string }> = {
telegram: { namespace: "telegram", functionName: "sendMessageTelegram" },
whatsapp: { namespace: "whatsapp", functionName: "sendMessageWhatsApp" },
@@ -491,7 +447,6 @@ function sendToChannel(
text: string,
botToken?: string
): Promise<void> {
// If a dedicated bot token is provided for Telegram, send directly
if (botToken && channel === "telegram") {
return sendDirectTelegram(botToken, to, text, api.logger);
}
@@ -514,7 +469,6 @@ function sendToChannel(
return Promise.resolve();
}
// WhatsApp requires a third options argument with { verbose: boolean }
const args: unknown[] = channel === "whatsapp"
? [to, text, { verbose: false }]
: [to, text];
@@ -579,7 +533,6 @@ async function connectToSSEStream(
buffer = frames.pop() || "";
for (const frame of frames) {
// SSE spec: concatenate all data: lines with \n
const dataLines = frame
.split("\n")
.filter((line) => line.startsWith("data:"))
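The frame handling above follows the SSE spec: all `data:` lines within one frame are concatenated with `\n` before parsing. A self-contained sketch of that step (the payload shape is illustrative):

```typescript
// Extract the data payload from one SSE frame: keep only `data:` lines,
// strip the prefix, and rejoin with "\n" per the SSE spec.
function parseFrame(frame: string): string {
  return frame
    .split("\n")
    .filter((line) => line.startsWith("data:"))
    .map((line) => line.slice("data:".length).trimStart())
    .join("\n");
}

// A JSON payload split across two data: lines reassembles cleanly.
const frame = 'event: new_observation\ndata: {"id":1,\ndata: "text":"hi"}';
```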
@@ -620,10 +573,6 @@ async function connectToSSEStream(
setConnectionState("disconnected");
}
// ============================================================================
// Plugin Entry Point
// ============================================================================
export default function claudeMemPlugin(api: OpenClawPluginApi): void {
const userConfig = (api.pluginConfig || {}) as ClaudeMemPluginConfig;
const workerPort = userConfig.workerPort || DEFAULT_WORKER_PORT;
@@ -638,14 +587,11 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
return baseProjectName;
}
// ------------------------------------------------------------------
// Session tracking for observation I/O
// ------------------------------------------------------------------
const sessionIds = new Map<string, string>();
const canonicalSessionKeys = new Map<string, string>();
const sessionAliasesByCanonicalKey = new Map<string, Set<string>>();
const recentPromptInits = new Map<string, number>();
const syncMemoryFile = userConfig.syncMemoryFile !== false;
const syncMemoryFileExclude = new Set(userConfig.syncMemoryFileExclude || []);
function getContentSessionId(sessionKey?: string): string {
@@ -707,9 +653,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
}
const cacheKey = `${contentSessionId}::${project}::${prompt}`;
const lastSeenAt = recentPromptInits.get(cacheKey);
// Note: cache is set unconditionally before return. If workerPost fails
// after this check, a retry within 2s would be incorrectly skipped.
// Acceptable because before_agent_start is not retried by the runtime.
recentPromptInits.set(cacheKey, now);
return typeof lastSeenAt === "number" && now - lastSeenAt <= 2000;
}
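The dedup guard above sets the cache entry unconditionally before returning, so a repeat of the same session/project/prompt inside the window reads as a duplicate. A standalone sketch of that pattern (helper name is illustrative):

```typescript
// 2s dedup window keyed on session + project + prompt. The timestamp is
// refreshed on every call, so only the gap to the previous call matters.
const DEDUP_WINDOW_MS = 2000;
const recentPromptInits = new Map<string, number>();

function isDuplicateInit(sessionId: string, project: string, prompt: string, now: number): boolean {
  const cacheKey = `${sessionId}::${project}::${prompt}`;
  const lastSeenAt = recentPromptInits.get(cacheKey);
  recentPromptInits.set(cacheKey, now);
  return typeof lastSeenAt === "number" && now - lastSeenAt <= DEDUP_WINDOW_MS;
}
```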
@@ -728,14 +671,10 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
sessionIds.delete(canonicalKey);
}
// TTL cache for context injection to avoid re-fetching on every LLM turn.
// before_prompt_build fires on every turn; caching for 60s keeps the worker
// load manageable while still picking up new observations reasonably quickly.
const CONTEXT_CACHE_TTL_MS = 60_000;
const contextCache = new Map<string, { text: string; fetchedAt: number }>();
async function getContextForPrompt(ctx?: EventContext): Promise<string | null> {
// Include both the base project and agent-scoped project (e.g. "openclaw" + "openclaw-main")
const projects = [baseProjectName];
const agentProject = ctx ? getProjectName(ctx) : null;
if (agentProject && agentProject !== baseProjectName) {
@@ -743,7 +682,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
}
const cacheKey = projects.join(",");
// Return cached context if still fresh
const cached = contextCache.get(cacheKey);
if (cached && Date.now() - cached.fetchedAt < CONTEXT_CACHE_TTL_MS) {
return cached.text;
@@ -762,36 +700,21 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
return null;
}
// ------------------------------------------------------------------
// Event: session_start — track session (fires on /new, /reset)
// Init is deferred to before_agent_start to avoid duplicate prompt records.
// ------------------------------------------------------------------
api.on("session_start", async (_event, ctx) => {
const { contentSessionId } = rememberSessionContext(ctx);
api.logger.info(`[claude-mem] Session tracking initialized: ${contentSessionId}`);
});
// ------------------------------------------------------------------
// Event: message_received — alias tracking only; init deferred to before_agent_start
// ------------------------------------------------------------------
api.on("message_received", async (event, ctx) => {
const { canonicalKey, contentSessionId } = rememberSessionContext(ctx);
api.logger.info(`[claude-mem] Message received — prompt capture deferred to before_agent_start: session=${canonicalKey} contentSessionId=${contentSessionId} hasContent=${Boolean(event.content)}`);
});
// ------------------------------------------------------------------
// Event: after_compaction — preserve session tracking after context compaction.
// Re-init is intentionally NOT called here; the worker retains session state
// independently and re-initializing would create duplicate prompt records.
// ------------------------------------------------------------------
api.on("after_compaction", async (_event, ctx) => {
const { contentSessionId } = rememberSessionContext(ctx);
api.logger.info(`[claude-mem] Session preserved after compaction: ${contentSessionId}`);
});
// ------------------------------------------------------------------
// Event: before_agent_start — single init point with dedup guard
// ------------------------------------------------------------------
api.on("before_agent_start", async (event, ctx) => {
const { contentSessionId } = rememberSessionContext(ctx);
const projectName = getProjectName(ctx);
@@ -802,8 +725,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
return;
}
// Initialize session in the worker so observations are not skipped
// (the privacy check requires a stored user prompt to exist)
await workerPost(workerPort, "/api/sessions/init", {
contentSessionId,
project: projectName,
@@ -813,14 +734,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
api.logger.info(`[claude-mem] Session initialized via before_agent_start: contentSessionId=${contentSessionId} project=${projectName}`);
});
// ------------------------------------------------------------------
// Event: before_prompt_build — inject context into system prompt
//
// Instead of writing to MEMORY.md (which conflicts with agent-curated
// memory), inject the observation timeline via appendSystemContext.
// This keeps MEMORY.md under the agent's control while still providing
// cross-session context to the LLM.
// ------------------------------------------------------------------
api.on("before_prompt_build", async (_event, ctx) => {
if (!shouldInjectContext(ctx)) return;
@@ -831,20 +744,15 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
}
});
// ------------------------------------------------------------------
// Event: tool_result_persist — record tool observations
// ------------------------------------------------------------------
api.on("tool_result_persist", (event, ctx) => {
api.logger.info(`[claude-mem] tool_result_persist fired: tool=${event.toolName ?? "unknown"} agent=${ctx.agentId ?? "none"} session=${ctx.sessionKey ?? "none"}`);
const toolName = event.toolName;
if (!toolName) return;
// Skip memory_ tools to prevent recursive observation loops
if (toolName.startsWith("memory_")) return;
const { canonicalKey, contentSessionId } = rememberSessionContext(ctx);
// Extract result text from all content blocks
let toolResponseText = "";
const content = event.message?.content;
if (Array.isArray(content)) {
@@ -854,15 +762,11 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
.join("\n");
}
// Truncate long responses to prevent oversized payloads
const MAX_TOOL_RESPONSE_LENGTH = 1000;
if (toolResponseText.length > MAX_TOOL_RESPONSE_LENGTH) {
toolResponseText = toolResponseText.slice(0, MAX_TOOL_RESPONSE_LENGTH);
}
// Resolve workspaceDir with fallback chain.
// Empty cwd causes worker-side observation queueing failures,
// so we drop the observation rather than sending cwd: "".
const workspaceDir = ctx.workspaceDir;
if (!workspaceDir) {
@@ -870,7 +774,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
return;
}
// Fire-and-forget: send observation to worker
workerPostFireAndForget(workerPort, "/api/sessions/observations", {
contentSessionId,
tool_name: toolName,
@@ -880,13 +783,9 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
}, api.logger);
});
// ------------------------------------------------------------------
// Event: agent_end — summarize session (worker self-completes)
// ------------------------------------------------------------------
api.on("agent_end", async (event, ctx) => {
const { contentSessionId } = rememberSessionContext(ctx);
// Extract last assistant message for summarization
let lastAssistantMessage = "";
if (Array.isArray(event.messages)) {
for (let i = event.messages.length - 1; i >= 0; i--) {
@@ -905,25 +804,17 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
}
}
// Send summarize. The worker self-completes the session when its SDK-agent
// generator drains; no explicit complete call needed.
await workerPost(workerPort, "/api/sessions/summarize", {
contentSessionId,
last_assistant_message: lastAssistantMessage,
}, api.logger);
});
// ------------------------------------------------------------------
// Event: session_end — clean up session tracking to prevent unbounded growth
// ------------------------------------------------------------------
api.on("session_end", async (_event, ctx) => {
clearSessionContext(ctx);
api.logger.info(`[claude-mem] Session tracking cleaned up`);
});
// ------------------------------------------------------------------
// Event: gateway_start — clear session tracking for fresh start
// ------------------------------------------------------------------
api.on("gateway_start", async () => {
circuitReset();
sessionIds.clear();
@@ -934,9 +825,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
api.logger.info("[claude-mem] Gateway started — session tracking reset");
});
// ------------------------------------------------------------------
// Service: SSE observation feed → messaging channels
// ------------------------------------------------------------------
let sseAbortController: AbortController | null = null;
let connectionState: ConnectionState = "disconnected";
let connectionPromise: Promise<void> | null = null;
@@ -1014,9 +902,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
return Math.max(1, Math.min(50, Math.trunc(parsed)));
}
// ------------------------------------------------------------------
// Command: /claude_mem_feed — status & toggle
// ------------------------------------------------------------------
api.registerCommand({
name: "claude_mem_feed",
description: "Show or toggle Claude-Mem observation feed status",
@@ -1050,10 +935,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
},
});
// ------------------------------------------------------------------
// Command: /claude-mem-search — query worker search API
// Usage: /claude-mem-search <query> [limit]
// ------------------------------------------------------------------
api.registerCommand({
name: "claude-mem-search",
description: "Search Claude-Mem observations by query",
@@ -1088,10 +969,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
},
});
// ------------------------------------------------------------------
// Command: /claude-mem-recent — recent context snapshot
// Usage: /claude-mem-recent [project] [limit]
// ------------------------------------------------------------------
api.registerCommand({
name: "claude-mem-recent",
description: "Show recent Claude-Mem context for a project",
@@ -1131,10 +1008,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
},
});
// ------------------------------------------------------------------
// Command: /claude-mem-timeline — search and timeline around best match
// Usage: /claude-mem-timeline <query> [depthBefore] [depthAfter]
// ------------------------------------------------------------------
api.registerCommand({
name: "claude-mem-timeline",
description: "Find best memory match and show nearby timeline events",
@@ -1185,9 +1058,6 @@ export default function claudeMemPlugin(api: OpenClawPluginApi): void {
},
});
// ------------------------------------------------------------------
// Command: /claude_mem_status — worker health check
// ------------------------------------------------------------------
api.registerCommand({
name: "claude_mem_status",
description: "Check Claude-Mem worker health and session status",
@@ -1,10 +1,4 @@
#!/usr/bin/env bash
# test-e2e.sh — Run E2E test of claude-mem plugin on real OpenClaw
#
# Usage:
# ./test-e2e.sh # Automated E2E test (build + run + verify)
# ./test-e2e.sh --interactive # Drop into shell for manual testing
# ./test-e2e.sh --build-only # Just build the image, don't run
set -euo pipefail
cd "$(dirname "$0")"
@@ -1,9 +1,6 @@
#!/usr/bin/env bash
set -euo pipefail
# Test suite for openclaw/install.sh functions
# Tests the OpenClaw gateway detection, plugin install, and memory slot config.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
INSTALL_SCRIPT="${SCRIPT_DIR}/install.sh"
@@ -11,10 +8,6 @@ TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0
###############################################################################
# Test helpers
###############################################################################
test_pass() {
TESTS_RUN=$((TESTS_RUN + 1))
TESTS_PASSED=$((TESTS_PASSED + 1))
@@ -57,30 +50,17 @@ assert_file_exists() {
fi
}
###############################################################################
# Source the install script without running main()
# We override main to be a no-op, then source the file.
###############################################################################
source_install_functions() {
# Create a temp file that overrides main and sources the install script
local tmp_source
tmp_source="$(mktemp)"
# Extract everything except the final `main "$@"` invocation
sed '$ d' "$INSTALL_SCRIPT" > "$tmp_source"
# Override main to prevent execution
echo 'main() { :; }' >> "$tmp_source"
# Source it (suppress color output for cleaner tests)
TERM=dumb source "$tmp_source"
rm -f "$tmp_source"
}
source_install_functions
###############################################################################
# Test: detect_platform() — returns a valid platform string
###############################################################################
echo ""
echo "=== detect_platform() ==="
@@ -118,7 +98,6 @@ test_detect_platform_is_idempotent() {
test_detect_platform_is_idempotent
test_detect_platform_sets_iswsl_empty_on_non_wsl() {
# Unless actually running on WSL, IS_WSL should be empty
PLATFORM=""
IS_WSL=""
detect_platform >/dev/null 2>&1
@@ -132,15 +111,10 @@ test_detect_platform_sets_iswsl_empty_on_non_wsl() {
test_detect_platform_sets_iswsl_empty_on_non_wsl
###############################################################################
# Test: check_bun() — correctly detects bun presence/absence
###############################################################################
echo ""
echo "=== check_bun() ==="
test_check_bun_detects_installed_bun() {
# If bun is installed on this system, check_bun should succeed
if command -v bun &>/dev/null; then
BUN_PATH=""
if check_bun >/dev/null 2>&1; then
@@ -197,15 +171,12 @@ test_find_bun_path_checks_home_bun_bin() {
HOME="$fake_home"
BUN_PATH=""
# Create a fake bun binary in ~/.bun/bin/
mkdir -p "${fake_home}/.bun/bin"
cat > "${fake_home}/.bun/bin/bun" <<'FAKEBUN'
#!/bin/bash
echo "1.2.0"
FAKEBUN
chmod +x "${fake_home}/.bun/bin/bun"
# Hide bun from PATH
local saved_path="$PATH"
PATH="/nonexistent"
@@ -222,15 +193,10 @@ FAKEBUN
test_find_bun_path_checks_home_bun_bin
###############################################################################
# Test: check_uv() — correctly detects uv presence/absence
###############################################################################
echo ""
echo "=== check_uv() ==="
test_check_uv_detects_installed_uv() {
# If uv is installed on this system, check_uv should succeed
if command -v uv &>/dev/null; then
UV_PATH=""
if check_uv >/dev/null 2>&1; then
@@ -253,9 +219,6 @@ test_check_uv_detects_installed_uv() {
test_check_uv_detects_installed_uv
test_check_uv_fails_when_not_found() {
# find_uv_path checks hardcoded system paths (/usr/local/bin/uv,
# /opt/homebrew/bin/uv) that we can't override without root.
# Skip if uv exists at any of those absolute paths.
if [[ -x "/usr/local/bin/uv" ]] || [[ -x "/opt/homebrew/bin/uv" ]]; then
test_pass "check_uv not-found test: skipped (uv installed at system path)"
return 0
@@ -295,15 +258,12 @@ test_find_uv_path_checks_local_bin() {
HOME="$fake_home"
UV_PATH=""
# Create a fake uv binary in ~/.local/bin/
mkdir -p "${fake_home}/.local/bin"
cat > "${fake_home}/.local/bin/uv" <<'FAKEUV'
#!/bin/bash
echo "uv 0.4.0"
FAKEUV
chmod +x "${fake_home}/.local/bin/uv"
# Hide uv from PATH
local saved_path="$PATH"
PATH="/nonexistent"
@@ -320,19 +280,13 @@ FAKEUV
test_find_uv_path_checks_local_bin
###############################################################################
# Test: find_openclaw() — not found scenario
###############################################################################
echo ""
echo "=== find_openclaw() ==="
# Save original PATH and test with empty locations
ORIGINAL_PATH="$PATH"
ORIGINAL_HOME="$HOME"
test_find_openclaw_not_found() {
# Use a fake HOME where nothing exists
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
@@ -354,7 +308,6 @@ test_find_openclaw_not_found() {
test_find_openclaw_not_found
# Test: find_openclaw() — found in HOME/.openclaw/
test_find_openclaw_in_home() {
local fake_home
fake_home="$(mktemp -d)"
@@ -379,10 +332,6 @@ test_find_openclaw_in_home() {
test_find_openclaw_in_home
###############################################################################
# Test: configure_memory_slot() — creates new config
###############################################################################
echo ""
echo "=== configure_memory_slot() ==="
@@ -396,7 +345,6 @@ test_configure_new_config() {
local config_file="${fake_home}/.openclaw/openclaw.json"
assert_file_exists "$config_file" "Config file created at ~/.openclaw/openclaw.json"
# Verify JSON structure
local memory_slot
memory_slot="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.slots.memory);")"
assert_eq "claude-mem" "$memory_slot" "Memory slot set to claude-mem in new config"
@@ -415,13 +363,11 @@ test_configure_new_config() {
test_configure_new_config
# Test: configure_memory_slot() — updates existing config
test_configure_existing_config() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
# Create an existing config with other settings
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
@@ -439,22 +385,18 @@ test_configure_existing_config() {
configure_memory_slot >/dev/null 2>&1
# Verify memory slot was updated
local memory_slot
memory_slot="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.slots.memory);")"
assert_eq "claude-mem" "$memory_slot" "Memory slot updated from memory-core to claude-mem"
# Verify existing settings preserved
local gateway_mode
gateway_mode="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.gateway.mode);")"
assert_eq "local" "$gateway_mode" "Existing gateway.mode setting preserved"
# Verify other plugin still present
local other_plugin
other_plugin="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['some-other-plugin'].enabled);")"
assert_eq "true" "$other_plugin" "Existing plugin entries preserved"
# Verify claude-mem entry was added
local cm_enabled
cm_enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].enabled);")"
assert_eq "true" "$cm_enabled" "claude-mem entry added and enabled"
@@ -465,7 +407,6 @@ test_configure_existing_config() {
test_configure_existing_config
# Test: configure_memory_slot() — preserves existing claude-mem config
test_configure_preserves_existing_cm_config() {
local fake_home
fake_home="$(mktemp -d)"
@@ -493,7 +434,6 @@ test_configure_preserves_existing_cm_config() {
configure_memory_slot >/dev/null 2>&1
# Should enable it but preserve existing config
local cm_enabled
cm_enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].enabled);")"
assert_eq "true" "$cm_enabled" "claude-mem entry enabled when previously disabled"
@@ -512,10 +452,6 @@ test_configure_preserves_existing_cm_config() {
test_configure_preserves_existing_cm_config
###############################################################################
# Test: version_gte() — already exists from phase 1
###############################################################################
echo ""
echo "=== version_gte() ==="
@@ -537,14 +473,9 @@ else
test_fail "version_gte: 1.0.0 < 1.1.14"
fi
###############################################################################
# Test: Script structure validation
###############################################################################
echo ""
echo "=== Script structure ==="
# Verify all required functions exist
for fn in find_openclaw check_openclaw install_plugin configure_memory_slot; do
if declare -f "$fn" &>/dev/null; then
test_pass "Function ${fn}() is defined"
@@ -553,10 +484,8 @@ for fn in find_openclaw check_openclaw install_plugin configure_memory_slot; do
fi
done
# Verify the CLAUDE_MEM_REPO constant
assert_contains "$CLAUDE_MEM_REPO" "github.com/thedotmack/claude-mem" "CLAUDE_MEM_REPO points to correct repository"
# Verify AI provider functions exist
for fn in setup_ai_provider write_settings mask_api_key; do
if declare -f "$fn" &>/dev/null; then
test_pass "Function ${fn}() is defined"
@@ -565,10 +494,6 @@ for fn in setup_ai_provider write_settings mask_api_key; do
fi
done
###############################################################################
# Test: mask_api_key()
###############################################################################
echo ""
echo "=== mask_api_key() ==="
@@ -581,15 +506,10 @@ assert_eq "****" "$masked_short" "mask_api_key masks keys <= 4 chars entirely"
masked_five=$(mask_api_key "12345")
assert_eq "*2345" "$masked_five" "mask_api_key masks 5-char key correctly"
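These assertions pin down the masking rule: keys of 4 characters or fewer are fully masked, longer keys keep only the last 4. A TypeScript sketch mirroring the bash `mask_api_key` behavior (the real implementation is the shell function under test):

```typescript
// Mask an API key for display: short keys become "****", longer keys show
// only the trailing 4 characters.
function maskApiKey(key: string): string {
  if (key.length <= 4) return "****";
  return "*".repeat(key.length - 4) + key.slice(-4);
}
```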
###############################################################################
# Test: setup_ai_provider() — non-interactive mode defaults to Claude
###############################################################################
echo ""
echo "=== setup_ai_provider() ==="
test_setup_ai_provider_non_interactive() {
# NON_INTERACTIVE is readonly, so test in a child bash that sources with --non-interactive
local ai_result
ai_result="$(bash -c '
set -euo pipefail
@@ -609,10 +529,6 @@ test_setup_ai_provider_non_interactive() {
test_setup_ai_provider_non_interactive
###############################################################################
# Test: write_settings() — creates new settings.json with defaults
###############################################################################
echo ""
echo "=== write_settings() ==="
@@ -628,7 +544,6 @@ test_write_settings_new_file() {
local settings_file="${fake_home}/.claude-mem/settings.json"
assert_file_exists "$settings_file" "settings.json created at ~/.claude-mem/settings.json"
# Verify it's valid JSON with expected defaults
local provider
provider="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_PROVIDER);")"
assert_eq "claude" "$provider" "CLAUDE_MEM_PROVIDER set to claude"
@@ -651,7 +566,6 @@ test_write_settings_new_file() {
test_write_settings_new_file
# Test: write_settings() — Gemini provider with API key
test_write_settings_gemini() {
local fake_home
fake_home="$(mktemp -d)"
@@ -681,7 +595,6 @@ test_write_settings_gemini() {
test_write_settings_gemini
# Test: write_settings() — OpenRouter provider with API key
test_write_settings_openrouter() {
local fake_home
fake_home="$(mktemp -d)"
@@ -711,13 +624,11 @@ test_write_settings_openrouter() {
test_write_settings_openrouter
# Test: write_settings() — preserves existing user customizations
test_write_settings_preserves_existing() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
# Create existing settings with custom values
mkdir -p "${fake_home}/.claude-mem"
local settings_file="${fake_home}/.claude-mem/settings.json"
node -e "
@@ -730,22 +641,18 @@ test_write_settings_preserves_existing() {
require('fs').writeFileSync('${settings_file}', JSON.stringify(settings, null, 2));
"
# Now run write_settings with a new provider
AI_PROVIDER="claude"
AI_PROVIDER_API_KEY=""
write_settings >/dev/null 2>&1
# Provider should be updated to claude
local provider
provider="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_PROVIDER);")"
assert_eq "claude" "$provider" "Preserve: provider updated to new selection"
# Custom port should be preserved (not overwritten by defaults)
local custom_port
custom_port="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_WORKER_PORT);")"
assert_eq "38888" "$custom_port" "Preserve: existing custom WORKER_PORT preserved"
# Custom log level should be preserved
local log_level
log_level="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_LOG_LEVEL);")"
assert_eq "DEBUG" "$log_level" "Preserve: existing custom LOG_LEVEL preserved"
@@ -756,7 +663,6 @@ test_write_settings_preserves_existing() {
test_write_settings_preserves_existing
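A hypothetical sketch of the merge rule the preserve assertions encode — schema defaults fill in missing keys, existing user values win over defaults, and the provider key is always updated to the new selection (not the installer's actual implementation):

```shell
settings="$(mktemp)"
echo '{"CLAUDE_MEM_WORKER_PORT":"38888","CLAUDE_MEM_LOG_LEVEL":"DEBUG"}' > "$settings"
node -e "
  const fs = require('fs');
  const file = process.argv[1];
  const defaults = {
    CLAUDE_MEM_PROVIDER: 'claude',
    CLAUDE_MEM_WORKER_PORT: '37777',
    CLAUDE_MEM_LOG_LEVEL: 'INFO',
  };
  const existing = JSON.parse(fs.readFileSync(file, 'utf8'));
  // defaults first, existing values override them, new provider wins last
  const merged = { ...defaults, ...existing, CLAUDE_MEM_PROVIDER: 'claude' };
  fs.writeFileSync(file, JSON.stringify(merged, null, 2));
" "$settings"
node -e "
  const s = JSON.parse(require('fs').readFileSync(process.argv[1], 'utf8'));
  console.log(s.CLAUDE_MEM_PROVIDER, s.CLAUDE_MEM_WORKER_PORT, s.CLAUDE_MEM_LOG_LEVEL);
" "$settings"   # prints: claude 38888 DEBUG
```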
# Test: write_settings() — flat schema has all expected keys
test_write_settings_complete_schema() {
local fake_home
fake_home="$(mktemp -d)"
@@ -768,18 +674,15 @@ test_write_settings_complete_schema() {
local settings_file="${fake_home}/.claude-mem/settings.json"
# Verify key count matches SettingsDefaultsManager (34 keys)
local key_count
key_count="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(Object.keys(s).length);")"
# Expect the full 34-key schema; allow >= 30 so small schema drift does not break the test
if (( key_count >= 30 )); then
test_pass "Settings file has ${key_count} keys (complete schema)"
else
test_fail "Settings file has ${key_count} keys, expected >= 30" "Schema may be incomplete"
fi
# Verify it does NOT have nested { env: {...} } format
local has_env_key
has_env_key="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.env !== undefined);")"
assert_eq "false" "$has_env_key" "Settings uses flat schema (no nested 'env' key)"
@@ -790,10 +693,6 @@ test_write_settings_complete_schema() {
test_write_settings_complete_schema
###############################################################################
# Test: find_claude_mem_install_dir() — not found scenario
###############################################################################
echo ""
echo "=== find_claude_mem_install_dir() ==="
@@ -817,14 +716,12 @@ test_find_install_dir_not_found() {
test_find_install_dir_not_found
# Test: find_claude_mem_install_dir() — found in ~/.openclaw/extensions/claude-mem/
test_find_install_dir_openclaw_extensions() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
# Create the expected directory structure
mkdir -p "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts"
touch "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts/worker-service.cjs"
@@ -841,7 +738,6 @@ test_find_install_dir_openclaw_extensions() {
test_find_install_dir_openclaw_extensions
# Test: find_claude_mem_install_dir() — found in ~/.claude/plugins/marketplaces/thedotmack/
test_find_install_dir_marketplace() {
local fake_home
fake_home="$(mktemp -d)"
@@ -864,10 +760,6 @@ test_find_install_dir_marketplace() {
test_find_install_dir_marketplace
###############################################################################
# Test: start_worker() — fails gracefully when install dir not found
###############################################################################
echo ""
echo "=== start_worker() ==="
@@ -892,17 +784,10 @@ test_start_worker_no_install_dir() {
test_start_worker_no_install_dir
###############################################################################
# Test: verify_health() — fails when no server is running
###############################################################################
echo ""
echo "=== verify_health() ==="
test_verify_health_no_server() {
# verify_health should fail gracefully when nothing is running on 37777.
# Run it in a child bash that sources install.sh with main() stubbed out;
# the full polling loop runs, so expect this test to take several seconds.
local result
result="$(bash -c '
set -euo pipefail
@@ -912,31 +797,22 @@ test_verify_health_no_server() {
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
# Call verify_health which will attempt 10 polls — capture exit code
verify_health 2>/dev/null && echo "PASS" || echo "FAIL"
' 2>/dev/null)" || true
# Note: This test may take ~10 seconds due to polling
# If curl is not available, it will also fail
if [[ "$result" == *"FAIL"* ]]; then
test_pass "verify_health returns failure when no server is running"
else
# Could pass if something is actually running on 37777
test_pass "verify_health returned success (worker may already be running on 37777)"
fi
}
# Only run the health check test if curl is available
if command -v curl &>/dev/null; then
test_verify_health_no_server
else
test_pass "verify_health test skipped (curl not available)"
fi
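The polling shape verify_health is expected to follow — a bounded number of curl attempts against the worker's health endpoint with a pause between tries — might look like this (the /health path and attempt count are assumptions, not taken from install.sh):

```shell
poll_health() {
  local url="$1" attempts="${2:-10}" i
  for (( i = 1; i <= attempts; i++ )); do
    # -f: treat HTTP errors as failure; -s: quiet; bounded per-try timeout.
    if curl -sf --max-time 2 "$url" >/dev/null 2>&1; then
      return 0
    fi
    if (( i < attempts )); then sleep 1; fi
  done
  return 1
}
if poll_health "http://127.0.0.1:37777/health" 1; then
  echo "worker responded"
else
  echo "no worker on 37777"
fi
```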
###############################################################################
# Test: print_completion_summary() — runs without error
###############################################################################
echo ""
echo "=== print_completion_summary() ==="
@@ -987,10 +863,6 @@ test_print_completion_summary_openrouter() {
test_print_completion_summary_openrouter
###############################################################################
# Test: Script structure — new functions exist
###############################################################################
echo ""
echo "=== New function existence ==="
@@ -1002,14 +874,9 @@ for fn in find_claude_mem_install_dir start_worker verify_health print_completio
fi
done
###############################################################################
# Test: main() function calls new functions in correct order
###############################################################################
echo ""
echo "=== main() function structure ==="
# Verify main calls the new functions by checking the install.sh source
test_main_calls_start_worker() {
if grep -q 'start_worker' "$INSTALL_SCRIPT"; then
test_pass "main() calls start_worker"
@@ -1070,10 +937,6 @@ test_main_calls_write_observation_feed_config() {
test_main_calls_write_observation_feed_config
###############################################################################
# Test: setup_observation_feed() — function exists and non-interactive skips
###############################################################################
echo ""
echo "=== setup_observation_feed() ==="
@@ -1086,7 +949,6 @@ for fn in setup_observation_feed write_observation_feed_config; do
done
test_setup_observation_feed_non_interactive() {
# Non-interactive mode should skip feed setup without error
local feed_result
feed_result="$(bash -c '
set -euo pipefail
@@ -1108,10 +970,6 @@ test_setup_observation_feed_non_interactive() {
test_setup_observation_feed_non_interactive
###############################################################################
# Test: write_observation_feed_config() — writes correct JSON structure
###############################################################################
echo ""
echo "=== write_observation_feed_config() ==="
@@ -1120,7 +978,6 @@ test_write_observation_feed_config_writes_json() {
fake_home="$(mktemp -d)"
HOME="$fake_home"
# Create an existing openclaw.json with claude-mem entry
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
@@ -1144,7 +1001,6 @@ test_write_observation_feed_config_writes_json() {
write_observation_feed_config >/dev/null 2>&1
# Verify observationFeed was written
local feed_enabled
feed_enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.enabled);")"
assert_eq "true" "$feed_enabled" "observationFeed.enabled is true"
@@ -1157,7 +1013,6 @@ test_write_observation_feed_config_writes_json() {
feed_to="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.to);")"
assert_eq "123456789" "$feed_to" "observationFeed.to is 123456789"
# Verify existing config preserved
local worker_port
worker_port="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.workerPort);")"
assert_eq "37777" "$worker_port" "Existing workerPort preserved after feed config write"
@@ -1176,7 +1031,6 @@ test_write_observation_feed_config_skips_when_not_configured() {
fake_home="$(mktemp -d)"
HOME="$fake_home"
# Create minimal config
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
@@ -1187,7 +1041,6 @@ test_write_observation_feed_config_skips_when_not_configured() {
write_observation_feed_config >/dev/null 2>&1
# Config should be unchanged — no observationFeed key
local has_feed
has_feed="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries !== undefined);")"
assert_eq "false" "$has_feed" "Config unchanged when FEED_CONFIGURED is false"
@@ -1239,14 +1092,9 @@ test_write_observation_feed_config_discord() {
test_write_observation_feed_config_discord
###############################################################################
# Test: write_observation_feed_config() — jq/python3/node fallback paths
###############################################################################
echo ""
echo "=== write_observation_feed_config() — fallback paths ==="
# Helper: verify feed config JSON was written correctly
verify_feed_config_json() {
local config_file="$1" expected_channel="$2" expected_target="$3" label="$4"
@@ -1262,13 +1110,11 @@ verify_feed_config_json() {
feed_to="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.to);")"
assert_eq "$expected_target" "$feed_to" "${label}: observationFeed.to correct"
# Verify existing config preserved
local worker_port
worker_port="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.workerPort);")"
assert_eq "37777" "$worker_port" "${label}: existing workerPort preserved"
}
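The node one-liner pattern used throughout these assertions — parse the config file and print one nested value — sketched against a minimal stand-in config:

```shell
config="$(mktemp)"
cat > "$config" <<'EOF'
{
  "plugins": {
    "entries": {
      "claude-mem": {
        "config": { "workerPort": 37777 }
      }
    }
  }
}
EOF
# Pass the path as argv[1] so shell quoting stays simple.
worker_port="$(node -e "const c = JSON.parse(require('fs').readFileSync(process.argv[1], 'utf8')); console.log(c.plugins.entries['claude-mem'].config.workerPort);" "$config")"
echo "workerPort=${worker_port}"   # prints: workerPort=37777
rm -f "$config"
```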
# Create a seed config file for fallback tests
create_seed_config() {
local config_file="$1"
mkdir -p "$(dirname "$config_file")"
@@ -1288,7 +1134,6 @@ create_seed_config() {
"
}
# Test: jq path (if jq is available)
test_write_feed_config_jq_path() {
if ! command -v jq &>/dev/null; then
test_pass "jq path: skipped (jq not installed)"
@@ -1305,7 +1150,6 @@ test_write_feed_config_jq_path() {
FEED_TARGET_ID="C01ABC2DEFG"
FEED_CONFIGURED="true"
# jq is first in the chain, so just call directly
write_observation_feed_config >/dev/null 2>&1
verify_feed_config_json "$config_file" "slack" "C01ABC2DEFG" "jq path"
@@ -1319,7 +1163,6 @@ test_write_feed_config_jq_path() {
test_write_feed_config_jq_path
# Test: python3 fallback path (hide jq)
test_write_feed_config_python3_path() {
if ! command -v python3 &>/dev/null; then
test_pass "python3 path: skipped (python3 not installed)"
@@ -1329,14 +1172,12 @@ test_write_feed_config_python3_path() {
local fake_home
fake_home="$(mktemp -d)"
# Run in a subshell that hides jq from PATH
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
export HOME="'"$fake_home"'"
# Create seed config using node (node is always available)
mkdir -p "'"${fake_home}"'/.openclaw"
node -e "
const config = {
@@ -1353,14 +1194,12 @@ test_write_feed_config_python3_path() {
require(\"fs\").writeFileSync(\"'"${fake_home}"'/.openclaw/openclaw.json\", JSON.stringify(config, null, 2));
"
# Source install.sh functions
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
# Hide jq by creating a PATH without it
SAFE_PATH=""
IFS=":" read -ra path_parts <<< "$PATH"
for p in "${path_parts[@]}"; do
@@ -1378,7 +1217,6 @@ test_write_feed_config_python3_path() {
' 2>/dev/null)" || true
if [[ "$result" == *"DONE"* ]]; then
# Verify the JSON using node
local config_file="${fake_home}/.openclaw/openclaw.json"
verify_feed_config_json "$config_file" "signal" "+15551234567" "python3 path"
else
@@ -1390,7 +1228,6 @@ test_write_feed_config_python3_path() {
test_write_feed_config_python3_path
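The PATH-filtering trick the python3-path test uses, in isolation: rebuild PATH without any directory that contains a jq binary, so `command -v jq` fails in the child shell and a fallback chain would move on.

```shell
# Keep only PATH entries that do NOT contain an executable named jq.
SAFE_PATH=""
IFS=":" read -ra path_parts <<< "$PATH"
for p in "${path_parts[@]}"; do
  if [[ ! -x "${p}/jq" ]]; then
    SAFE_PATH="${SAFE_PATH:+${SAFE_PATH}:}${p}"
  fi
done
PATH="$SAFE_PATH" bash -c 'command -v jq >/dev/null || echo "jq hidden"'
```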
# Test: node fallback path (hide both jq and python3)
test_write_feed_config_node_path() {
local fake_home
fake_home="$(mktemp -d)"
@@ -1401,7 +1238,6 @@ test_write_feed_config_node_path() {
TERM=dumb
export HOME="'"$fake_home"'"
# Create seed config
mkdir -p "'"${fake_home}"'/.openclaw"
node -e "
const config = {
@@ -1418,22 +1254,12 @@ test_write_feed_config_node_path() {
require(\"fs\").writeFileSync(\"'"${fake_home}"'/.openclaw/openclaw.json\", JSON.stringify(config, null, 2));
"
# A shadow directory of broken jq/python3 stubs does not work here:
# the install script gates on "command -v", which only checks that an
# executable exists, so the stubs would still be selected (and then fail).
# Instead, source install.sh and force the node branch directly.
# Source install.sh functions
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
# Override write_observation_feed_config to only use the node path
# by extracting just the node branch logic
INSTALLER_FEED_CHANNEL="whatsapp" \
INSTALLER_FEED_TARGET_ID="5511999887766@s.whatsapp.net" \
INSTALLER_CONFIG_FILE="'"${fake_home}"'/.openclaw/openclaw.json" \
@@ -1477,7 +1303,6 @@ test_write_feed_config_node_path() {
test_write_feed_config_node_path
# Test: write_observation_feed_config uses jq/python3/node fallback chain
test_feed_config_fallback_chain_in_source() {
if grep -q 'command -v jq' "$INSTALL_SCRIPT"; then
test_pass "write_observation_feed_config checks for jq first"
@@ -1500,10 +1325,6 @@ test_feed_config_fallback_chain_in_source() {
test_feed_config_fallback_chain_in_source
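The selection order the source checks above enforce — jq first, python3 second, node last — as a standalone sketch:

```shell
pick_json_tool() {
  # Return the first available JSON-capable tool, in preference order.
  if command -v jq &>/dev/null; then
    echo "jq"
  elif command -v python3 &>/dev/null; then
    echo "python3"
  elif command -v node &>/dev/null; then
    echo "node"
  else
    return 1
  fi
}
echo "would edit JSON with: $(pick_json_tool)"
```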
###############################################################################
# Test: print_completion_summary() — shows observation feed status
###############################################################################
echo ""
echo "=== print_completion_summary() — observation feed ==="
@@ -1547,10 +1368,6 @@ test_completion_summary_without_feed() {
test_completion_summary_without_feed
###############################################################################
# Test: Channel type instructions exist in install.sh
###############################################################################
echo ""
echo "=== Channel instructions ==="
@@ -1562,15 +1379,10 @@ for channel in telegram discord slack signal whatsapp line; do
fi
done
# Verify specific instruction content
assert_contains "$(grep -A2 'userinfobot' "$INSTALL_SCRIPT" 2>/dev/null || echo '')" "userinfobot" "Telegram instructions include @userinfobot"
assert_contains "$(grep -A2 'Developer Mode' "$INSTALL_SCRIPT" 2>/dev/null || echo '')" "Developer Mode" "Discord instructions include Developer Mode"
assert_contains "$(grep -A2 'C01ABC2DEFG' "$INSTALL_SCRIPT" 2>/dev/null || echo '')" "C01ABC2DEFG" "Slack instructions include sample channel ID"
###############################################################################
# Test: TTY detection — setup_tty() and read_tty() exist
###############################################################################
echo ""
echo "=== TTY detection ==="
@@ -1582,24 +1394,18 @@ for fn in setup_tty read_tty; do
fi
done
# Verify TTY_FD is initialized (defaults to 0)
if declare -p TTY_FD &>/dev/null; then
test_pass "TTY_FD variable is defined"
else
test_fail "TTY_FD variable should be defined"
fi
# Verify setup_tty is called from main()
if grep -q 'setup_tty' "$INSTALL_SCRIPT"; then
test_pass "main() calls setup_tty"
else
test_fail "main() should call setup_tty"
fi
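A hedged sketch of the setup_tty/read_tty idea: when stdin is the piped script body (`curl ... | bash`), prompts must be read from /dev/tty instead, with TTY_FD falling back to 0 (stdin) when no controlling terminal exists. This is an illustrative shape, not install.sh's exact code.

```shell
TTY_FD=0
setup_tty_sketch() {
  # Probe /dev/tty in a subshell first; opening it fails when the
  # process has no controlling terminal (e.g. CI runners).
  if ( : </dev/tty ) 2>/dev/null; then
    exec 3</dev/tty
    TTY_FD=3
  fi
}
read_tty_sketch() {
  # Read one line of user input from whichever fd setup chose.
  IFS= read -r -u "$TTY_FD" "$1"
}
setup_tty_sketch
echo "prompt input will be read from fd ${TTY_FD}"
```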
###############################################################################
# Test: Argument parsing — --provider flag
###############################################################################
echo ""
echo "=== Argument parsing — --provider flag ==="
@@ -1690,10 +1496,6 @@ test_provider_flag_invalid() {
test_provider_flag_invalid
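A hedged sketch of the flag grammar these parsing tests exercise; the valid provider names mirror the providers tested above, but the exact validation and error text in install.sh may differ.

```shell
AI_PROVIDER="claude"
NON_INTERACTIVE="false"
UPGRADE="false"
parse_args_sketch() {
  while (( $# > 0 )); do
    case "$1" in
      --provider)
        shift
        # Only known providers are accepted; anything else is an error.
        case "${1:-}" in
          claude|gemini|openrouter) AI_PROVIDER="$1" ;;
          *) echo "invalid provider: ${1:-<missing>}" >&2; return 1 ;;
        esac
        ;;
      --non-interactive) NON_INTERACTIVE="true" ;;
      --upgrade) UPGRADE="true" ;;
      *) echo "unknown flag: $1" >&2; return 1 ;;
    esac
    shift
  done
}
parse_args_sketch --provider gemini --non-interactive
echo "${AI_PROVIDER} ${NON_INTERACTIVE} ${UPGRADE}"   # prints: gemini true false
```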
###############################################################################
# Test: Argument parsing — --non-interactive flag (new format)
###############################################################################
echo ""
echo "=== Argument parsing — --non-interactive ==="
@@ -1740,16 +1542,10 @@ test_non_interactive_with_provider() {
test_non_interactive_with_provider
###############################################################################
# Test: --non-interactive mode completes without hanging
###############################################################################
echo ""
echo "=== --non-interactive full flow ==="
test_non_interactive_completes() {
# Run the full setup_ai_provider + setup_observation_feed in non-interactive mode
# This should complete without any prompts or hangs
local result
result="$(bash -c '
set -euo pipefail
@@ -1772,10 +1568,6 @@ test_non_interactive_completes() {
test_non_interactive_completes
###############################################################################
# Test: Script structure — curl | bash usage comment
###############################################################################
echo ""
echo "=== curl | bash usage comment ==="
@@ -1791,10 +1583,6 @@ else
test_fail "install.sh should document --provider flag in usage comment"
fi
###############################################################################
# Test: write_settings with --provider flag end-to-end
###############################################################################
echo ""
echo "=== write_settings with --provider flag ==="
@@ -1836,10 +1624,6 @@ test_write_settings_via_provider_flag() {
test_write_settings_via_provider_flag
###############################################################################
# Test: --upgrade flag parsing
###############################################################################
echo ""
echo "=== --upgrade flag parsing ==="
@@ -1903,10 +1687,6 @@ test_upgrade_not_set_by_default() {
test_upgrade_not_set_by_default
###############################################################################
# Test: is_claude_mem_installed() — upgrade detection
###############################################################################
echo ""
echo "=== is_claude_mem_installed() ==="
@@ -1916,7 +1696,6 @@ test_is_claude_mem_installed_found() {
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
# Create the expected directory structure
mkdir -p "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts"
touch "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts/worker-service.cjs"
@@ -1950,15 +1729,10 @@ test_is_claude_mem_installed_not_found() {
test_is_claude_mem_installed_not_found
###############################################################################
# Test: check_git() — git availability check
###############################################################################
echo ""
echo "=== check_git() ==="
test_check_git_available() {
# git should be available in test environment
if command -v git &>/dev/null; then
local output
output="$(check_git 2>&1)" || true
@@ -1971,7 +1745,6 @@ test_check_git_available() {
test_check_git_available
test_check_git_not_available() {
# Test that check_git fails gracefully when git is not in PATH
local exit_code=0
PLATFORM="macos"
bash -c '
@@ -2035,10 +1808,6 @@ test_check_git_linux_message() {
test_check_git_linux_message
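A hedged sketch of the platform-specific hint check_git is expected to print when git is missing; the message wording here is invented for illustration and the real install.sh text may differ.

```shell
check_git_sketch() {
  if command -v git &>/dev/null; then
    return 0
  fi
  # Suggest the platform's usual install route.
  case "${PLATFORM:-}" in
    macos) echo "git not found. Install the Xcode Command Line Tools: xcode-select --install" ;;
    linux) echo "git not found. Install it with your package manager, e.g. apt install git" ;;
    *)     echo "git not found. Please install git and re-run." ;;
  esac
  return 1
}
# Simulate a git-less PATH in a child bash to see the macos hint:
PLATFORM=macos bash -c "$(declare -f check_git_sketch); PATH=/nonexistent check_git_sketch" || true
```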
###############################################################################
# Test: check_port_37777() — port conflict detection
###############################################################################
echo ""
echo "=== check_port_37777() ==="
@@ -2052,10 +1821,6 @@ test_check_port_function_exists() {
test_check_port_function_exists
###############################################################################
# Test: cleanup_on_exit() — global cleanup trap
###############################################################################
echo ""
echo "=== cleanup_on_exit() ==="
@@ -2079,7 +1844,6 @@ test_register_cleanup_dir() {
local test_dir
test_dir="$(mktemp -d)"
# Save existing cleanup dirs
local saved_dirs=("${CLEANUP_DIRS[@]+"${CLEANUP_DIRS[@]}"}")
CLEANUP_DIRS=()
@@ -2091,17 +1855,12 @@ test_register_cleanup_dir() {
test_fail "register_cleanup_dir should add directory to CLEANUP_DIRS"
fi
# Restore
CLEANUP_DIRS=("${saved_dirs[@]+"${saved_dirs[@]}"}")
rm -rf "$test_dir"
}
test_register_cleanup_dir
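A hypothetical shape for the cleanup machinery under test — a registry array plus an EXIT trap that removes every registered directory (install.sh's actual implementation may differ):

```shell
CLEANUP_DIRS=()
register_cleanup_dir() {
  CLEANUP_DIRS+=("$1")
}
cleanup_on_exit() {
  local d
  # The [@]+ expansion keeps `set -u` happy when the array is empty.
  for d in "${CLEANUP_DIRS[@]+"${CLEANUP_DIRS[@]}"}"; do
    rm -rf "$d"
  done
}
trap cleanup_on_exit EXIT
tmp_demo="$(mktemp -d)"
register_cleanup_dir "$tmp_demo"   # removed automatically on exit
```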
###############################################################################
# Test: ensure_jq_or_fallback() — JSON utility function
###############################################################################
echo ""
echo "=== ensure_jq_or_fallback() ==="
@@ -2138,10 +1897,6 @@ test_ensure_jq_with_jq_available() {
test_ensure_jq_with_jq_available
###############################################################################
# Test: main() references new functions
###############################################################################
echo ""
echo "=== main() references new functions ==="
@@ -2205,10 +1960,6 @@ test_usage_comment_includes_upgrade() {
test_usage_comment_includes_upgrade
###############################################################################
# Test: Distribution readiness — URL, usage comment, SKILL.md reference
###############################################################################
echo ""
echo "=== Distribution readiness ==="
@@ -2323,10 +2074,6 @@ test_skill_md_documents_options() {
test_skill_md_documents_options
###############################################################################
# Summary
###############################################################################
echo ""
echo "========================================"
echo "Results: ${TESTS_PASSED}/${TESTS_RUN} passed, ${TESTS_FAILED} failed"
@@ -1,9 +1,3 @@
/**
* Smoke test for OpenClaw claude-mem plugin registration.
* Validates the plugin structure works independently of the full OpenClaw runtime.
*
* Run: node test-sse-consumer.js
*/
import claudeMemPlugin from "./dist/index.js";
@@ -49,10 +43,8 @@ const mockApi = {
},
};
// Call the default export with mock API
claudeMemPlugin(mockApi);
// Verify registration
let failures = 0;
if (!registeredService) {