Files
claude-mem/openclaw/test-install.sh
T
Alex Newman 9e2973059a UX redesign: installer + provider rename + /learn-codebase + welcome card + SessionStart hint (#2255)
* feat(ux): claude-mem UX improvements with installer enhancements

Squashed PR #2156 commits for clean rebase onto main:
- feat(installer): add provider selection, model prompt, worker auto-start
- refactor: rename *Agent provider classes to *Provider
- feat: add /learn-codebase skill and viewer welcome card
- feat(worker): inject welcome hint when project has zero observations
- fix(pr-2156): address greptile review comments
- fix(pr-2156): address coderabbit review comments
- fix(pr-2156): persist CLAUDE_MEM_PROVIDER for non-claude in non-TTY mode
- fix(pr-2156): file-backed settings reads in installer + env-first SKILL doc

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* build: rebuild plugin artifacts after rebase onto v12.4.7

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(skills): strip claude-mem internals from learn-codebase

The learn-codebase skill, install next-step copy, WelcomeCard, and
welcome-hint previously walked the primary agent through worker endpoints
and synthetic observation payloads. The PostToolUse hook already captures
every Read/Edit the agent makes — the agent should have no awareness that
the memory layer exists. Collapse the skill to one instruction ("read every
source file in full") and rephrase touchpoints to describe only what the
user observes (Claude reading files), not what happens behind the scenes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): preflight version mismatch + settings-aware port resolution

Two related fixes for build-and-sync's worker restart step:

1. Read CLAUDE_MEM_WORKER_PORT from ~/.claude-mem/settings.json the same
   way the worker does, instead of computing the default port from the
   uid alone. Previously, users with a custom port saw a misleading
   "Worker not running" message because the restart POST hit the wrong
   port and got ECONNREFUSED.

2. Add a preflight check that aborts the sync when the running worker's
   reported version does not match the version we are about to build.
   Claude Code's plugin loader pins the worker to a specific cache
   version per session, so syncing into a newer cache directory has no
   effect until the user runs `claude plugin update thedotmack/claude-mem`
   to bump the pin. The preflight surfaces this explicitly with the exact
   command to run; --force bypasses it for intentional cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(learn-codebase): note sed for partial reads of large files

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide

Removed prose comments from all tracked source. Preserved directives
(@ts-ignore, eslint-disable, biome-ignore, prettier-ignore, triple-slash
references, webpack magic, shebangs). Deleted two tests that asserted
on comment text rather than runtime behavior.

Net: 401 files, -14,587 / +389 lines, -10.4% bytes.

Verified: typecheck passes, build passes, test count unchanged from
baseline (22 pre-existing fails, all unrelated).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(installer): move runtime setup into npx, eliminate hook dead air

Smart-install ran 3 times during a fresh install — the worst run was silent,
fired by Claude Code's Setup hook after `claude plugin install`, producing
~30s of dead air that looked like the plugin was hung.

This change makes `npx claude-mem install` the single place heavy work
happens, with a visible spinner. Hooks become runtime-only.

- New `src/npx-cli/install/setup-runtime.ts` module: ensureBun, ensureUv,
  installPluginDependencies, read/writeInstallMarker, isInstallCurrent.
  Marker schema preserved exactly ({version, bun, uv, installedAt}) so
  ContextBuilder and BranchManager readers keep working.
- `npx claude-mem install`: ungated copy/register/enable for every IDE,
  inserts a "Setting up runtime" task with honest "first install can take
  ~30s" spinner. The claude-code shell-out to `claude plugin install` is
  removed — npx already populated everything Claude reads.
- New `npx claude-mem repair` command for post-`claude plugin update`
  recovery, force-reinstalls runtime.
- Setup hook now runs `plugin/scripts/version-check.js` (29ms wall) instead
  of smart-install. Mismatch prints "run: npx claude-mem repair" on stderr.
  Always exits 0 (non-blocking, per CLAUDE.md exit-code strategy).
- SessionStart loses the smart-install entry; 2 hooks remain (worker start,
  context fetch).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(installer): delete smart-install sources, retarget tests

- Delete scripts/smart-install.js + plugin/scripts/smart-install.js (both
  are source files kept in sync manually; both must go).
- Delete tests/smart-install.test.ts (covered surface is gone).
- tests/plugin-scripts-line-endings: drop smart-install.js entry.
- tests/infrastructure/plugin-distribution: retarget two assertions at
  version-check.js (the new Setup hook script).
- New tests/setup-runtime.test.ts: 9 tests covering marker read/write,
  isInstallCurrent semantics. Marker schema invariant verified.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs(installer): describe npx-driven setup + version-check Setup hook

Sweep public docs and architecture notes to reflect the new flow:
npx installer does Bun/uv setup with a visible spinner; Setup hook runs
sub-100ms version-check.js; users hit `npx claude-mem repair` after a
`claude plugin update`.

- docs/architecture-overview.md: hook lifecycle table + npx flow paragraph
- docs/public/configuration.mdx: tree + hook config example
- docs/public/development.mdx: build output line
- docs/public/hooks-architecture.mdx: full rewrite of pre-hook section,
  timing table, performance table
- docs/public/architecture/{overview,hooks,worker-service}.mdx: tree
  comments, JSON config example, Bun requirement section

docs/reports/* untouched (historical incident reports).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): mergeSettings writes via USER_SETTINGS_PATH

Greptile P1 (#2156): `settingsFilePath()` only resolved
`process.env.CLAUDE_MEM_DATA_DIR`, while `getSetting()` reads via
`USER_SETTINGS_PATH` which `resolveDataDir()` populates from BOTH the env
var AND a `CLAUDE_MEM_DATA_DIR` entry persisted in
`~/.claude-mem/settings.json`. Result: a user with the data dir saved in
settings.json but not exported in their shell would have provider/model
settings silently written to `~/.claude-mem/settings.json` while
`getSetting()` read from `/custom/path/settings.json` — read/write split.

Drop `settingsFilePath()` and the now-unused `homedir` import; reuse the
already-imported `USER_SETTINGS_PATH` constant.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(cli): parse --provider, --model, --no-auto-start install flags

Greptile P1 (#2156): InstallOptions has fields `provider`, `model`,
`noAutoStart`, but the install case in the npx-cli switch only parsed
`--ide`. The other three flags were silently dropped — `npx claude-mem
install --provider gemini` was a no-op.

Extract a `parseInstallOptions(argv)` helper, share it between the bare
`npx claude-mem` and `npx claude-mem install` paths, and validate
`--provider` against the allowed set. Update help text accordingly.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): pipe runtime-setup output, always show IDE multiselect

Two issues caught in a docker test of the installer:

1. The bun.sh installer, uv installer, and `bun install` were using
   stdio: 'inherit', dumping their stdout/stderr through clack's spinner
   region — visible as raw "downloading uv 0.11.8…" / "Checked 58
   installs across 38 packages…" text streaming under the spinner. Switch
   to stdio: 'pipe' and surface captured stderr only on failure (via a
   shared describeExecError() helper that includes stdout when stderr is
   empty). Spinner stays clean on the happy path.

2. promptForIDESelection() silently picked claude-code when no IDEs were
   detected, never showing the user the multiselect. On a fresh machine
   with no IDEs present yet (e.g. our docker test container), the user
   never got to choose. Now: always show the full IDE list when
   interactive; mark detected ones with [detected] hints and pre-select
   them; show a warn line if zero are detected explaining they should pick
   what they plan to use. Non-TTY callers still get the silent
   claude-code default at the call site (unchanged).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): skip marketplace work for claude-code-only, offer to install Claude Code

Two related UX fixes from a docker test:

**Delay between "Saved Claude model=…" and "Plugin files copied OK"**

After dropping the needsManualInstall gate, every install was unconditionally
running `copyPluginToMarketplace` (which copied the entire root node_modules
tree — thousands of files, dozens of seconds) and `runNpmInstallInMarketplace`
(npm install --production) even when only claude-code was selected. Neither
is needed for claude-code: that path uses the plugin cache dir + the
installed_plugins.json + enabledPlugins flag, all of which we already write.

- Drop `node_modules` from `copyPluginToMarketplace`'s allowed-entries list;
  the dependency-install task populates it on the destination side anyway.
- Re-introduce `needsMarketplace = selectedIDEs.some(id => id !== 'claude-code')`
  scoped *only* to `copyPluginToMarketplace`, `runNpmInstallInMarketplace`,
  and the pre-install `shutdownWorkerAndWait` (also pointless for claude-code-
  only flows since we're not overwriting the worker's running cache dir
  source). All other tasks (cache copy, register, enable, runtime setup) stay
  unconditional.

**Claude Code missing → silent install of an IDE that isn't there**

When the user picked claude-code on a machine without it (e.g. a fresh
container), the install completed but `claude` was unavailable and the only
hint was a generic warn line. Replace with an explicit pre-flight prompt:

  Claude Code is not installed. Claude-mem works best in Claude Code, but
  also works with the IDEs below.
  ? Install Claude Code now?
    ◆ Yes — install Claude Code (recommended)
    ◯ No — pick another IDE below
    ◯ Cancel installation

If the user picks "Yes", run `curl -fsSL https://claude.ai/install.sh | bash`
(or the PowerShell equivalent on Windows), then re-detect IDEs and proceed
with claude-code pre-selected. If the install fails or the user picks "No",
the multiselect still appears with claude-code visible (just unmarked
[detected]), so they can opt in or pick another IDE.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): detect Claude Code via `claude` CLI, not ~/.claude dir

The directory `~/.claude` can exist (e.g. mounted in Docker, or created
by tooling) without Claude Code actually being installed. Detect the
`claude` command in PATH instead so the installer correctly offers to
install Claude Code when missing.

* docs(learn-codebase): add reviewer note explaining the cost tradeoff

The skill intentionally reads every file in full to build a cognitive
cache that pays off across the rest of the project. Add a brief note
so reviewers (human or bot) understand the tradeoff before flagging
the unbounded read as a cost issue.

* fix: address Greptile P1 feedback on welcome hint and learn-codebase

- SearchRoutes: skip welcome hint when caller passes ?full=true so
  explicit full-context requests aren't intercepted by the hint.
- learn-codebase: replace `sed` instruction with the Read tool's
  offset/limit parameters, since Bash is gated in Claude Code by
  default.

* feat(install): ASCII-animated logo splash on interactive install

Plays a ~1s bloom animation of the claude-mem sunburst logomark when
the installer starts in an interactive terminal — geometrically rendered
via 12 ray curves around a center disc, in the brand orange. The
wordmark and tagline type on alongside the final frame.

Auto-skipped on non-TTY, in CI, when NO_COLOR or CLAUDE_MEM_NO_BANNER
is set, or when the terminal is too narrow.

Inspired by ghostty +boo.

* feat(banner): replace rotation frames with angular-sector bloom generator

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): replace rotation frames with angular-sector bloom generator

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): three-act choreography renderer with radial gradient and diff redraw

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): update preview script to support small/medium/hero tier selection

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(docker): add COLORTERM=truecolor to test-installer sandbox

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): auto-apply PATH for Claude Code with spinner UX

The Claude Code install.sh prints a Setup notes block telling users to
manually edit "your shell config file" to add ~/.local/bin to PATH —
which left fresh installs unable to launch claude from the command line.

After a successful install, detect ~/.local/bin/claude on disk and, if
the dir is missing from PATH, append the right export line to .zshrc /
.bash_profile / .bashrc / fish config (idempotent, marked with a
comment). Also updates process.env.PATH for the current install run.

Wraps the curl|bash install in a clack spinner (interactive only) so the
~4 minute native-build download doesn't look frozen — output is captured
silently and dumped on failure for debuggability. Non-interactive mode
keeps inherited stdio for CI logs.

Verified end-to-end in the test-installer docker sandbox: spinner
animates, .bashrc gets the export, fresh login shell resolves claude.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(banner): video-frame ASCII renderer with three-act choreography

Generator switched from a single Jimp-rendered logo to pre-extracted
video frames concatenated with \x01 separators and gzip-deflated, ported
from ghostty's boo wire format. Renderer rewritten around three acts
(ignite → stagger bloom → text reveal + breathe) with adaptive sizing,
radial gradient, and diff-based redraw.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(onboarding): unify install / SessionStart / viewer around one first-success moment

Three surfaces now point at the same north-star moment — open the viewer, do
anything in Claude Code, watch an observation appear within seconds — with the
same verbatim timing and privacy lines, and a single canonical "how it works"
explainer instead of three diverging copies.

- Canonical explainer at src/services/worker/onboarding-explainer.md served via
  GET /api/onboarding/explainer; mirrored into plugin/skills/how-it-works/SKILL.md
- SessionStart welcome hint rewritten as third-person status (no imperatives
  Claude tries to execute), pinned with a default-value regression test
- Post-install Next Steps reframed as "two paths": passive default + optional
  /learn-codebase front-load; drops /mem-search and /knowledge-agent from this
  surface; adds verbatim timing + privacy lines and /how-it-works link
- /api/stats response gains firstObservationAt for the viewer stat row
- Viewer WelcomeCard branches on observationCount === 0: empty state shows live
  worker-connection dot + "waiting for activity"; has-data state shows
  observations · projects · since [date] and two example prompts. v2 dismiss key
- jimp added to package.json to fix pre-existing banner-frame build break

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(banner): play unconditionally; only honor CLAUDE_MEM_NO_BANNER

The 128-col / TTY / CI / NO_COLOR gates silently swallowed the banner in
narrower terminals, CI logs, and any non-TTY pipe — including Docker runs
where -it should preserve the experience but column width was the wrong
gate. Remove the implicit gates; keep the explicit opt-out only.

If a frame wraps in a narrow terminal, that's better than the banner
not playing at all.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* revert(banner): restore 15:33 gating logic per user request

Reverts eb6fc157. Restores isBannerEnabled to the state at commit
8e448015 (2026-04-30 15:33): TTY check, !CI, !NO_COLOR, !CLAUDE_MEM_NO_BANNER,
and cols >= BANNER.width.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(install): wrap remaining slow steps with spinners

Each IDE installer (Cursor, Gemini CLI, OpenCode, Windsurf, OpenClaw,
Codex CLI, MCP integrations) now runs inside a clack task spinner with
per-step progress messages instead of silent dynamic-import + cpSync.
Pre-overwrite worker shutdown (up to 10s) and the post-install health
probe (up to 3s) also get spinners.

Internal console.log/error/warn from each IDE installer is buffered
during the spinner; if the install fails, captured output is replayed
afterward via log.warn so users can see what broke.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(review): observation count + IDE pre-selection regressions

WelcomeCard's "no observations yet" empty state was triggered when a
project filter narrowed the feed to zero rows, even with thousands of
observations elsewhere. Source the count from global stats.database
to match firstObservationAt's scope.

Restore initialValues: [] in the IDE multiselect — pre-selecting every
detected IDE was the exact regression #2106 was filed for.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): trichotomy worker state + cache fallback for script path

ensureWorkerStarted now returns 'ready' | 'warming' | 'dead' instead of
boolean. The spawned-but-still-warming case (common in Docker cold
starts and slow first-time inits) was being misreported as 'did not
start', which contradicted the next-steps panel saying 'still starting
up'. Install task message and Next Steps headline now agree on the
actual state.

Also fixes the actual root cause of 'Worker did not start' on
claude-code-only installs: the worker script path was hardcoded to the
marketplace dir, which is left empty when no non-claude-code IDE is
selected. Now falls back to pluginCacheDirectory(version) when the
marketplace copy isn't present.

Verified end-to-end in docker/claude-mem with --ide claude-code,
--ide cursor, and a fresh container — install task and headline
agree on 'Worker ready at http://localhost:<port>' in all cases.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* docs: align CLAUDE.md and public docs with current code

Sweep across CLAUDE.md and 10 high-traffic docs/public/ MDX files to
remove point-in-time references and align with the actual current
shape of the codebase. Highlights:

- Hardcoded port 37777 → per-user formula (37700 + uid % 100) on the
  front-door pages (introduction, installation, configuration,
  architecture/overview, architecture/worker-service, troubleshooting,
  hooks-architecture, platform-integration).
- Default model 'sonnet' → 'claude-haiku-4-5-20251001' (matches
  SettingsDefaultsManager).
- Node 18 → 20 (matches package.json engines).
- Lifecycle hook count corrected (5 events).
- Removed the nonexistent 'Smart Install' component and pre-built
  directory tree referencing files that no longer exist
  (context-hook.ts, save-hook.ts, cleanup-hook.ts, etc.); replaced
  with the real worker dispatcher shape.
- Removed CLAUDE.md '#2101' issue tag (kept the design rationale).
- Replaced obsolete hooks.json example with a description of the real
  bun-runner.js / worker-service.cjs hook event shape.

Lower-traffic doc pages still hardcode 37777 — left for a separate
global pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): land strip-comments around real parsers (postcss, remark, parse5)

Each language gets a real parser to locate comments, then we splice ranges
out of the original source. The library never serializes — that's how
remark-stringify produced 243 reformat-noise diffs in the first attempt
versus the 21 real strip targets here.

  JS/TS/JSX  -> ts.createSourceFile + getLeadingCommentRanges
  CSS/SCSS   -> postcss.parse + walkComments + node.source offsets
  MD/MDX     -> remark-parse (+ remark-mdx) + AST html / mdx-expression nodes
  HTML       -> parse5 with sourceCodeLocationInfo
  shell/py   -> kept hand-rolled hash stripper (no library worth the dep)

Preserves: shebangs, @ts-* directives, eslint-disable, biome-ignore,
prettier-ignore, triple-slash refs, webpack magic, /*! license keep,
@strip-comments-keep file marker. JS/TS handler runs a parse-roundtrip
check and refuses to write if syntax errors increased (catches the
worker-utils.ts breakage class from the 2026-04-29 attempt).

npm scripts:
  strip-comments         (apply)
  strip-comments:check   (CI-style, exits non-zero if changes needed)
  strip-comments:dry-run (list, no writes)

Verified --check on this repo: 21 changes, -4.0% bytes, no parse-error
regressions, no reformat-suspect false positives.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor: strip comments codebase-wide via parser-backed tool

21 files changed, -17,550 bytes (-4.0%) of narrative comments removed
across .ts / .tsx / .js / .mjs and the .gitignore. JS/TS comments stripped
via ts.createSourceFile + getLeadingCommentRanges — same canonical lexer,
same behavior as the 2026-04-29 strip, no reformat noise.

Preexisting baseline (unchanged):
  typecheck: 16 errors at HEAD, 16 errors after strip (line numbers shift,
             no new error classes — verified via diff of sorted error lists)
  build:     fails at HEAD with CrushHooksInstaller.js unresolved import
             (preexisting, unrelated to this strip)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(install): drop Crush integration references after extract

The Crush integration was extracted to its own branch on May 1, but the
import at install.ts:280 (and the case block + ide-detection entry +
McpIntegrations config + npx-cli help text) still referenced the now-
removed CrushHooksInstaller.js, breaking the build.

Removes:
- case 'crush' block in install.ts
- crush entry in ide-detection.ts
- CRUSH_CONFIG and registration in McpIntegrations.ts
- 'crush' from the IDE Identifiers help line in index.ts

Rebuilds worker-service.cjs to match.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(banner): mark generated banner-frames.ts with @strip-comments-keep

Without this, every build/strip cycle ping-pongs five lines of doc
comments in and out of the auto-generated output. The keep-marker tells
strip-comments.ts to skip the file entirely.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): drop banner-frame regen from build script

generate-banner-frames.mjs requires PNG frames in /tmp/cmem-banner-frames
that only exist after the maintainer runs ffmpeg locally on the source
video. CI has neither the video nor the frames, so the build broke on
Windows. The output (src/npx-cli/banner-frames.ts) is committed, so the
regen is a one-shot dev step — not a build step. Run the script directly
when the video changes.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): unstick the spinner — kill claim-self-lock, wake on fail, auto-broadcast

Three surgical changes that cure the stuck-spinner bug at the source.

Phase 1.1 (L9): claimNextMessage no longer self-excludes its own worker_pid.
A single UPDATE-RETURNING grabs the oldest pending row by id. Removes the
LiveWorkerPidsProvider plumbing that was never injected — Supervisor enforces
single-worker via PID file, so the multi-worker SQL was defending against a
configuration the project does not support.

Phase 1.2 (L19): SessionManager.markMessageFailed wraps PendingMessageStore.markFailed
and emits 'message' on the per-session EventEmitter. The iterator's waitForMessage
now wakes immediately on re-pend instead of parking for 3 minutes. ResponseProcessor
and SessionRoutes routed through the new wrapper.

Phase 1.3 (L24): PendingMessageStore takes an optional onMutate callback fired
from every mutator (enqueue, claimNextMessage, confirmProcessed, markFailed,
transitionMessagesTo, clearFailedOlderThan). SessionManager wires it; WorkerService
passes broadcastProcessingStatus. Ten manual broadcast calls deleted across
SessionCleanupHelper, SessionEventBroadcaster, SessionRoutes, DataRoutes, and
worker-service. Caller discipline becomes structurally impossible to forget.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): delete dead code — legacy routes, processPendingQueues, decorative guards

Pure deletions. Phase 2 of kill-the-asshole-gates.

- Legacy /sessions/:sessionDbId/* routes (handleSessionInit, handleObservations,
  handleSummarize, handleSessionStatus, handleSessionDelete, handleSessionComplete)
  bypassed all five ingest gates and were a parallel write path. Folded the
  initializeSession + broadcastNewPrompt + syncUserPrompt + ensureGeneratorRunning
  + broadcastSessionStarted work into the canonical /api/sessions/init handler so
  the hook makes one round trip instead of two.
- processPendingQueues (~104 lines, zero callers) — replaced in Phase 6 by a
  one-statement startup sweep.
- spawnInProgress Map and crashRecoveryScheduled Set — decorative dedupe over
  generatorPromise and stillExists checks that already provide the real safety.
- STALE_GENERATOR_THRESHOLD_MS — pre-empted live generators and raced with the
  finally block; the 3min idle timeout already kills zombies.
- MAX_SESSION_WALL_CLOCK_MS — ran a SELECT on every observation to enforce 24h.
  Runaway-spend protection lives in the API key, not in claude-mem.
- Missing-id 400 in shared.ts ingestObservation — Zod already enforces min(1)
  on contentSessionId and toolName at the route schema.
- SessionCompletionHandler import + completionHandler field on SessionRoutes
  (orphaned after handler deletions).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): SQL-backed getTotalQueueDepth — single source of truth

Was: iterate this.sessions.values() and sum getPendingCount per session.
Now: SELECT COUNT(*) FROM pending_messages WHERE status IN ('pending','processing').

The in-memory sessions Map drifted from the DB rows whenever a generator exited
without confirm/fail, leading to false-positive isProcessing in the UI. Phase 1.3's
auto-broadcast fires on every mutation, but it broadcast a stale Map count.
Reading from the DB makes the UI's spinner state match what the queue actually holds.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): typed abortReason replaces wasAborted boolean

Was: a boolean wasAborted that lumped every abort together. The finally block
branched on !wasAborted, so any abort skipped restart — including idle aborts
with pending work, which is exactly the case where we DO want to restart.

Now: ActiveSession.abortReason is a typed enum 'idle' | 'shutdown' | 'overflow'
| 'restart-guard'. The finally block consumes the reason and only skips restart
for 'shutdown' and 'restart-guard'. Idle and overflow aborts fall through, so
if pending work exists they trigger restart correctly.

Dropped 'stale' and 'wall-clock' from the union — Phase 2 deleted those paths.
Natural-completion abort (post-success) intentionally has no reason; it's not
gating restart logic.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): unify the two generator-exit finally blocks

Was: worker-service.ts:startSessionProcessor and SessionRoutes:ensureGeneratorRunning
each had their own ~70-line finally block with divergent restart-guard handling.
The worker-service path called terminateSession on RestartGuard trip and orphaned
pending rows (the L16 bug); the SessionRoutes path drained them. Two places to
update when rules changed.

Now: handleGeneratorExit in src/services/worker/session/GeneratorExitHandler.ts
owns the contract:
  1. Always kill the SDK subprocess if alive.
  2. Always drain processingMessageIds via sessionManager.markMessageFailed
     (which wakes the iterator — Phase 1.2).
  3. shutdown / restart-guard reasons: drain pending rows via
     transitionMessagesTo('failed'), finalize, remove from Map. Fixes L16.
  4. pendingCount=0: finalize normally and remove from Map.
  5. pendingCount>0: backoff respawn via per-session respawnTimer (no global Set;
     Phase 2.4 deleted that). RestartGuard trip drains to 'abandoned'.

Both finally blocks are now ~10-line wrappers that translate local state into the
canonical abortReason and delegate. Restored completionHandler injection into
SessionRoutes (was dropped in Phase 2 cleanup; needed by the unified helper for
finalizeSession).

Behavior change: SessionRoutes' previous "keep idle session in memory" was
deliberately replaced by the plan's "remove from Map on natural completion" —
next observation reinitializes via getMessageIterator → initializeSession.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(worker): startup orphan sweep — reset 'processing' rows at boot

When the worker dies (crash, kill, restart), any pending_messages rows it left
in 'processing' state are by definition orphans (the only worker is dead).
Single SQL UPDATE at boot resets them to 'pending' so the iterator can claim
them again. Replaces the deleted processPendingQueues function (Phase 2.2).

Runs in initializeBackground after dbManager.initialize() and before the
initializationComplete middleware releases blocked HTTP requests, so no
in-flight request can race the sweep. NOT on a periodic timer — after boot,
every 'processing' row has a live consumer and a periodic sweep would race.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): simplify enqueue catch, replace memorySessionId throw with re-pend

7.1: queueObservation's catch was logging two ERROR-level messages and rethrowing.
The rethrow is correct (FK violations / disk full / schema drift should crash
loudly), but the verbose ERROR logging pretended the error was recoverable.
Reduced to one INFO line + rethrow.

7.2: ResponseProcessor's memorySessionId guard was throwing if the SDK hadn't
included session_id on the first user-yield, terminal-failing the entire batch.
Now warns and re-pends in-flight messages via sessionManager.markMessageFailed
(which wakes the iterator — Phase 1.2). The next iteration tries again with
memorySessionId hopefully captured.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(sync): mirror builds to installed-version cache for hot reload

When package.json bumps past Claude Code's installed pin, sync-marketplace
wrote new code to cache/<buildVersion>/ but the worker loaded from
cache/<installedVersion>/, so worker:restart reloaded the same old code.

Replace the exit-on-mismatch preflight with a mirror step: when versions
differ, also rsync plugin/ into cache/<installedVersion>/ so worker:restart
hot-reloads new code without a Claude Code session restart. The
build-version cache still gets written for the eventual
`claude plugin update`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: delete dead barrel files and orphan utilities

- src/sdk/index.ts (re-exports parser+prompts; nothing imported the barrel)
- src/services/Context.ts (re-exports ./context/index.js; no importers)
- src/services/integrations/index.ts (no importers)
- src/services/worker/Search.ts (3-line barrel of ./search/index.js)
- src/services/infrastructure/index.ts: drop CleanupV12_4_3 re-export
- src/utils/error-messages.ts (getWorkerRestartInstructions never imported)
- src/types/transcript.ts (170 LoC of types, zero importers)
- src/npx-cli/_preview.ts (banner dev preview, no script wires it)

Build + tests still pass; observations still flowing.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(parser): drop unused detectLanguage

Only the user-grammar-aware variant detectLanguageWithUserGrammars()
is actually called.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(types): drop unused SdkSessionRecord + ObservationWithContext

Both interfaces in src/types/database.ts had zero importers anywhere
in src or tests.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): drop unused getDetectedIDEs + claudeMemDataDirectory

getDetectedIDEs has no callers — install.ts uses detectInstalledIDEs
directly. claudeMemDataDirectory has no callers either.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ProcessManager): drop dead orphan-reaper + signal-handler helpers

Each had zero callers in src/ or tests/:
  - cleanupOrphanedProcesses + enumerateOrphanedProcesses
  - ORPHAN_PROCESS_PATTERNS + ORPHAN_MAX_AGE_MINUTES
  - forceKillProcess
  - waitForProcessesExit
  - createSignalHandler
  - resetWorkerRuntimePathCache

The orphan reaper was retired in PATHFINDER Plan 02 ("OS process groups
replace hand-rolled reapers", commit 94d592f2) — these were the leftover
pieces. shutdown.ts uses the supervisor's own kill-pgid path instead.

parseElapsedTime kept (covered by tests/infrastructure/process-manager.test.ts).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete 11 unreferenced DX/forensic scripts

None of these are referenced by package.json npm scripts or docs/.
All last touched on Apr 29 only as part of the comment-stripping
pass — the feature code itself is older and orphaned:

  analyze-transformations-smart.js
  debug-transcript-structure.ts
  dump-transcript-readable.ts
  endless-mode-token-calculator.js
  extract-prompts-to-yaml.cjs
  extract-rich-context-examples.ts
  find-silent-failures.sh
  fix-all-timestamps.ts
  format-transcript-context.ts
  test-transcript-parser.ts
  transcript-to-markdown.ts

These are standalone tools — runtime behavior unchanged.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(scripts): delete unused extraction/ and types/ subdirs

- scripts/extraction/{extract-all-xml.py, filter-actual-xml.py, README.md}
  point at ~/Scripts/claude-mem/ — the user's pre-relocation path that no
  longer exists. Zero references in package.json, src/, or tests/.
- scripts/types/export.ts duplicates ObservationRecord etc. and has no
  importers (CodexCliInstaller imports transcripts/types, not this).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(BranchManager): drop dead getInstalledPluginPath

OpenCodeInstaller has its own (used) getInstalledPluginPath; the
BranchManager copy never had any external callers.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(ChromaSyncState): unexport DocKind (used internally only)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* test(gemini): drop stale earliestPendingTimestamp / processingMessageIds

Both fields were removed from ActiveSession in earlier queue-engine
cleanup. Tests had been silently keeping them because the mock sessions
use 'as any' to bypass strict typing, so the dead fields rode along
without complaint.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 3 unused module-level constants

- src/npx-cli/banner.ts: CURSOR_HOME, CLEAR_DOWN (banner uses
  CLEAR_SCREEN which combines clear-down + cursor-home into a single
  CSI sequence; the standalone constants were leftovers).
- src/services/worker/BranchManager.ts: DEFAULT_SHELL_TIMEOUT_MS
  (BranchManager only uses GIT_COMMAND_TIMEOUT_MS / NPM_INSTALL_TIMEOUT_MS).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(opencode-plugin): drop dead workerPost helper

Only the fire-and-forget variant (workerPostFireAndForget) is actually
called. workerPost was the await-result version with no remaining caller.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore: drop 8 truly-unused interface fields

Verified each by grepping for `.field`, `"field"`, `'field'`, and
`field:` patterns across src/ + tests/ + plugin/scripts. Where the
only remaining usage was the assignment site, removed the assignments too.

- GitHubStarsData: watchers_count, forks_count (only stargazers_count read)
- TableColumnInfo: dflt_value (PRAGMA returns it but no caller reads it)
- IndexInfo: seq (PRAGMA returns it but no caller reads it)
- ObservationRecord: source_files (legacy field, no readers)
- HookResult.hookSpecificOutput: permissionDecisionReason
- WatchTarget: rescanIntervalMs (set in config, never read)
- ShutdownResult: confirmedStopped (write-only — assigned but no
  reader; updated all 3 return sites to drop it)
- ModePrompts: language_instruction (multilingual support never wired)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(npx-cli): reuse InstallOptions type instead of inline duplicate

parseInstallOptions had its return type written out inline as an
anonymous duplicate of InstallOptions. Use the canonical type
(import type — zero bundle cost).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* chore(integrations): drop unused Platform type alias

The detectPlatform() function that returned this type was deleted earlier
in the branch (along with getScriptExtension that consumed it). The type
itself outlived its consumer; only string literals "Platform:" survive in
console.log diagnostics, which don't reference the alias.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): broadcast processing_status when summarize is queued

broadcastSummarizeQueued was an empty no-op even though
handleSummarizeByClaudeId calls it after enqueueing. The PendingMessageStore
onMutate callback already fires broadcastProcessingStatus on enqueue, but
calling it explicitly from broadcastSummarizeQueued ensures the spinner
ticks on the moment a summary is requested even if the onMutate chain has
any timing race.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): keep spinner on while summary generates

ClaudeProvider's SDK can pull multiple synthetic prompts (e.g.
observation + summarize) before producing responses. Each pull pushed
an ID to session.processingMessageIds. When the SDK's first
observation response came back, ResponseProcessor.confirmProcessed
deleted ALL pending message rows — including the still-in-flight
summary — so getTotalQueueDepth dropped to 0 and the spinner turned
off, even though the summary took another ~22s to actually generate.

Tag each in-flight message with its type ({id, type}) so the response
processor can pop only the FIFO message of the matching type
(observation vs summarize). The summary row stays in 'processing'
until its own response arrives, keeping the spinner lit through the
entire summary window.

Also updates Gemini/OpenRouter providers and GeneratorExitHandler for
the new shape.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(worker): clear summary from queue on any SDK response

Switch ResponseProcessor from type-aware FIFO matching to strict FIFO
popping (each SDK response → 1 in-flight message consumed). This way
the summary always clears when the SDK responds, even when the
response is unparseable or the summary doesn't actually generate
content — preventing stuck spinner / queue-depth-stuck-at-1.

Spinner behavior is preserved: messages enqueued after the summary
keep the queue depth elevated, and only when the SDK has responded
to every prompt does the queue drain to zero.

Also: when the consumed message is a 'summarize' and parsing fails,
treat it as best-effort and confirmProcessed (no retry) — summaries
that can't be parsed shouldn't keep retrying.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): redesign welcome card and remove source filters

The first-start welcome card now explains the three feed card types
(observation/summary/prompt) with color-coded badges, points users at
the gear icon for settings and the project dropdown for filtering, and
plugs /mem-search for recall — replacing the old two-line "ask:" prompts.

Source filter tabs (Claude/Codex/etc.) are removed from the header.
Filtering by AI provider was nonsense from a user POV; the project
dropdown is the only header filter now. Source tracking is also
stripped from useSSE, usePagination, App state, and CSS.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): keep welcome card in feed column, swap rows for 3 squares

Two visible problems in the previous design: the card stretched
edge-to-edge while feed cards sit in a centered 650px column, and
the body was a stack of long horizontal rows that scanned line-by-line.

Both fixed: Feed now accepts a pinnedTop slot so the welcome card
renders inside the same .feed-content column as observation cards.
Body is now a 3-column grid of square feature blocks — Live feed,
Tune it, Recall it — each with a custom inline SVG illustration
(stacked cards with color-coded stripes, gear+sliders, magnifier
over cards). Old text-row sections (welcome-card-types,
welcome-card-tips, welcome-card-section, welcome-card-tip-icon)
are removed. Squares stack to one column under 600px.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* feat(viewer): convert welcome card to glassy modal with stylized logo

Card now opens as a centered modal with a frosted/glass backdrop
(blur + saturate) so it doubles as a proper help dialog when reopened
from the header's question-mark button. Removed the observation count,
project count, and "since" date — those don't make sense for a
first-launch surface and felt out of place in a help context.

Header art swapped from the small webp logomark to the new
high-resolution sun/sunburst PNG (claude-mem-logo-stylized.png),
shipped as a checked-in asset in src/ui and plugin/ui.

Bigger throughout: 28px h2, 16px tagline, 88px illustrations,
26px feature padding, 1:1 aspect-ratio squares. Backdrop click and
Esc both close. Mobile collapses the grid to one column and drops
the aspect-ratio constraint.

Reverted the unused pinnedTop slot on Feed.tsx since the welcome
card is now a true overlay rather than an in-feed pinned card.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): make welcome modal actually glassy

Previous version had a 55%-opacity black backdrop that almost fully
blocked the underlying UI — the "glass" was just a dark plate.

Now the backdrop is fully transparent (no darkening at all), the
panel itself drops to 55% bg-card opacity with its existing
backdrop-filter blur(28px) saturate(170%), and the feature squares
drop to 35% bg-tertiary so they layer as glass-on-glass over the
already-blurred panel. The header and feed below now read clearly
through the modal's frosted blur.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(viewer): bulletproof square features via padding-bottom + clamp() fluid type

Squares were rendering taller than wide because aspect-ratio is treated
as a minimum — content can push the box past 1:1. Switched to the
classic padding-bottom: 100% trick: percentage padding resolves against
the parent's width, so the box is ALWAYS W × W regardless of content.
Inner content sits in an absolutely-positioned flex column that can't
push the shell taller.

Whole modal is now desktop-first and fluid via clamp() — no media-query
stair-steps for type, padding, gaps, border-radius, illustration size,
or modal width. Single mobile breakpoint at <600px collapses the grid
to one column and reverts the padding-bottom trick so each feature can
grow to natural content height.

Tightened the three feature descriptions so they fit comfortably inside
the square at the desktop size.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* style(viewer): 15% black overlay + heavier modal shadow for elevation

Backdrop goes from transparent to rgba(0,0,0,0.15) — just enough
darkening to push the modal visually forward without burying the
underlying UI. Modal shadow stacked: 40px/120px ambient + 16px/48px
contact, both deeper, plus the existing inset 1px highlight.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(build): clear pending_messages queue on build-and-sync

Rewrites scripts/clear-failed-queue.ts to talk directly to SQLite via
bun:sqlite — the previous HTTP endpoints (/api/pending-queue/*) were
removed during the queue engine rewrite, so the script was orphaned.
Wires `npm run queue:clear` into `build-and-sync` so each rebuild
starts with a clean queue.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* refactor(worker): collapse parser to binary valid/invalid + clearPendingForSession model

- Parser: { valid: true, observations, summary } | { valid: false } — drops kind/skipped enum dispatch
- ResponseProcessor: two branches only (parseable → store + clearPendingForSession; else → no-op)
- Drop processingMessageIds + per-message claim/confirm/markFailed lifecycle across 3 providers
- PendingMessageStore: 226 → 140 lines; remove markFailed/transitionMessagesTo/confirmProcessed/clearFailedOlderThan/getAllPending/peekPendingTypes... wait keep peekPendingTypes
- Schema migration v31+v32: drop retry_count, failed_at_epoch, completed_at_epoch, worker_pid columns
- SessionQueueProcessor: delete two 1s recovery sleeps (let iterator end on error)
- Server.ts/SettingsRoutes.ts: replace four magic-number setTimeout exit-flush patterns with flushResponseThen helper
- GeneratorExitHandler: 183 → 117 lines (drain in-flight loop gone)

Net: -181 lines. No more silent data loss via maxRetries=3.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): address review comments batch 1

- install.ts: needsMarketplace true when claude-code selected (P1, was no-op)
- install.ts: throw on invalid --model so CLI exits non-zero
- install.ts: skip worker health checks + adapt next-step copy when --no-auto-start
- install.ts: repair regenerates plugin cache when missing
- index.ts: readFlag rejects missing/flag-shaped values
- index.ts: route flag-first invocations (e.g. `--provider claude`) to install
- banner.ts: fail-open if frame payload decode throws
- SearchRoutes.ts: 5s TTL cache for settings reads on hot hook path (P2)
- detect-error-handling-antipatterns.ts: trailing-brace strip whitespace-tolerant
- investigate-timestamps.ts: compute Dec 2025 epochs at runtime (was Dec 2024)
- regenerate-claude-md.ts: include workingDir in fallback walker so root is covered
- sync-marketplace.cjs: parseWorkerPort validates 1..65535 before http.request
- sync-to-marketplace.sh: resolve SOURCE_DIR from script location, not cwd
- Dockerfile.test-installer: bash --login sources .bashrc via .bash_profile
- docs/configuration.mdx: drop nonexistent .worker.port file refs, use settings.json
- docs/architecture-overview.md: dynamic port + queue model after parser collapse
- docs/architecture/worker-service.mdx: dynamic port example + drop port-file claim
- docs/platform-integration.mdx: WORKER_BASE_URL pattern, drop hardcoded 37777
- install/public/install.sh: Node 20 floor (was 18) to match docs

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): reset claimed messages to pending on early-return paths

ResponseProcessor returns early in two cases:
- parser invalid (unparseable response)
- memorySessionId not yet captured

Both paths previously left the just-claimed message in `status='processing'`,
which counts toward `getPendingCount`. The generator-exit handler then sees
`pendingCount > 0` and respawns the generator, looping until the restart
guard trips and `clearPendingForSession` deletes the message — silent data
loss.

Calling `resetProcessingToPending` on these paths lets the next generator
pass re-claim the message and try again, instead of burning the restart
budget on no-op respawns.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): swebench fallback row + troubleshooting port path

- evals/swebench/run-batch.py: append fallback prediction row when
  orchestrator future raises, preserving "never drop an instance" guarantee
- docs/troubleshooting.mdx: drop nonexistent .worker.port / worker.port file
  references; use settings.json + /api/health for port discovery

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): memoize per-project observation count for welcome-hint hot path

handleContextInject runs on every PostToolUse hook (after every Read/Edit).
The welcome-hint block ran a COUNT(*) on observations for every call once
CLAUDE_MEM_WELCOME_HINT_ENABLED was true. Observation counts are
monotonically increasing — once a project has any observations it always
will — so cache the positive result in a Set and skip the COUNT(*) on
subsequent requests.

Combined with the 5s settings TTL added earlier, the steady-state cost on
the hook hot path drops to a Set lookup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* fix(pr-2255): use clearProcessingForSession on AI-success path

clearPendingForSession deletes ALL rows for the session. On the success
path of processAgentResponse, that's wrong: messages that arrived as
'pending' during the (1-5s) AI response latency get deleted along with
the 'processing' row we just consumed. In a hook burst (three quick
PostToolUse hooks), B and C land while A is in flight; A's success then
nukes B and C — silent data loss.

Add a status-scoped clearProcessingForSession to PendingMessageStore +
SessionManager, and use it in ResponseProcessor's success path. The
unconditional clearPendingForSession remains correct in
GeneratorExitHandler for hard-stop / restart-guard-trip paths.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

* Revert "fix(pr-2255): use clearProcessingForSession on AI-success path"

This reverts commit a08995299c30cbad36bddc3e5bddda7af8604b35.

---------

Co-authored-by: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-02 16:05:56 -07:00

2087 lines
58 KiB
Bash
Executable File

#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
INSTALL_SCRIPT="${SCRIPT_DIR}/install.sh"
TESTS_RUN=0
TESTS_PASSED=0
TESTS_FAILED=0
test_pass() {
TESTS_RUN=$((TESTS_RUN + 1))
TESTS_PASSED=$((TESTS_PASSED + 1))
echo -e "\033[0;32m✓\033[0m $1"
}
test_fail() {
TESTS_RUN=$((TESTS_RUN + 1))
TESTS_FAILED=$((TESTS_FAILED + 1))
echo -e "\033[0;31m✗\033[0m $1"
if [[ -n "${2:-}" ]]; then
echo " Detail: $2"
fi
}
assert_eq() {
local expected="$1" actual="$2" msg="$3"
if [[ "$expected" == "$actual" ]]; then
test_pass "$msg"
else
test_fail "$msg" "expected='${expected}' actual='${actual}'"
fi
}
assert_contains() {
local haystack="$1" needle="$2" msg="$3"
if [[ "$haystack" == *"$needle"* ]]; then
test_pass "$msg"
else
test_fail "$msg" "expected string to contain '${needle}'"
fi
}
assert_file_exists() {
local filepath="$1" msg="$2"
if [[ -f "$filepath" ]]; then
test_pass "$msg"
else
test_fail "$msg" "file not found: ${filepath}"
fi
}
source_install_functions() {
local tmp_source
tmp_source="$(mktemp)"
sed '$ d' "$INSTALL_SCRIPT" > "$tmp_source"
echo 'main() { :; }' >> "$tmp_source"
TERM=dumb source "$tmp_source"
rm -f "$tmp_source"
}
source_install_functions
echo ""
echo "=== detect_platform() ==="
test_detect_platform_returns_valid_string() {
PLATFORM=""
IS_WSL=""
detect_platform >/dev/null 2>&1
case "$PLATFORM" in
macos|linux|windows)
test_pass "detect_platform sets PLATFORM='${PLATFORM}'"
;;
*)
test_fail "detect_platform returned unexpected PLATFORM='${PLATFORM}'" "expected macos, linux, or windows"
;;
esac
}
test_detect_platform_returns_valid_string
test_detect_platform_is_idempotent() {
PLATFORM=""
IS_WSL=""
detect_platform >/dev/null 2>&1
local first_platform="$PLATFORM"
PLATFORM=""
IS_WSL=""
detect_platform >/dev/null 2>&1
local second_platform="$PLATFORM"
assert_eq "$first_platform" "$second_platform" "detect_platform returns consistent results"
}
test_detect_platform_is_idempotent
test_detect_platform_sets_iswsl_empty_on_non_wsl() {
PLATFORM=""
IS_WSL=""
detect_platform >/dev/null 2>&1
if [[ "$PLATFORM" == "linux" ]] && grep -qi microsoft /proc/version 2>/dev/null; then
assert_eq "true" "$IS_WSL" "IS_WSL is 'true' on WSL"
else
assert_eq "" "${IS_WSL:-}" "IS_WSL is empty on non-WSL platform"
fi
}
test_detect_platform_sets_iswsl_empty_on_non_wsl
echo ""
echo "=== check_bun() ==="
test_check_bun_detects_installed_bun() {
if command -v bun &>/dev/null; then
BUN_PATH=""
if check_bun >/dev/null 2>&1; then
test_pass "check_bun succeeds when bun is installed"
else
test_fail "check_bun should succeed when bun is installed"
fi
if [[ -n "$BUN_PATH" ]]; then
test_pass "check_bun sets BUN_PATH='${BUN_PATH}'"
else
test_fail "check_bun should set BUN_PATH when bun is found"
fi
else
test_pass "check_bun test (installed): skipped (bun not installed)"
test_pass "check_bun BUN_PATH test: skipped (bun not installed)"
fi
}
test_check_bun_detects_installed_bun
test_check_bun_fails_when_not_found() {
local fake_home
fake_home="$(mktemp -d)"
local exit_code=0
bash -c '
set -euo pipefail
TERM=dumb
export HOME="'"$fake_home"'"
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
PATH="/nonexistent"
BUN_PATH=""
check_bun
' >/dev/null 2>&1 || exit_code=$?
rm -rf "$fake_home"
if [[ "$exit_code" -ne 0 ]]; then
test_pass "check_bun returns failure when bun is not in PATH"
else
test_fail "check_bun should return failure when bun is not in PATH"
fi
}
test_check_bun_fails_when_not_found
test_find_bun_path_checks_home_bun_bin() {
local fake_home
fake_home="$(mktemp -d)"
local saved_home="$HOME"
HOME="$fake_home"
BUN_PATH=""
mkdir -p "${fake_home}/.bun/bin"
cat > "${fake_home}/.bun/bin/bun" <<'FAKEBUN'
echo "1.2.0"
FAKEBUN
chmod +x "${fake_home}/.bun/bin/bun"
local saved_path="$PATH"
PATH="/nonexistent"
if find_bun_path 2>/dev/null; then
assert_eq "${fake_home}/.bun/bin/bun" "$BUN_PATH" "find_bun_path finds bun in ~/.bun/bin/"
else
test_fail "find_bun_path should find bun in ~/.bun/bin/"
fi
HOME="$saved_home"
PATH="$saved_path"
rm -rf "$fake_home"
}
test_find_bun_path_checks_home_bun_bin
echo ""
echo "=== check_uv() ==="
test_check_uv_detects_installed_uv() {
if command -v uv &>/dev/null; then
UV_PATH=""
if check_uv >/dev/null 2>&1; then
test_pass "check_uv succeeds when uv is installed"
else
test_fail "check_uv should succeed when uv is installed"
fi
if [[ -n "$UV_PATH" ]]; then
test_pass "check_uv sets UV_PATH='${UV_PATH}'"
else
test_fail "check_uv should set UV_PATH when uv is found"
fi
else
test_pass "check_uv test (installed): skipped (uv not installed)"
test_pass "check_uv UV_PATH test: skipped (uv not installed)"
fi
}
test_check_uv_detects_installed_uv
test_check_uv_fails_when_not_found() {
if [[ -x "/usr/local/bin/uv" ]] || [[ -x "/opt/homebrew/bin/uv" ]]; then
test_pass "check_uv not-found test: skipped (uv installed at system path)"
return 0
fi
local fake_home
fake_home="$(mktemp -d)"
local exit_code=0
bash -c '
set -euo pipefail
TERM=dumb
export HOME="'"$fake_home"'"
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
PATH="/nonexistent"
UV_PATH=""
check_uv
' >/dev/null 2>&1 || exit_code=$?
rm -rf "$fake_home"
if [[ "$exit_code" -ne 0 ]]; then
test_pass "check_uv returns failure when uv is not in PATH"
else
test_fail "check_uv should return failure when uv is not in PATH"
fi
}
test_check_uv_fails_when_not_found
test_find_uv_path_checks_local_bin() {
local fake_home
fake_home="$(mktemp -d)"
local saved_home="$HOME"
HOME="$fake_home"
UV_PATH=""
mkdir -p "${fake_home}/.local/bin"
cat > "${fake_home}/.local/bin/uv" <<'FAKEUV'
echo "uv 0.4.0"
FAKEUV
chmod +x "${fake_home}/.local/bin/uv"
local saved_path="$PATH"
PATH="/nonexistent"
if find_uv_path 2>/dev/null; then
assert_eq "${fake_home}/.local/bin/uv" "$UV_PATH" "find_uv_path finds uv in ~/.local/bin/"
else
test_fail "find_uv_path should find uv in ~/.local/bin/"
fi
HOME="$saved_home"
PATH="$saved_path"
rm -rf "$fake_home"
}
test_find_uv_path_checks_local_bin
echo ""
echo "=== find_openclaw() ==="
ORIGINAL_PATH="$PATH"
ORIGINAL_HOME="$HOME"
test_find_openclaw_not_found() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
PATH="/nonexistent"
OPENCLAW_PATH=""
if find_openclaw 2>/dev/null; then
test_fail "find_openclaw should return 1 when openclaw.mjs is not found"
else
test_pass "find_openclaw returns 1 when not found"
fi
assert_eq "" "$OPENCLAW_PATH" "OPENCLAW_PATH is empty when not found"
HOME="$ORIGINAL_HOME"
PATH="$ORIGINAL_PATH"
rm -rf "$fake_home"
}
test_find_openclaw_not_found
test_find_openclaw_in_home() {
local fake_home
fake_home="$(mktemp -d)"
mkdir -p "${fake_home}/.openclaw"
touch "${fake_home}/.openclaw/openclaw.mjs"
HOME="$fake_home"
PATH="/nonexistent"
OPENCLAW_PATH=""
if find_openclaw 2>/dev/null; then
test_pass "find_openclaw finds openclaw.mjs in ~/.openclaw/"
assert_eq "${fake_home}/.openclaw/openclaw.mjs" "$OPENCLAW_PATH" "OPENCLAW_PATH set correctly"
else
test_fail "find_openclaw should find openclaw.mjs in ~/.openclaw/"
fi
HOME="$ORIGINAL_HOME"
PATH="$ORIGINAL_PATH"
rm -rf "$fake_home"
}
test_find_openclaw_in_home
echo ""
echo "=== configure_memory_slot() ==="
test_configure_new_config() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
configure_memory_slot >/dev/null 2>&1
local config_file="${fake_home}/.openclaw/openclaw.json"
assert_file_exists "$config_file" "Config file created at ~/.openclaw/openclaw.json"
local memory_slot
memory_slot="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.slots.memory);")"
assert_eq "claude-mem" "$memory_slot" "Memory slot set to claude-mem in new config"
local enabled
enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].enabled);")"
assert_eq "true" "$enabled" "claude-mem entry is enabled in new config"
local worker_port
worker_port="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.workerPort);")"
assert_eq "37777" "$worker_port" "Worker port is 37777 in new config"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_configure_new_config
test_configure_existing_config() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
const config = {
gateway: { mode: 'local' },
plugins: {
slots: { memory: 'memory-core' },
entries: {
'some-other-plugin': { enabled: true }
}
}
};
require('fs').writeFileSync('${config_file}', JSON.stringify(config, null, 2));
"
configure_memory_slot >/dev/null 2>&1
local memory_slot
memory_slot="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.slots.memory);")"
assert_eq "claude-mem" "$memory_slot" "Memory slot updated from memory-core to claude-mem"
local gateway_mode
gateway_mode="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.gateway.mode);")"
assert_eq "local" "$gateway_mode" "Existing gateway.mode setting preserved"
local other_plugin
other_plugin="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['some-other-plugin'].enabled);")"
assert_eq "true" "$other_plugin" "Existing plugin entries preserved"
local cm_enabled
cm_enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].enabled);")"
assert_eq "true" "$cm_enabled" "claude-mem entry added and enabled"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_configure_existing_config
test_configure_preserves_existing_cm_config() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
const config = {
plugins: {
slots: { memory: 'memory-core' },
entries: {
'claude-mem': {
enabled: false,
config: {
workerPort: 38888,
observationFeed: { enabled: true, channel: 'telegram', to: '12345' }
}
}
}
}
};
require('fs').writeFileSync('${config_file}', JSON.stringify(config, null, 2));
"
configure_memory_slot >/dev/null 2>&1
local cm_enabled
cm_enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].enabled);")"
assert_eq "true" "$cm_enabled" "claude-mem entry enabled when previously disabled"
local custom_port
custom_port="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.workerPort);")"
assert_eq "38888" "$custom_port" "Existing custom workerPort preserved"
local feed_channel
feed_channel="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.channel);")"
assert_eq "telegram" "$feed_channel" "Existing observationFeed config preserved"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_configure_preserves_existing_cm_config
echo ""
echo "=== version_gte() ==="
if version_gte "1.2.0" "1.1.14"; then
test_pass "version_gte: 1.2.0 >= 1.1.14"
else
test_fail "version_gte: 1.2.0 >= 1.1.14"
fi
if version_gte "1.1.14" "1.1.14"; then
test_pass "version_gte: 1.1.14 >= 1.1.14 (equal)"
else
test_fail "version_gte: 1.1.14 >= 1.1.14 (equal)"
fi
if ! version_gte "1.0.0" "1.1.14"; then
test_pass "version_gte: 1.0.0 < 1.1.14"
else
test_fail "version_gte: 1.0.0 < 1.1.14"
fi
echo ""
echo "=== Script structure ==="
for fn in find_openclaw check_openclaw install_plugin configure_memory_slot; do
if declare -f "$fn" &>/dev/null; then
test_pass "Function ${fn}() is defined"
else
test_fail "Function ${fn}() should be defined"
fi
done
assert_contains "$CLAUDE_MEM_REPO" "github.com/thedotmack/claude-mem" "CLAUDE_MEM_REPO points to correct repository"
for fn in setup_ai_provider write_settings mask_api_key; do
if declare -f "$fn" &>/dev/null; then
test_pass "Function ${fn}() is defined"
else
test_fail "Function ${fn}() should be defined"
fi
done
echo ""
echo "=== mask_api_key() ==="
masked=$(mask_api_key "sk-1234567890abcdef")
assert_eq "***************cdef" "$masked" "mask_api_key masks all but last 4 chars"
masked_short=$(mask_api_key "abcd")
assert_eq "****" "$masked_short" "mask_api_key masks keys <= 4 chars entirely"
masked_five=$(mask_api_key "12345")
assert_eq "*2345" "$masked_five" "mask_api_key masks 5-char key correctly"
echo ""
echo "=== setup_ai_provider() ==="
test_setup_ai_provider_non_interactive() {
local ai_result
ai_result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--non-interactive"
source "$tmp"
rm -f "$tmp"
setup_ai_provider >/dev/null 2>&1
echo "$AI_PROVIDER"
' 2>/dev/null)" || true
assert_eq "claude" "$ai_result" "Non-interactive mode defaults to claude provider"
}
test_setup_ai_provider_non_interactive
echo ""
echo "=== write_settings() ==="
test_write_settings_new_file() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
AI_PROVIDER="claude"
AI_PROVIDER_API_KEY=""
write_settings >/dev/null 2>&1
local settings_file="${fake_home}/.claude-mem/settings.json"
assert_file_exists "$settings_file" "settings.json created at ~/.claude-mem/settings.json"
local provider
provider="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_PROVIDER);")"
assert_eq "claude" "$provider" "CLAUDE_MEM_PROVIDER set to claude"
local auth_method
auth_method="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_CLAUDE_AUTH_METHOD);")"
assert_eq "cli" "$auth_method" "CLAUDE_MEM_CLAUDE_AUTH_METHOD set to cli for Claude provider"
local worker_port
worker_port="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_WORKER_PORT);")"
assert_eq "37777" "$worker_port" "CLAUDE_MEM_WORKER_PORT defaults to 37777"
local model
model="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_MODEL);")"
assert_eq "claude-sonnet-4-6" "$model" "CLAUDE_MEM_MODEL defaults to claude-sonnet-4-6"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_write_settings_new_file
test_write_settings_gemini() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
AI_PROVIDER="gemini"
AI_PROVIDER_API_KEY="test-gemini-key-1234"
write_settings >/dev/null 2>&1
local settings_file="${fake_home}/.claude-mem/settings.json"
local provider
provider="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_PROVIDER);")"
assert_eq "gemini" "$provider" "Gemini: CLAUDE_MEM_PROVIDER set to gemini"
local api_key
api_key="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_GEMINI_API_KEY);")"
assert_eq "test-gemini-key-1234" "$api_key" "Gemini: API key stored in settings"
local gemini_model
gemini_model="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_GEMINI_MODEL);")"
assert_eq "gemini-2.5-flash-lite" "$gemini_model" "Gemini: model defaults to gemini-2.5-flash-lite"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_write_settings_gemini
test_write_settings_openrouter() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
AI_PROVIDER="openrouter"
AI_PROVIDER_API_KEY="sk-or-test-key-5678"
write_settings >/dev/null 2>&1
local settings_file="${fake_home}/.claude-mem/settings.json"
local provider
provider="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_PROVIDER);")"
assert_eq "openrouter" "$provider" "OpenRouter: CLAUDE_MEM_PROVIDER set to openrouter"
local api_key
api_key="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_OPENROUTER_API_KEY);")"
assert_eq "sk-or-test-key-5678" "$api_key" "OpenRouter: API key stored in settings"
local or_model
or_model="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_OPENROUTER_MODEL);")"
assert_eq "xiaomi/mimo-v2-flash:free" "$or_model" "OpenRouter: model defaults to xiaomi/mimo-v2-flash:free"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_write_settings_openrouter
test_write_settings_preserves_existing() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
mkdir -p "${fake_home}/.claude-mem"
local settings_file="${fake_home}/.claude-mem/settings.json"
node -e "
const settings = {
CLAUDE_MEM_PROVIDER: 'gemini',
CLAUDE_MEM_GEMINI_API_KEY: 'old-key',
CLAUDE_MEM_WORKER_PORT: '38888',
CLAUDE_MEM_LOG_LEVEL: 'DEBUG'
};
require('fs').writeFileSync('${settings_file}', JSON.stringify(settings, null, 2));
"
AI_PROVIDER="claude"
AI_PROVIDER_API_KEY=""
write_settings >/dev/null 2>&1
local provider
provider="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_PROVIDER);")"
assert_eq "claude" "$provider" "Preserve: provider updated to new selection"
local custom_port
custom_port="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_WORKER_PORT);")"
assert_eq "38888" "$custom_port" "Preserve: existing custom WORKER_PORT preserved"
local log_level
log_level="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_LOG_LEVEL);")"
assert_eq "DEBUG" "$log_level" "Preserve: existing custom LOG_LEVEL preserved"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_write_settings_preserves_existing
test_write_settings_complete_schema() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
AI_PROVIDER="claude"
AI_PROVIDER_API_KEY=""
write_settings >/dev/null 2>&1
local settings_file="${fake_home}/.claude-mem/settings.json"
local key_count
key_count="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(Object.keys(s).length);")"
if (( key_count >= 30 )); then
test_pass "Settings file has ${key_count} keys (complete schema)"
else
test_fail "Settings file has ${key_count} keys, expected >= 30" "Schema may be incomplete"
fi
local has_env_key
has_env_key="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.env !== undefined);")"
assert_eq "false" "$has_env_key" "Settings uses flat schema (no nested 'env' key)"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_write_settings_complete_schema
echo ""
echo "=== find_claude_mem_install_dir() ==="
test_find_install_dir_not_found() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
if find_claude_mem_install_dir 2>/dev/null; then
test_fail "find_claude_mem_install_dir should return 1 when not found"
else
test_pass "find_claude_mem_install_dir returns 1 when not found"
fi
assert_eq "" "$CLAUDE_MEM_INSTALL_DIR" "CLAUDE_MEM_INSTALL_DIR is empty when not found"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_find_install_dir_not_found
test_find_install_dir_openclaw_extensions() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
mkdir -p "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts"
touch "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts/worker-service.cjs"
if find_claude_mem_install_dir 2>/dev/null; then
test_pass "find_claude_mem_install_dir finds dir in ~/.openclaw/extensions/claude-mem/"
assert_eq "${fake_home}/.openclaw/extensions/claude-mem" "$CLAUDE_MEM_INSTALL_DIR" "CLAUDE_MEM_INSTALL_DIR set correctly for openclaw extensions"
else
test_fail "find_claude_mem_install_dir should find dir in ~/.openclaw/extensions/claude-mem/"
fi
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_find_install_dir_openclaw_extensions
test_find_install_dir_marketplace() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
mkdir -p "${fake_home}/.claude/plugins/marketplaces/thedotmack/plugin/scripts"
touch "${fake_home}/.claude/plugins/marketplaces/thedotmack/plugin/scripts/worker-service.cjs"
if find_claude_mem_install_dir 2>/dev/null; then
test_pass "find_claude_mem_install_dir finds dir in marketplace path"
assert_eq "${fake_home}/.claude/plugins/marketplaces/thedotmack" "$CLAUDE_MEM_INSTALL_DIR" "CLAUDE_MEM_INSTALL_DIR set correctly for marketplace"
else
test_fail "find_claude_mem_install_dir should find dir in marketplace path"
fi
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_find_install_dir_marketplace
echo ""
echo "=== start_worker() ==="
test_start_worker_no_install_dir() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
local output
if output="$(start_worker 2>&1)"; then
test_fail "start_worker should fail when install dir not found"
else
test_pass "start_worker returns error when install dir not found"
fi
assert_contains "$output" "Cannot find claude-mem plugin installation directory" "start_worker error message mentions install dir"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_start_worker_no_install_dir
echo ""
echo "=== verify_health() ==="
test_verify_health_no_server() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
verify_health 2>/dev/null && echo "PASS" || echo "FAIL"
' 2>/dev/null)" || true
if [[ "$result" == *"FAIL"* ]]; then
test_pass "verify_health returns failure when no server is running"
else
test_pass "verify_health returned success (worker may already be running on 37777)"
fi
}
if command -v curl &>/dev/null; then
test_verify_health_no_server
else
test_pass "verify_health test skipped (curl not available)"
fi
echo ""
echo "=== print_completion_summary() ==="
test_print_completion_summary() {
AI_PROVIDER="claude"
WORKER_PID=""
FEED_CONFIGURED=false
FEED_CHANNEL=""
FEED_TARGET_ID=""
local output
output="$(print_completion_summary 2>&1)"
assert_contains "$output" "Installation Complete" "Completion summary shows 'Installation Complete'"
assert_contains "$output" "Claude Max Plan" "Completion summary shows correct provider"
assert_contains "$output" "not configured" "Completion summary shows feed 'not configured' when skipped"
assert_contains "$output" "What's next" "Completion summary shows What's next section"
assert_contains "$output" "/claude-mem-status" "Completion summary mentions status command"
assert_contains "$output" "localhost:37777" "Completion summary mentions viewer URL"
assert_contains "$output" "re-run this installer" "Completion summary shows re-run instructions"
}
test_print_completion_summary
test_print_completion_summary_gemini() {
AI_PROVIDER="gemini"
WORKER_PID=""
FEED_CONFIGURED=false
local output
output="$(print_completion_summary 2>&1)"
assert_contains "$output" "Gemini" "Gemini provider shown in completion summary"
}
test_print_completion_summary_gemini
test_print_completion_summary_openrouter() {
AI_PROVIDER="openrouter"
WORKER_PID=""
FEED_CONFIGURED=false
local output
output="$(print_completion_summary 2>&1)"
assert_contains "$output" "OpenRouter" "OpenRouter provider shown in completion summary"
}
test_print_completion_summary_openrouter
echo ""
echo "=== New function existence ==="
for fn in find_claude_mem_install_dir start_worker verify_health print_completion_summary; do
if declare -f "$fn" &>/dev/null; then
test_pass "Function ${fn}() is defined"
else
test_fail "Function ${fn}() should be defined"
fi
done
echo ""
echo "=== main() function structure ==="
test_main_calls_start_worker() {
if grep -q 'start_worker' "$INSTALL_SCRIPT"; then
test_pass "main() calls start_worker"
else
test_fail "main() should call start_worker"
fi
}
test_main_calls_start_worker
test_main_calls_verify_health() {
if grep -q 'verify_health' "$INSTALL_SCRIPT"; then
test_pass "main() calls verify_health"
else
test_fail "main() should call verify_health"
fi
}
test_main_calls_verify_health
test_main_calls_completion_summary() {
if grep -q 'print_completion_summary' "$INSTALL_SCRIPT"; then
test_pass "main() calls print_completion_summary"
else
test_fail "main() should call print_completion_summary"
fi
}
test_main_calls_completion_summary
test_main_has_progress_indicators() {
if grep -q '\[1/8\]' "$INSTALL_SCRIPT" && grep -q '\[8/8\]' "$INSTALL_SCRIPT"; then
test_pass "main() has progress indicators [1/8] through [8/8]"
else
test_fail "main() should have progress indicators [1/8] through [8/8]"
fi
}
test_main_has_progress_indicators
test_main_calls_setup_observation_feed() {
if grep -q 'setup_observation_feed' "$INSTALL_SCRIPT"; then
test_pass "main() calls setup_observation_feed"
else
test_fail "main() should call setup_observation_feed"
fi
}
test_main_calls_setup_observation_feed
test_main_calls_write_observation_feed_config() {
if grep -q 'write_observation_feed_config' "$INSTALL_SCRIPT"; then
test_pass "main() calls write_observation_feed_config"
else
test_fail "main() should call write_observation_feed_config"
fi
}
test_main_calls_write_observation_feed_config
echo ""
echo "=== setup_observation_feed() ==="
for fn in setup_observation_feed write_observation_feed_config; do
if declare -f "$fn" &>/dev/null; then
test_pass "Function ${fn}() is defined"
else
test_fail "Function ${fn}() should be defined"
fi
done
test_setup_observation_feed_non_interactive() {
local feed_result
feed_result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--non-interactive"
source "$tmp"
rm -f "$tmp"
setup_observation_feed 2>/dev/null
echo "CHANNEL=$FEED_CHANNEL"
echo "CONFIGURED=$FEED_CONFIGURED"
' 2>/dev/null)" || true
assert_contains "$feed_result" "CHANNEL=" "Non-interactive mode: FEED_CHANNEL is empty"
assert_contains "$feed_result" "CONFIGURED=false" "Non-interactive mode: FEED_CONFIGURED is false"
}
test_setup_observation_feed_non_interactive
echo ""
echo "=== write_observation_feed_config() ==="
test_write_observation_feed_config_writes_json() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
const config = {
plugins: {
slots: { memory: 'claude-mem' },
entries: {
'claude-mem': {
enabled: true,
config: { workerPort: 37777, syncMemoryFile: true }
}
}
}
};
require('fs').writeFileSync('${config_file}', JSON.stringify(config, null, 2));
"
FEED_CHANNEL="telegram"
FEED_TARGET_ID="123456789"
FEED_CONFIGURED="true"
write_observation_feed_config >/dev/null 2>&1
local feed_enabled
feed_enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.enabled);")"
assert_eq "true" "$feed_enabled" "observationFeed.enabled is true"
local feed_channel
feed_channel="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.channel);")"
assert_eq "telegram" "$feed_channel" "observationFeed.channel is telegram"
local feed_to
feed_to="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.to);")"
assert_eq "123456789" "$feed_to" "observationFeed.to is 123456789"
local worker_port
worker_port="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.workerPort);")"
assert_eq "37777" "$worker_port" "Existing workerPort preserved after feed config write"
HOME="$ORIGINAL_HOME"
FEED_CHANNEL=""
FEED_TARGET_ID=""
FEED_CONFIGURED=false
rm -rf "$fake_home"
}
test_write_observation_feed_config_writes_json
test_write_observation_feed_config_skips_when_not_configured() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
require('fs').writeFileSync('${config_file}', JSON.stringify({ plugins: {} }, null, 2));
"
FEED_CONFIGURED="false"
write_observation_feed_config >/dev/null 2>&1
local has_feed
has_feed="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries !== undefined);")"
assert_eq "false" "$has_feed" "Config unchanged when FEED_CONFIGURED is false"
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_write_observation_feed_config_skips_when_not_configured
test_write_observation_feed_config_discord() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
mkdir -p "${fake_home}/.openclaw"
local config_file="${fake_home}/.openclaw/openclaw.json"
node -e "
const config = {
plugins: {
entries: {
'claude-mem': { enabled: true, config: {} }
}
}
};
require('fs').writeFileSync('${config_file}', JSON.stringify(config, null, 2));
"
FEED_CHANNEL="discord"
FEED_TARGET_ID="1234567890123456789"
FEED_CONFIGURED="true"
write_observation_feed_config >/dev/null 2>&1
local feed_channel
feed_channel="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.channel);")"
assert_eq "discord" "$feed_channel" "Discord channel type written correctly"
local feed_to
feed_to="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.to);")"
assert_eq "1234567890123456789" "$feed_to" "Discord channel ID written correctly"
HOME="$ORIGINAL_HOME"
FEED_CHANNEL=""
FEED_TARGET_ID=""
FEED_CONFIGURED=false
rm -rf "$fake_home"
}
test_write_observation_feed_config_discord
echo ""
echo "=== write_observation_feed_config() — fallback paths ==="
verify_feed_config_json() {
local config_file="$1" expected_channel="$2" expected_target="$3" label="$4"
local feed_enabled
feed_enabled="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.enabled);")"
assert_eq "true" "$feed_enabled" "${label}: observationFeed.enabled is true"
local feed_channel
feed_channel="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.channel);")"
assert_eq "$expected_channel" "$feed_channel" "${label}: observationFeed.channel correct"
local feed_to
feed_to="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.observationFeed.to);")"
assert_eq "$expected_target" "$feed_to" "${label}: observationFeed.to correct"
local worker_port
worker_port="$(node -e "const c = JSON.parse(require('fs').readFileSync('${config_file}','utf8')); console.log(c.plugins.entries['claude-mem'].config.workerPort);")"
assert_eq "37777" "$worker_port" "${label}: existing workerPort preserved"
}
create_seed_config() {
local config_file="$1"
mkdir -p "$(dirname "$config_file")"
node -e "
const config = {
plugins: {
slots: { memory: 'claude-mem' },
entries: {
'claude-mem': {
enabled: true,
config: { workerPort: 37777, syncMemoryFile: true }
}
}
}
};
require('fs').writeFileSync('${config_file}', JSON.stringify(config, null, 2));
"
}
test_write_feed_config_jq_path() {
if ! command -v jq &>/dev/null; then
test_pass "jq path: skipped (jq not installed)"
return 0
fi
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
local config_file="${fake_home}/.openclaw/openclaw.json"
create_seed_config "$config_file"
FEED_CHANNEL="slack"
FEED_TARGET_ID="C01ABC2DEFG"
FEED_CONFIGURED="true"
write_observation_feed_config >/dev/null 2>&1
verify_feed_config_json "$config_file" "slack" "C01ABC2DEFG" "jq path"
HOME="$ORIGINAL_HOME"
FEED_CHANNEL=""
FEED_TARGET_ID=""
FEED_CONFIGURED=false
rm -rf "$fake_home"
}
test_write_feed_config_jq_path
test_write_feed_config_python3_path() {
if ! command -v python3 &>/dev/null; then
test_pass "python3 path: skipped (python3 not installed)"
return 0
fi
local fake_home
fake_home="$(mktemp -d)"
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
export HOME="'"$fake_home"'"
mkdir -p "'"${fake_home}"'/.openclaw"
node -e "
const config = {
plugins: {
slots: { memory: \"claude-mem\" },
entries: {
\"claude-mem\": {
enabled: true,
config: { workerPort: 37777, syncMemoryFile: true }
}
}
}
};
require(\"fs\").writeFileSync(\"'"${fake_home}"'/.openclaw/openclaw.json\", JSON.stringify(config, null, 2));
"
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
SAFE_PATH=""
IFS=":" read -ra path_parts <<< "$PATH"
for p in "${path_parts[@]}"; do
if [[ ! -x "${p}/jq" ]]; then
SAFE_PATH="${SAFE_PATH:+${SAFE_PATH}:}${p}"
fi
done
export PATH="$SAFE_PATH"
FEED_CHANNEL="signal"
FEED_TARGET_ID="+15551234567"
FEED_CONFIGURED="true"
write_observation_feed_config >/dev/null 2>&1
echo "DONE"
' 2>/dev/null)" || true
if [[ "$result" == *"DONE"* ]]; then
local config_file="${fake_home}/.openclaw/openclaw.json"
verify_feed_config_json "$config_file" "signal" "+15551234567" "python3 path"
else
test_fail "python3 path: write_observation_feed_config failed"
fi
rm -rf "$fake_home"
}
test_write_feed_config_python3_path
test_write_feed_config_node_path() {
local fake_home
fake_home="$(mktemp -d)"
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
export HOME="'"$fake_home"'"
mkdir -p "'"${fake_home}"'/.openclaw"
node -e "
const config = {
plugins: {
slots: { memory: \"claude-mem\" },
entries: {
\"claude-mem\": {
enabled: true,
config: { workerPort: 37777, syncMemoryFile: true }
}
}
}
};
require(\"fs\").writeFileSync(\"'"${fake_home}"'/.openclaw/openclaw.json\", JSON.stringify(config, null, 2));
"
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
INSTALLER_FEED_CHANNEL="whatsapp" \
INSTALLER_FEED_TARGET_ID="5511999887766@s.whatsapp.net" \
INSTALLER_CONFIG_FILE="'"${fake_home}"'/.openclaw/openclaw.json" \
node -e "
const fs = require(\"fs\");
const configPath = process.env.INSTALLER_CONFIG_FILE;
const channel = process.env.INSTALLER_FEED_CHANNEL;
const targetId = process.env.INSTALLER_FEED_TARGET_ID;
const config = JSON.parse(fs.readFileSync(configPath, \"utf8\"));
if (!config.plugins) config.plugins = {};
if (!config.plugins.entries) config.plugins.entries = {};
if (!config.plugins.entries[\"claude-mem\"]) {
config.plugins.entries[\"claude-mem\"] = { enabled: true, config: {} };
}
if (!config.plugins.entries[\"claude-mem\"].config) {
config.plugins.entries[\"claude-mem\"].config = {};
}
config.plugins.entries[\"claude-mem\"].config.observationFeed = {
enabled: true,
channel: channel,
to: targetId
};
fs.writeFileSync(configPath, JSON.stringify(config, null, 2));
"
echo "DONE"
' 2>/dev/null)" || true
if [[ "$result" == *"DONE"* ]]; then
local config_file="${fake_home}/.openclaw/openclaw.json"
verify_feed_config_json "$config_file" "whatsapp" "5511999887766@s.whatsapp.net" "node path"
else
test_fail "node path: write_observation_feed_config failed"
fi
rm -rf "$fake_home"
}
test_write_feed_config_node_path
test_feed_config_fallback_chain_in_source() {
if grep -q 'command -v jq' "$INSTALL_SCRIPT"; then
test_pass "write_observation_feed_config checks for jq first"
else
test_fail "write_observation_feed_config should check for jq"
fi
if grep -q 'command -v python3' "$INSTALL_SCRIPT"; then
test_pass "write_observation_feed_config has python3 fallback"
else
test_fail "write_observation_feed_config should have python3 fallback"
fi
if grep -q 'node -e' "$INSTALL_SCRIPT"; then
test_pass "write_observation_feed_config has node fallback"
else
test_fail "write_observation_feed_config should have node fallback"
fi
}
test_feed_config_fallback_chain_in_source
echo ""
echo "=== print_completion_summary() — observation feed ==="
test_completion_summary_with_feed() {
AI_PROVIDER="claude"
WORKER_PID=""
FEED_CONFIGURED="true"
FEED_CHANNEL="telegram"
FEED_TARGET_ID="123456789"
local output
output="$(print_completion_summary 2>&1)"
assert_contains "$output" "telegram" "Summary shows feed channel when configured"
assert_contains "$output" "123456789" "Summary shows feed target when configured"
assert_contains "$output" "What's next" "Summary includes What's next section"
assert_contains "$output" "/claude-mem-feed" "Summary includes feed check command when configured"
FEED_CONFIGURED=false
FEED_CHANNEL=""
FEED_TARGET_ID=""
}
test_completion_summary_with_feed
test_completion_summary_without_feed() {
AI_PROVIDER="claude"
WORKER_PID=""
FEED_CONFIGURED=false
FEED_CHANNEL=""
FEED_TARGET_ID=""
local output
output="$(print_completion_summary 2>&1)"
assert_contains "$output" "not configured" "Summary shows 'not configured' when feed skipped"
assert_contains "$output" "What's next" "Summary includes What's next section without feed"
assert_contains "$output" "/claude-mem-status" "Summary includes status check command"
assert_contains "$output" "localhost:37777" "Summary includes viewer URL"
}
test_completion_summary_without_feed
echo ""
echo "=== Channel instructions ==="
for channel in telegram discord slack signal whatsapp line; do
if grep -qi "$channel" "$INSTALL_SCRIPT"; then
test_pass "Channel '${channel}' instructions exist in install.sh"
else
test_fail "Channel '${channel}' instructions should exist in install.sh"
fi
done
assert_contains "$(grep -A2 'userinfobot' "$INSTALL_SCRIPT" 2>/dev/null || echo '')" "userinfobot" "Telegram instructions include @userinfobot"
assert_contains "$(grep -A2 'Developer Mode' "$INSTALL_SCRIPT" 2>/dev/null || echo '')" "Developer Mode" "Discord instructions include Developer Mode"
assert_contains "$(grep -A2 'C01ABC2DEFG' "$INSTALL_SCRIPT" 2>/dev/null || echo '')" "C01ABC2DEFG" "Slack instructions include sample channel ID"
echo ""
echo "=== TTY detection ==="
for fn in setup_tty read_tty; do
if declare -f "$fn" &>/dev/null; then
test_pass "Function ${fn}() is defined"
else
test_fail "Function ${fn}() should be defined"
fi
done
if declare -p TTY_FD &>/dev/null; then
test_pass "TTY_FD variable is defined"
else
test_fail "TTY_FD variable should be defined"
fi
if grep -q 'setup_tty' "$INSTALL_SCRIPT"; then
test_pass "main() calls setup_tty"
else
test_fail "main() should call setup_tty"
fi
echo ""
echo "=== Argument parsing — --provider flag ==="
test_provider_flag_claude() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--provider=claude"
source "$tmp"
rm -f "$tmp"
setup_ai_provider >/dev/null 2>&1
echo "$AI_PROVIDER"
' 2>/dev/null)" || true
assert_eq "claude" "$result" "--provider=claude sets AI_PROVIDER to claude"
}
test_provider_flag_claude
test_provider_flag_gemini_with_api_key() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--provider=gemini" "--api-key=test-key-123"
source "$tmp"
rm -f "$tmp"
setup_ai_provider >/dev/null 2>&1
echo "PROVIDER=$AI_PROVIDER"
echo "KEY=$AI_PROVIDER_API_KEY"
' 2>/dev/null)" || true
assert_contains "$result" "PROVIDER=gemini" "--provider=gemini sets AI_PROVIDER to gemini"
assert_contains "$result" "KEY=test-key-123" "--api-key=test-key-123 sets API key"
}
test_provider_flag_gemini_with_api_key
test_provider_flag_openrouter() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--provider=openrouter" "--api-key=sk-or-test"
source "$tmp"
rm -f "$tmp"
setup_ai_provider >/dev/null 2>&1
echo "PROVIDER=$AI_PROVIDER"
echo "KEY=$AI_PROVIDER_API_KEY"
' 2>/dev/null)" || true
assert_contains "$result" "PROVIDER=openrouter" "--provider=openrouter sets AI_PROVIDER"
assert_contains "$result" "KEY=sk-or-test" "--api-key sets API key for openrouter"
}
test_provider_flag_openrouter
test_provider_flag_invalid() {
local exit_code=0
bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--provider=invalid"
source "$tmp"
rm -f "$tmp"
setup_ai_provider
' >/dev/null 2>&1 || exit_code=$?
if [[ "$exit_code" -ne 0 ]]; then
test_pass "--provider=invalid exits with error"
else
test_fail "--provider=invalid should exit with error"
fi
}
test_provider_flag_invalid
echo ""
echo "=== Argument parsing — --non-interactive ==="
test_non_interactive_flag() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--non-interactive"
source "$tmp"
rm -f "$tmp"
echo "NON_INTERACTIVE=$NON_INTERACTIVE"
' 2>/dev/null)" || true
assert_contains "$result" "NON_INTERACTIVE=true" "--non-interactive sets NON_INTERACTIVE=true"
}
test_non_interactive_flag
test_non_interactive_with_provider() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--non-interactive" "--provider=gemini" "--api-key=my-key"
source "$tmp"
rm -f "$tmp"
setup_ai_provider >/dev/null 2>&1
echo "PROVIDER=$AI_PROVIDER"
echo "KEY=$AI_PROVIDER_API_KEY"
echo "NON_INTERACTIVE=$NON_INTERACTIVE"
' 2>/dev/null)" || true
assert_contains "$result" "PROVIDER=gemini" "--non-interactive + --provider: provider set correctly"
assert_contains "$result" "KEY=my-key" "--non-interactive + --api-key: key set correctly"
assert_contains "$result" "NON_INTERACTIVE=true" "--non-interactive flag parsed alongside --provider"
}
test_non_interactive_with_provider
echo ""
echo "=== --non-interactive full flow ==="
test_non_interactive_completes() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--non-interactive"
source "$tmp"
rm -f "$tmp"
setup_ai_provider 2>/dev/null
setup_observation_feed 2>/dev/null
echo "AI=$AI_PROVIDER"
echo "FEED=$FEED_CONFIGURED"
' 2>/dev/null)" || true
assert_contains "$result" "AI=claude" "--non-interactive: AI provider defaults to claude"
assert_contains "$result" "FEED=false" "--non-interactive: observation feed skipped"
}
test_non_interactive_completes
echo ""
echo "=== curl | bash usage comment ==="
if grep -q 'curl -fsSL.*raw.githubusercontent.com.*install.sh | bash' "$INSTALL_SCRIPT"; then
test_pass "install.sh contains curl | bash usage comment"
else
test_fail "install.sh should contain curl | bash usage comment"
fi
if grep -q 'bash -s -- --provider=' "$INSTALL_SCRIPT"; then
test_pass "install.sh documents --provider flag in usage comment"
else
test_fail "install.sh should document --provider flag in usage comment"
fi
echo ""
echo "=== write_settings with --provider flag ==="
test_write_settings_via_provider_flag() {
local fake_home
fake_home="$(mktemp -d)"
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
export HOME="'"$fake_home"'"
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--provider=gemini" "--api-key=test-end-to-end-key"
source "$tmp"
rm -f "$tmp"
setup_ai_provider >/dev/null 2>&1
write_settings >/dev/null 2>&1
echo "DONE"
' 2>/dev/null)" || true
if [[ "$result" == *"DONE"* ]]; then
local settings_file="${fake_home}/.claude-mem/settings.json"
local provider
provider="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_PROVIDER);")"
assert_eq "gemini" "$provider" "--provider flag: settings.json has provider=gemini"
local api_key
api_key="$(node -e "const s = JSON.parse(require('fs').readFileSync('${settings_file}','utf8')); console.log(s.CLAUDE_MEM_GEMINI_API_KEY);")"
assert_eq "test-end-to-end-key" "$api_key" "--provider flag: settings.json has correct API key"
else
test_fail "--provider flag: write_settings failed"
fi
rm -rf "$fake_home"
}
test_write_settings_via_provider_flag
echo ""
echo "=== --upgrade flag parsing ==="
test_upgrade_flag() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--upgrade"
source "$tmp"
rm -f "$tmp"
echo "UPGRADE=$UPGRADE_MODE"
' 2>/dev/null)" || true
assert_contains "$result" "UPGRADE=true" "--upgrade sets UPGRADE_MODE=true"
}
test_upgrade_flag
test_upgrade_flag_with_provider() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
set -- "--upgrade" "--provider=gemini" "--api-key=upgrade-key"
source "$tmp"
rm -f "$tmp"
echo "UPGRADE=$UPGRADE_MODE"
echo "PROVIDER=$CLI_PROVIDER"
echo "KEY=$CLI_API_KEY"
' 2>/dev/null)" || true
assert_contains "$result" "UPGRADE=true" "--upgrade + --provider: upgrade flag parsed"
assert_contains "$result" "PROVIDER=gemini" "--upgrade + --provider: provider flag parsed"
assert_contains "$result" "KEY=upgrade-key" "--upgrade + --api-key: API key parsed"
}
test_upgrade_flag_with_provider
test_upgrade_not_set_by_default() {
local result
result="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
echo "UPGRADE=${UPGRADE_MODE:-}"
' 2>/dev/null)" || true
assert_eq "UPGRADE=" "$result" "UPGRADE_MODE is empty by default"
}
test_upgrade_not_set_by_default
echo ""
echo "=== is_claude_mem_installed() ==="
test_is_claude_mem_installed_found() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
mkdir -p "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts"
touch "${fake_home}/.openclaw/extensions/claude-mem/plugin/scripts/worker-service.cjs"
if is_claude_mem_installed; then
test_pass "is_claude_mem_installed returns true when plugin exists"
else
test_fail "is_claude_mem_installed should return true when plugin exists"
fi
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_is_claude_mem_installed_found
test_is_claude_mem_installed_not_found() {
local fake_home
fake_home="$(mktemp -d)"
HOME="$fake_home"
CLAUDE_MEM_INSTALL_DIR=""
if is_claude_mem_installed; then
test_fail "is_claude_mem_installed should return false when plugin not found"
else
test_pass "is_claude_mem_installed returns false when plugin not found"
fi
HOME="$ORIGINAL_HOME"
rm -rf "$fake_home"
}
test_is_claude_mem_installed_not_found
echo ""
echo "=== check_git() ==="
test_check_git_available() {
if command -v git &>/dev/null; then
local output
output="$(check_git 2>&1)" || true
test_pass "check_git succeeds when git is installed"
else
test_pass "check_git test skipped (git not available)"
fi
}
test_check_git_available
test_check_git_not_available() {
local exit_code=0
PLATFORM="macos"
bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
PATH="/nonexistent"
check_git
' >/dev/null 2>&1 || exit_code=$?
if [[ "$exit_code" -ne 0 ]]; then
test_pass "check_git exits with error when git is missing"
else
test_fail "check_git should exit with error when git is missing"
fi
}
test_check_git_not_available
test_check_git_macos_message() {
local output
output="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
PATH="/nonexistent"
PLATFORM="macos"
check_git
' 2>&1)" || true
assert_contains "$output" "xcode-select" "check_git suggests xcode-select on macOS"
}
test_check_git_macos_message
test_check_git_linux_message() {
local output
output="$(bash -c '
set -euo pipefail
TERM=dumb
tmp=$(mktemp)
sed "$ d" "'"${INSTALL_SCRIPT}"'" > "$tmp"
echo "main() { :; }" >> "$tmp"
source "$tmp"
rm -f "$tmp"
PATH="/nonexistent"
PLATFORM="linux"
check_git
' 2>&1)" || true
assert_contains "$output" "apt install git" "check_git suggests apt on Linux"
}
test_check_git_linux_message
echo ""
echo "=== check_port_37777() ==="
test_check_port_function_exists() {
if declare -f check_port_37777 &>/dev/null; then
test_pass "Function check_port_37777() is defined"
else
test_fail "Function check_port_37777() should be defined"
fi
}
test_check_port_function_exists
echo ""
echo "=== cleanup_on_exit() ==="
test_cleanup_trap_functions_exist() {
if declare -f register_cleanup_dir &>/dev/null; then
test_pass "Function register_cleanup_dir() is defined"
else
test_fail "Function register_cleanup_dir() should be defined"
fi
if declare -f cleanup_on_exit &>/dev/null; then
test_pass "Function cleanup_on_exit() is defined"
else
test_fail "Function cleanup_on_exit() should be defined"
fi
}
test_cleanup_trap_functions_exist
test_register_cleanup_dir() {
local test_dir
test_dir="$(mktemp -d)"
local saved_dirs=("${CLEANUP_DIRS[@]+"${CLEANUP_DIRS[@]}"}")
CLEANUP_DIRS=()
register_cleanup_dir "$test_dir"
if [[ "${#CLEANUP_DIRS[@]}" -eq 1 ]] && [[ "${CLEANUP_DIRS[0]}" == "$test_dir" ]]; then
test_pass "register_cleanup_dir adds directory to CLEANUP_DIRS"
else
test_fail "register_cleanup_dir should add directory to CLEANUP_DIRS"
fi
CLEANUP_DIRS=("${saved_dirs[@]+"${saved_dirs[@]}"}")
rm -rf "$test_dir"
}
test_register_cleanup_dir
echo ""
echo "=== ensure_jq_or_fallback() ==="
test_ensure_jq_or_fallback_exists() {
if declare -f ensure_jq_or_fallback &>/dev/null; then
test_pass "Function ensure_jq_or_fallback() is defined"
else
test_fail "Function ensure_jq_or_fallback() should be defined"
fi
}
test_ensure_jq_or_fallback_exists
test_ensure_jq_with_jq_available() {
if ! command -v jq &>/dev/null; then
test_pass "ensure_jq jq-path: skipped (jq not installed)"
return 0
fi
local tmp_json
tmp_json="$(mktemp)"
echo '{"name": "test", "value": 1}' > "$tmp_json"
if ensure_jq_or_fallback "$tmp_json" '.name = "updated"'; then
local result
result="$(node -e "const j = JSON.parse(require('fs').readFileSync('${tmp_json}','utf8')); console.log(j.name);")"
assert_eq "updated" "$result" "ensure_jq_or_fallback updates JSON via jq"
else
test_fail "ensure_jq_or_fallback should succeed with jq available"
fi
rm -f "$tmp_json"
}
test_ensure_jq_with_jq_available
echo ""
echo "=== main() references new functions ==="
test_main_calls_check_port() {
if grep -q 'check_port_37777' "$INSTALL_SCRIPT"; then
test_pass "main() calls check_port_37777"
else
test_fail "main() should call check_port_37777"
fi
}
test_main_calls_check_port
test_main_calls_is_claude_mem_installed() {
if grep -q 'is_claude_mem_installed' "$INSTALL_SCRIPT"; then
test_pass "main() calls is_claude_mem_installed for upgrade detection"
else
test_fail "main() should call is_claude_mem_installed"
fi
}
test_main_calls_is_claude_mem_installed
test_main_references_upgrade_mode() {
if grep -q 'UPGRADE_MODE' "$INSTALL_SCRIPT"; then
test_pass "main() references UPGRADE_MODE"
else
test_fail "main() should reference UPGRADE_MODE"
fi
}
test_main_references_upgrade_mode
test_install_plugin_calls_check_git() {
if grep -q 'check_git' "$INSTALL_SCRIPT"; then
test_pass "install_plugin() calls check_git"
else
test_fail "install_plugin() should call check_git"
fi
}
test_install_plugin_calls_check_git
test_install_plugin_uses_register_cleanup() {
if grep -q 'register_cleanup_dir' "$INSTALL_SCRIPT"; then
test_pass "install_plugin() uses register_cleanup_dir"
else
test_fail "install_plugin() should use register_cleanup_dir"
fi
}
test_install_plugin_uses_register_cleanup
test_usage_comment_includes_upgrade() {
if grep -q '\-\-upgrade' "$INSTALL_SCRIPT"; then
test_pass "Usage comment documents --upgrade flag"
else
test_fail "Usage comment should document --upgrade flag"
fi
}
test_usage_comment_includes_upgrade
echo ""
echo "=== Distribution readiness ==="
test_install_sh_has_shebang() {
local first_line
first_line="$(head -1 "$INSTALL_SCRIPT")"
assert_eq "#!/usr/bin/env bash" "$first_line" "install.sh has correct shebang line"
}
test_install_sh_has_shebang
test_install_sh_has_set_euo_pipefail() {
if grep -q 'set -euo pipefail' "$INSTALL_SCRIPT"; then
test_pass "install.sh uses set -euo pipefail for safety"
else
test_fail "install.sh should use set -euo pipefail"
fi
}
test_install_sh_has_set_euo_pipefail
test_install_sh_has_stable_url_in_usage() {
if grep -q 'raw.githubusercontent.com/thedotmack/claude-mem/main/openclaw/install.sh' "$INSTALL_SCRIPT"; then
test_pass "install.sh usage comment has stable raw.githubusercontent.com URL"
else
test_fail "install.sh should reference stable raw.githubusercontent.com URL in usage"
fi
}
test_install_sh_has_stable_url_in_usage
test_install_sh_documents_all_flags() {
local missing_flags=()
for flag in "--non-interactive" "--upgrade" "--provider" "--api-key"; do
if ! grep -Fq -- "$flag" "$INSTALL_SCRIPT"; then
missing_flags+=("$flag")
fi
done
if [[ ${#missing_flags[@]} -eq 0 ]]; then
test_pass "install.sh documents all CLI flags (--non-interactive, --upgrade, --provider, --api-key)"
else
test_fail "install.sh missing documentation for flags: ${missing_flags[*]}"
fi
}
test_install_sh_documents_all_flags
test_install_sh_has_installer_version() {
if grep -q 'INSTALLER_VERSION=' "$INSTALL_SCRIPT"; then
test_pass "install.sh defines INSTALLER_VERSION constant"
else
test_fail "install.sh should define INSTALLER_VERSION"
fi
}
test_install_sh_has_installer_version
test_skill_md_references_one_liner() {
local skill_file="${SCRIPT_DIR}/SKILL.md"
if [[ ! -f "$skill_file" ]]; then
test_fail "SKILL.md not found at ${skill_file}"
return
fi
if grep -q 'curl -fsSL.*raw.githubusercontent.com.*install.sh | bash' "$skill_file"; then
test_pass "SKILL.md references the one-liner installer"
else
test_fail "SKILL.md should reference the one-liner installer"
fi
}
test_skill_md_references_one_liner
test_skill_md_has_quick_install_section() {
local skill_file="${SCRIPT_DIR}/SKILL.md"
if [[ ! -f "$skill_file" ]]; then
test_fail "SKILL.md not found at ${skill_file}"
return
fi
if grep -q 'Quick Install' "$skill_file"; then
test_pass "SKILL.md has Quick Install section"
else
test_fail "SKILL.md should have Quick Install section"
fi
}
test_skill_md_has_quick_install_section
test_skill_md_documents_options() {
local skill_file="${SCRIPT_DIR}/SKILL.md"
if [[ ! -f "$skill_file" ]]; then
test_fail "SKILL.md not found at ${skill_file}"
return
fi
local missing=()
for option in "--provider" "--non-interactive" "--upgrade"; do
if ! grep -Fq -- "$option" "$skill_file"; then
missing+=("$option")
fi
done
if [[ ${#missing[@]} -eq 0 ]]; then
test_pass "SKILL.md documents all installer options (--provider, --non-interactive, --upgrade)"
else
test_fail "SKILL.md missing documentation for: ${missing[*]}"
fi
}
test_skill_md_documents_options
echo ""
echo "========================================"
echo "Results: ${TESTS_PASSED}/${TESTS_RUN} passed, ${TESTS_FAILED} failed"
echo "========================================"
if [[ "$TESTS_FAILED" -gt 0 ]]; then
exit 1
fi
exit 0