* fix: resolve search, database, and docker bugs (#1913, #1916, #1956, #1957, #2048)
- Fix concept/concepts param mismatch in SearchManager.normalizeParams (#1916)
- Add FTS5 keyword fallback when ChromaDB is unavailable (#1913, #2048)
- Add periodic WAL checkpoint and journal_size_limit to prevent unbounded WAL growth (#1956)
- Add periodic clearFailed() to purge stale pending_messages (#1957)
- Make TTY_ARGS expansion nounset-safe in docker/claude-mem/run.sh
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
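The concept/concepts mismatch in (#1916) can be illustrated with a minimal sketch; the exact shape of SearchManager's params is an assumption, only the `concept`/`concepts` duality comes from the commit:

```typescript
// Hypothetical sketch of the concept/concepts normalization (#1916).
// Assumption: callers may send either a single `concept` string or a
// `concepts` array, while downstream search code expects `concepts: string[]`.
interface RawSearchParams {
  concept?: string;
  concepts?: string[];
  query?: string;
}

interface NormalizedSearchParams {
  concepts: string[];
  query?: string;
}

function normalizeParams(raw: RawSearchParams): NormalizedSearchParams {
  // Accept both spellings; prefer the array form when both are present.
  const concepts =
    raw.concepts ?? (raw.concept !== undefined ? [raw.concept] : []);
  return { concepts, query: raw.query };
}
```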
* fix: prevent silent data loss on non-XML responses, add queue info to /health (#1867, #1874)
- ResponseProcessor: mark messages as failed (with retry) instead of confirming
when the LLM returns non-XML garbage (auth errors, rate limits) (#1874)
- Health endpoint: include activeSessions count for queue liveness monitoring (#1867)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
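The failure-instead-of-confirm behaviour in (#1874) comes down to classifying the LLM response body. A minimal sketch, assuming a valid response opens with an XML element while auth errors and rate-limit bodies are plain text or JSON (the function name and `Disposition` type are hypothetical):

```typescript
// Hypothetical classification used to decide failed-with-retry vs confirmed
// (#1874). Assumption: a valid LLM response starts with an XML element;
// auth errors and rate-limit bodies are plain text or JSON.
type Disposition = "confirm" | "fail_with_retry";

function classifyResponse(body: string): Disposition {
  const trimmed = body.trim();
  // Anything that does not open with an XML tag (e.g. "401 Unauthorized",
  // '{"error":"rate_limited"}') is treated as transient garbage: mark the
  // message failed so it is retried instead of silently confirmed.
  return trimmed.startsWith("<") ? "confirm" : "fail_with_retry";
}
```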
* fix: cache isFts5Available() at construction time
Addresses Greptile review: avoid DDL probe (CREATE + DROP) on every text
query. Result is now cached in _fts5Available at construction.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
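The construction-time caching can be sketched as follows; `probeFts5` stands in for the real CREATE + DROP DDL probe and the class shape is an assumption:

```typescript
// Sketch of caching an FTS5 availability probe at construction time instead
// of running a CREATE/DROP probe on every text query.
class SearchStore {
  private readonly _fts5Available: boolean;

  constructor(probeFts5: () => boolean) {
    let ok = false;
    try {
      ok = probeFts5(); // e.g. CREATE VIRTUAL TABLE … USING fts5, then DROP
    } catch {
      ok = false; // probe failure means no FTS5 support
    }
    this._fts5Available = ok;
  }

  get fts5Available(): boolean {
    return this._fts5Available; // no per-query DDL probe
  }
}
```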
* fix: resolve worker stability bugs — pool deadlock, MCP loopback, restart guard (#1868, #1876, #2053)
- Replace flat consecutiveRestarts counter with time-windowed RestartGuard:
only counts restarts within 60s window (cap=10), decays after 5min of
success. Prevents stranding pending messages on long-running sessions. (#2053)
- Add idle session eviction to pool slot allocation: when all slots are full,
evict the idlest session (no pending work, oldest activity) to free a slot
for new requests, preventing 60s timeout deadlock. (#1868)
- Fix MCP loopback self-check: use process.execPath instead of bare 'node'
which fails on non-interactive PATH. Fix crash misclassification by removing
false "Generator exited unexpectedly" error log on normal completion. (#1876)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
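The time-windowed RestartGuard (#2053) can be sketched like this. The numbers (60s window, cap 10, decay after 5min of success) follow the commit; the class shape and method names are assumptions:

```typescript
// Minimal sketch of a time-windowed restart guard (#2053): only restarts in
// the trailing 60s window count toward the cap, and sustained success decays
// the history so long-running sessions are never stranded.
class RestartGuard {
  private restarts: number[] = []; // timestamps (ms) of recent restarts
  private lastSuccess: number | null = null;

  recordRestart(now: number): void {
    this.restarts.push(now);
  }

  recordSuccess(now: number): void {
    this.lastSuccess = now;
  }

  shouldAllowRestart(now: number): boolean {
    // Decay: 5 minutes after an observed success, clear the history.
    if (this.lastSuccess !== null && now - this.lastSuccess >= 5 * 60_000) {
      this.restarts = [];
      this.lastSuccess = null;
    }
    // Only restarts inside the trailing 60s window count toward the cap.
    this.restarts = this.restarts.filter((t) => now - t < 60_000);
    return this.restarts.length < 10;
  }
}
```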
* fix: resolve hooks reliability bugs — summarize exit code, session-init health wait (#1896, #1901, #1903, #1907)
- Wrap summarize hook's workerHttpRequest in try/catch to prevent exit
code 2 (blocking error) on network failures or malformed responses.
Session exit no longer blocks on worker errors. (#1901)
- Add health-check wait loop to UserPromptSubmit session-init command in
hooks.json. On Linux/WSL where hook ordering fires UserPromptSubmit
before SessionStart, session-init now waits up to 10s for worker health
before proceeding. Also wrap session-init HTTP call in try/catch. (#1907)
- Close #1896 as already-fixed: mtime comparison at file-context.ts:255-267
bypasses truncation when file is newer than latest observation.
- Close #1903 as no-repro: hooks.json correctly declares all hook events.
Issue was Claude Code 12.0.1/macOS platform event-dispatch bug.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
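The session-init health wait (#1907) is a bounded polling loop. A sketch under the commit's numbers (up to 10s); `checkHealth` is an injected stand-in for the worker /health request and is an assumption:

```typescript
// Sketch of the session-init health wait (#1907): poll a health check until
// it passes or the deadline expires, never letting a network error escape.
async function waitForHealth(
  checkHealth: () => Promise<boolean>,
  timeoutMs: number,
  intervalMs: number,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    try {
      if (await checkHealth()) return true; // worker is up; proceed
    } catch {
      // Network errors are treated the same as "not healthy yet".
    }
    if (Date.now() >= deadline) return false; // give up; don't block the hook
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```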
* fix: security hardening — bearer auth, path validation, rate limits, per-user port (#1932, #1933, #1934, #1935, #1936)
- Add bearer token auth to all API endpoints: auto-generated 32-byte
token stored at ~/.claude-mem/worker-auth-token (mode 0600). All hook,
MCP, viewer, and OpenCode requests include Authorization header.
Health/readiness endpoints exempt for polling. (#1932, #1933)
- Add path traversal protection: watch.context.path validated against
project root and ~/.claude-mem/ before write. Rejects ../../../etc
style attacks. (#1934)
- Reduce JSON body limit from 50MB to 5MB. Add in-memory rate limiter
(300 req/min/IP) to prevent abuse. (#1935)
- Derive default worker port from UID (37700 + uid%100) to prevent
cross-user data leakage on multi-user macOS. Windows falls back to
37777. Shell hooks use same formula via id -u. (#1936)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
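The per-user port derivation (#1936) follows directly from the formula in the commit. A sketch; treating a missing UID (e.g. Windows, where process.getuid is undefined) as the 37777 fallback is an assumption about how the check is wired:

```typescript
// Sketch of the per-user default worker port (#1936):
// 37700 + (uid % 100), with 37777 as the fallback where UIDs don't apply.
function defaultWorkerPort(uid: number | undefined): number {
  if (uid === undefined) return 37777; // e.g. Windows: no POSIX uid
  return 37700 + (uid % 100);
}
```

The shell hooks would compute the same value along the lines of `$((37700 + $(id -u) % 100))`.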
* fix: resolve search project filtering and import Chroma sync (#1911, #1912, #1914, #1918)
- Fix per-type search endpoints to pass project filter to Chroma queries
and SQLite hydration. searchObservations/Sessions/UserPrompts now use
$or clause matching project + merged_into_project. (#1912)
- Fix timeline/search methods to pass project to Chroma anchor queries.
Prevents cross-project result leakage when project param omitted. (#1911)
- Sync imported observations to ChromaDB after FTS rebuild. Import
endpoint now calls chromaSync.syncObservation() for each imported
row, making them visible to MCP search(). (#1914)
- Fix session-init cwd fallback to match context.ts (process.cwd()).
Prevents project key mismatch that caused "no previous sessions"
on fresh sessions. (#1918)
- Fix sync-marketplace restart to include auth token and per-user port.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
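The $or clause from (#1912) can be sketched as a small filter builder. The metadata keys `project` and `merged_into_project` come from the commit; the builder function and the exact Chroma where-clause shape are assumptions:

```typescript
// Sketch of the Chroma where-filter described in (#1912): match rows whose
// project is either the active project or one merged into it.
type ChromaWhere = Record<string, unknown>;

function projectFilter(project: string): ChromaWhere {
  return {
    $or: [
      { project: { $eq: project } },
      { merged_into_project: { $eq: project } },
    ],
  };
}
```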
* fix: resolve all CodeRabbit and Greptile review comments on PR #2080
- Fix run.sh comment mismatch (no-op flag vs empty array)
- Gate session-init on health check success (prevent running when worker unreachable)
- Fix date_desc ordering ignored in FTS session search
- Age-scope failed message purge (1h retention) instead of clearing all
- Anchor RestartGuard decay to real successes (null init, not Date.now())
- Add recordSuccess() calls in ResponseProcessor and completion path
- Prevent caller headers from overriding bearer auth token
- Add lazy cleanup for rate limiter map to prevent unbounded growth
- Bound post-import Chroma sync with concurrency limit of 8
- Add doc_type:'observation' filter to Chroma queries feeding observation hydration
- Add FTS fallback to all specialized search handlers (observations, sessions, prompts, timeline)
- Add response.ok check and error handling in viewer saveSettings
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
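The bounded post-import sync (concurrency limit 8) amounts to a small worker pool over the imported rows. A sketch; `syncOne` stands in for chromaSync.syncObservation and is an assumption:

```typescript
// Sketch of bounding the post-import Chroma sync: at most `limit` syncs are
// in flight at any moment. Safe without locks because the index read and
// increment happen synchronously between awaits on the JS event loop.
async function syncAll<T>(
  rows: T[],
  syncOne: (row: T) => Promise<void>,
  limit = 8,
): Promise<void> {
  let next = 0;
  const workers = Array.from(
    { length: Math.min(limit, rows.length) },
    async () => {
      // Each worker pulls the next index until rows are exhausted.
      while (next < rows.length) {
        const i = next++;
        await syncOne(rows[i]);
      }
    },
  );
  await Promise.all(workers);
}
```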
* fix: resolve CodeRabbit round-2 review comments
- Use failure timestamp (COALESCE) instead of created_at_epoch for stale purge
- Downgrade _fts5Available flag when FTS table creation fails
- Escape FTS5 MATCH input by quoting user queries as literal phrases
- Escape LIKE metacharacters (%, _, \) in prompt text search
- Add response.ok check in initial settings load (matches save flow)
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
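The two escaping fixes above can be sketched as pure helpers; the function names are assumptions, the escaping rules follow the commit:

```typescript
// FTS5 MATCH treats characters like " and * (and keywords like NEAR) as
// query syntax; quoting the user's input as a literal phrase, with embedded
// double quotes doubled, neutralizes all of it.
function toFts5Phrase(userQuery: string): string {
  return `"${userQuery.replace(/"/g, '""')}"`;
}

// LIKE patterns: escape %, _ and the escape character itself, to be paired
// with an ESCAPE '\' clause in the SQL statement.
function escapeLike(text: string): string {
  return text.replace(/[\\%_]/g, (ch) => `\\${ch}`);
}
```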
* fix: resolve CodeRabbit round-3 review comments
- Include failed_at_epoch in COALESCE for age-scoped purge
- Re-throw FTS5 errors so callers can distinguish failure from no-results
- Wrap all FTS fallback calls in SearchManager with try/catch
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
* refactor: remove bearer auth and platform_source from context inject
Bearer token auth (#1932/#1933) added friction for all localhost API
clients with no benefit — the worker already binds localhost-only (CORS
restriction + host binding). Removed auth-token module, requireAuth
middleware, and Authorization headers from all internal callers.
platform_source filtering from the /api/context/inject path was never
used by any caller and silently filtered out observations. The underlying
platform_source column stays; only the query-time filter and its plumbing
through ContextBuilder, ObservationCompiler, SearchRoutes, context.ts,
and transcripts/processor.ts are removed.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: resolve CodeRabbit + Greptile + claude-review comments on PR #2081
- middleware.ts: drop 'Authorization' from CORS allowedHeaders (Greptile)
- middleware.ts: rate limiter falls back to req.socket.remoteAddress; add Retry-After on 429 (claude-review)
- SearchRoutes.ts: drop leftover platformSource read+pass in handleContextPreview (Greptile)
- .docker-blowout-data/: stop tracking the empty SQLite placeholder and gitignore the dir (claude-review)
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: tighten rate limiter — correct boundary + drop dead cleanup branch
- `entry.count >= RATE_LIMIT_MAX_REQUESTS` so the 300th request is the
first rejected (was 301).
- Removed the `requestCounts.size > 100` lazy-cleanup block — on a
localhost-only server the map tops out at 1–2 entries, so the branch
was dead code.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
* fix: rate limiter correctly allows exactly 300 req/min; doc localhost scope
- Check `entry.count >= max` BEFORE incrementing so the cap matches the
comment: 300 requests pass, the 301st gets 429.
- Added a comment noting the limiter is effectively a global cap on a
localhost-only worker (all callers share the 127.0.0.1/::1 bucket).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
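The corrected boundary can be sketched in a few lines. The cap (300) and window (60s) follow the commits; the window handling is simplified and the middleware shape is an assumption:

```typescript
// Sketch of the check-before-increment rate limit boundary: with a cap of
// 300, requests 1-300 pass and the 301st in the same window is rejected.
const RATE_LIMIT_MAX_REQUESTS = 300;
const WINDOW_MS = 60_000;

interface Entry { count: number; windowStart: number }
const requestCounts = new Map<string, Entry>();

function allowRequest(ip: string, now: number): boolean {
  let entry = requestCounts.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    entry = { count: 0, windowStart: now }; // new window, fresh count
    requestCounts.set(ip, entry);
  }
  // Reject BEFORE incrementing: once count has reached the cap, this
  // request is the (cap+1)-th and gets the 429.
  if (entry.count >= RATE_LIMIT_MAX_REQUESTS) return false;
  entry.count++;
  return true;
}
```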
* fix: normalise IPv4-mapped IPv6 in rate limiter client IP
Strip the `::ffff:` prefix so a localhost caller routed as
`::ffff:127.0.0.1` shares a bucket with `127.0.0.1`.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
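The normalization itself is a one-liner; sketched here with a hypothetical helper name:

```typescript
// Strip the IPv4-mapped IPv6 prefix so ::ffff:127.0.0.1 and 127.0.0.1
// share one rate-limit bucket; plain IPv6 addresses pass through untouched.
function normalizeClientIp(ip: string): string {
  return ip.startsWith("::ffff:") ? ip.slice("::ffff:".length) : ip;
}
```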
* fix: size-guarded prune of rate limiter map for non-localhost deploys
Prune expired entries only when the map exceeds 1000 keys and we're
already doing a window reset, so the cost is zero on the localhost hot
path (1–2 keys) and the map can't grow unbounded if the worker is ever
bound on a non-loopback interface.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
---------
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>