chore: merge upstream v12.3.1 + keep local fixes
Upstream brings:
- 12.2.1: Break infinite summary-retry loop (#2072)
- 12.2.2: Subagent observation labeling + schema migration (#2073)
- 12.2.3: Silence parser warning on normal observation responses (#2074)
- 12.3.0: Docker harness + SWE-bench eval harness (#2076)
- 12.3.1: Error handling anti-pattern cleanup across 91 files (#2078)

Local fixes preserved through merge:
- env-sanitizer PATH extension for claude CLI lookup
- SessionStore stale session reset (mac sleep / 4h wall-clock)

Built artifacts rebuilt from merged sources; fixes verified present in worker-service.cjs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

```diff
@@ -10,7 +10,7 @@
   "plugins": [
     {
       "name": "claude-mem",
-      "version": "12.2.0",
+      "version": "12.3.1",
       "source": "./plugin",
       "description": "Persistent memory system for Claude Code - context compression across sessions"
     }
```

```diff
@@ -1,6 +1,6 @@
 {
   "name": "claude-mem",
-  "version": "12.2.0",
+  "version": "12.3.1",
   "description": "Memory compression system for Claude Code - persist context across sessions",
   "author": {
     "name": "Alex Newman"
```

```diff
@@ -1 +0,0 @@
-{"sessionId":"6a00de6e-282e-4cd8-98ec-b5afb73c468d","pid":50072,"acquiredAt":1775678989779}
```

```diff
@@ -1,6 +1,6 @@
 {
   "name": "claude-mem",
-  "version": "12.2.0",
+  "version": "12.3.1",
   "description": "Memory compression system for Claude Code - persist context across sessions",
   "author": {
     "name": "Alex Newman",
```

```diff
@@ -0,0 +1,9 @@
+# Keep the build context small for evals/swebench/Dockerfile.agent.
+# The Dockerfile needs `plugin/` and `evals/swebench/` — do NOT exclude them.
+node_modules/
+.git/
+logs/
+evals/swebench/runs/
+.docker-claude-mem-data/
+.venv
+.venv-*
```

```
@@ -34,7 +34,20 @@ src/ui/viewer.html
.claude-octopus/
.claude/session-intent.md
.claude/session-plan.md
.claude/scheduled_tasks.lock
.octo/

# Installer marker — dropped by the claude-mem CLI at install time
plugin/.cli-installed

# Local contribution analysis (not part of upstream)
CONTRIB_NOTES.md

# Docker container runtime data (basic claude-mem container)
.docker-claude-mem-data/

# SWE-bench eval outputs
evals/swebench/runs/
claude-opus-4-7+claude-mem.*.json
logs/run_evaluation/
.venv-swebench/
```

@@ -0,0 +1,315 @@

# Plan: Disable Summaries for Subagents + Label Subagent Observations

## Goal

1. **Disable summaries for subagents** — prevent any summary generation path (hook → worker → SDK agent) from firing for events originating in a Claude Code subagent.
2. **Label observations from subagents** — tag every observation with the subagent identity (agent_id + agent_type) so downstream queries can distinguish main-session work from subagent work.

## Phase 0 — Documentation Discovery (COMPLETE)

### Claude Code hook payload fields (source: https://code.claude.com/docs/en/hooks.md)

- `agent_id` — present **only** when the hook fires inside a subagent invocation (e.g., `"agent-def456"`). Absent in the main session.
- `agent_type` — the subagent identifier (built-in like `"Bash"`, `"Explore"`, `"Plan"`, or a custom agent name). Present in subagents **and** when the `--agent` flag is used.
- `session_id` — shared across main and subagents in the same session. Cannot distinguish contexts on its own.
- `transcript_path` — shared session transcript. Not a reliable discriminator.
- `SubagentStop` — dedicated event that fires when a subagent finishes. Currently **NOT registered** in `plugin/hooks/hooks.json`.
- `Stop` — fires for the main Claude agent (not subagents). Currently registered → wired to the `summarize` handler.

**Discriminator for subagent context**: presence of `agent_id` OR `agent_type` in the hook stdin JSON.
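This discriminator reduces to a small predicate. A minimal sketch (the helper name `isSubagentContext` is hypothetical; the camel-case field names are the ones this plan proposes for `NormalizedHookInput`, not existing code):

```typescript
// A hook event comes from a subagent iff the raw payload carried
// agent_id or agent_type; main-session payloads leave both undefined.
interface HookAgentFields {
  agentId?: string;
  agentType?: string;
}

function isSubagentContext(input: HookAgentFields): boolean {
  return input.agentId !== undefined || input.agentType !== undefined;
}

console.log(isSubagentContext({ agentId: "agent-def456", agentType: "Explore" })); // true
console.log(isSubagentContext({})); // false
```

Treating presence (not value) as the signal keeps the check robust even if Claude Code adds new built-in agent types.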

### Current claude-mem architecture (grepped + read)

- `src/cli/types.ts:1-15` — `NormalizedHookInput` lacks `agentId` / `agentType`.
- `src/cli/adapters/claude-code.ts:5-17` — the Claude Code adapter does NOT extract `agent_id` / `agent_type`.
- `src/cli/handlers/summarize.ts:27-143` — the Stop-hook handler posts to `/api/sessions/summarize` without guarding on subagent context.
- `src/cli/handlers/observation.ts:51-62` — the PostToolUse handler POSTs the observation body without subagent fields.
- `src/services/worker/http/routes/SessionRoutes.ts:555-646` — `handleObservationsByClaudeId` destructures only `{ contentSessionId, tool_name, tool_input, tool_response, cwd }`; the `queueObservation` call at line 620 has no subagent field.
- `src/services/sqlite/observations/store.ts:75-80` — the `INSERT INTO observations` column list has no `agent_type` / `agent_id`.
- `src/services/sqlite/migrations.ts:578-588` — the migrations array ends with `migration009` (version 26). The next migration slot is `migration010` (version 27).
- `src/utils/logger.ts:195-203` — already reads `input.subagent_type` for formatting Task tool invocations (reference pattern, no downstream storage).

### Allowed APIs / patterns to copy

- **Adapter metadata extension pattern**: `src/cli/adapters/gemini-cli.ts:77-96` already collects platform-specific metadata into `metadata` and returns it on `NormalizedHookInput`. Copy this pattern.
- **Migration pattern**: `src/services/sqlite/migrations.ts:556-573` (migration009) is a copy-ready template for conditional `ALTER TABLE ADD COLUMN` additions.
- **Observation INSERT column extension pattern**: `src/services/sqlite/observations/store.ts:75-98` — add `agent_type`, `agent_id` to the column list and to the `stmt.run(...)` bindings.

### Anti-patterns to avoid

- Do NOT assume `agent_id` is present on the main session — it is undefined there. Treat presence as the discriminator.
- Do NOT register SubagentStop as a new hook in `hooks.json` just to "disable" summaries — defensively short-circuiting in the handler is simpler and covers both current and future Claude Code versions where Stop might fire in subagent contexts.
- Do NOT rely on `session_id` to distinguish contexts — it is shared.
- Do NOT invent a `parent_tool_use_id` field in hook input. The Claude Code docs do not expose a parent tool use ID on hook payloads. Only use `agent_id` + `agent_type`.
- Do NOT break the existing observation hash-dedup logic in `store.ts:19-28` — leave the hash inputs as-is.

---

## Phase 1 — Extend hook input surface to carry subagent fields

**What to implement** (COPY pattern from gemini-cli adapter metadata handling):

1. Edit `src/cli/types.ts:1-15` — add two optional fields to `NormalizedHookInput`:

```ts
agentId?: string;   // Claude Code subagent agent_id (undefined in main session)
agentType?: string; // Claude Code subagent agent_type (undefined in main session)
```

2. Edit `src/cli/adapters/claude-code.ts:5-17` — in `normalizeInput`, extract `r.agent_id` and `r.agent_type`:

```ts
return {
  sessionId: r.session_id ?? r.id ?? r.sessionId,
  cwd: r.cwd ?? process.cwd(),
  prompt: r.prompt,
  toolName: r.tool_name,
  toolInput: r.tool_input,
  toolResponse: r.tool_response,
  transcriptPath: r.transcript_path,
  agentId: typeof r.agent_id === 'string' ? r.agent_id : undefined,
  agentType: typeof r.agent_type === 'string' ? r.agent_type : undefined,
};
```

3. `src/cli/adapters/gemini-cli.ts:88-97` — no code change is needed: both fields are optional on `NormalizedHookInput`, so the gemini-cli adapter can simply omit them and they default to `undefined`. Do not add explicit `agentId: undefined, agentType: undefined` entries.

**Documentation references**: Claude Code hooks docs section "Subagent Identification Fields"; gemini-cli adapter metadata pattern at `src/cli/adapters/gemini-cli.ts:77-96`.

**Verification checklist**:
- `grep -n "agentId" src/cli/types.ts` → finds the new field.
- `grep -n "agent_id" src/cli/adapters/claude-code.ts` → finds the extraction.
- `npm run build` succeeds.

**Anti-pattern guards**:
- Do NOT rename the `agent_id` / `agent_type` snake_case raw fields. Camel-case only in `NormalizedHookInput`.
- Do NOT default to a sentinel string like `"main"`; leave undefined when absent.

---

## Phase 2 — Short-circuit summary generation in subagent context

**What to implement**:

1. Edit `src/cli/handlers/summarize.ts:27-36`, immediately after the worker-ready check (line 34) and before any processing:

```ts
// Skip summaries in subagent context — subagents do not own the session summary.
// Main Stop hook owns it; SubagentStop (if ever registered) must no-op.
if (input.agentId || input.agentType) {
  logger.debug('HOOK', 'Skipping summary: subagent context detected', {
    sessionId: input.sessionId,
    agentId: input.agentId,
    agentType: input.agentType
  });
  return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
}
```

2. (Safety) Edit `src/services/worker/http/routes/SessionRoutes.ts` in `handleSummarizeByClaudeId` (around lines 655-692): add a defensive guard that rejects the summarize request if the body includes `agentId` or `agentType`. Return `{ status: 'skipped', reason: 'subagent_context' }`. This is belt-and-suspenders in case any caller bypasses the hook layer.
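The server-side decision can be sketched as a pure function (a hedged sketch: the real handler's request/response plumbing in `SessionRoutes.ts` differs, and `subagentGuard` is a hypothetical extraction showing only the skip decision):

```typescript
// Body field names follow the Phase 2 passthrough:
// agentId/agentType appear only for subagent-originated requests.
interface SummarizeBody {
  contentSessionId?: string;
  agentId?: string;
  agentType?: string;
}

// Returns the skip response for subagent requests, or null to let
// normal summarize handling proceed.
function subagentGuard(body: SummarizeBody): { status: string; reason: string } | null {
  if (body.agentId || body.agentType) {
    return { status: "skipped", reason: "subagent_context" };
  }
  return null;
}

console.log(subagentGuard({ contentSessionId: "s1", agentId: "agent-abc" })); // skip response
console.log(subagentGuard({ contentSessionId: "s1" })); // null
```

Keeping the decision in a small function like this also makes the worker-side guard unit-testable without a running HTTP server.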

3. Extend the `/api/sessions/summarize` body in `src/cli/handlers/summarize.ts:73-82` to include `agentId` and `agentType` (passthrough) so the worker can make the same decision independently. Only pass fields when defined:

```ts
body: JSON.stringify({
  contentSessionId: sessionId,
  last_assistant_message: lastAssistantMessage,
  platformSource,
  ...(input.agentId ? { agentId: input.agentId } : {}),
  ...(input.agentType ? { agentType: input.agentType } : {}),
}),
```

**Documentation references**: summarize.ts handler flow at `src/cli/handlers/summarize.ts:27-143`; summarize route at `src/services/worker/http/routes/SessionRoutes.ts:655-692`.

**Verification checklist**:
- Unit test or manual dispatch with a payload containing `agent_id: "agent-abc"` → the summarize handler returns before calling `/api/sessions/summarize`.
- `grep -n "subagent" src/cli/handlers/summarize.ts` → finds the new guard.
- `grep -n "subagent_context\|agentId" src/services/worker/http/routes/SessionRoutes.ts` → finds the server-side guard.

**Anti-pattern guards**:
- Do NOT also short-circuit in the `session-complete` or `context` handlers — the session's main Stop still cleans up.
- Do NOT log at info level (spammy); `logger.debug` only.

---

## Phase 3 — Database schema migration for subagent labels on observations

**What to implement** (COPY migration009 pattern from `src/services/sqlite/migrations.ts:556-573`):

1. Append a new migration to `src/services/sqlite/migrations.ts` right after `migration009` (before the `migrations` array at line 578):

```ts
export const migration010: Migration = {
  version: 27,
  up: (db: Database) => {
    const columns = db.prepare('PRAGMA table_info(observations)').all() as any[];
    const hasAgentType = columns.some((c: any) => c.name === 'agent_type');
    const hasAgentId = columns.some((c: any) => c.name === 'agent_id');
    if (!hasAgentType) {
      db.run('ALTER TABLE observations ADD COLUMN agent_type TEXT');
    }
    if (!hasAgentId) {
      db.run('ALTER TABLE observations ADD COLUMN agent_id TEXT');
    }
    db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_type ON observations(agent_type)');
    console.log('[migration010] Added agent_type, agent_id columns to observations');
  },
  down: (_db: Database) => {
    // SQLite DROP COLUMN not fully supported; no-op
  }
};
```

2. Add `migration010` to the `migrations` array at `src/services/sqlite/migrations.ts:578-588`.

3. Investigation step: check `src/services/sqlite/migrations/runner.ts` for a parallel registration site. If `runner.ts` replicates migration definitions, extend it the same way; otherwise, importing `migrations` from `migrations.ts` is sufficient.

**Documentation references**: migration007 and migration009 at `src/services/sqlite/migrations.ts:491-509` and `556-573` as copy-ready templates.

**Verification checklist**:
- Run the worker; check logs for `[migration010]`.
- `sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA table_info(observations);"` → shows `agent_type` and `agent_id` columns.
- `sqlite3 ~/.claude-mem/claude-mem.db ".indexes observations"` → shows `idx_observations_agent_type`.

**Anti-pattern guards**:
- Do NOT drop or rename existing columns.
- Do NOT set NOT NULL constraints — main-session rows have NULL for these.
- Do NOT pick a version number that's already used (26 is migration009; use 27).

---

## Phase 4 — Thread subagent fields through hook → worker → SDK → DB

**What to implement**:

### 4a — Hook PostToolUse handler sends fields

Edit `src/cli/handlers/observation.ts:51-62`:

```ts
const response = await workerHttpRequest('/api/sessions/observations', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    contentSessionId: sessionId,
    platformSource,
    tool_name: toolName,
    tool_input: toolInput,
    tool_response: toolResponse,
    cwd,
    ...(input.agentId ? { agentId: input.agentId } : {}),
    ...(input.agentType ? { agentType: input.agentType } : {}),
  })
});
```

### 4b — Worker observations route receives and forwards

Edit `src/services/worker/http/routes/SessionRoutes.ts:555-646`:
- Destructure: `const { contentSessionId, tool_name, tool_input, tool_response, cwd, agentId, agentType } = req.body;`
- Pass to `queueObservation` at line 620:

```ts
this.sessionManager.queueObservation(sessionDbId, {
  tool_name,
  tool_input: cleanedToolInput,
  tool_response: cleanedToolResponse,
  prompt_number: promptNumber,
  cwd: cwd || ...,
  agentId: typeof agentId === 'string' ? agentId : undefined,
  agentType: typeof agentType === 'string' ? agentType : undefined,
});
```

### 4c — queueObservation type extension

Investigation: find the `queueObservation` signature in the session manager (likely `src/services/session/` or similar). Add optional `agentId?: string; agentType?: string;` to the payload type. These must ride through to the SDK agent's observation context so they land in `storeObservation()`.
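The intended payload shape can be sketched as follows (illustrative only: the type name `QueuedObservation` and its exact field set are assumptions to be confirmed during this investigation step):

```typescript
// Illustrative payload shape; the real type lives wherever queueObservation
// is declared and likely carries more fields.
interface QueuedObservation {
  tool_name: string;
  tool_input: unknown;
  tool_response: unknown;
  prompt_number: number;
  cwd: string;
  agentId?: string;   // proposed: undefined for main-session observations
  agentType?: string; // proposed: undefined for main-session observations
}

const queued: QueuedObservation = {
  tool_name: "Bash",
  tool_input: { command: "ls" },
  tool_response: { output: "" },
  prompt_number: 1,
  cwd: "/tmp",
  agentType: "Explore",
};
console.log(queued.agentType); // Explore
```

Keeping both fields optional means every existing call site compiles unchanged.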

### 4d — Observation input type + store.ts extension

- Edit `src/services/sqlite/observations/types.ts:10-19` — add:

```ts
agent_type?: string | null;
agent_id?: string | null;
```

- Edit `src/services/sqlite/observations/store.ts:75-98`:
  - Column list: add `, agent_type, agent_id` before `content_hash`.
  - Placeholders: add `, ?, ?`.
  - Bindings: add `observation.agent_type ?? null, observation.agent_id ?? null`.
- Verify there are no other `INSERT INTO observations` sites that need updating. Sites already located (to re-check):
  - `src/services/sqlite/SessionStore.ts:1755` / `1890` / `2022` / `2623` — each needs the same two columns added. If these are separate insertion paths, extend all of them; pass `null` for fields not available in that path.
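The shape of the store.ts change can be sketched with a trimmed stand-in column list (the real INSERT in `store.ts:75-98` has more columns; every column name here other than `agent_type`, `agent_id`, and `content_hash` is a placeholder, and the sketch only shows where the additions slot in):

```typescript
// Trimmed illustration of the column/placeholder/binding extension.
interface ObservationRow {
  agent_type?: string | null;
  agent_id?: string | null;
}

const insertSql =
  "INSERT INTO observations (session_id, tool_name, agent_type, agent_id, content_hash) " +
  "VALUES (?, ?, ?, ?, ?)";

// Bindings mirror the column order; main-session rows bind NULL for both labels.
function bindings(obs: ObservationRow): (string | null)[] {
  return ["sess-1", "Bash", obs.agent_type ?? null, obs.agent_id ?? null, "hash-abc"];
}

console.log(bindings({ agent_type: "Explore", agent_id: "agent-abc" }));
console.log(bindings({})); // nulls in the two label slots
```

The `?? null` coalescing is what keeps existing main-session inserts valid without touching their call sites.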

### 4e — SDK agent observation parser forwards fields

The SDK agent parses `<observation>` XML into an `ObservationInput` and calls `storeObservation`. The queued observation's subagent fields must be carried through to this point so the row gets labeled. Investigation step: find where `storeObservation()` is called with an `ObservationInput` built from the queued observation, and inject `agent_type` / `agent_id` from the queue item's subagent fields onto the `ObservationInput`. Location likely in `src/services/sdk/` or adjacent.

**Documentation references**:
- observation handler at `src/cli/handlers/observation.ts:51-62`
- SessionRoutes observations endpoint at `src/services/worker/http/routes/SessionRoutes.ts:555-646`
- storeObservation at `src/services/sqlite/observations/store.ts:75-98`
- Existing observation INSERT sites at `src/services/sqlite/SessionStore.ts:1755, 1890, 2022, 2623` (audit required)

**Verification checklist**:
- `grep -rn "agent_type\|agentType" src/` → shows the fields threaded through every layer.
- Simulate a Task subagent PostToolUse payload → the observation row has a non-null `agent_type`.
- Main-session PostToolUse → the observation row has a NULL `agent_type` (existing behavior preserved).
- No existing test suite breaks: `npm test` passes.

**Anti-pattern guards**:
- Do NOT include `agent_type` / `agent_id` in the content-hash computation (`src/services/sqlite/observations/store.ts:19-28`). The hash identity must remain stable for dedup.
- Do NOT add the fields to the FTS5 `observations_fts` virtual table — they are not searchable text.
- Do NOT backfill — leave existing rows NULL.

---

## Phase 5 — Tests and verification

**What to implement**:

1. Add a unit test at `tests/cli/handlers/summarize-subagent-skip.test.ts` verifying:
   - When `input.agentId` is set, the handler returns early with `exitCode: SUCCESS` and does NOT call `workerHttpRequest`.
   - When `input.agentType` is set, same behavior.
   - When both are undefined, the handler proceeds (mock worker response).

2. Add a unit test at `tests/cli/adapters/claude-code-subagent.test.ts` verifying:
   - `normalizeInput({ agent_id: "agent-abc", agent_type: "Explore" })` returns `{ agentId: "agent-abc", agentType: "Explore" }`.
   - `normalizeInput({})` returns `agentId: undefined, agentType: undefined`.

3. Add a unit test at `tests/services/sqlite/observations/store-subagent-label.test.ts` verifying:
   - `storeObservation` with `agent_type: "Explore"` inserts a row with `agent_type = "Explore"`.
   - Omitted `agent_type` → NULL in the DB.
   - Content-hash dedup still works (two observations with the same title/narrative but different `agent_type` should still collide on dedup — verify expected behavior; update the test if product intent differs).

4. Manual integration check: start the worker, simulate a hook payload with `agent_id`/`agent_type`, observe the observation row in the DB.
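The adapter mapping exercised by the second test can be pinned down with a self-contained sketch (the real test should import `normalizeInput` from `claude-code.ts`; `extractAgentFields` here merely restates the Phase 1 extraction logic so the expected mapping is explicit):

```typescript
// Stand-in for the extraction inside normalizeInput; the real test asserts
// the same mapping on the actual adapter.
function extractAgentFields(r: Record<string, unknown>): {
  agentId?: string;
  agentType?: string;
} {
  return {
    agentId: typeof r.agent_id === "string" ? r.agent_id : undefined,
    agentType: typeof r.agent_type === "string" ? r.agent_type : undefined,
  };
}

console.log(extractAgentFields({ agent_id: "agent-abc", agent_type: "Explore" }).agentId); // agent-abc
console.log(extractAgentFields({}).agentType); // undefined
```

The `typeof` checks also give the third expected case for free: a malformed non-string `agent_id` normalizes to `undefined` rather than leaking into the pipeline.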

**Verification checklist**:
- `npm test` passes.
- `npm run build` succeeds.
- Database inspection shows expected rows.

**Anti-pattern guards**:
- Do NOT mock the entire storeObservation — use a real in-memory Bun SQLite DB if existing tests do.
- Do NOT add integration tests that require a running worker unless the suite already does.

---

## Phase 6 — Build + autonomous execution pipeline

After Phases 1-5 land and pass verification:

1. **Build**: `npm run build-and-sync`.
2. **Commit**: a single commit titled `feat: disable subagent summaries and label subagent observations` with co-author footer.
3. **Push branch**: push current worktree branch `trail-guarantee` (or a new feature branch — confirm with `git status`). Create PR via `gh pr create` with a summary of both features.
4. **Run `/loop 5m`** to continuously re-check PR review comments: as each CodeRabbit/Greptile/human comment arrives, address it in a new commit, push, and re-check. Exit the loop only when all actionable review comments are resolved and status checks pass.
5. **Merge to main** via `gh pr merge --squash --auto` (or `--merge` per repo convention — inspect `.github/` first).
6. **Version bump**: `cd ~/Scripts/claude-mem/` and run `/version-bump`.

**Anti-pattern guards for this phase**:
- Do NOT force-push to main.
- Do NOT skip hooks (`--no-verify`).
- Do NOT squash-merge if the repo uses rebase-merge; check `.github/` for branch-protection hints.
- Do NOT resolve a review comment without actually addressing it — every resolved thread must have a corresponding commit or a reply explaining why no change is needed.

---

## Final Verification (end of Phase 5, before Phase 6)

- `grep -rn "agent_id\|agentId" src/` → fields present in: `types.ts`, `claude-code.ts`, `summarize.ts`, `observation.ts`, `SessionRoutes.ts`, observation types, store, migration010.
- `grep -rn "subagent_context" src/services/worker/` → worker-side guard present.
- `sqlite3 ~/.claude-mem/claude-mem.db "PRAGMA table_info(observations);"` → includes `agent_type`, `agent_id`.
- `npm test && npm run build` → both green.
- Smoke test: simulate a subagent hook payload end-to-end → observation labeled, no summary fired.

@@ -0,0 +1,488 @@

# Anti-Pattern Fix Checklist

**Total: 301 issues | Fixed: 289 | Approved Overrides: 12 | Remaining: 0**
**Detector passes clean: 0 issues to fix**

Every item gets fixed (logging added, try block narrowed, catch made specific, or error propagated) OR approved with a specific technical reason.

---

## src/services/worker-service.ts (14 issues)
- [x] :291 GENERIC_CATCH
- [x] :291 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :375 LARGE_TRY_BLOCK
- [x] :388 GENERIC_CATCH
- [x] :388 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :489 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :536 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :574 LARGE_TRY_BLOCK
- [x] :592 GENERIC_CATCH
- [x] :592 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :696 ERROR_MESSAGE_GUESSING
- [x] :837 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :849 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :912 LARGE_TRY_BLOCK
- [x] :941 GENERIC_CATCH
- [x] :941 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :961 LARGE_TRY_BLOCK
- [x] :979 GENERIC_CATCH
- [x] :979 CATCH_AND_CONTINUE_CRITICAL_PATH

## src/services/sqlite/SessionStore.ts (7 issues)
- [x] :449 LARGE_TRY_BLOCK
- [x] :477 GENERIC_CATCH
- [x] :477 CATCH_AND_CONTINUE_CRITICAL_PATH
- [x] :689 LARGE_TRY_BLOCK
- [x] :848 GENERIC_CATCH
- [x] :2302 GENERIC_CATCH
- [x] :2334 GENERIC_CATCH

## src/services/worker/SDKAgent.ts (1 issue)
- [x] :481 GENERIC_CATCH

## src/services/worker/GeminiAgent.ts (1 issue)
- [x] :138 LARGE_TRY_BLOCK

## src/services/worker/OpenRouterAgent.ts (1 issue)
- [x] :87 LARGE_TRY_BLOCK

## src/services/infrastructure/ProcessManager.ts (20 issues)
- [x] :56 LARGE_TRY_BLOCK
- [x] :69 NO_LOGGING_IN_CATCH
- [x] :205 GENERIC_CATCH
- [x] :219 GENERIC_CATCH
- [x] :263 GENERIC_CATCH
- [x] :290 GENERIC_CATCH
- [x] :307 GENERIC_CATCH
- [x] :307 NO_LOGGING_IN_CATCH (APPROVED OVERRIDE exists — review)
- [x] :375 LARGE_TRY_BLOCK
- [x] :443 GENERIC_CATCH
- [x] :470 GENERIC_CATCH
- [x] :479 GENERIC_CATCH
- [x] :525 LARGE_TRY_BLOCK
- [x] :608 GENERIC_CATCH
- [x] :628 GENERIC_CATCH
- [x] :636 GENERIC_CATCH
- [x] :751 LARGE_TRY_BLOCK
- [x] :828 GENERIC_CATCH
- [x] :899 GENERIC_CATCH
- [x] :963 NO_LOGGING_IN_CATCH
- [x] :963 GENERIC_CATCH
- [x] :986 NO_LOGGING_IN_CATCH
- [x] :1035 GENERIC_CATCH

## src/services/infrastructure/HealthMonitor.ts (3 issues)
- [x] :56 NO_LOGGING_IN_CATCH
- [x] :93 GENERIC_CATCH
- [x] :168 GENERIC_CATCH

## src/services/infrastructure/WorktreeAdoption.ts (3 issues)
- [x] :253 LARGE_TRY_BLOCK
- [x] :285 GENERIC_CATCH
- [x] :301 GENERIC_CATCH

## src/services/worker/SessionManager.ts (5 issues)
- [x] :72 NO_LOGGING_IN_CATCH
- [x] :294 GENERIC_CATCH
- [x] :345 GENERIC_CATCH
- [x] :399 GENERIC_CATCH
- [x] :471 GENERIC_CATCH

## src/services/worker/ProcessRegistry.ts (2 issues)
- [x] :398 NO_LOGGING_IN_CATCH
- [x] :497 GENERIC_CATCH

## src/services/worker/SearchManager.ts (8 issues)
- [x] :442 LARGE_TRY_BLOCK
- [x] :458 GENERIC_CATCH
- [x] :692 LARGE_TRY_BLOCK
- [x] :726 GENERIC_CATCH
- [x] :766 LARGE_TRY_BLOCK
- [x] :794 GENERIC_CATCH
- [x] :1375 GENERIC_CATCH
- [x] :1390 GENERIC_CATCH

## src/services/worker/BranchManager.ts (5 issues)
- [x] :121 LARGE_TRY_BLOCK
- [x] :139 GENERIC_CATCH
- [x] :244 GENERIC_CATCH
- [x] :269 LARGE_TRY_BLOCK
- [x] :301 GENERIC_CATCH

## src/services/worker/SettingsManager.ts (1 issue)
- [x] :45 GENERIC_CATCH

## src/services/worker/PaginationHelper.ts (1 issue)
- [x] :57 GENERIC_CATCH

## src/services/worker/knowledge/KnowledgeAgent.ts (4 issues)
- [x] :94 GENERIC_CATCH
- [x] :133 GENERIC_CATCH
- [x] :206 GENERIC_CATCH
- [x] :261 GENERIC_CATCH

## src/services/worker/knowledge/CorpusStore.ts (2 issues)
- [x] :48 GENERIC_CATCH
- [x] :75 GENERIC_CATCH

## src/services/worker/knowledge/CorpusBuilder.ts (1 issue)
- [x] :26 NO_LOGGING_IN_CATCH

## src/services/worker/http/BaseRouteHandler.ts (1 issue)
- [x] :29 GENERIC_CATCH

## src/services/worker/http/routes/SearchRoutes.ts (2 issues)
- [x] :272 LARGE_TRY_BLOCK
- [x] :297 GENERIC_CATCH

## src/services/worker/http/routes/SettingsRoutes.ts (1 issue)
- [x] :76 GENERIC_CATCH

## src/services/worker/http/routes/SessionRoutes.ts (5 issues)
- [x] :223 PROMISE_CATCH_NO_LOGGING
- [x] :259 GENERIC_CATCH
- [x] :288 LARGE_TRY_BLOCK
- [x] :589 LARGE_TRY_BLOCK
- [x] :643 GENERIC_CATCH

## src/services/worker/http/routes/CorpusRoutes.ts (1 issue)
- [x] :96 NO_LOGGING_IN_CATCH

## src/services/worker/http/routes/ViewerRoutes.ts (1 issue)
- [x] :74 NO_LOGGING_IN_CATCH

## src/services/worker/search/strategies/ChromaSearchStrategy.ts (2 issues)
- [x] :66 LARGE_TRY_BLOCK
- [x] :140 GENERIC_CATCH

## src/services/worker/search/strategies/HybridSearchStrategy.ts (6 issues)
- [x] :71 LARGE_TRY_BLOCK
- [x] :113 GENERIC_CATCH
- [x] :137 LARGE_TRY_BLOCK
- [x] :178 GENERIC_CATCH
- [x] :204 LARGE_TRY_BLOCK
- [x] :244 GENERIC_CATCH

## src/services/worker/search/strategies/SQLiteSearchStrategy.ts (2 issues)
- [x] :67 LARGE_TRY_BLOCK
- [x] :99 GENERIC_CATCH

## src/services/queue/SessionQueueProcessor.ts (2 issues)
- [x] :37 LARGE_TRY_BLOCK
- [x] :67 GENERIC_CATCH

## src/services/sync/ChromaMcpManager.ts (6 issues)
- [x] :79 GENERIC_CATCH
- [x] :310 NO_LOGGING_IN_CATCH
- [x] :325 NO_LOGGING_IN_CATCH
- [x] :344 GENERIC_CATCH
- [x] :397 NO_LOGGING_IN_CATCH
- [x] :411 NO_LOGGING_IN_CATCH

## src/services/sync/ChromaSync.ts (5 issues)
- [x] :565 LARGE_TRY_BLOCK
- [x] :731 LARGE_TRY_BLOCK
- [x] :788 ERROR_STRING_MATCHING
- [x] :789 ERROR_STRING_MATCHING
- [x] :828 GENERIC_CATCH

## src/services/context/ContextBuilder.ts (1 issue)
- [x] :52 GENERIC_CATCH

## src/services/context/ObservationCompiler.ts (2 issues)
- [x] :228 LARGE_TRY_BLOCK
- [x] :248 GENERIC_CATCH

## src/services/server/Server.ts (3 issues)
- [x] :211 LARGE_TRY_BLOCK
- [x] :235 NO_LOGGING_IN_CATCH
- [x] :235 GENERIC_CATCH

## src/services/worker-spawner.ts (1 issue)
- [x] :56 NO_LOGGING_IN_CATCH

## src/services/smart-file-read/search.ts (2 issues)
- [x] :81 NO_LOGGING_IN_CATCH
- [x] :117 NO_LOGGING_IN_CATCH

## src/services/smart-file-read/parser.ts (5 issues)
- [x] :162 NO_LOGGING_IN_CATCH
- [x] :277 NO_LOGGING_IN_CATCH
- [x] :284 NO_LOGGING_IN_CATCH
- [x] :553 NO_LOGGING_IN_CATCH
- [x] :588 NO_LOGGING_IN_CATCH

## src/services/sqlite/migrations/runner.ts (4 issues)
- [x] :421 LARGE_TRY_BLOCK
- [x] :449 GENERIC_CATCH
- [x] :661 LARGE_TRY_BLOCK
- [x] :817 GENERIC_CATCH

## src/services/sqlite/migrations.ts (1 issue)
- [x] :381 NO_LOGGING_IN_CATCH

## src/services/sqlite/observations/files.ts (1 issue)
- [x] :20 NO_LOGGING_IN_CATCH

## src/services/sqlite/timeline/queries.ts (2 issues)
- [x] :114 GENERIC_CATCH
- [x] :146 GENERIC_CATCH

## src/services/sqlite/SessionSearch.ts (5 issues)
- [x] :77 LARGE_TRY_BLOCK
- [x] :161 GENERIC_CATCH
- [x] :176 NO_LOGGING_IN_CATCH
- [x] :384 NO_LOGGING_IN_CATCH
- [x] :402 NO_LOGGING_IN_CATCH

## src/services/transcripts/watcher.ts (4 issues)
- [x] :46 NO_LOGGING_IN_CATCH
- [x] :155 NO_LOGGING_IN_CATCH
- [x] :183 NO_LOGGING_IN_CATCH
- [x] :219 GENERIC_CATCH

## src/services/transcripts/processor.ts (3 issues)
- [x] :280 NO_LOGGING_IN_CATCH
- [x] :325 LARGE_TRY_BLOCK
- [x] :355 LARGE_TRY_BLOCK

## src/services/transcripts/field-utils.ts (1 issue)
- [x] :145 NO_LOGGING_IN_CATCH

## src/services/integrations/CursorHooksInstaller.ts (11 issues)
- [x] :118 GENERIC_CATCH
- [x] :260 GENERIC_CATCH
- [x] :311 LARGE_TRY_BLOCK
- [x] :381 GENERIC_CATCH
- [x] :402 LARGE_TRY_BLOCK
- [x] :419 GENERIC_CATCH
- [x] :459 LARGE_TRY_BLOCK
- [x] :503 GENERIC_CATCH
- [x] :538 LARGE_TRY_BLOCK
- [x] :565 NO_LOGGING_IN_CATCH
- [x] :602 GENERIC_CATCH

## src/services/integrations/GeminiCliHooksInstaller.ts (6 issues)
- [x] :164 GENERIC_CATCH
- [x] :289 LARGE_TRY_BLOCK
- [x] :334 GENERIC_CATCH
- [x] :350 LARGE_TRY_BLOCK
- [x] :403 GENERIC_CATCH
- [x] :427 NO_LOGGING_IN_CATCH
- [x] :427 GENERIC_CATCH

## src/services/integrations/OpenCodeInstaller.ts (3 issues)
- [x] :166 LARGE_TRY_BLOCK
- [x] :214 LARGE_TRY_BLOCK
- [x] :312 LARGE_TRY_BLOCK

## src/services/integrations/OpenClawInstaller.ts (2 issues)
- [x] :149 NO_LOGGING_IN_CATCH
- [x] :253 LARGE_TRY_BLOCK

## src/services/integrations/WindsurfHooksInstaller.ts (13 issues)
- [x] :88 GENERIC_CATCH
- [x] :152 GENERIC_CATCH
- [x] :237 GENERIC_CATCH
- [x] :289 LARGE_TRY_BLOCK
- [x] :321 GENERIC_CATCH
- [x] :337 LARGE_TRY_BLOCK
- [x] :352 GENERIC_CATCH
- [x] :386 LARGE_TRY_BLOCK
- [x] :409 NO_LOGGING_IN_CATCH
- [x] :409 GENERIC_CATCH
- [x] :448 LARGE_TRY_BLOCK
- [x] :459 NO_LOGGING_IN_CATCH

## src/services/integrations/McpIntegrations.ts (4 issues)
- [x] :108 LARGE_TRY_BLOCK
- [x] :148 GENERIC_CATCH
- [x] :277 LARGE_TRY_BLOCK
- [x] :337 GENERIC_CATCH

## src/services/integrations/CodexCliInstaller.ts (9 issues)
- [x] :69 GENERIC_CATCH
- [x] :138 LARGE_TRY_BLOCK
- [x] :161 GENERIC_CATCH
- [x] :187 LARGE_TRY_BLOCK
- [x] :216 GENERIC_CATCH
- [x] :237 LARGE_TRY_BLOCK
- [x] :265 GENERIC_CATCH
- [x] :291 LARGE_TRY_BLOCK
- [x] :337 NO_LOGGING_IN_CATCH

## src/services/domain/ModeManager.ts (3 issues)
- [x] :146 GENERIC_CATCH
- [x] :163 GENERIC_CATCH
- [x] :173 GENERIC_CATCH

## src/supervisor/process-registry.ts (5 issues)
- [x] :35 NO_LOGGING_IN_CATCH
- [x] :35 GENERIC_CATCH
- [x] :68 GENERIC_CATCH
- [x] :170 GENERIC_CATCH
- [x] :197 GENERIC_CATCH

## src/supervisor/shutdown.ts (6 issues)
- [x] :38 GENERIC_CATCH
- [x] :52 GENERIC_CATCH
- [x] :71 GENERIC_CATCH
- [x] :94 GENERIC_CATCH
- [x] :139 GENERIC_CATCH
- [x] :154 NO_LOGGING_IN_CATCH

## src/supervisor/index.ts (2 issues)
- [x] :72 GENERIC_CATCH
- [x] :164 GENERIC_CATCH
|
||||
## src/cli/hook-command.ts (1 issue)
|
||||
- [x] :75 LARGE_TRY_BLOCK
|
||||
|
||||
## src/cli/stdin-reader.ts (4 issues)
|
||||
- [x] :32 NO_LOGGING_IN_CATCH
|
||||
- [x] :52 NO_LOGGING_IN_CATCH
|
||||
- [x] :131 LARGE_TRY_BLOCK
|
||||
- [x] :170 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/cli/claude-md-commands.ts (12 issues)
|
||||
- [x] :79 LARGE_TRY_BLOCK
|
||||
- [x] :97 GENERIC_CATCH
|
||||
- [x] :144 NO_LOGGING_IN_CATCH
|
||||
- [x] :190 NO_LOGGING_IN_CATCH
|
||||
- [x] :203 NO_LOGGING_IN_CATCH
|
||||
- [x] :319 LARGE_TRY_BLOCK
|
||||
- [x] :345 NO_LOGGING_IN_CATCH
|
||||
- [x] :345 GENERIC_CATCH
|
||||
- [x] :357 LARGE_TRY_BLOCK
|
||||
- [x] :430 GENERIC_CATCH
|
||||
- [x] :508 LARGE_TRY_BLOCK
|
||||
- [x] :525 GENERIC_CATCH
|
||||
|
||||
## src/cli/handlers/session-complete.ts (2 issues)
|
||||
- [x] :38 LARGE_TRY_BLOCK
|
||||
- [x] :58 GENERIC_CATCH
|
||||
|
||||
## src/cli/handlers/user-message.ts (1 issue)
|
||||
- [x] :28 LARGE_TRY_BLOCK
|
||||
|
||||
## src/cli/handlers/context.ts (1 issue)
|
||||
- [x] :48 LARGE_TRY_BLOCK
|
||||
|
||||
## src/cli/handlers/file-context.ts (3 issues)
|
||||
- [x] :202 NO_LOGGING_IN_CATCH
|
||||
- [x] :202 GENERIC_CATCH
|
||||
- [x] :221 LARGE_TRY_BLOCK
|
||||
|
||||
## src/cli/handlers/summarize.ts (1 issue)
|
||||
- [x] :111 LARGE_TRY_BLOCK
|
||||
|
||||
## src/cli/handlers/session-init.ts (1 issue)
|
||||
- [x] :134 LARGE_TRY_BLOCK
|
||||
|
||||
## src/cli/handlers/file-edit.ts (1 issue)
|
||||
- [x] :41 LARGE_TRY_BLOCK
|
||||
|
||||
## src/cli/handlers/observation.ts (1 issue)
|
||||
- [x] :50 LARGE_TRY_BLOCK
|
||||
|
||||
## src/ui/viewer/hooks/useStats.ts (1 issue)
|
||||
- [x] :13 GENERIC_CATCH
|
||||
|
||||
## src/ui/viewer/hooks/useTheme.ts (2 issues)
|
||||
- [x] :19 GENERIC_CATCH
|
||||
- [x] :64 GENERIC_CATCH
|
||||
|
||||
## src/ui/viewer/hooks/useContextPreview.ts (3 issues)
|
||||
- [x] :40 LARGE_TRY_BLOCK
|
||||
- [x] :63 GENERIC_CATCH
|
||||
- [x] :108 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/bin/import-xml-observations.ts (7 issues)
|
||||
- [x] :62 LARGE_TRY_BLOCK
|
||||
- [x] :134 LARGE_TRY_BLOCK
|
||||
- [x] :152 GENERIC_CATCH
|
||||
- [x] :167 LARGE_TRY_BLOCK
|
||||
- [x] :183 GENERIC_CATCH
|
||||
- [x] :329 GENERIC_CATCH
|
||||
- [x] :361 GENERIC_CATCH
|
||||
|
||||
## src/utils/project-filter.ts (1 issue)
|
||||
- [x] :66 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/utils/worktree.ts (2 issues)
|
||||
- [x] :41 NO_LOGGING_IN_CATCH
|
||||
- [x] :55 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/utils/claude-md-utils.ts (2 issues)
|
||||
- [x] :442 LARGE_TRY_BLOCK
|
||||
- [x] :475 GENERIC_CATCH
|
||||
|
||||
## src/utils/logger.ts (5 issues)
|
||||
- [x] :63 GENERIC_CATCH
|
||||
- [x] :87 NO_LOGGING_IN_CATCH
|
||||
- [x] :87 GENERIC_CATCH
|
||||
- [x] :155 NO_LOGGING_IN_CATCH
|
||||
- [x] :292 GENERIC_CATCH
|
||||
|
||||
## src/utils/json-utils.ts (1 issue)
|
||||
- [x] :24 GENERIC_CATCH
|
||||
|
||||
## src/utils/agents-md-utils.ts (1 issue)
|
||||
- [x] :34 GENERIC_CATCH
|
||||
|
||||
## src/shared/timeline-formatting.ts (1 issue)
|
||||
- [x] :19 GENERIC_CATCH
|
||||
|
||||
## src/shared/plugin-state.ts (1 issue)
|
||||
- [x] :25 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/shared/worker-utils.ts (2 issues)
|
||||
- [x] :150 GENERIC_CATCH
|
||||
- [x] :179 LARGE_TRY_BLOCK
|
||||
|
||||
## src/shared/SettingsDefaultsManager.ts (2 issues)
|
||||
- [x] :224 GENERIC_CATCH
|
||||
- [x] :244 GENERIC_CATCH
|
||||
|
||||
## src/shared/EnvManager.ts (3 issues)
|
||||
- [x] :124 GENERIC_CATCH
|
||||
- [x] :134 LARGE_TRY_BLOCK
|
||||
- [x] :186 GENERIC_CATCH
|
||||
|
||||
## src/shared/paths.ts (1 issue)
|
||||
- [x] :149 GENERIC_CATCH
|
||||
|
||||
## src/sdk/prompts.ts (2 issues)
|
||||
- [x] :112 GENERIC_CATCH
|
||||
- [x] :121 GENERIC_CATCH
|
||||
|
||||
## src/npx-cli/utils/bun-resolver.ts (1 issue)
|
||||
- [x] :82 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/npx-cli/commands/install.ts (4 issues)
|
||||
- [x] :131 NO_LOGGING_IN_CATCH
|
||||
- [x] :375 NO_LOGGING_IN_CATCH
|
||||
- [x] :412 NO_LOGGING_IN_CATCH
|
||||
- [x] :501 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/npx-cli/commands/uninstall.ts (1 issue)
|
||||
- [x] :123 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/npx-cli/commands/runtime.ts (2 issues)
|
||||
- [x] :157 LARGE_TRY_BLOCK
|
||||
- [x] :177 GENERIC_CATCH
|
||||
|
||||
## src/npx-cli/commands/ide-detection.ts (2 issues)
|
||||
- [x] :41 NO_LOGGING_IN_CATCH
|
||||
- [x] :56 NO_LOGGING_IN_CATCH
|
||||
|
||||
## src/servers/mcp-server.ts (4 issues)
|
||||
- [x] :111 LARGE_TRY_BLOCK
|
||||
- [x] :156 LARGE_TRY_BLOCK
|
||||
- [x] :198 GENERIC_CATCH
|
||||
- [x] :232 GENERIC_CATCH
|
||||
|
||||
## src/integrations/opencode-plugin/index.ts (3 issues)
|
||||
- [x] :108 LARGE_TRY_BLOCK
|
||||
- [x] :342 LARGE_TRY_BLOCK
|
||||
- [x] :357 NO_LOGGING_IN_CATCH
|
||||
+125
@@ -4,6 +4,131 @@ All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

## [12.3.1]

This release resolves error handling anti-patterns across the entire codebase (91 files), improving resilience and correctness.

### Bug Fixes

- **OpenRouterAgent**: Restored assistant replies to `conversationHistory` — multi-turn context was lost after method extraction (#2078)
- **ChromaSync**: Fixed cross-type dedup collision where `observation#N`, `session_summary#N`, and `user_prompt#N` could silently drop results
- **Timeline queries**: Fixed logger calls wrapping Error inside an object instead of passing it directly
- **FTS migrations**: Preserved non-Error failure details instead of silently dropping them
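
The ChromaSync fix amounts to namespacing dedup keys by document type so that id spaces can never collide. A minimal sketch under assumed names — `dedupKey` and `dedupe` are illustrative, not the actual ChromaSync API:

```typescript
// Keying by bare numeric id lets an observation and a session summary with
// the same id collide; prefixing the key with the doc type keeps the key
// spaces disjoint.
type DocType = "observation" | "session_summary" | "user_prompt";

function dedupKey(type: DocType, id: number): string {
  return `${type}#${id}`;
}

function dedupe(docs: Array<{ type: DocType; id: number }>) {
  const seen = new Set<string>();
  return docs.filter((d) => {
    const key = dedupKey(d.type, d.id);
    if (seen.has(key)) return false; // genuine duplicate: same type AND id
    seen.add(key);
    return true;
  });
}
```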

### Error Handling Improvements

- Replaced 301 error handling anti-patterns across 91 files:
  - Narrowed overly broad try-catch blocks into focused error boundaries
  - Replaced unsafe `error as Error` casts with `instanceof` checks
  - Added structured error logging where catches were previously empty
  - Extracted large try blocks into dedicated helper methods
- **Installer resilience**: Moved filesystem operations (`mkdirSync`) inside try/catch in the Cursor, Gemini CLI, Goose MCP, and OpenClaw installers to maintain numeric return-code contracts
- **GeminiCliHooksInstaller**: Install/uninstall paths now catch `readGeminiSettings()` failures instead of throwing past the `0/1` return contract
- **OpenClawInstaller**: Malformed `openclaw.json` now throws instead of silently returning `{}` and potentially wiping user config
- **WindsurfHooksInstaller**: Added null-safe parsing of `hooks.json` with optional chaining
- **McpIntegrations**: Goose YAML updater now throws when claude-mem markers exist but regex replacement fails
- **EnvManager**: Directory setup and existing-file reads are now wrapped in structured error logging
- **WorktreeAdoption**: `adoptedSqliteIds` mutation is delayed until the SQL update succeeds
- **Import script**: Guard against malformed timestamps before `toISOString()`
- **Runtime CLI**: Guard `response.json()` parsing with controlled error output

### Documentation

- Added README for the Docker claude-mem harness

## [12.3.0] - 2026-04-20

## New features

### Basic claude-mem Docker container (`docker/claude-mem/`)

A ready-to-run container for ad-hoc claude-mem testing with zero local setup beyond Docker.

- `FROM node:20`; layers pin Bun (1.3.12) + uv (0.11.7) + the built plugin
- Non-root `node` user so `--permission-mode bypassPermissions` works headlessly
- `build.sh`, `run.sh` (auto-extracts OAuth from the macOS Keychain or `~/.claude/.credentials.json`, falls back to `ANTHROPIC_API_KEY`), `entrypoint.sh`
- Persistent `.claude-mem/` mount so the observations DB survives container exit

Validated end-to-end: `PostToolUse` hook → queue → worker SDK call under subscription OAuth → `<observation>` XML → `observations` table → Chroma sync.

### SWE-bench evaluation harness (`evals/swebench/`)

Two-container split (our agent image + the upstream SWE-bench harness) for measuring claude-mem's effect on resolve rate.

- `Dockerfile.agent` → `claude-mem/swebench-agent:latest` (same non-root, version-pinned approach)
- `run-instance.sh` — two-turn ingest/fix protocol per instance; shallow clone at `base_commit` with full-clone fallback
- `run-batch.py` — parallel orchestrator with OAuth extraction, per-container naming, timeout enforcement + force-cleanup, and an `--overwrite` guard against silent truncation of partial results
- `eval.sh` — wraps `python -m swebench.harness.run_evaluation`
- `summarize.py` — aggregates per-instance reports
- `smoke-test.sh` — one-instance smoke test

### Fixes / hardening (from PR review)

- `chmod 600` on extracted OAuth creds files
- Grouped `{ chmod || true; }` so bash precedence can't mask failed `curl|sh` installs
- macOS creds: Keychain-first with file fallback for migrated / older setups
- `smoke-test.sh` `TIMEOUT` is now actually enforced via `timeout`/`gtimeout`, plus `docker rm -f` on exit 124
- Container naming + force-cleanup in `run-batch.py`'s timeout handler prevent orphan containers
- Fixed a stdin-redirection collision in the consolidated `smoke-test.sh` JSON parser
- Dropped `exec` in `run.sh` so the EXIT trap fires and cleans up the temp creds file
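
The grouping fix can be demonstrated without any installer. Here `false` stands in for a failing `curl | sh` step, and the second `false` for a chmod that is allowed to fail:

```shell
# Ungrouped: `a && b || true` parses as `(a && b) || true` (equal precedence,
# left-associative), so the install failure is swallowed along with the chmod
# failure — the step reports success.
rc=0; ( set -e; false && false || true ) || rc=$?
echo "ungrouped rc=$rc"   # rc=0 — install failure masked

# Grouped: only the chmod failure is tolerated; a failed install still fails
# the whole step.
rc=0; ( set -e; false && { false || true; } ) || rc=$?
echo "grouped rc=$rc"     # rc=1 — install failure propagates
```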

**PR:** https://github.com/thedotmack/claude-mem/pull/2076

## [12.2.3] - 2026-04-19

## Fixed

- **Parser: stop warning on normal observation responses (#2074).** Eliminated the `PARSER Summary response contained <observation> tags instead of <summary> — prompt conditioning may need strengthening` warning that fired on every normal observation turn. The warning was inherited from #1345, when `parseSummary` was only called after summary prompts; after #1633's refactor it runs on every response, so the observation-only fallthrough always tripped. Gated the entire observation-on-summary path on `coerceFromObservation` so only genuine summary-turn coercion failures log.

**Full diff:** https://github.com/thedotmack/claude-mem/compare/v12.2.2...v12.2.3

## [12.2.2] - 2026-04-19

## Subagent summary disable + labeling

Claude Code subagents (the Task tool and built-in agents like Explore/Plan/Bash) no longer trigger a session summary on Stop, and every observation row now carries the originating subagent's identity.

### Features

- **Subagent Stop hooks skip summarization.** When a hook fires inside a subagent (identified by `agent_id` on stdin), the handler short-circuits before bootstrapping the worker. Only the main assistant owns the session summary. Sessions started with `--agent` (which set `agent_type` but not `agent_id`) still own their summary.
- **Observations are labeled by subagent.** The `observations` table gains two new nullable columns — `agent_type` and `agent_id` — populated end-to-end from the hook stdin through the pending queue into storage. Main-session rows remain `NULL`. Labels survive worker restarts via matching columns on `pending_messages`.

### Safety

- Defense-in-depth guard on the worker `/api/sessions/summarize` route so direct API callers can't bypass the hook-layer short-circuit.
- `pickAgentField` type guard at the adapter edge validates the hook input: it must be a non-empty string of ≤128 characters, otherwise it is dropped.
- Content-hash dedup intentionally excludes `agent_type`/`agent_id` so the same semantic observation from a subagent and its parent merges to a single row.
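
The adapter-edge guard described above can be sketched as follows — the signature is an assumption for illustration, not the real `pickAgentField`:

```typescript
// Validate an agent_type / agent_id value coming off hook stdin.
// Anything that is not a non-empty string of at most 128 characters
// is dropped (returns undefined) rather than stored.
function pickAgentField(value: unknown): string | undefined {
  if (typeof value !== "string") return undefined; // wrong type: dropped
  if (value.length === 0 || value.length > 128) return undefined; // empty or oversized: dropped
  return value;
}
```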

### Schema

- Migration 010 (version 27) adds the two columns to `observations` and `pending_messages`, plus indexes on `observations.agent_type` and `observations.agent_id`. Idempotent, with state-aware logging.

### Tests

- 17 new unit tests: adapter extraction (length-cap boundary, empty-string rejection, type guards), handler short-circuit behavior, and DB-level labeling and dedup invariants.

PR: #2073

## [12.2.1] - 2026-04-19

## What's Fixed

### Break infinite summary-retry loop (#1633)

When the summary agent returned `<observation>` tags instead of `<summary>` tags, the parser rejected the response, no summary was stored, the session completed without a summary, and a new session was spawned with ~5–6 KB of extra prompt context — repeating indefinitely.

**Three layers of defense (PR #2072):**

- **Parser coercion** — when a summary is expected, observation fields are mapped to summary fields (title → request/completed, narrative → investigated, facts → learned) instead of discarding the response.
- **Stronger prompt** — summary prompts now include an explicit tag-requirement block and a closing reminder so the LLM is much less likely to emit observation tags in the first place.
- **Circuit breaker** — a per-session counter caps consecutive summary failures at 3; further summarize requests are skipped until a success resets it. Explicit `<skip_summary/>` responses are treated as neutral, not failures.
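
The circuit-breaker layer reduces to a small per-session counter. A sketch — class and method names are assumptions, not the real implementation:

```typescript
const MAX_CONSECUTIVE_FAILURES = 3;

class SummaryBreaker {
  private failures = new Map<string, number>();

  shouldAttempt(sessionId: string): boolean {
    return (this.failures.get(sessionId) ?? 0) < MAX_CONSECUTIVE_FAILURES;
  }

  recordFailure(sessionId: string): void {
    this.failures.set(sessionId, (this.failures.get(sessionId) ?? 0) + 1);
  }

  recordSuccess(sessionId: string): void {
    this.failures.delete(sessionId); // a success resets the counter
  }

  // <skip_summary/> is neutral: neither a failure nor a reset.
  recordSkip(_sessionId: string): void {}
}
```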

**Edge cases handled:**

- Empty leading `<observation>` blocks fall through to the first populated one.
- Empty `<summary></summary>` wrappers fall back to observation coercion.
- Multiple observation blocks are iterated via a global regex.
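
The iteration strategy behind those edge cases can be sketched as (an assumed shape, not the actual parser code):

```typescript
// Walk every <observation> block with a global regex and take the first one
// with non-empty content, so empty leading blocks fall through.
const OBSERVATION_RE = /<observation>([\s\S]*?)<\/observation>/g;

function firstPopulatedObservation(response: string): string | undefined {
  for (const match of response.matchAll(OBSERVATION_RE)) {
    const body = match[1].trim();
    if (body.length > 0) return body; // skip empty leading blocks
  }
  return undefined; // no populated block found
}
```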

Full details: #2072

## [12.2.0] - 2026-04-18

## Highlights
@@ -0,0 +1,93 @@
# Basic claude-mem container for ad-hoc testing.
#
# Base layout mirrors anthropics/claude-code .devcontainer
# (https://github.com/anthropics/claude-code/blob/main/.devcontainer/Dockerfile):
# FROM node:20, non-root `node` user, global npm install of @anthropic-ai/claude-code.
# We skip the firewall/zsh/fzf/delta/git-hist noise since this image is for
# exercising claude-mem, not a full dev environment.
#
# On top of that base we install:
#   - Bun (claude-mem worker service runtime)
#   - uv (provides Python for Chroma per CLAUDE.md)
#   - The locally-built plugin/ tree at /opt/claude-mem
#
# Usage:
#   docker build -f docker/claude-mem/Dockerfile -t claude-mem:basic .
#   docker run --rm -it \
#     -v "$(mktemp -d)":/home/node/.claude-mem \
#     -e CLAUDE_MEM_CREDENTIALS_FILE=/auth/.credentials.json \
#     -v /path/to/extracted/creds.json:/auth/.credentials.json:ro \
#     claude-mem:basic

FROM node:20

ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        git \
        curl \
        ca-certificates \
        unzip \
        jq \
        less \
        procps \
        uuid-runtime \
        sqlite3 \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

# Bun — system-wide so the unprivileged `node` user can execute it.
# Pin via --build-arg BUN_VERSION=X.Y.Z; the default is the version verified at PR time.
ARG BUN_VERSION=1.3.12
ENV BUN_INSTALL="/usr/local/bun"
RUN curl -fsSL https://bun.sh/install | bash -s "bun-v${BUN_VERSION}" \
    && chmod -R a+rX /usr/local/bun
ENV PATH="/usr/local/bun/bin:${PATH}"

# uv — system-wide, for Chroma's Python runtime. Pin via --build-arg UV_VERSION=X.Y.Z.
# Versioned installer URL per https://docs.astral.sh/uv/getting-started/installation/.
ARG UV_VERSION=0.11.7
ENV UV_INSTALL_DIR="/usr/local/bin"
# `&&` and `||` have equal precedence and associate left, so the previous form
# (`curl|sh && chmod ... || true`) parsed as `(curl|sh && chmod) || true` and
# let a failed `curl|sh` install slip by silently. Group the chmod so the
# tolerated failure is scoped to the perms fix only.
RUN set -eux \
    && curl -LsSf "https://astral.sh/uv/${UV_VERSION}/install.sh" | sh \
    && { chmod a+rX /usr/local/bin/uv /usr/local/bin/uvx 2>/dev/null || true; }

# Match the upstream devcontainer's npm-global prefix so `npm install -g`
# targets a dir the `node` user owns.
RUN mkdir -p /usr/local/share/npm-global \
    && chown -R node:node /usr/local/share/npm-global
ENV NPM_CONFIG_PREFIX=/usr/local/share/npm-global
ENV PATH="/usr/local/share/npm-global/bin:${PATH}"

# Claude Code CLI. Override at build time with --build-arg CLAUDE_CODE_VERSION=X.Y.Z
# to pin; the default tracks latest.
ARG CLAUDE_CODE_VERSION=latest
USER node
RUN npm install -g @anthropic-ai/claude-code@${CLAUDE_CODE_VERSION}

# Locally-built claude-mem plugin. COPY runs as root by default and layers are
# cached, so this comes after the npm install: iterating on the plugin doesn't
# invalidate the CLI install layer.
USER root
COPY plugin/ /opt/claude-mem/
RUN chown -R node:node /opt/claude-mem

# Persistent mount points for ad-hoc testing — mount a host dir at either of
# these to inspect the claude-mem DB after a session.
RUN mkdir -p /home/node/.claude /home/node/.claude-mem \
    && chown -R node:node /home/node/.claude /home/node/.claude-mem

USER node
WORKDIR /home/node

# Helper entrypoint: copies OAuth creds out of the read-only mount into
# $HOME/.claude/ before exec'ing whatever you asked for. Saves the
# "cp + chmod" dance every time you drop in.
COPY --chown=node:node docker/claude-mem/entrypoint.sh /usr/local/bin/claude-mem-entrypoint
RUN chmod +x /usr/local/bin/claude-mem-entrypoint

ENTRYPOINT ["/usr/local/bin/claude-mem-entrypoint"]
CMD ["bash"]
@@ -0,0 +1,135 @@
# claude-mem Docker harness

A minimal container for exercising claude-mem end-to-end without polluting your
host. Not a dev environment — just enough to boot `claude` with the locally-built
plugin and capture observations into a throwaway SQLite DB you can inspect
afterwards.

## Files

| File | Purpose |
|------|---------|
| `Dockerfile` | Image definition (node:20 + Bun + uv + Claude Code CLI + local `plugin/`) |
| `build.sh` | Runs `npm run build` then `docker build`. Tag defaults to `claude-mem:basic`. |
| `entrypoint.sh` | Runs inside the container. Seeds OAuth creds into `$HOME/.claude/` if mounted, then `exec "$@"`. |
| `run.sh` | Host-side launcher. Extracts creds (Keychain → file → env), mounts a persistent data dir, drops you into an interactive shell. |

## Quick start

```bash
# From the repo root:
docker/claude-mem/build.sh
docker/claude-mem/run.sh
```

`run.sh` drops you into `bash` inside the container with `claude` on `PATH` and
the plugin pre-staged at `/opt/claude-mem`. Launch it with:

```bash
claude --plugin-dir /opt/claude-mem
```

On exit, the SQLite DB survives at `./.docker-claude-mem-data/claude-mem.db` on
the host — inspect it with:

```bash
sqlite3 .docker-claude-mem-data/claude-mem.db 'select count(*) from observations'
```

## What's in the image

Mirrors the layout of [anthropics/claude-code's devcontainer](https://github.com/anthropics/claude-code/blob/main/.devcontainer/Dockerfile):
`FROM node:20`, non-root `node` user, global `npm install -g @anthropic-ai/claude-code`.
It skips the firewall/zsh/fzf/delta/git-hist tooling since this image is about
running claude-mem, not editing code.

On top of that:

- **Bun** (`/usr/local/bun`) — claude-mem's worker service runtime
- **uv** (`/usr/local/bin/uv`) — provides Python for Chroma per `CLAUDE.md`
- **`plugin/`** copied to `/opt/claude-mem` — the locally-built plugin tree
- **`/home/node/.claude`** and **`/home/node/.claude-mem`** — pre-created mount points

Layer ordering is deliberate: plugin files are copied **after** the `npm install`
layer so iterating on the plugin doesn't bust the CLI install cache.

## Pinning versions

Everything that matters is a `--build-arg` — pin for reproducibility, omit for
latest:

```bash
docker build \
  -f docker/claude-mem/Dockerfile \
  --build-arg BUN_VERSION=1.3.12 \
  --build-arg UV_VERSION=0.11.7 \
  --build-arg CLAUDE_CODE_VERSION=1.2.3 \
  -t claude-mem:basic .
```

| Arg | Default | Notes |
|-----|---------|-------|
| `BUN_VERSION` | `1.3.12` | Installed via the official `bun.sh/install` script, tag `bun-v${BUN_VERSION}`. |
| `UV_VERSION` | `0.11.7` | Installed via the versioned `astral.sh/uv/${UV_VERSION}/install.sh`. |
| `CLAUDE_CODE_VERSION` | `latest` | npm tag or exact version. Pin in CI, let it float locally. |

## Authentication

`run.sh` picks the first auth source that works, in this order:

1. **`ANTHROPIC_API_KEY`** env var — passed straight into the container.
2. **macOS Keychain** — `security find-generic-password -s 'Claude Code-credentials'`.
3. **`~/.claude/.credentials.json`** — the legacy on-disk form, still present on
   some older CLI installs and migrated machines.

If a credentials file is used, it's written to a `mktemp` file with `chmod 600`,
mounted read-only at `/auth/.credentials.json`, and the container's entrypoint
copies it to `$HOME/.claude/.credentials.json` before exec. An `EXIT` trap
deletes the temp file when `run.sh` returns — `docker run` is deliberately **not**
`exec`'d so the trap gets a chance to fire.
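
The trap-vs-`exec` distinction is easy to demonstrate in isolation: a process whose image is replaced by `exec` never runs its `EXIT` trap:

```shell
# Child case: the subshell runs `true`, then exits normally — its EXIT trap fires.
( trap 'echo "trap: child case"' EXIT; true )

# exec case: `exec true` replaces the subshell's process image, so the trap
# (and any cleanup it would have done) never runs. Prints nothing.
( trap 'echo "trap: exec case"' EXIT; exec true )
```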

If no auth source is found, `run.sh` exits with an error pointing you at
`claude login` or `ANTHROPIC_API_KEY`.

## Manual invocation (without `run.sh`)

```bash
docker run --rm -it \
  -v "$(mktemp -d)":/home/node/.claude-mem \
  -e CLAUDE_MEM_CREDENTIALS_FILE=/auth/.credentials.json \
  -v /path/to/creds.json:/auth/.credentials.json:ro \
  claude-mem:basic
```

Or with API-key auth:

```bash
docker run --rm -it \
  -v "$(mktemp -d)":/home/node/.claude-mem \
  -e ANTHROPIC_API_KEY \
  claude-mem:basic
```

## Environment variables

| Var | Where | Purpose |
|-----|-------|---------|
| `TAG` | `build.sh`, `run.sh` | Override the image tag (default `claude-mem:basic`). |
| `HOST_MEM_DIR` | `run.sh` | Override the host path for the persistent `.claude-mem` volume (default `$REPO_ROOT/.docker-claude-mem-data`). |
| `ANTHROPIC_API_KEY` | `run.sh`, entrypoint | API-key auth. Skips the OAuth creds extraction. |
| `CLAUDE_MEM_CREDENTIALS_FILE` | entrypoint | Path (inside the container) to a mounted OAuth creds JSON. Copied to `$HOME/.claude/.credentials.json` at startup. |

## Passing args through

Anything after `run.sh` is forwarded to the container as the command:

```bash
docker/claude-mem/run.sh claude --plugin-dir /opt/claude-mem --print "what did we learn yesterday?"
```

## Cleanup

```bash
rm -rf .docker-claude-mem-data   # wipes the persistent DB + Chroma store
docker rmi claude-mem:basic      # removes the image
```
Executable
+24
@@ -0,0 +1,24 @@
#!/usr/bin/env bash
# Build the basic claude-mem Docker image from the current worktree.
#
# Usage:
#   docker/claude-mem/build.sh             # builds claude-mem:basic
#   TAG=my-tag docker/claude-mem/build.sh  # override the tag
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
TAG="${TAG:-claude-mem:basic}"

cd "$REPO_ROOT"

echo "[build] npm run build"
npm run build

echo "[build] docker build -t $TAG"
docker build \
  -f docker/claude-mem/Dockerfile \
  -t "$TAG" \
  "$REPO_ROOT"

echo "[build] done: $TAG"
Executable
+28
@@ -0,0 +1,28 @@
#!/usr/bin/env bash
# Entrypoint for the basic claude-mem container. Seeds OAuth creds if a
# credentials file is mounted, then exec's whatever was passed (default: bash).
#
# Env vars:
#   CLAUDE_MEM_CREDENTIALS_FILE  Path to a mounted OAuth credentials JSON file
#                                (e.g. /auth/.credentials.json). Copied into
#                                $HOME/.claude/.credentials.json at startup.
#   ANTHROPIC_API_KEY            Standard API-key auth; set when OAuth isn't used.

set -euo pipefail

mkdir -p "$HOME/.claude" "$HOME/.claude-mem"

if [[ -n "${CLAUDE_MEM_CREDENTIALS_FILE:-}" ]]; then
  if [[ ! -f "$CLAUDE_MEM_CREDENTIALS_FILE" ]]; then
    echo "ERROR: CLAUDE_MEM_CREDENTIALS_FILE set but file missing: $CLAUDE_MEM_CREDENTIALS_FILE" >&2
    exit 1
  fi
  cp "$CLAUDE_MEM_CREDENTIALS_FILE" "$HOME/.claude/.credentials.json"
  chmod 600 "$HOME/.claude/.credentials.json"
fi

# Make Bun and the npm-global bin dir visible regardless of shell profile.
# We deliberately don't wrap the command — `exec "$@"` lets you run anything.
export PATH="/usr/local/bun/bin:/usr/local/share/npm-global/bin:$PATH"

exec "$@"
Executable
+69
@@ -0,0 +1,69 @@
#!/usr/bin/env bash
# Drop into an interactive claude-mem container with OAuth creds + a persistent
# memory volume. For ad-hoc testing / poking around.
#
# Usage:
#   docker/claude-mem/run.sh
#   docker/claude-mem/run.sh claude --plugin-dir /opt/claude-mem --print "hi"
#
# On exit, the mounted .claude-mem/ dir on the host survives so you can inspect
# the DB: `sqlite3 <HOST_MEM_DIR>/claude-mem.db 'select count(*) from observations'`.
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
TAG="${TAG:-claude-mem:basic}"

HOST_MEM_DIR="${HOST_MEM_DIR:-$REPO_ROOT/.docker-claude-mem-data}"
mkdir -p "$HOST_MEM_DIR"
echo "[run] host .claude-mem dir: $HOST_MEM_DIR" >&2

# Auth: ANTHROPIC_API_KEY wins when set; otherwise extract OAuth creds from
# the macOS Keychain, falling back to the on-disk credentials file.
CREDS_FILE=""
CREDS_MOUNT_ARGS=()
if [[ -z "${ANTHROPIC_API_KEY:-}" ]]; then
  CREDS_FILE="$(mktemp -t claude-mem-creds.XXXXXX.json)"
  trap 'rm -f "$CREDS_FILE"' EXIT

  # Try the macOS Keychain first (primary storage on Darwin), then fall back
  # to the on-disk credentials file — some macOS setups (older CLI versions,
  # users who migrated machines) still have the file-only form.
  creds_obtained=0
  if [[ "$(uname)" == "Darwin" ]]; then
    if security find-generic-password -s 'Claude Code-credentials' -w > "$CREDS_FILE" 2>/dev/null \
        && [[ -s "$CREDS_FILE" ]]; then
      creds_obtained=1
    fi
  fi
  if [[ "$creds_obtained" -eq 0 && -f "$HOME/.claude/.credentials.json" ]]; then
    cp "$HOME/.claude/.credentials.json" "$CREDS_FILE"
    creds_obtained=1
  fi
  if [[ "$creds_obtained" -eq 0 ]]; then
    echo "ERROR: no ANTHROPIC_API_KEY set and no Claude OAuth credentials found." >&2
    echo "       Tried: macOS Keychain ('Claude Code-credentials') and ~/.claude/.credentials.json." >&2
    echo "       Run \`claude login\` on the host first, or set ANTHROPIC_API_KEY." >&2
    exit 1
  fi
  chmod 600 "$CREDS_FILE"
  CREDS_MOUNT_ARGS=(
    -e CLAUDE_MEM_CREDENTIALS_FILE=/auth/.credentials.json
    -v "$CREDS_FILE:/auth/.credentials.json:ro"
  )
else
  CREDS_MOUNT_ARGS=(-e ANTHROPIC_API_KEY)
fi

# Pick -it only when a TTY is attached (keeps non-interactive callers working).
TTY_ARGS=()
[[ -t 0 && -t 1 ]] && TTY_ARGS=(-it)

# NOT `exec` — we want the EXIT trap above to run and remove $CREDS_FILE
# after the container exits. Running docker as a child keeps the shell
# alive long enough for the trap to fire.
docker run --rm "${TTY_ARGS[@]}" \
  "${CREDS_MOUNT_ARGS[@]}" \
  -v "$HOST_MEM_DIR:/home/node/.claude-mem" \
  "$TAG" \
  "$@"
@@ -0,0 +1,74 @@
# claude-mem SWE-bench agent image
# Plan: .claude/plans/swebench-claude-mem-docker.md (Phase 1)
#
# Produces `claude-mem/swebench-agent:latest`: Claude Code CLI 2.1.114 +
# locally-built claude-mem plugin, ready to run headlessly per SWE-bench
# instance. Auth (ANTHROPIC_API_KEY) is passed at runtime, never baked in.

FROM node:20-bookworm-slim

ENV DEBIAN_FRONTEND=noninteractive

# System dependencies:
#   git, curl, ca-certificates, unzip — base tooling (Bun installer needs unzip)
#   jq           — JSONL assembly in run-instance.sh
#   uuid-runtime — uuidgen for per-instance session IDs (Phase 2)
#   sqlite3      — verifies the claude-mem observations DB
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        git \
        curl \
        ca-certificates \
        unzip \
        jq \
        uuid-runtime \
        sqlite3 \
    && rm -rf /var/lib/apt/lists/*

# Bun (claude-mem worker service runs under Bun). Installed to a system
# location so the non-root runtime user can execute it.
ENV BUN_INSTALL="/usr/local/bun"
RUN curl -fsSL https://bun.sh/install | bash \
    && chmod -R a+rX /usr/local/bun
ENV PATH="/usr/local/bun/bin:${PATH}"

# uv (provides Python for Chroma per CLAUDE.md). Installed to a system
# location, same reason.
ENV UV_INSTALL_DIR="/usr/local/bin"
# Group the chmod so the trailing `|| true` only absorbs chmod failures. In
# shell, `&&` and `||` have equal precedence and associate left to right, so
# an ungrouped trailing `|| true` would also swallow a failure from the
# `curl|sh` install step, silently masking it.
RUN set -eux \
    && curl -LsSf https://astral.sh/uv/install.sh | sh \
    && { chmod a+rX /usr/local/bin/uv /usr/local/bin/uvx 2>/dev/null || true; }

# Claude Code CLI — PINNED to the version whose flag surface was verified in
# the plan (Phase 0). Do NOT bump without re-verifying flags.
RUN npm install -g @anthropic-ai/claude-code@2.1.114

# Locally-built claude-mem plugin. The build-agent-image.sh wrapper runs
# `npm run build` before `docker build`, so plugin/ is populated in the build
# context. We do NOT install claude-mem from npm — we want the current
# worktree under test.
COPY plugin/ /opt/claude-mem/

# Runner script — entrypoint for per-instance invocation (Phase 2 deliverable).
COPY evals/swebench/run-instance.sh /evals/swebench/run-instance.sh
RUN chmod +x /evals/swebench/run-instance.sh

# Pre-create per-instance config dirs. run-instance.sh overrides HOME to a
# scratch dir for isolation, but having these present keeps tools from
# bailing if they probe the default locations before HOME is set.
RUN mkdir -p /root/.claude /root/.claude-mem

# Non-root user. Claude Code refuses `--dangerously-skip-permissions` /
# `--permission-mode bypassPermissions` when euid==0 as a safety rail, so we
# need an unprivileged user for headless batch runs. node:20 already ships a
# `node` user at uid 1000 — reuse it.
RUN mkdir -p /home/node/.claude /home/node/.claude-mem \
    && chown -R node:node /home/node /opt/claude-mem

USER node
WORKDIR /home/node

ENTRYPOINT ["/evals/swebench/run-instance.sh"]
Executable
+20
@@ -0,0 +1,20 @@
#!/usr/bin/env bash
# Build the claude-mem SWE-bench agent image.
# Plan: .claude/plans/swebench-claude-mem-docker.md (Phase 1, step 2)
set -euo pipefail

# Resolve repo root (two levels up from this script: evals/swebench -> repo).
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"

cd "$REPO_ROOT"

# 1. Build the plugin so plugin/ is populated for the COPY step in the Dockerfile.
npm run build

# 2. Build the agent image. Context is the repo root so both plugin/ and
#    evals/swebench/run-instance.sh are reachable.
docker build \
  -f evals/swebench/Dockerfile.agent \
  -t claude-mem/swebench-agent:latest \
  .
Executable
+72
@@ -0,0 +1,72 @@
#!/usr/bin/env bash
set -euo pipefail

# eval.sh — Thin wrapper around `python -m swebench.harness.run_evaluation`.
#
# Required env:
#   RUN_ID       Identifier for this evaluation run (matches predictions dir).
# Optional env:
#   MAX_WORKERS  Parallel worker count for the harness (default: 4).
#   DATASET      HF dataset name (default: princeton-nlp/SWE-bench_Verified).
#   TIMEOUT      Per-instance timeout in seconds (default: 1800).
#
# Reports land at:
#   logs/run_evaluation/$RUN_ID/claude-opus-4-7+claude-mem/<instance_id>/report.json

: "${RUN_ID:?RUN_ID is required (e.g. RUN_ID=smoke-001)}"
MAX_WORKERS="${MAX_WORKERS:-4}"
DATASET="${DATASET:-princeton-nlp/SWE-bench_Verified}"
TIMEOUT="${TIMEOUT:-1800}"

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
cd "$REPO_ROOT"

PREDICTIONS="evals/swebench/runs/$RUN_ID/predictions.jsonl"

if [[ ! -f "$PREDICTIONS" ]]; then
  echo "ERROR: predictions file not found: $PREDICTIONS" >&2
  echo "Hint: run Phase 3 agent loop first to produce predictions.jsonl for RUN_ID=$RUN_ID." >&2
  exit 1
fi

# Harness REQUIRES Docker — fail fast with a clean message if it's not running.
if ! command -v docker >/dev/null 2>&1; then
  echo "ERROR: docker CLI not found on PATH. The SWE-bench harness requires Docker." >&2
  exit 1
fi
if ! docker info >/dev/null 2>&1; then
  echo "ERROR: Docker daemon is not running. Start Docker Desktop (or the docker service) and retry." >&2
  exit 1
fi

# Create/reuse a dedicated venv so we don't pollute the system Python.
VENV_DIR=".venv-swebench"
if [[ ! -d "$VENV_DIR" ]]; then
  echo "[eval.sh] Creating Python venv at $VENV_DIR ..."
  python3 -m venv "$VENV_DIR"
fi
# shellcheck disable=SC1091
source "$VENV_DIR/bin/activate"

echo "[eval.sh] Installing/updating swebench in $VENV_DIR ..."
pip install -q swebench

echo "[eval.sh] Running harness:"
echo "  dataset:     $DATASET"
echo "  predictions: $PREDICTIONS"
echo "  max_workers: $MAX_WORKERS"
echo "  run_id:      $RUN_ID"
echo "  timeout:     $TIMEOUT"

python -m swebench.harness.run_evaluation \
  --dataset_name "$DATASET" \
  --predictions_path "$PREDICTIONS" \
  --max_workers "$MAX_WORKERS" \
  --run_id "$RUN_ID" \
  --timeout "$TIMEOUT"

REPORTS_DIR="logs/run_evaluation/$RUN_ID/claude-opus-4-7+claude-mem"
echo ""
echo "[eval.sh] Done. Per-instance reports at:"
echo "  $REPORTS_DIR/<instance_id>/report.json"
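eval.sh hands `predictions.jsonl` straight to the upstream harness, so every non-blank line must parse as a standalone JSON object carrying the three keys that run-instance.sh and run-batch.py emit. A minimal validator sketch — `validate_predictions` is an illustrative helper, not part of this repo:

```python
import json

# Field names as emitted by the scripts in this commit.
REQUIRED_KEYS = {"instance_id", "model_patch", "model_name_or_path"}

def validate_predictions(jsonl_text: str) -> list[str]:
    """Return the instance_ids found; raise ValueError on a malformed row."""
    ids: list[str] = []
    for lineno, line in enumerate(jsonl_text.splitlines(), start=1):
        if not line.strip():
            continue  # tolerate blank lines
        row = json.loads(line)
        missing = REQUIRED_KEYS - row.keys()
        if missing:
            raise ValueError(f"line {lineno}: missing keys {sorted(missing)}")
        ids.append(row["instance_id"])
    return ids

sample = (
    '{"instance_id": "sympy__sympy-24152", "model_patch": "", '
    '"model_name_or_path": "claude-opus-4-7+claude-mem"}\n'
)
print(validate_predictions(sample))  # → ['sympy__sympy-24152']
```

Running this against a batch's output before invoking the harness catches truncated or torn rows cheaply.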
Executable
+561
@@ -0,0 +1,561 @@
#!/usr/bin/env python3
"""
Batch orchestrator for SWE-bench evaluation of Claude Code + claude-mem.

Iterates a list of SWE-bench Verified instances, launches a per-instance Docker
container (`claude-mem/swebench-agent:latest`) that runs the two-turn
ingest/fix protocol, and collects all resulting diffs into a single
`predictions.jsonl` compatible with the upstream SWE-bench harness.

Usage:
    python evals/swebench/run-batch.py \
        --run-id claude-mem-baseline-001 \
        --limit 3 \
        --max-concurrent 2

Rate-limit note: Anthropic API rate limits can bite quickly. The default
`--max-concurrent` is 4, but it is safer to START WITH 2 and raise the cap
only after observing no 429s in the logs.
"""

from __future__ import annotations

import argparse
import atexit
import json
import os
import platform
import shutil
import stat
import subprocess
import sys
import tempfile
import threading
from concurrent.futures import ThreadPoolExecutor, as_completed
from pathlib import Path
from typing import Any, Iterable

from datasets import load_dataset


# Hidden-from-agent fields per the plan. We MUST NOT pass these to the agent
# container — they are evaluator-only ground truth.
HIDDEN_AGENT_FIELDS = (
    "patch",
    "test_patch",
    "FAIL_TO_PASS",
    "PASS_TO_PASS",
    "environment_setup_commit",
    "version",
)


def extract_oauth_credentials() -> Path | None:
    """
    Extract Claude Code OAuth credentials (from a Max/Pro subscription) to a
    temp file the container can bind-mount. Returns the temp file path, or
    None if extraction failed / no creds present.

    macOS: creds live in the Keychain under service "Claude Code-credentials".
    Linux: creds live at ~/.claude/.credentials.json.

    CAVEAT: Anthropic Max/Pro subscriptions have usage limits (per ~5h window)
    and their ToS is framed around individual developer use. Running batch
    evaluation across parallel containers may exhaust the quota quickly or
    raise compliance concerns. This helper exists because the user explicitly
    requested it; the caller is responsible for the policy call.

    The token may age out mid-run; we mount read-only so refresh writes fail
    silently inside the container (the underlying token in the host
    Keychain/file is untouched).
    """
    temp = tempfile.NamedTemporaryFile(
        prefix="claude-mem-creds-",
        suffix=".json",
        delete=False,
    )
    temp_path = Path(temp.name)
    temp.close()
    # Clean up on process exit, even on crash.
    atexit.register(lambda: temp_path.unlink(missing_ok=True))

    # macOS: try Keychain first (primary storage on Darwin). On miss, fall
    # through to the on-disk credentials file — some macOS setups (older CLI,
    # migrated machines) only have the file form.
    if platform.system() == "Darwin":
        try:
            completed = subprocess.run(
                [
                    "security",
                    "find-generic-password",
                    "-s",
                    "Claude Code-credentials",
                    "-w",
                ],
                capture_output=True,
                text=True,
                check=False,
            )
            if completed.returncode == 0 and completed.stdout.strip():
                temp_path.write_text(completed.stdout.strip(), encoding="utf-8")
                temp_path.chmod(stat.S_IRUSR | stat.S_IWUSR)
                return temp_path
            # else fall through to the on-disk credentials check below
        except FileNotFoundError:
            print(
                "WARN: `security` command not available; trying on-disk creds.",
                file=sys.stderr,
            )
            # fall through to the on-disk credentials check below

    # Both platforms (and macOS fallback): read the on-disk credentials file.
    creds_file = Path.home() / ".claude" / ".credentials.json"
    if creds_file.exists():
        temp_path.write_text(creds_file.read_text(encoding="utf-8"), encoding="utf-8")
        temp_path.chmod(stat.S_IRUSR | stat.S_IWUSR)
        return temp_path

    if platform.system() == "Darwin":
        print(
            "WARN: Claude Code-credentials not found in macOS Keychain and "
            "~/.claude/.credentials.json missing. Run `claude login` on the "
            "host first, or fall back to ANTHROPIC_API_KEY.",
            file=sys.stderr,
        )
    return None


def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Run the claude-mem SWE-bench agent on a batch of instances.",
    )
    parser.add_argument(
        "--instance-ids",
        nargs="+",
        default=None,
        help="Optional explicit list of instance_ids to run.",
    )
    parser.add_argument(
        "--limit",
        type=int,
        default=None,
        help="If set, process only the first N instances after filtering.",
    )
    parser.add_argument(
        "--max-concurrent",
        type=int,
        default=4,
        help="Max concurrent agent containers (default 4; start with 2 and raise after observing no 429s).",
    )
    parser.add_argument(
        "--run-id",
        type=str,
        required=True,
        help="Run identifier; used for output paths.",
    )
    parser.add_argument(
        "--out",
        type=str,
        default=None,
        help="Path to predictions.jsonl (default: evals/swebench/runs/<run_id>/predictions.jsonl).",
    )
    parser.add_argument(
        "--timeout",
        type=int,
        default=1800,
        help="Per-instance timeout in seconds (default 1800, matches upstream harness).",
    )
    parser.add_argument(
        "--image",
        type=str,
        default="claude-mem/swebench-agent:latest",
        help="Agent Docker image tag.",
    )
    parser.add_argument(
        "--dataset",
        type=str,
        default="princeton-nlp/SWE-bench_Verified",
        help="HuggingFace dataset name (e.g. princeton-nlp/SWE-bench_Lite, default Verified).",
    )
    parser.add_argument(
        "--auth",
        choices=["oauth", "api-key", "auto"],
        default="auto",
        help=(
            "Auth mode. 'oauth' extracts Claude Max/Pro creds from host "
            "Keychain (macOS) or ~/.claude/.credentials.json (Linux). "
            "'api-key' uses ANTHROPIC_API_KEY env. 'auto' prefers oauth, "
            "falls back to api-key."
        ),
    )
    parser.add_argument(
        "--overwrite",
        action="store_true",
        help=(
            "Truncate existing predictions.jsonl for this --run-id. "
            "Without this flag, the run aborts if predictions already exist "
            "(protects partial results from accidental re-runs)."
        ),
    )
    return parser.parse_args()


def select_instances(
    dataset: Iterable[dict[str, Any]],
    instance_ids: list[str] | None,
    limit: int | None,
) -> list[dict[str, Any]]:
    """Filter dataset rows by instance_ids (if given) and apply limit."""
    rows: list[dict[str, Any]] = list(dataset)
    if instance_ids:
        wanted = set(instance_ids)
        rows = [r for r in rows if r["instance_id"] in wanted]
        missing = wanted - {r["instance_id"] for r in rows}
        if missing:
            print(
                f"WARN: {len(missing)} requested instance_ids not found in dataset: "
                f"{sorted(missing)[:5]}{'...' if len(missing) > 5 else ''}",
                file=sys.stderr,
            )
    if limit is not None:
        rows = rows[:limit]
    return rows


def append_prediction_row(
    predictions_path: Path,
    instance_id: str,
    model_patch: str,
    model_name_or_path: str,
    lock: threading.Lock,
) -> None:
    """Append one JSONL prediction row under a lock (appends are NOT atomic across threads)."""
    row = {
        "instance_id": instance_id,
        "model_patch": model_patch,
        "model_name_or_path": model_name_or_path,
    }
    line = json.dumps(row, ensure_ascii=False) + "\n"
    with lock:
        with predictions_path.open("a", encoding="utf-8") as fp:
            fp.write(line)


def copy_log_if_exists(src: Path, dst: Path) -> None:
    """Copy a log file from the shared scratch volume into the run-log directory, if present."""
    if src.exists() and src.is_file():
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)


def run_one_instance(
    instance: dict[str, Any],
    image: str,
    predictions_path: Path,
    predictions_dir: Path,
    run_dir: Path,
    timeout: int,
    predictions_lock: threading.Lock,
    model_name_or_path: str,
    oauth_creds_path: Path | None,
) -> tuple[str, str]:
    """
    Run the agent container for a single instance.

    Returns a (status, instance_id) tuple where status is one of:
    "succeeded", "failed", "timed_out".

    On ANY non-success (timeout, non-zero exit, missing diff), a prediction
    row with model_patch="" is still appended — the plan requires we never
    silently drop an instance.
    """
    instance_id: str = instance["instance_id"]
    repo: str = instance["repo"]
    base_commit: str = instance["base_commit"]
    problem_statement: str = instance["problem_statement"]

    instance_log_dir = run_dir / instance_id
    instance_log_dir.mkdir(parents=True, exist_ok=True)
    stderr_log_path = instance_log_dir / "stderr.log"

    # Per-instance scratch dir — MUST NOT be shared across containers.
    scratch_dir = Path(tempfile.mkdtemp(prefix=f"swebench-{instance_id}-"))
    problem_file = scratch_dir / "problem.txt"
    problem_file.write_text(problem_statement, encoding="utf-8")

    status: str = "failed"
    model_patch: str = ""

    # Uniquely named so the TimeoutExpired handler can kill it without racing
    # other instances on the host.
    container_name = f"swebench-agent-{instance_id}-{os.getpid()}-{threading.get_ident()}"

    try:
        # The orchestrator owns JSONL writes under `predictions_lock` to avoid
        # racy concurrent appends across containers — so we DO NOT mount the
        # predictions directory into the container. Instead, the agent writes
        # its authoritative diff to /scratch/model_patch.diff (via
        # CLAUDE_MEM_OUTPUT_DIR), plus ingest/fix logs to the same dir. The
        # 5th CLI arg to run-instance.sh is only used in standalone smoke-test
        # mode; here we point it at a throwaway path inside the container.
        cmd: list[str] = [
            "docker",
            "run",
            "--rm",
            "--name",
            container_name,
            "-e",
            "CLAUDE_MEM_OUTPUT_DIR=/scratch",
            "-v",
            f"{scratch_dir}:/scratch",
        ]
        if oauth_creds_path is not None:
            cmd += [
                "-e",
                "CLAUDE_MEM_CREDENTIALS_FILE=/auth/.credentials.json",
                "-v",
                f"{oauth_creds_path}:/auth/.credentials.json:ro",
            ]
        else:
            # Pay-per-call path.
            cmd += ["-e", "ANTHROPIC_API_KEY"]
        cmd += [
            image,
            instance_id,
            repo,
            base_commit,
            "/scratch/problem.txt",
            "/scratch/ignored-predictions.jsonl",
        ]

        try:
            completed = subprocess.run(
                cmd,
                timeout=timeout,
                capture_output=True,
                text=True,
                check=False,
            )
            # Persist stderr so post-mortem is possible even on success.
            stderr_log_path.write_text(
                f"=== STDOUT ===\n{completed.stdout}\n=== STDERR ===\n{completed.stderr}\n",
                encoding="utf-8",
            )
            if completed.returncode == 0:
                # Read the diff the agent wrote to the shared scratch volume.
                # The container appends its own prediction line to a throwaway
                # path; the authoritative row is written here from the diff
                # file the agent left in /scratch. If the agent wrote a diff
                # file, use it; otherwise fall back to an empty patch.
                diff_file = scratch_dir / "model_patch.diff"
                if diff_file.exists():
                    diff_text = diff_file.read_text(encoding="utf-8")
                    if diff_text.strip():
                        model_patch = diff_text
                        status = "succeeded"
                    else:
                        status = "failed"  # empty diff
                else:
                    # Container did not leave a diff file — treat as failure
                    # but still emit an empty-patch row below.
                    status = "failed"
            else:
                status = "failed"

        except subprocess.TimeoutExpired as exc:
            status = "timed_out"
            # subprocess.run killed the docker CLI, but the container may
            # still be running. Force-remove it by name so we don't leak
            # containers across the batch.
            subprocess.run(
                ["docker", "rm", "-f", container_name],
                capture_output=True,
                check=False,
                timeout=30,
            )
            stderr_log_path.write_text(
                f"TIMEOUT after {timeout}s (forced docker rm -f {container_name})\n"
                f"=== STDOUT (partial) ===\n{exc.stdout or ''}\n"
                f"=== STDERR (partial) ===\n{exc.stderr or ''}\n",
                encoding="utf-8",
            )

        # Copy per-turn logs left by the agent in the shared scratch volume.
        copy_log_if_exists(scratch_dir / "ingest.jsonl", instance_log_dir / "ingest.jsonl")
        copy_log_if_exists(scratch_dir / "fix.jsonl", instance_log_dir / "fix.jsonl")

        # Always write a row — never silently drop an instance.
        append_prediction_row(
            predictions_path=predictions_path,
            instance_id=instance_id,
            model_patch=model_patch,
            model_name_or_path=model_name_or_path,
            lock=predictions_lock,
        )

    except Exception as exc:  # pragma: no cover — defensive
        status = "failed"
        try:
            stderr_log_path.write_text(
                f"ORCHESTRATOR EXCEPTION: {exc!r}\n",
                encoding="utf-8",
            )
        except OSError:
            pass
        append_prediction_row(
            predictions_path=predictions_path,
            instance_id=instance_id,
            model_patch="",
            model_name_or_path=model_name_or_path,
            lock=predictions_lock,
        )
    finally:
        # Per-instance scratch must not leak across containers.
        shutil.rmtree(scratch_dir, ignore_errors=True)

    return status, instance_id


def main() -> int:
    args = parse_args()

    repo_root = Path(__file__).resolve().parents[2]
    if args.out:
        predictions_path = Path(args.out).resolve()
    else:
        predictions_path = (
            repo_root
            / "evals"
            / "swebench"
            / "runs"
            / args.run_id
            / "predictions.jsonl"
        )

    predictions_dir = predictions_path.parent
    run_dir = predictions_dir  # logs land in evals/swebench/runs/<run_id>/<instance_id>/
    predictions_dir.mkdir(parents=True, exist_ok=True)
    # Don't silently discard partial results from a prior run.
    if predictions_path.exists() and predictions_path.stat().st_size > 0:
        if not args.overwrite:
            print(
                f"ERROR: {predictions_path} already exists and is non-empty. "
                "Pass --overwrite to truncate, or pick a different --run-id.",
                file=sys.stderr,
            )
            return 1
        print(
            f"WARN: --overwrite set; truncating existing {predictions_path}",
            file=sys.stderr,
        )
        predictions_path.write_text("", encoding="utf-8")

    # Resolve auth: OAuth (Max/Pro subscription) or API key.
    oauth_creds_path: Path | None = None
    if args.auth in ("oauth", "auto"):
        oauth_creds_path = extract_oauth_credentials()
        if oauth_creds_path is not None:
            print(
                f"Auth: OAuth credentials extracted to {oauth_creds_path} "
                "(mounted read-only into each container). "
                "NOTE: Max/Pro has per-window usage limits; batch runs may exhaust them.",
                file=sys.stderr,
            )
        elif args.auth == "oauth":
            print(
                "ERROR: --auth=oauth requested but credentials extraction failed.",
                file=sys.stderr,
            )
            return 1

    if oauth_creds_path is None:
        if not os.environ.get("ANTHROPIC_API_KEY"):
            print(
                "ERROR: no auth available. Either run `claude login` on host "
                "(for OAuth) or set ANTHROPIC_API_KEY.",
                file=sys.stderr,
            )
            return 1
        print("Auth: ANTHROPIC_API_KEY (pay-per-call).", file=sys.stderr)

    print(f"Loading dataset {args.dataset} (split=test)...", file=sys.stderr)
    dataset = load_dataset(args.dataset, split="test")

    instances = select_instances(dataset, args.instance_ids, args.limit)
    total = len(instances)
    if total == 0:
        print("No instances selected; nothing to do.", file=sys.stderr)
        return 0

    # Scrub hidden-from-agent fields defensively. The agent container only
    # receives instance_id/repo/base_commit/problem_statement via CLI args +
    # the per-instance problem file — the hidden fields never leave this
    # process. This loop makes that invariant explicit.
    for row in instances:
        for key in HIDDEN_AGENT_FIELDS:
            row.pop(key, None)

    model_name_or_path = "claude-opus-4-7+claude-mem"

    print(
        f"Launching {total} instance(s) with max_concurrent={args.max_concurrent}, "
        f"timeout={args.timeout}s, image={args.image}",
        file=sys.stderr,
    )

    predictions_lock = threading.Lock()
    succeeded = 0
    failed = 0
    timed_out = 0

    with ThreadPoolExecutor(max_workers=args.max_concurrent) as executor:
        future_to_id = {
            executor.submit(
                run_one_instance,
                instance=instance,
                image=args.image,
                predictions_path=predictions_path,
                predictions_dir=predictions_dir,
                run_dir=run_dir,
                timeout=args.timeout,
                predictions_lock=predictions_lock,
                model_name_or_path=model_name_or_path,
                oauth_creds_path=oauth_creds_path,
            ): instance["instance_id"]
            for instance in instances
        }

        for future in as_completed(future_to_id):
            instance_id = future_to_id[future]
            try:
                status, _ = future.result()
            except Exception as exc:  # pragma: no cover — defensive
                status = "failed"
                print(
                    f"[{instance_id}] orchestrator future raised: {exc!r}",
                    file=sys.stderr,
                )

            if status == "succeeded":
                succeeded += 1
            elif status == "timed_out":
                timed_out += 1
            else:
                failed += 1

            print(
                f"[{instance_id}] {status} "
                f"({succeeded + failed + timed_out}/{total} done)",
                file=sys.stderr,
            )

    print(
        f"{total} total, {succeeded} succeeded, {failed} failed, {timed_out} timed out",
    )
    # Per plan: exit 0 even if some instances failed.
    return 0


if __name__ == "__main__":
    sys.exit(main())
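`append_prediction_row` above serializes writers with a `threading.Lock` rather than relying on OS append semantics, since a multi-kilobyte diff can exceed what a buffered append writes in one syscall. The pattern in isolation — `append_row` and `demo` are illustrative names, not part of this repo:

```python
import json
import threading
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def append_row(path: Path, row: dict, lock: threading.Lock) -> None:
    # Serialize the row outside the lock; hold the lock only for the file
    # write so no two workers can interleave partial lines.
    line = json.dumps(row) + "\n"
    with lock:
        with path.open("a", encoding="utf-8") as fp:
            fp.write(line)

def demo(path: Path, n: int = 20) -> int:
    lock = threading.Lock()
    with ThreadPoolExecutor(max_workers=4) as pool:
        for i in range(n):
            pool.submit(append_row, path, {"instance_id": f"inst-{i}", "model_patch": ""}, lock)
    # Every line must parse as complete JSON — no torn writes.
    return len([json.loads(line) for line in path.read_text().splitlines()])
```

Opening the file per write (instead of holding one handle) also keeps each row durable even if a later worker crashes the process.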
Executable
+177
@@ -0,0 +1,177 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# run-instance.sh — runs Claude Code + claude-mem against a single SWE-bench
|
||||
# instance using the two-turn protocol (ingest, then fix), and appends a
|
||||
# prediction JSONL row to OUT_PREDICTIONS_PATH.
|
||||
#
|
||||
# Usage:
|
||||
# run-instance.sh INSTANCE_ID REPO_SLUG BASE_COMMIT PROBLEM_STATEMENT_FILE OUT_PREDICTIONS_PATH
|
||||
#
|
||||
# Required env:
|
||||
# ANTHROPIC_API_KEY
|
||||
|
||||
if [[ $# -ne 5 ]]; then
|
||||
echo "Usage: $0 INSTANCE_ID REPO_SLUG BASE_COMMIT PROBLEM_STATEMENT_FILE OUT_PREDICTIONS_PATH" >&2
|
||||
exit 2
|
||||
fi
|
||||
|
||||
INSTANCE_ID="$1"
|
||||
REPO_SLUG="$2"
|
||||
BASE_COMMIT="$3"
|
||||
PROBLEM_STATEMENT_FILE="$4"
|
||||
OUT_PREDICTIONS_PATH="$5"
|
||||
|
||||
# Auth: either ANTHROPIC_API_KEY (pay-per-call) OR a pre-extracted OAuth
|
||||
# credentials file from a Claude Max/Pro subscription (flat-fee, but subject
|
||||
# to Anthropic's usage limits — batch-scale runs may exhaust the 5h window).
|
||||
# run-batch.py extracts OAuth creds from host Keychain/file and mounts them
|
||||
# at CLAUDE_MEM_CREDENTIALS_FILE; standalone smoke-test can do the same, or
|
||||
# set ANTHROPIC_API_KEY directly.
|
||||
if [[ -z "${ANTHROPIC_API_KEY:-}" && -z "${CLAUDE_MEM_CREDENTIALS_FILE:-}" ]]; then
|
||||
echo "ERROR: one of ANTHROPIC_API_KEY or CLAUDE_MEM_CREDENTIALS_FILE is required" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ -n "${CLAUDE_MEM_CREDENTIALS_FILE:-}" && ! -f "$CLAUDE_MEM_CREDENTIALS_FILE" ]]; then
|
||||
echo "ERROR: CLAUDE_MEM_CREDENTIALS_FILE set but file missing: $CLAUDE_MEM_CREDENTIALS_FILE" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
if [[ ! -f "$PROBLEM_STATEMENT_FILE" ]]; then
|
||||
echo "ERROR: PROBLEM_STATEMENT_FILE not found: $PROBLEM_STATEMENT_FILE" >&2
|
||||
exit 1
|
||||
fi
|
||||
|
||||
MODEL_NAME="claude-opus-4-7+claude-mem"
|
||||
|
||||
# Per-instance ephemeral scratch dir — isolates ~/.claude/ and ~/.claude-mem/.
|
||||
SCRATCH=$(mktemp -d)
|
||||
REPO_DIR="$SCRATCH/repo"
|
||||
MEM_DIR="$SCRATCH/.claude-mem"
|
||||
CLAUDE_DIR="$SCRATCH/.claude"
|
||||
mkdir -p "$MEM_DIR" "$CLAUDE_DIR"
|
||||
|
||||
# If using OAuth, seed the isolated CLAUDE_DIR with the mounted credentials
|
||||
# file so Claude Code finds them at HOME=$SCRATCH → ~/.claude/.credentials.json.
|
||||
# chmod 600 to match what `claude login` writes (it checks permissions).
|
||||
if [[ -n "${CLAUDE_MEM_CREDENTIALS_FILE:-}" ]]; then
|
||||
cp "$CLAUDE_MEM_CREDENTIALS_FILE" "$CLAUDE_DIR/.credentials.json"
|
||||
chmod 600 "$CLAUDE_DIR/.credentials.json"
|
||||
fi
|
||||
|
||||
# Directory where artifacts the batch orchestrator reads (model_patch.diff,
|
||||
# ingest.jsonl, fix.jsonl) are written. When run via `docker run -v
|
||||
# <host-scratch>:/scratch` from run-batch.py, the orchestrator sets
|
||||
# CLAUDE_MEM_OUTPUT_DIR=/scratch so these files are visible on the host. In
|
||||
# standalone/smoke-test mode the default keeps artifacts in the ephemeral
|
||||
# scratch dir alongside the repo.
|
||||
OUTPUT_DIR="${CLAUDE_MEM_OUTPUT_DIR:-$SCRATCH}"
|
||||
mkdir -p "$OUTPUT_DIR"
|
||||
|
||||
# Always write a prediction row (even on failure) so batch mode stays aligned.
|
||||
# The trap emits an empty-patch row if we exit before the success path sets
|
||||
# PREDICTION_EMITTED=1, then cleans up SCRATCH.
|
||||
DIFF_OUT="$OUTPUT_DIR/model_patch.diff"
|
||||
INGEST_LOG="$OUTPUT_DIR/ingest.jsonl"
|
||||
FIX_LOG="$OUTPUT_DIR/fix.jsonl"
|
||||
|
||||
PREDICTION_EMITTED=0
|
||||
cleanup() {
|
||||
local exit_code=$?
|
||||
if [[ "$PREDICTION_EMITTED" -ne 1 ]]; then
|
||||
# Ensure the orchestrator sees an (empty) diff file even on early exit.
|
||||
: > "$DIFF_OUT" 2>/dev/null || true
|
||||
jq -nc \
|
||||
--arg id "$INSTANCE_ID" \
|
||||
--arg patch "" \
|
||||
--arg model "$MODEL_NAME" \
|
||||
'{instance_id:$id, model_patch:$patch, model_name_or_path:$model}' \
|
||||
>> "$OUT_PREDICTIONS_PATH" || true
|
||||
fi
|
||||
rm -rf "$SCRATCH"
|
||||
exit "$exit_code"
|
||||
}
|
||||
trap cleanup EXIT
|
||||
|
||||
# Shallow clone + fetch the exact commit. Saves minutes on large repos
|
||||
# (sympy/django/scikit-learn) vs. a full-history clone. Fallback to a full
|
||||
# clone if the server rejects the by-commit fetch (GitHub supports
|
||||
# uploadpack.allowReachableSHA1InWant by default on public repos, but mirrors
|
||||
# may not).
|
||||
if ! { git clone --depth 1 --no-single-branch "https://github.com/${REPO_SLUG}.git" "$REPO_DIR" \
|
||||
&& git -C "$REPO_DIR" fetch --depth 1 origin "$BASE_COMMIT"; }; then
|
||||
echo "WARN: shallow fetch failed; falling back to full clone" >&2
|
||||
rm -rf "$REPO_DIR"
|
||||
git clone "https://github.com/${REPO_SLUG}.git" "$REPO_DIR"
|
||||
fi
|
||||
git -C "$REPO_DIR" reset --hard "$BASE_COMMIT"
|
||||
|
||||
# ---------- Turn 1: Ingest (populate memory via PostToolUse hook) ----------
|
||||
INGEST_PROMPT="Please learn about the codebase by systematically and thoroughly reading EVERY SOURCE FILE IN FULL, no matter how many there are. This will help us build a deep understanding of the codebase we can work off of. Don't worry about cost. This is critical and non-negotiable."
|
||||
|
||||
SESSION_ID=$(uuidgen | tr '[:upper:]' '[:lower:]')
|
||||
|
||||
set +e
|
||||
(
|
||||
cd "$REPO_DIR" && HOME="$SCRATCH" claude \
|
||||
--print \
|
||||
--session-id "$SESSION_ID" \
|
||||
--plugin-dir /opt/claude-mem \
|
||||
--permission-mode bypassPermissions \
|
||||
--allowedTools "Read,Glob,Grep,Bash(ls *),Bash(wc *)" \
|
||||
--max-budget-usd 5.00 \
|
||||
--output-format json \
|
||||
"$INGEST_PROMPT"
|
||||
) > "$INGEST_LOG" 2>&1
|
||||
INGEST_EXIT=$?
|
||||
set -e
|
||||
|
||||
if [[ "$INGEST_EXIT" -ne 0 ]]; then
|
||||
echo "WARN: ingest turn exited with $INGEST_EXIT; continuing to fix turn" >&2
|
||||
fi
|
||||
|
||||
# ---------- Turn 2: Fix (consume memory via mem-search slash command) ----------
PROBLEM=$(cat "$PROBLEM_STATEMENT_FILE")
QUERY=$(printf '%s' "$PROBLEM" | tr -s '[:space:]' ' ' | cut -c1-200)

FIX_PROMPT="/claude-mem:mem-search ${QUERY}

Problem statement:
${PROBLEM}

Using what you've learned from the codebase (see memory above), produce a minimal unified diff that fixes this bug. Edit files in place. Do NOT commit."

set +e
(
  cd "$REPO_DIR" && HOME="$SCRATCH" claude \
    --print \
    --resume "$SESSION_ID" \
    --plugin-dir /opt/claude-mem \
    --permission-mode bypassPermissions \
    --allowedTools "Read,Glob,Grep,Edit,Write,Bash(git *),Bash(ls *)" \
    --max-budget-usd 5.00 \
    --output-format json \
    "$FIX_PROMPT"
) > "$FIX_LOG" 2>&1
FIX_EXIT=$?
set -e

if [[ "$FIX_EXIT" -ne 0 ]]; then
  echo "WARN: fix turn exited with $FIX_EXIT; will still emit prediction row" >&2
fi

# ---------- Capture diff and emit prediction row ----------
# Write the diff to DIFF_OUT first (authoritative for the batch orchestrator),
# then read it back for the JSONL row (kept for standalone/smoke-test use).
git -C "$REPO_DIR" diff > "$DIFF_OUT" || : > "$DIFF_OUT"
DIFF=$(cat "$DIFF_OUT")

jq -nc \
  --arg id "$INSTANCE_ID" \
  --arg patch "$DIFF" \
  --arg model "$MODEL_NAME" \
  '{instance_id:$id, model_patch:$patch, model_name_or_path:$model}' \
  >> "$OUT_PREDICTIONS_PATH"

PREDICTION_EMITTED=1
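For reference, each row the jq call above appends to the predictions file is one JSON object per line in the shape the SWE-bench harness expects. A minimal Python sketch of the same row (the instance id, patch, and model name here are placeholder values, not taken from a real run):

```python
import json

# One predictions.jsonl row; the three field names mirror the jq program
# above, the values are placeholders.
row = {
    "instance_id": "sympy__sympy-24152",
    "model_patch": "diff --git a/f.py b/f.py\n...",
    "model_name_or_path": "claude-opus-4-7+claude-mem",
}
line = json.dumps(row)
print(line)
```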
Executable
+152
@@ -0,0 +1,152 @@
#!/usr/bin/env bash
set -euo pipefail

# smoke-test.sh — runs ONE SWE-bench instance end-to-end against the agent
# container using OAuth credentials extracted from the host. Use this to
# verify the two-turn protocol + /claude-mem:mem-search slash resolution
# before kicking off a batch run.
#
# Usage:
#   evals/swebench/smoke-test.sh [INSTANCE_ID]
#
# Defaults to sympy__sympy-24152 (an easy Verified instance) if no arg given.
#
# Outputs:
#   evals/swebench/runs/smoke/<INSTANCE_ID>/{ingest.jsonl,fix.jsonl,model_patch.diff}
#   evals/swebench/runs/smoke/predictions.jsonl

INSTANCE_ID="${1:-sympy__sympy-24152}"
DATASET="${DATASET:-princeton-nlp/SWE-bench_Lite}"
IMAGE="${IMAGE:-claude-mem/swebench-agent:latest}"
TIMEOUT="${TIMEOUT:-1800}"

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../.." && pwd)"
RUN_DIR="$REPO_ROOT/evals/swebench/runs/smoke/$INSTANCE_ID"
PREDICTIONS="$REPO_ROOT/evals/swebench/runs/smoke/predictions.jsonl"
mkdir -p "$RUN_DIR" "$(dirname "$PREDICTIONS")"

# --- Extract OAuth credentials ---
CREDS_FILE="$(mktemp -t claude-mem-creds.XXXXXX.json)"
trap 'rm -f "$CREDS_FILE"' EXIT

# Try macOS Keychain first (primary on Darwin), then fall through to the
# on-disk credentials file — matches docker/claude-mem/run.sh behavior.
creds_obtained=0
if [[ "$(uname)" == "Darwin" ]]; then
  if security find-generic-password -s 'Claude Code-credentials' -w > "$CREDS_FILE" 2>/dev/null \
    && [[ -s "$CREDS_FILE" ]]; then
    creds_obtained=1
  fi
fi
if [[ "$creds_obtained" -eq 0 && -f "$HOME/.claude/.credentials.json" ]]; then
  cp "$HOME/.claude/.credentials.json" "$CREDS_FILE"
  creds_obtained=1
fi
if [[ "$creds_obtained" -eq 0 ]]; then
  echo "ERROR: no Claude OAuth creds found (macOS Keychain or ~/.claude/.credentials.json)" >&2
  exit 1
fi
chmod 600 "$CREDS_FILE"

# --- Fetch instance data from HuggingFace via a small Python helper ---
INSTANCE_JSON="$(mktemp)"
trap 'rm -f "$CREDS_FILE" "$INSTANCE_JSON"' EXIT
python3 - "$INSTANCE_ID" "$DATASET" > "$INSTANCE_JSON" <<'PY'
import json, sys
from datasets import load_dataset
target = sys.argv[1]
dataset = sys.argv[2]
ds = load_dataset(dataset, split="test")
for row in ds:
    if row["instance_id"] == target:
        print(json.dumps({
            "instance_id": row["instance_id"],
            "repo": row["repo"],
            "base_commit": row["base_commit"],
            "problem_statement": row["problem_statement"],
        }))
        break
else:
    print(f"ERROR: instance {target} not found", file=sys.stderr)
    sys.exit(1)
PY

SCRATCH="$(mktemp -d -t claude-mem-smoke.XXXXXX)"
trap 'rm -f "$CREDS_FILE" "$INSTANCE_JSON"; rm -rf "$SCRATCH"' EXIT

# Parse the instance JSON once: print repo + base_commit to stdout, write the
# problem statement directly to $SCRATCH/problem.txt. INSTANCE_JSON is passed
# as argv so stdin is free for the `python3 -` heredoc script body (previously
# both were competing for stdin, which made json.load see the heredoc's EOF).
read -r REPO BASE_COMMIT < <(
  python3 - "$SCRATCH" "$INSTANCE_JSON" <<'PY'
import json, os, sys
scratch, instance_json = sys.argv[1], sys.argv[2]
with open(instance_json) as f:
    d = json.load(f)
open(os.path.join(scratch, "problem.txt"), "w").write(d["problem_statement"])
print(d["repo"], d["base_commit"])
PY
)

echo "=== Running $INSTANCE_ID ($REPO @ $BASE_COMMIT) ===" >&2
echo "Scratch: $SCRATCH" >&2
echo "Logs will land in: $RUN_DIR" >&2

# Pick a wall-clock timeout binary. Linux ships `timeout`; macOS needs
# `gtimeout` from coreutils (brew install coreutils). If neither is available,
# warn and run without a cap — the smoke test is manual anyway.
TIMEOUT_CMD=()
if command -v timeout >/dev/null 2>&1; then
  TIMEOUT_CMD=(timeout "$TIMEOUT")
elif command -v gtimeout >/dev/null 2>&1; then
  TIMEOUT_CMD=(gtimeout "$TIMEOUT")
else
  echo "WARN: no \`timeout\`/\`gtimeout\` on PATH; container runs uncapped" >&2
fi

# Name the container so we can force-remove it if the wall-clock timeout
# fires (SIGTERM from timeout leaves the container state open briefly).
CONTAINER_NAME="claude-mem-smoke-$INSTANCE_ID-$$"

set +e
# The `${arr[@]+...}` expansion keeps `set -u` happy when TIMEOUT_CMD is
# empty (bash 3.2 on macOS treats an empty array as unset).
${TIMEOUT_CMD[@]+"${TIMEOUT_CMD[@]}"} docker run --rm \
  --name "$CONTAINER_NAME" \
  -e CLAUDE_MEM_OUTPUT_DIR=/scratch \
  -e CLAUDE_MEM_CREDENTIALS_FILE=/auth/.credentials.json \
  -v "$SCRATCH:/scratch" \
  -v "$CREDS_FILE:/auth/.credentials.json:ro" \
  "$IMAGE" \
  "$INSTANCE_ID" "$REPO" "$BASE_COMMIT" /scratch/problem.txt /scratch/ignored-predictions.jsonl
DOCKER_EXIT=$?
set -e

if [[ "$DOCKER_EXIT" -eq 124 ]]; then
  # `timeout` signals TERM and returns 124 on timeout. Force-remove the
  # container in case docker hasn't reaped it yet.
  echo "ERROR: docker run exceeded ${TIMEOUT}s wall-clock; removing container" >&2
  docker rm -f "$CONTAINER_NAME" >/dev/null 2>&1 || true
fi

# Copy artifacts from scratch → RUN_DIR
for f in ingest.jsonl fix.jsonl model_patch.diff; do
  [[ -f "$SCRATCH/$f" ]] && cp "$SCRATCH/$f" "$RUN_DIR/$f"
done

# Emit authoritative prediction row
DIFF_FILE="$SCRATCH/model_patch.diff"
DIFF=""
[[ -f "$DIFF_FILE" ]] && DIFF="$(cat "$DIFF_FILE")"
jq -nc \
  --arg id "$INSTANCE_ID" \
  --arg patch "$DIFF" \
  --arg model "claude-opus-4-7+claude-mem" \
  '{instance_id:$id, model_patch:$patch, model_name_or_path:$model}' \
  >> "$PREDICTIONS"

echo "=== Done ===" >&2
echo "Diff size: $(wc -c < "$DIFF_FILE" 2>/dev/null || echo 0) bytes" >&2
echo "Predictions: $PREDICTIONS" >&2
echo "Verify mem-search invocation:" >&2
echo "  grep -o '\"name\":\"[^\"]*mem-search[^\"]*\"' $RUN_DIR/fix.jsonl || echo 'NOT INVOKED'" >&2
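The stdin-contention fix the comments above describe (pass the JSON path as argv so the `python3 -` heredoc keeps stdin for the program text) can be reproduced outside Docker. A hedged sketch, using `subprocess` in place of a shell heredoc and made-up payload values:

```python
import json
import os
import subprocess
import sys
import tempfile

# Write a stand-in instance JSON to a temp file (values are made up).
tmp = tempfile.NamedTemporaryFile("w", suffix=".json", delete=False)
json.dump({"repo": "octo/demo", "base_commit": "abc123"}, tmp)
tmp.close()

# The helper program arrives on stdin (the `python3 -` form), so the payload
# path must travel via argv -- stdin cannot carry both program and data.
program = (
    "import json, sys\n"
    "d = json.load(open(sys.argv[1]))\n"
    "print(d['repo'], d['base_commit'])\n"
)
out = subprocess.run(
    [sys.executable, "-", tmp.name],
    input=program,
    capture_output=True,
    text=True,
    check=True,
).stdout.strip()
print(out)
os.unlink(tmp.name)
```

If the payload were instead piped to the same invocation's stdin, `json.load` would see end-of-input as soon as the program text ran out, which is exactly the failure mode the script comment records.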
Executable
+308
@@ -0,0 +1,308 @@
#!/usr/bin/env python3
"""Summarize SWE-bench evaluation run results.

Walks the SWE-bench harness output directory, tallies resolved/unresolved/error
counts, and emits a markdown summary. Optionally diffs against another run.
"""

import argparse
import json
import sys
from pathlib import Path


def load_expected_instance_ids(predictions_path: Path) -> list[str]:
    """Read instance_ids from a predictions.jsonl file (one JSON object per line)."""
    instance_ids: list[str] = []
    if not predictions_path.exists():
        print(
            f"warning: predictions file not found: {predictions_path}",
            file=sys.stderr,
        )
        return instance_ids
    with predictions_path.open("r", encoding="utf-8") as handle:
        for line_number, raw_line in enumerate(handle, start=1):
            stripped = raw_line.strip()
            if not stripped:
                continue
            try:
                record = json.loads(stripped)
            except json.JSONDecodeError as exc:
                print(
                    f"warning: could not parse predictions line {line_number}: {exc}",
                    file=sys.stderr,
                )
                continue
            instance_id = record.get("instance_id")
            if instance_id:
                instance_ids.append(instance_id)
    return instance_ids


def load_run_results(
    run_id: str,
    model_name: str,
    expected_instance_ids: list[str],
    repo_root: Path,
) -> dict:
    """Walk logs/run_evaluation/<run_id>/<model_name>/*/report.json and tally results.

    Returns a dict:
        {
            "per_instance": {instance_id: {"resolved": bool|None, "notes": str}},
            "resolved_count": int,
            "unresolved_count": int,
            "error_count": int,
        }
    """
    run_logs_root = repo_root / "logs" / "run_evaluation" / run_id / model_name
    per_instance: dict[str, dict] = {}
    resolved_count = 0
    unresolved_count = 0
    error_count = 0

    for instance_id in expected_instance_ids:
        report_path = run_logs_root / instance_id / "report.json"
        if not report_path.exists():
            per_instance[instance_id] = {
                "resolved": None,
                "notes": "missing report.json",
            }
            error_count += 1
            continue
        try:
            with report_path.open("r", encoding="utf-8") as handle:
                report_data = json.load(handle)
        except (json.JSONDecodeError, OSError) as exc:
            per_instance[instance_id] = {
                "resolved": None,
                "notes": f"failed to parse report.json: {exc}",
            }
            error_count += 1
            continue

        # SWE-bench harness typically nests per-instance data under the
        # instance_id key; fall back to the top-level dict for flexibility.
        inner = report_data.get(instance_id, report_data)
        resolved_value = inner.get("resolved")
        if resolved_value is True:
            per_instance[instance_id] = {"resolved": True, "notes": ""}
            resolved_count += 1
        elif resolved_value is False:
            notes_parts: list[str] = []
            tests_status = inner.get("tests_status")
            if isinstance(tests_status, dict):
                fail_to_pass = tests_status.get("FAIL_TO_PASS", {})
                if isinstance(fail_to_pass, dict):
                    failed = fail_to_pass.get("failure", []) or []
                    if failed:
                        notes_parts.append(f"FAIL_TO_PASS failures: {len(failed)}")
            per_instance[instance_id] = {
                "resolved": False,
                "notes": "; ".join(notes_parts),
            }
            unresolved_count += 1
        else:
            per_instance[instance_id] = {
                "resolved": None,
                "notes": "report.json missing 'resolved' field",
            }
            error_count += 1

    return {
        "per_instance": per_instance,
        "resolved_count": resolved_count,
        "unresolved_count": unresolved_count,
        "error_count": error_count,
    }


def format_resolved_cell(resolved: bool | None) -> str:
    if resolved is True:
        return "yes"
    if resolved is False:
        return "no"
    return "error"


def render_summary_markdown(run_id: str, results: dict) -> str:
    total = (
        results["resolved_count"]
        + results["unresolved_count"]
        + results["error_count"]
    )
    resolved = results["resolved_count"]
    resolve_rate = (resolved / total * 100.0) if total > 0 else 0.0

    lines: list[str] = []
    lines.append(f"# Run {run_id}")
    lines.append(f"- Total: {total}")
    lines.append(f"- Resolved: {resolved} ({resolve_rate:.2f}%)")
    lines.append(f"- Unresolved: {results['unresolved_count']}")
    lines.append(f"- Errors: {results['error_count']}")
    lines.append("")
    lines.append("## Per-instance")
    lines.append("| instance_id | resolved | notes |")
    lines.append("|---|---|---|")
    for instance_id, record in results["per_instance"].items():
        resolved_cell = format_resolved_cell(record["resolved"])
        notes_cell = record.get("notes", "") or ""
        # Escape pipe chars in notes to avoid breaking markdown tables.
        notes_cell = notes_cell.replace("|", "\\|")
        lines.append(f"| {instance_id} | {resolved_cell} | {notes_cell} |")
    lines.append("")
    return "\n".join(lines)


def render_diff_markdown(
    current_run_id: str,
    other_run_id: str,
    current_results: dict,
    other_results: dict,
) -> str:
    def resolve_rate(results: dict) -> tuple[int, float]:
        total = (
            results["resolved_count"]
            + results["unresolved_count"]
            + results["error_count"]
        )
        rate = (results["resolved_count"] / total * 100.0) if total > 0 else 0.0
        return total, rate

    current_total, current_rate = resolve_rate(current_results)
    other_total, other_rate = resolve_rate(other_results)
    rate_delta = current_rate - other_rate

    lines: list[str] = []
    lines.append(f"# Diff vs {other_run_id}")
    lines.append(
        f"- {current_run_id}: {current_results['resolved_count']}/{current_total} "
        f"({current_rate:.2f}%)"
    )
    lines.append(
        f"- {other_run_id}: {other_results['resolved_count']}/{other_total} "
        f"({other_rate:.2f}%)"
    )
    lines.append(f"- Delta: {rate_delta:+.2f} percentage points")
    lines.append("")
    lines.append("## Per-instance status changes")
    lines.append(f"| instance_id | {other_run_id} | {current_run_id} |")
    lines.append("|---|---|---|")

    all_instance_ids = set(current_results["per_instance"].keys()) | set(
        other_results["per_instance"].keys()
    )
    changes_found = False
    for instance_id in sorted(all_instance_ids):
        current_record = current_results["per_instance"].get(instance_id)
        other_record = other_results["per_instance"].get(instance_id)
        current_status = (
            format_resolved_cell(current_record["resolved"])
            if current_record
            else "absent"
        )
        other_status = (
            format_resolved_cell(other_record["resolved"])
            if other_record
            else "absent"
        )
        if current_status != other_status:
            lines.append(f"| {instance_id} | {other_status} | {current_status} |")
            changes_found = True
    if not changes_found:
        lines.append("| (no status changes) | | |")
    lines.append("")
    return "\n".join(lines)


def main() -> int:
    parser = argparse.ArgumentParser(
        description="Summarize SWE-bench evaluation run results."
    )
    parser.add_argument(
        "--run-id",
        required=True,
        help="Run identifier used in logs/run_evaluation/<run_id>/ and evals/swebench/runs/<run_id>/.",
    )
    parser.add_argument(
        "--compare",
        metavar="OTHER_RUN_ID",
        default=None,
        help="Optional other run_id to diff resolve rates and per-instance status changes against.",
    )
    parser.add_argument(
        "--model-name",
        default="claude-opus-4-7+claude-mem",
        help="Model name directory inside logs/run_evaluation/<run_id>/.",
    )
    parser.add_argument(
        "--out",
        default=None,
        help="Output path for the markdown summary (default: evals/swebench/runs/<run_id>/summary.md).",
    )
    args = parser.parse_args()

    # Resolve repo root from this script's location: evals/swebench/summarize.py
    script_path = Path(__file__).resolve()
    repo_root = script_path.parent.parent.parent

    current_predictions_path = (
        repo_root / "evals" / "swebench" / "runs" / args.run_id / "predictions.jsonl"
    )
    current_instance_ids = load_expected_instance_ids(current_predictions_path)
    current_results = load_run_results(
        run_id=args.run_id,
        model_name=args.model_name,
        expected_instance_ids=current_instance_ids,
        repo_root=repo_root,
    )

    summary_markdown = render_summary_markdown(args.run_id, current_results)

    if args.compare:
        other_predictions_path = (
            repo_root
            / "evals"
            / "swebench"
            / "runs"
            / args.compare
            / "predictions.jsonl"
        )
        other_instance_ids = load_expected_instance_ids(other_predictions_path)
        other_results = load_run_results(
            run_id=args.compare,
            model_name=args.model_name,
            expected_instance_ids=other_instance_ids,
            repo_root=repo_root,
        )
        diff_markdown = render_diff_markdown(
            current_run_id=args.run_id,
            other_run_id=args.compare,
            current_results=current_results,
            other_results=other_results,
        )
        summary_markdown = summary_markdown + "\n" + diff_markdown

    if args.out:
        output_path = Path(args.out)
        if not output_path.is_absolute():
            output_path = (Path.cwd() / output_path).resolve()
    else:
        output_path = (
            repo_root
            / "evals"
            / "swebench"
            / "runs"
            / args.run_id
            / "summary.md"
        )

    output_path.parent.mkdir(parents=True, exist_ok=True)
    output_path.write_text(summary_markdown, encoding="utf-8")

    print(str(output_path))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
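The per-instance tallying rule in `load_run_results` reduces to a three-way split on the report's `resolved` field: only an exact `True` counts as resolved, an exact `False` counts as unresolved, and anything else (missing or malformed) counts as an error. A toy reproduction with invented instance ids:

```python
# Stand-in report.json contents keyed by instance id (values are made up).
reports = {
    "demo__a-1": {"resolved": True},
    "demo__b-2": {"resolved": False},
    "demo__c-3": {},  # missing "resolved" field -> error, not unresolved
}

counts = {"resolved": 0, "unresolved": 0, "error": 0}
for instance_id, report in reports.items():
    value = report.get("resolved")
    if value is True:
        counts["resolved"] += 1
    elif value is False:
        counts["unresolved"] += 1
    else:
        counts["error"] += 1

print(counts)  # {'resolved': 1, 'unresolved': 1, 'error': 1}
```

Keeping the error bucket separate from the unresolved bucket means a crashed or truncated harness run cannot silently deflate the resolve rate denominator's "no" column.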
+1
-1
@@ -1,6 +1,6 @@
 {
   "name": "claude-mem",
-  "version": "12.2.0",
+  "version": "12.3.1",
   "description": "Memory compression system for Claude Code - persist context across sessions",
   "keywords": [
     "claude",

@@ -1,6 +1,6 @@
 {
   "name": "claude-mem",
-  "version": "12.2.0",
+  "version": "12.3.1",
   "description": "Persistent memory system for Claude Code - seamlessly preserve context across sessions",
   "author": {
     "name": "Alex Newman"

+1
-1
@@ -1,6 +1,6 @@
 {
   "name": "claude-mem-plugin",
-  "version": "12.2.0",
+  "version": "12.3.1",
   "private": true,
   "description": "Runtime dependencies for claude-mem bundled hooks",
   "type": "module",

File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
+243
-238
File diff suppressed because one or more lines are too long
File diff suppressed because one or more lines are too long
@@ -224,8 +224,9 @@ function detectAntiPatterns(filePath: string, projectRoot: string): AntiPattern[
       }
     }

-    // Detect try block start
-    if (trimmed.match(/^\s*try\s*{/) || trimmed.match(/}\s*try\s*{/)) {
+    // Detect try block start (only when NOT already inside a catch block —
+    // nested try/catch inside a catch is just catch-block content)
+    if (!inCatch && (trimmed.match(/^\s*try\s*{/) || trimmed.match(/}\s*try\s*{/))) {
       inTry = true;
       tryStartLine = i + 1;
       tryLines = [line];

@@ -59,8 +59,18 @@ function buildTimestampMap(): TimestampMapping {

   for (let index = 0; index < lines.length; index++) {
     const line = lines[index];
+    let data: any;
     try {
-      const data = JSON.parse(line);
+      data = JSON.parse(line);
+    } catch (e: unknown) {
+      logger.debug('IMPORT', 'Skipping invalid JSON line', {
+        lineNumber: index + 1,
+        filename,
+        error: e instanceof Error ? e.message : String(e)
+      });
+      continue;
+    }

     const timestamp = data.timestamp;
     const sessionId = data.sessionId;
     const project = data.cwd;
@@ -68,6 +78,9 @@ function buildTimestampMap(): TimestampMapping {
     if (timestamp && sessionId) {
       // Round timestamp to second for matching with XML timestamps
       const roundedTimestamp = new Date(timestamp);
+      if (Number.isNaN(roundedTimestamp.getTime())) {
+        continue;
+      }
       roundedTimestamp.setMilliseconds(0);
       const key = roundedTimestamp.toISOString();

@@ -76,13 +89,6 @@ function buildTimestampMap(): TimestampMapping {
         map[key] = { sessionId, project };
       }
     }
-    } catch (e) {
-      logger.debug('IMPORT', 'Skipping invalid JSON line', {
-        lineNumber: index + 1,
-        filename,
-        error: e instanceof Error ? e.message : String(e)
-      });
-    }
   }
 }

@@ -131,7 +137,6 @@ function parseObservation(xml: string): ObservationData | null {
     return null;
   }

-  try {
   const observation: ObservationData = {
     type: extractTag(xml, 'type'),
     title: extractTag(xml, 'title'),
@@ -149,10 +154,6 @@ function parseObservation(xml: string): ObservationData | null {
   }

   return observation;
-  } catch (e) {
-    console.error('Error parsing observation:', e);
-    return null;
-  }
 }

 /**
@@ -164,7 +165,6 @@ function parseSummary(xml: string): SummaryData | null {
     return null;
   }

-  try {
   const summary: SummaryData = {
     request: extractTag(xml, 'request'),
     investigated: extractTag(xml, 'investigated'),
@@ -180,10 +180,6 @@ function parseSummary(xml: string): SummaryData | null {
   }

   return summary;
-  } catch (e) {
-    console.error('Error parsing summary:', e);
-    return null;
-  }
 }

 /**
@@ -326,8 +322,8 @@ function main() {
         if (importedObs % 50 === 0) {
           console.log(`Imported ${importedObs} observations...`);
         }
-      } catch (e) {
-        console.error(`Error storing observation:`, e);
+      } catch (e: unknown) {
+        console.error(`Error storing observation:`, e instanceof Error ? e.message : String(e));
         skipped++;
       }
       continue;
@@ -358,8 +354,8 @@ function main() {
         if (importedSum % 10 === 0) {
           console.log(`Imported ${importedSum} summaries...`);
         }
-      } catch (e) {
-        console.error(`Error storing summary:`, e);
+      } catch (e: unknown) {
+        console.error(`Error storing summary:`, e instanceof Error ? e.message : String(e));
         skipped++;
       }
       continue;

@@ -2,6 +2,13 @@ import type { PlatformAdapter, NormalizedHookInput, HookResult } from '../types.

 // Maps Claude Code stdin format (session_id, cwd, tool_name, etc.)
 // SessionStart hooks receive no stdin, so we must handle undefined input gracefully
+
+// Defensive cap: Claude Code's agent identifiers are short (e.g., "agent-abc123", "Explore").
+// Ignore anything longer than 128 chars so a malformed payload cannot balloon DB rows.
+const MAX_AGENT_FIELD_LEN = 128;
+const pickAgentField = (v: unknown): string | undefined =>
+  typeof v === 'string' && v.length > 0 && v.length <= MAX_AGENT_FIELD_LEN ? v : undefined;
+
 export const claudeCodeAdapter: PlatformAdapter = {
   normalizeInput(raw) {
     const r = (raw ?? {}) as any;
@@ -13,6 +20,8 @@ export const claudeCodeAdapter: PlatformAdapter = {
       toolInput: r.tool_input,
       toolResponse: r.tool_response,
       transcriptPath: r.transcript_path,
+      agentId: pickAgentField(r.agent_id),
+      agentType: pickAgentField(r.agent_type),
     };
   },
   formatOutput(result) {

+121
-84
@@ -76,12 +76,19 @@ function estimateTokens(obs: ObservationRow): number {
 function getTrackedFolders(workingDir: string): Set<string> {
   const folders = new Set<string>();

+  let output: string;
   try {
-    const output = execSync('git ls-files', {
+    output = execSync('git ls-files', {
       cwd: workingDir,
       encoding: 'utf-8',
       maxBuffer: 50 * 1024 * 1024
     });
+  } catch (error) {
+    const errorMessage = error instanceof Error ? error.message : String(error);
+    logger.warn('CLAUDE_MD', 'git ls-files failed, falling back to directory walk', { error: errorMessage });
+    walkDirectoriesWithIgnore(workingDir, folders);
+    return folders;
+  }

   const files = output.trim().split('\n').filter(f => f);

@@ -94,10 +101,6 @@ function getTrackedFolders(workingDir: string): Set<string> {
       dir = path.dirname(dir);
     }
   }
-  } catch (error) {
-    logger.warn('CLAUDE_MD', 'git ls-files failed, falling back to directory walk', { error: String(error) });
-    walkDirectoriesWithIgnore(workingDir, folders);
-  }

   return folders;
 }
@@ -141,7 +144,9 @@ function hasDirectChildFile(obs: ObservationRow, folderPath: string): boolean {
     if (Array.isArray(files)) {
       return files.some(f => isDirectChild(f, folderPath));
     }
-  } catch {}
+  } catch (error) {
+    logger.warn('CLAUDE_MD', 'Failed to parse files JSON in hasDirectChildFile', { error: error instanceof Error ? error.message : String(error) });
+  }
   return false;
 };

@@ -187,7 +192,9 @@ function extractRelevantFile(obs: ObservationRow, relativeFolder: string): strin
         }
       }
     }
-    } catch {}
+    } catch (error) {
+      logger.warn('CLAUDE_MD', 'Failed to parse files_modified JSON', { error: error instanceof Error ? error.message : String(error) });
+    }
   }

   if (obs.files_read) {
@@ -200,7 +207,9 @@ function extractRelevantFile(obs: ObservationRow, relativeFolder: string): strin
         }
       }
     }
-    } catch {}
+    } catch (error) {
+      logger.warn('CLAUDE_MD', 'Failed to parse files_read JSON', { error: error instanceof Error ? error.message : String(error) });
+    }
   }

   return 'General';
@@ -316,7 +325,6 @@ function regenerateFolder(
   workingDir: string,
   observationLimit: number
 ): { success: boolean; observationCount: number; error?: string } {
-  try {
   if (!existsSync(absoluteFolder)) {
     return { success: false, observationCount: 0, error: 'Folder no longer exists' };
   }
@@ -338,48 +346,24 @@ function regenerateFolder(
     return { success: true, observationCount: observations.length };
   }

   try {
     const formatted = formatObservationsForClaudeMd(observations, relativeFolder);
     writeClaudeMdToFolder(absoluteFolder, formatted);

     return { success: true, observationCount: observations.length };
   } catch (error) {
-    return { success: false, observationCount: 0, error: String(error) };
+    const errorMessage = error instanceof Error ? error.message : String(error);
+    logger.warn('CLAUDE_MD', 'Failed to regenerate folder', { folder: relativeFolder, error: errorMessage });
+    return { success: false, observationCount: 0, error: errorMessage };
   }
 }

-/**
- * Generate CLAUDE.md files for all folders with observations.
- *
- * @param dryRun - If true, only report what would be done without writing files
- * @returns Exit code (0 for success, 1 for error)
- */
-export async function generateClaudeMd(dryRun: boolean): Promise<number> {
-  try {
-    const workingDir = process.cwd();
-    const settings = SettingsDefaultsManager.loadFromFile(SETTINGS_PATH);
-    const observationLimit = parseInt(settings.CLAUDE_MEM_CONTEXT_OBSERVATIONS, 10) || 50;
-
-    logger.info('CLAUDE_MD', 'Starting CLAUDE.md generation', {
-      workingDir,
-      dryRun,
-      observationLimit
-    });
-
-    const project = path.basename(workingDir);
-    const trackedFolders = getTrackedFolders(workingDir);
-
-    if (trackedFolders.size === 0) {
-      logger.info('CLAUDE_MD', 'No folders found in project');
-      return 0;
-    }
-
-    logger.info('CLAUDE_MD', `Found ${trackedFolders.size} folders in project`);
-
-    if (!existsSync(DB_PATH)) {
-      logger.info('CLAUDE_MD', 'Database not found, no observations to process');
-      return 0;
-    }
-
+function processAllFoldersForGeneration(
+  trackedFolders: Set<string>,
+  workingDir: string,
+  project: string,
+  dryRun: boolean,
+  observationLimit: number
+): number {
   const db = new Database(DB_PATH, { readonly: true, create: false });

   let successCount = 0;
@@ -427,14 +411,103 @@ export async function generateClaudeMd(dryRun: boolean): Promise<number> {
   });

   return 0;
 }

+/**
+ * Generate CLAUDE.md files for all folders with observations.
+ *
+ * @param dryRun - If true, only report what would be done without writing files
+ * @returns Exit code (0 for success, 1 for error)
+ */
+export async function generateClaudeMd(dryRun: boolean): Promise<number> {
+  const workingDir = process.cwd();
+  const settings = SettingsDefaultsManager.loadFromFile(SETTINGS_PATH);
+  const observationLimit = parseInt(settings.CLAUDE_MEM_CONTEXT_OBSERVATIONS, 10) || 50;
+
+  logger.info('CLAUDE_MD', 'Starting CLAUDE.md generation', {
+    workingDir,
+    dryRun,
+    observationLimit
+  });
+
+  const project = path.basename(workingDir);
+  const trackedFolders = getTrackedFolders(workingDir);
+
+  if (trackedFolders.size === 0) {
+    logger.info('CLAUDE_MD', 'No folders found in project');
+    return 0;
+  }
+
+  logger.info('CLAUDE_MD', `Found ${trackedFolders.size} folders in project`);
+
+  if (!existsSync(DB_PATH)) {
+    logger.info('CLAUDE_MD', 'Database not found, no observations to process');
+    return 0;
+  }
+
+  try {
+    return processAllFoldersForGeneration(trackedFolders, workingDir, project, dryRun, observationLimit);
   } catch (error) {
+    const errorMessage = error instanceof Error ? error.message : String(error);
     logger.error('CLAUDE_MD', 'Fatal error during CLAUDE.md generation', {
-      error: String(error)
+      error: errorMessage
     });
     return 1;
   }
 }

+function processFilesForCleanup(
+  filesToProcess: string[],
+  workingDir: string,
+  dryRun: boolean
+): number {
+  let deletedCount = 0;
+  let cleanedCount = 0;
+  let errorCount = 0;
+
+  for (const file of filesToProcess) {
+    const relativePath = path.relative(workingDir, file);
+
+    try {
+      const result = cleanSingleFile(file, relativePath, dryRun);
+      if (result === 'deleted') deletedCount++;
+      else cleanedCount++;
+    } catch (error) {
+      const errorMessage = error instanceof Error ? error.message : String(error);
+      logger.warn('CLAUDE_MD', `Error processing ${relativePath}`, { error: errorMessage });
+      errorCount++;
+    }
+  }
+
+  logger.info('CLAUDE_MD', 'CLAUDE.md cleanup complete', {
+    deleted: deletedCount,
+    cleaned: cleanedCount,
+    errors: errorCount,
+    dryRun
+  });
+
+  return 0;
+}
+
+function cleanSingleFile(file: string, relativePath: string, dryRun: boolean): 'deleted' | 'cleaned' {
+  const content = readFileSync(file, 'utf-8');
+  const stripped = content.replace(/<claude-mem-context>[\s\S]*?<\/claude-mem-context>/g, '').trim();
+
+  if (stripped === '') {
+    if (!dryRun) {
+      unlinkSync(file);
+    }
+    logger.debug('CLAUDE_MD', `${dryRun ? '[DRY-RUN] Would delete' : 'Deleted'} (empty): ${relativePath}`);
+    return 'deleted';
+  } else {
+    if (!dryRun) {
+      writeFileSync(file, stripped);
+    }
+    logger.debug('CLAUDE_MD', `${dryRun ? '[DRY-RUN] Would clean' : 'Cleaned'}: ${relativePath}`);
+    return 'cleaned';
+  }
+}
+
 /**
  * Clean up auto-generated CLAUDE.md files.
  *
@@ -447,7 +520,6 @@ export async function generateClaudeMd(dryRun: boolean): Promise<number> {
  * @returns Exit code (0 for success, 1 for error)
  */
 export async function cleanClaudeMd(dryRun: boolean): Promise<number> {
-  try {
   const workingDir = process.cwd();

   logger.info('CLAUDE_MD', 'Starting CLAUDE.md cleanup', {
@@ -498,47 +570,12 @@ export async function cleanClaudeMd(dryRun: boolean): Promise<number> {

   logger.info('CLAUDE_MD', `Found ${filesToProcess.length} CLAUDE.md files with auto-generated content`);

-  let deletedCount = 0;
-  let cleanedCount = 0;
-  let errorCount = 0;
-
-  for (const file of filesToProcess) {
-    const relativePath = path.relative(workingDir, file);
-
-    try {
-      const content = readFileSync(file, 'utf-8');
-      const stripped = content.replace(/<claude-mem-context>[\s\S]*?<\/claude-mem-context>/g, '').trim();
-
-      if (stripped === '') {
-        if (!dryRun) {
-          unlinkSync(file);
-        }
-        logger.debug('CLAUDE_MD', `${dryRun ? '[DRY-RUN] Would delete' : 'Deleted'} (empty): ${relativePath}`);
|
||||
deletedCount++;
|
||||
} else {
|
||||
if (!dryRun) {
|
||||
writeFileSync(file, stripped);
|
||||
}
|
||||
logger.debug('CLAUDE_MD', `${dryRun ? '[DRY-RUN] Would clean' : 'Cleaned'}: ${relativePath}`);
|
||||
cleanedCount++;
|
||||
}
|
||||
} catch (error) {
|
||||
logger.warn('CLAUDE_MD', `Error processing ${relativePath}`, { error: String(error) });
|
||||
errorCount++;
|
||||
}
|
||||
}
|
||||
|
||||
logger.info('CLAUDE_MD', 'CLAUDE.md cleanup complete', {
|
||||
deleted: deletedCount,
|
||||
cleaned: cleanedCount,
|
||||
errors: errorCount,
|
||||
dryRun
|
||||
});
|
||||
|
||||
return 0;
|
||||
return processFilesForCleanup(filesToProcess, workingDir, dryRun);
|
||||
} catch (error) {
|
||||
const errorMessage = error instanceof Error ? error.message : String(error);
|
||||
logger.error('CLAUDE_MD', 'Fatal error during CLAUDE.md cleanup', {
|
||||
error: String(error)
|
||||
error: errorMessage
|
||||
});
|
||||
return 1;
|
||||
}
|
||||
|
||||
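Most hunks in this commit apply one pattern: never read `.message` off an untyped catch value; normalize `unknown` first. A minimal standalone sketch of that pattern (the helper name `normalizeError` is illustrative, not part of the codebase):

```typescript
// Hypothetical helper showing the normalization used throughout this commit:
// prefer `.message` for real Error instances, fall back to String() for
// anything else that was thrown (strings, numbers, plain objects).
function normalizeError(error: unknown): string {
  return error instanceof Error ? error.message : String(error);
}

console.log(normalizeError(new Error("db locked"))); // "db locked"
console.log(normalizeError("raw string throw"));     // "raw string throw"
```

The old `String(error)` form stringifies an `Error` as `"Error: db locked"` with no stack and loses structured fields, which is why the log payloads above switch to the normalized message.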
+20 -21
@@ -43,22 +43,29 @@ export const contextHandler: EventHandler = {
    const apiPath = `/api/context/inject?projects=${encodeURIComponent(projectsParam)}&platformSource=${encodeURIComponent(platformSource)}`;
    const colorApiPath = input.platform === 'claude-code' ? `${apiPath}&colors=true` : apiPath;

-   // Note: Removed AbortSignal.timeout due to Windows Bun cleanup issue (libuv assertion)
-   // Worker service has its own timeouts, so client-side timeout is redundant
-   try {
-     // Fetch markdown (for Claude context) and optionally colored (for user display)
-     const [response, colorResponse] = await Promise.all([
-       workerHttpRequest(apiPath),
-       showTerminalOutput ? workerHttpRequest(colorApiPath).catch(() => null) : Promise.resolve(null)
-     ]);
-
-     if (!response.ok) {
-       // Log but don't throw — context fetch failure should not block session start
-       logger.warn('HOOK', 'Context generation failed, returning empty', { status: response.status });
-       return {
+   const emptyResult = {
      hookSpecificOutput: { hookEventName: 'SessionStart', additionalContext: '' },
      exitCode: HOOK_EXIT_CODES.SUCCESS
    };

+   // Note: Removed AbortSignal.timeout due to Windows Bun cleanup issue (libuv assertion)
+   // Worker service has its own timeouts, so client-side timeout is redundant
+   let response: Response;
+   let colorResponse: Response | null;
+   try {
+     [response, colorResponse] = await Promise.all([
+       workerHttpRequest(apiPath),
+       showTerminalOutput ? workerHttpRequest(colorApiPath).catch(() => null) : Promise.resolve(null)
+     ]);
+   } catch (error) {
+     // Worker unreachable — return empty context gracefully
+     logger.warn('HOOK', 'Context fetch error, returning empty', { error: error instanceof Error ? error.message : String(error) });
+     return emptyResult;
+   }

+   if (!response.ok) {
+     logger.warn('HOOK', 'Context generation failed, returning empty', { status: response.status });
+     return emptyResult;
+   }

    const [contextResult, colorResult] = await Promise.all([
@@ -86,13 +93,5 @@ const apiPath = `/api/context/inject?projects=${encodeURIComponent(projectsParam
      },
      systemMessage
    };
-   } catch (error) {
-     // Worker unreachable — return empty context gracefully
-     logger.warn('HOOK', 'Context fetch error, returning empty', { error: error instanceof Error ? error.message : String(error) });
-     return {
-       hookSpecificOutput: { hookEventName: 'SessionStart', additionalContext: '' },
-       exitCode: HOOK_EXIT_CODES.SUCCESS
-     };
-   }
  }
};

@@ -199,9 +199,12 @@ export const fileContextHandler: EventHandler = {
      return { continue: true, suppressOutput: true };
    }
    fileMtimeMs = stat.mtimeMs;
-   } catch (err: any) {
-     if (err.code === 'ENOENT') return { continue: true, suppressOutput: true };
+   } catch (err) {
+     if (err instanceof Error && 'code' in err && (err as NodeJS.ErrnoException).code === 'ENOENT') {
+       return { continue: true, suppressOutput: true };
+     }
+     // Other errors (symlink, permission denied) — fall through and let gate proceed
+     logger.debug('HOOK', 'File stat failed, proceeding with gate', { error: err instanceof Error ? err.message : String(err) });
    }

    // Check if project is excluded from tracking
@@ -218,9 +221,7 @@ export const fileContextHandler: EventHandler = {
    }

    // Query worker for observations related to this file
-   try {
    const context = getProjectContext(input.cwd);
    // Observations store relative paths — convert absolute to relative using cwd
    const cwd = input.cwd || process.cwd();
    const absolutePath = path.isAbsolute(filePath) ? filePath : path.resolve(cwd, filePath);
    const relativePath = path.relative(cwd, absolutePath).split(path.sep).join("/");
@@ -231,16 +232,22 @@ export const fileContextHandler: EventHandler = {
    }
    queryParams.set('limit', String(FETCH_LOOKAHEAD_LIMIT));

-   const response = await workerHttpRequest(`/api/observations/by-file?${queryParams.toString()}`, {
-     method: 'GET',
-   });
+   let data: { observations: ObservationRow[]; count: number };
+   try {
+     const response = await workerHttpRequest(`/api/observations/by-file?${queryParams.toString()}`, { method: 'GET' });

    if (!response.ok) {
      logger.warn('HOOK', 'File context query failed, skipping', { status: response.status, filePath });
      return { continue: true, suppressOutput: true };
    }

-   const data = await response.json() as { observations: ObservationRow[]; count: number };
+     data = await response.json() as { observations: ObservationRow[]; count: number };
+   } catch (error) {
+     logger.warn('HOOK', 'File context fetch error, skipping', {
+       error: error instanceof Error ? error.message : String(error),
+     });
+     return { continue: true, suppressOutput: true };
+   }

    if (!data.observations || data.observations.length === 0) {
      return { continue: true, suppressOutput: true };
@@ -285,11 +292,5 @@ export const fileContextHandler: EventHandler = {
        updatedInput,
      },
    };
-   } catch (error) {
-     logger.warn('HOOK', 'File context fetch error, skipping', {
-       error: error instanceof Error ? error.message : String(error),
-     });
-     return { continue: true, suppressOutput: true };
-   }
  },
};

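The stat-gate hunk above replaces `catch (err: any)` with a narrowing chain before reading `err.code`. A standalone sketch of that check, with an illustrative helper name and a structural type in place of `NodeJS.ErrnoException`:

```typescript
// Sketch of the ENOENT discriminator: narrow `unknown` step by step instead
// of reading `.code` off an `any`. Errno-style errors carry a string `code`;
// other throwables (including thrown strings) do not.
function hasErrnoCode(err: unknown, code: string): boolean {
  return err instanceof Error && 'code' in err && (err as Error & { code?: string }).code === code;
}

const missing = Object.assign(new Error("no such file"), { code: "ENOENT" });
console.log(hasErrnoCode(missing, "ENOENT"));  // true
console.log(hasErrnoCode("ENOENT", "ENOENT")); // false — a thrown string has no code
```

This keeps the "file vanished" fast-path distinct from permission or symlink failures, which the handler now logs and falls through on instead of silently swallowing.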
@@ -11,6 +11,21 @@ import { logger } from '../../utils/logger.js';
import { HOOK_EXIT_CODES } from '../../shared/hook-constants.js';
import { normalizePlatformSource } from '../../shared/platform-source.js';

+async function sendFileEditObservation(requestBody: string, filePath: string): Promise<void> {
+  const response = await workerHttpRequest('/api/sessions/observations', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: requestBody
+  });
+
+  if (!response.ok) {
+    logger.warn('HOOK', 'File edit observation storage failed, skipping', { status: response.status, filePath });
+    return;
+  }
+
+  logger.debug('HOOK', 'File edit observation sent successfully', { filePath });
+}

export const fileEditHandler: EventHandler = {
  async execute(input: NormalizedHookInput): Promise<HookResult> {
    // Ensure worker is running before any other logic
@@ -38,27 +53,17 @@ export const fileEditHandler: EventHandler = {

    // Send to worker as an observation with file edit metadata
    // The observation handler on the worker will process this appropriately
-   try {
-     const response = await workerHttpRequest('/api/sessions/observations', {
-       method: 'POST',
-       headers: { 'Content-Type': 'application/json' },
-       body: JSON.stringify({
+   const requestBody = JSON.stringify({
      contentSessionId: sessionId,
      platformSource,
      tool_name: 'write_file',
      tool_input: { filePath, edits },
      tool_response: { success: true },
      cwd
-     })
    });

-   if (!response.ok) {
-     // Log but don't throw — file edit observation failure should not block editing
-     logger.warn('HOOK', 'File edit observation storage failed, skipping', { status: response.status, filePath });
-     return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
-   }
-
-   logger.debug('HOOK', 'File edit observation sent successfully', { filePath });
+   try {
+     await sendFileEditObservation(requestBody, filePath);
    } catch (error) {
      // Worker unreachable — skip file edit observation gracefully
      logger.warn('HOOK', 'File edit observation fetch error, skipping', { error: error instanceof Error ? error.message : String(error) });

@@ -13,6 +13,21 @@ import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js
import { USER_SETTINGS_PATH } from '../../shared/paths.js';
import { normalizePlatformSource } from '../../shared/platform-source.js';

+async function sendObservationToWorker(requestBody: string, toolName: string): Promise<void> {
+  const response = await workerHttpRequest('/api/sessions/observations', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: requestBody
+  });
+
+  if (!response.ok) {
+    logger.warn('HOOK', 'Observation storage failed, skipping', { status: response.status, toolName });
+    return;
+  }
+
+  logger.debug('HOOK', 'Observation sent successfully', { toolName });
+}

export const observationHandler: EventHandler = {
  async execute(input: NormalizedHookInput): Promise<HookResult> {
    // Ensure worker is running before any other logic
@@ -47,27 +62,19 @@ export const observationHandler: EventHandler = {
    }

    // Send to worker - worker handles privacy check and database operations
-   try {
-     const response = await workerHttpRequest('/api/sessions/observations', {
-       method: 'POST',
-       headers: { 'Content-Type': 'application/json' },
-       body: JSON.stringify({
+   const requestBody = JSON.stringify({
      contentSessionId: sessionId,
      platformSource,
      tool_name: toolName,
      tool_input: toolInput,
      tool_response: toolResponse,
-     cwd
-     })
+     cwd,
+     agentId: input.agentId,
+     agentType: input.agentType
    });

-   if (!response.ok) {
-     // Log but don't throw — observation storage failure should not block tool use
-     logger.warn('HOOK', 'Observation storage failed, skipping', { status: response.status, toolName });
-     return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
-   }
-
-   logger.debug('HOOK', 'Observation sent successfully', { toolName });
+   try {
+     await sendObservationToWorker(requestBody, toolName);
    } catch (error) {
      // Worker unreachable — skip observation gracefully
      logger.warn('HOOK', 'Observation fetch error, skipping', { error: error instanceof Error ? error.message : String(error) });

@@ -14,6 +14,21 @@ import { ensureWorkerRunning, workerHttpRequest } from '../../shared/worker-util
import { logger } from '../../utils/logger.js';
import { normalizePlatformSource } from '../../shared/platform-source.js';

+async function sendSessionCompleteRequest(sessionId: string, platformSource: string): Promise<void> {
+  const response = await workerHttpRequest('/api/sessions/complete', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ contentSessionId: sessionId, platformSource })
+  });
+
+  if (!response.ok) {
+    const text = await response.text();
+    logger.warn('HOOK', 'session-complete: Failed to complete session', { status: response.status, body: text });
+  } else {
+    logger.info('HOOK', 'Session completed successfully', { contentSessionId: sessionId });
+  }
+}

export const sessionCompleteHandler: EventHandler = {
  async execute(input: NormalizedHookInput): Promise<HookResult> {
    // Ensure worker is running
@@ -36,29 +51,12 @@ export const sessionCompleteHandler: EventHandler = {
    });

    try {
-     // Call the session complete endpoint by contentSessionId
-     const response = await workerHttpRequest('/api/sessions/complete', {
-       method: 'POST',
-       headers: { 'Content-Type': 'application/json' },
-       body: JSON.stringify({
-         contentSessionId: sessionId,
-         platformSource
-       })
-     });
-
-     if (!response.ok) {
-       const text = await response.text();
-       logger.warn('HOOK', 'session-complete: Failed to complete session', {
-         status: response.status,
-         body: text
-       });
-     } else {
-       logger.info('HOOK', 'Session completed successfully', { contentSessionId: sessionId });
-     }
+     await sendSessionCompleteRequest(sessionId, platformSource);
    } catch (error) {
      // Log but don't fail - session may already be gone
+     const errorMessage = error instanceof Error ? error.message : String(error);
      logger.warn('HOOK', 'session-complete: Error completing session', {
-       error: (error as Error).message
+       error: errorMessage
      });
    }


@@ -14,6 +14,27 @@ import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js
import { USER_SETTINGS_PATH } from '../../shared/paths.js';
import { normalizePlatformSource } from '../../shared/platform-source.js';

+async function fetchSemanticContext(
+  prompt: string,
+  project: string,
+  limit: string,
+  sessionDbId: number
+): Promise<string> {
+  const semanticRes = await workerHttpRequest('/api/context/semantic', {
+    method: 'POST',
+    headers: { 'Content-Type': 'application/json' },
+    body: JSON.stringify({ q: prompt, project, limit })
+  });
+  if (semanticRes.ok) {
+    const data = await semanticRes.json() as { context: string; count: number };
+    if (data.context) {
+      logger.debug('HOOK', `Semantic injection: ${data.count} observations for prompt`, { sessionId: sessionDbId, count: data.count });
+      return data.context;
+    }
+  }
+  return '';
+}

export const sessionInitHandler: EventHandler = {
  async execute(input: NormalizedHookInput): Promise<HookResult> {
    // Ensure worker is running before any other logic
@@ -131,22 +152,9 @@ export const sessionInitHandler: EventHandler = {
    let additionalContext = '';

    if (semanticInject && prompt && prompt.length >= 20 && prompt !== '[media prompt]') {
-     try {
      const limit = settings.CLAUDE_MEM_SEMANTIC_INJECT_LIMIT || '5';
-       const semanticRes = await workerHttpRequest('/api/context/semantic', {
-         method: 'POST',
-         headers: { 'Content-Type': 'application/json' },
-         body: JSON.stringify({ q: prompt, project, limit })
-       });
-       if (semanticRes.ok) {
-         const data = await semanticRes.json() as { context: string; count: number };
-         if (data.context) {
-           additionalContext = data.context;
-           logger.debug('HOOK', `Semantic injection: ${data.count} observations for prompt`, {
-             sessionId: sessionDbId, count: data.count
-           });
-         }
-       }
+     try {
+       additionalContext = await fetchSemanticContext(prompt, project, limit, sessionDbId);
      } catch (e) {
        // Graceful degradation — semantic injection is optional
        logger.debug('HOOK', 'Semantic injection unavailable', {

@@ -26,6 +26,20 @@ const MAX_WAIT_FOR_SUMMARY_MS = 110_000; // 110s — fits within Stop hook's 120

export const summarizeHandler: EventHandler = {
  async execute(input: NormalizedHookInput): Promise<HookResult> {
+   // Skip summaries in subagent context — subagents do not own the session summary.
+   // Gate on agentId only: that field is present exclusively for Task-spawned subagents.
+   // agentType alone (no agentId) indicates `--agent`-started main sessions, which still
+   // own their summary. Do this BEFORE ensureWorkerRunning() so a subagent Stop hook
+   // does not bootstrap the worker.
+   if (input.agentId) {
+     logger.debug('HOOK', 'Skipping summary: subagent context detected', {
+       sessionId: input.sessionId,
+       agentId: input.agentId,
+       agentType: input.agentType
+     });
+     return { continue: true, suppressOutput: true, exitCode: HOOK_EXIT_CODES.SUCCESS };
+   }

    // Ensure worker is running before any other logic
    const workerReady = await ensureWorkerRunning();
    if (!workerReady) {
@@ -94,11 +108,18 @@ export const summarizeHandler: EventHandler = {
    let summaryStored: boolean | null = null;
    while ((Date.now() - waitStart) < MAX_WAIT_FOR_SUMMARY_MS) {
      await new Promise(resolve => setTimeout(resolve, POLL_INTERVAL_MS));

+     let statusResponse: Response;
+     let status: { queueLength?: number; summaryStored?: boolean | null };
      try {
-       const statusResponse = await workerHttpRequest(`/api/sessions/status?contentSessionId=${encodeURIComponent(sessionId)}`, {
-         timeoutMs: 5000
-       });
-       const status = await statusResponse.json() as { queueLength?: number; summaryStored?: boolean | null };
+       statusResponse = await workerHttpRequest(`/api/sessions/status?contentSessionId=${encodeURIComponent(sessionId)}`, { timeoutMs: 5000 });
+       status = await statusResponse.json() as { queueLength?: number; summaryStored?: boolean | null };
+     } catch (pollError) {
+       // Worker may be busy — keep polling
+       logger.debug('HOOK', 'Summary status poll failed, retrying', { error: pollError instanceof Error ? pollError.message : String(pollError) });
+       continue;
+     }

      const queueLength = status.queueLength ?? 0;
      // Only treat an empty queue as completion when the session exists (non-404).
      // A 404 means the session was not found — not that processing finished.
@@ -118,9 +139,6 @@ export const summarizeHandler: EventHandler = {
      }
      break;
    }
-     } catch {
-       // Worker may be busy — keep polling
-     }
  }

  // 3. Complete the session — clean up active sessions map.

@@ -10,6 +10,25 @@ import type { EventHandler, NormalizedHookInput, HookResult } from '../types.js'
import { ensureWorkerRunning, getWorkerPort, workerHttpRequest } from '../../shared/worker-utils.js';
import { HOOK_EXIT_CODES } from '../../shared/hook-constants.js';

+async function fetchAndDisplayContext(project: string, colorsParam: string, port: number): Promise<void> {
+  const response = await workerHttpRequest(
+    `/api/context/inject?project=${encodeURIComponent(project)}${colorsParam}`
+  );
+
+  if (!response.ok) {
+    return;
+  }
+
+  const output = await response.text();
+  process.stderr.write(
+    "\n\n" + String.fromCodePoint(0x1F4DD) + " Claude-Mem Context Loaded\n\n" +
+    output +
+    "\n\n" + String.fromCodePoint(0x1F4A1) + " Wrap any message with <private> ... </private> to prevent storing sensitive information.\n" +
+    "\n" + String.fromCodePoint(0x1F4AC) + " Community https://discord.gg/J4wttp9vDu" +
+    `\n` + String.fromCodePoint(0x1F4FA) + ` Watch live in browser http://localhost:${port}/\n`
+  );
+}

export const userMessageHandler: EventHandler = {
  async execute(input: NormalizedHookInput): Promise<HookResult> {
    // Ensure worker is running
@@ -21,36 +40,12 @@ export const userMessageHandler: EventHandler = {

    const port = getWorkerPort();
    const project = basename(input.cwd ?? process.cwd());

    // Fetch formatted context directly from worker API
    // Only request ANSI colors for platforms that render them (claude-code)
    const colorsParam = input.platform === 'claude-code' ? '&colors=true' : '';

    try {
-     const response = await workerHttpRequest(
-       `/api/context/inject?project=${encodeURIComponent(project)}${colorsParam}`
-     );
-
-     if (!response.ok) {
-       // Don't throw - context fetch failure should not block the user's prompt
-       return { exitCode: HOOK_EXIT_CODES.SUCCESS };
-     }
-
-     const output = await response.text();
-
-     // Write to stderr for user visibility
-     // Note: Using process.stderr.write instead of console.error to avoid
-     // Claude Code treating this as a hook error. The actual hook output
-     // goes to stdout via hook-command.ts JSON serialization.
-     process.stderr.write(
-       "\n\n" + String.fromCodePoint(0x1F4DD) + " Claude-Mem Context Loaded\n\n" +
-       output +
-       "\n\n" + String.fromCodePoint(0x1F4A1) + " Wrap any message with <private> ... </private> to prevent storing sensitive information.\n" +
-       "\n" + String.fromCodePoint(0x1F4AC) + " Community https://discord.gg/J4wttp9vDu" +
-       `\n` + String.fromCodePoint(0x1F4FA) + ` Watch live in browser http://localhost:${port}/\n`
-     );
-   } catch (error) {
+     await fetchAndDisplayContext(project, colorsParam, port);
+   } catch {
-     // Worker unreachable — skip user message gracefully
+     // User message context error is non-critical — skip gracefully
    }

    return { exitCode: HOOK_EXIT_CODES.SUCCESS };

+20 -11
@@ -65,17 +65,12 @@ export function isWorkerUnavailableError(error: unknown): boolean {
  return false;
}

-export async function hookCommand(platform: string, event: string, options: HookCommandOptions = {}): Promise<number> {
-  // Suppress stderr in hook context — Claude Code shows stderr as error UI (#1181)
-  // Exit 1: stderr shown to user. Exit 2: stderr fed to Claude for processing.
-  // All diagnostics go to log file via logger; stderr must stay clean.
-  const originalStderrWrite = process.stderr.write.bind(process.stderr);
-  process.stderr.write = (() => true) as typeof process.stderr.write;
-
-  try {
-    const adapter = getPlatformAdapter(platform);
-    const handler = getEventHandler(event);
-
+async function executeHookPipeline(
+  adapter: ReturnType<typeof getPlatformAdapter>,
+  handler: ReturnType<typeof getEventHandler>,
+  platform: string,
+  options: HookCommandOptions
+): Promise<number> {
  const rawInput = await readJsonFromStdin();
  const input = adapter.normalizeInput(rawInput);
  input.platform = platform; // Inject platform for handler-level decisions
@@ -88,6 +83,20 @@ export async function hookCommand(platform: string, event: string, options: Hook
    process.exit(exitCode);
  }
+  return exitCode;
+}

+export async function hookCommand(platform: string, event: string, options: HookCommandOptions = {}): Promise<number> {
+  // Suppress stderr in hook context — Claude Code shows stderr as error UI (#1181)
+  // Exit 1: stderr shown to user. Exit 2: stderr fed to Claude for processing.
+  // All diagnostics go to log file via logger; stderr must stay clean.
+  const originalStderrWrite = process.stderr.write.bind(process.stderr);
+  process.stderr.write = (() => true) as typeof process.stderr.write;
+
+  const adapter = getPlatformAdapter(platform);
+  const handler = getEventHandler(event);
+
+  try {
+    return await executeHookPipeline(adapter, handler, platform, options);
  } catch (error) {
    if (isWorkerUnavailableError(error)) {
      // Worker unavailable — degrade gracefully, don't block the user

+20 -11
@@ -7,6 +7,8 @@
// to parse after each chunk. Once we have valid JSON, we resolve immediately
// without waiting for EOF. This is the proper fix, not a timeout workaround.

+import { logger } from '../utils/logger.js';
+
/**
 * Check if stdin is available and readable.
 *
@@ -29,9 +31,10 @@ function isStdinAvailable(): boolean {
    // eslint-disable-next-line @typescript-eslint/no-unused-expressions
    stdin.readable;
    return true;
-  } catch {
+  } catch (error) {
    // Bun crashed trying to access stdin (EINVAL from fstat)
    // This is expected when Claude Code doesn't provide valid stdin
+   logger.debug('HOOK', 'stdin not available (expected for some runtimes)', { error: error instanceof Error ? error.message : String(error) });
    return false;
  }
}
@@ -49,8 +52,9 @@ function tryParseJson(input: string): { success: true; value: unknown } | { succ
  try {
    const value = JSON.parse(trimmed);
    return { success: true, value };
-  } catch {
-    // JSON is incomplete or invalid
+  } catch (error) {
+    // JSON is incomplete or invalid — expected during incremental parsing
+    logger.debug('HOOK', 'JSON parse attempt incomplete', { error: error instanceof Error ? error.message : String(error) });
    return { success: false };
  }
}
@@ -128,8 +132,7 @@ export async function readJsonFromStdin(): Promise<unknown> {
    }
  }, SAFETY_TIMEOUT_MS);

-  try {
-    process.stdin.on('data', (chunk) => {
+  const onData = (chunk: Buffer | string) => {
    input += chunk;

    // Clear any pending parse delay
@@ -148,9 +151,9 @@ export async function readJsonFromStdin(): Promise<unknown> {
    parseDelayId = setTimeout(() => {
      tryResolveWithJson();
    }, PARSE_DELAY_MS);
-  });
+  };

-  process.stdin.on('end', () => {
+  const onEnd = () => {
    // stdin closed - parse whatever we have
    if (!resolved) {
      if (!tryResolveWithJson()) {
@@ -158,17 +161,23 @@ export async function readJsonFromStdin(): Promise<unknown> {
        resolveWith(input.trim() ? undefined : undefined);
      }
    }
-  });
+  };

-  process.stdin.on('error', () => {
+  const onError = () => {
    if (!resolved) {
      // Don't reject on stdin errors - just return undefined
      // This is more graceful for hook execution
      resolveWith(undefined);
    }
-  });
-  } catch {
+  };
+
+  try {
+    process.stdin.on('data', onData);
+    process.stdin.on('end', onEnd);
+    process.stdin.on('error', onError);
+  } catch (error) {
    // If attaching listeners fails (Bun stdin issue), resolve with undefined
+   logger.debug('HOOK', 'Failed to attach stdin listeners', { error: error instanceof Error ? error.message : String(error) });
    resolved = true;
    clearTimeout(safetyTimeoutId);
    cleanup();

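The stdin reader above resolves as soon as the accumulated chunks parse, rather than waiting for EOF. A minimal sketch of that incremental-parse loop, with illustrative names:

```typescript
// Accumulate chunks and attempt JSON.parse after each one; a parse failure
// just means "incomplete so far", not a fatal error, so it is swallowed
// and the loop keeps reading.
function tryParse(buffer: string): { done: true; value: unknown } | { done: false } {
  const trimmed = buffer.trim();
  if (!trimmed) return { done: false };
  try {
    return { done: true, value: JSON.parse(trimmed) };
  } catch {
    return { done: false }; // incomplete JSON — keep reading
  }
}

let input = "";
for (const chunk of ['{"session', 'Id": "abc"', "}"]) {
  input += chunk;
  const result = tryParse(input);
  if (result.done) {
    console.log(result.value); // parses only once the final chunk arrives
    break;
  }
}
```

This is why the real `tryParseJson` hunk only downgrades the failed parse to a debug log instead of warning: failure is the expected state mid-stream.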
@@ -12,6 +12,10 @@ export interface NormalizedHookInput {
  edits?: unknown[]; // afterFileEdit
  // Platform-specific metadata (source, reason, trigger, mcp_context, etc.)
  metadata?: Record<string, unknown>;
+  // Claude Code subagent identity — present only when hook fires inside a subagent.
+  // Main session has both undefined. Discriminator for subagent context.
+  agentId?: string; // Claude Code subagent agent_id (undefined in main session)
+  agentType?: string; // Claude Code subagent agent_type (undefined in main session)
}

export interface HookResult {

@@ -105,17 +105,13 @@ async function workerPost(
|
||||
path: string,
|
||||
body: Record<string, unknown>,
|
||||
): Promise<Record<string, unknown> | null> {
|
||||
let response: Response;
|
||||
try {
|
||||
const response = await fetch(`${WORKER_BASE_URL}${path}`, {
|
||||
response = await fetch(`${WORKER_BASE_URL}${path}`, {
|
||||
method: "POST",
|
||||
headers: { "Content-Type": "application/json" },
|
||||
body: JSON.stringify(body),
|
||||
});
|
||||
if (!response.ok) {
|
||||
console.warn(`[claude-mem] Worker POST ${path} returned ${response.status}`);
|
||||
return null;
|
||||
}
|
||||
return (await response.json()) as Record<string, unknown>;
|
||||
} catch (error: unknown) {
|
||||
// Gracefully handle ECONNREFUSED — worker may not be running
|
||||
const message = error instanceof Error ? error.message : String(error);
|
||||
@@ -124,6 +120,12 @@ async function workerPost(
|
||||
}
|
||||
return null;
|
||||
}
|
||||
|
||||
if (!response.ok) {
|
||||
console.warn(`[claude-mem] Worker POST ${path} returned ${response.status}`);
|
||||
return null;
|
||||
}
|
||||
return (await response.json()) as Record<string, unknown>;
|
||||
}
|
||||
|
||||
function workerPostFireAndForget(
|
||||
@@ -339,8 +341,14 @@ export const ClaudeMemPlugin = async (ctx: OpenCodePluginContext) => {
|
||||
return "claude-mem worker is not running. Start it with: npx claude-mem start";
|
||||
}
|
||||
|
||||
let data: any;
|
||||
try {
|
||||
const data = JSON.parse(text);
|
||||
data = JSON.parse(text);
|
||||
} catch (error: unknown) {
|
||||
console.warn('[claude-mem] Failed to parse search results:', error instanceof Error ? error.message : String(error));
|
||||
return "Failed to parse search results.";
|
||||
}
|
||||
|
||||
const items = Array.isArray(data.items) ? data.items : [];
|
||||
if (items.length === 0) {
|
||||
return `No results found for "${query}".`;
|
||||
@@ -354,9 +362,6 @@ export const ClaudeMemPlugin = async (ctx: OpenCodePluginContext) => {
|
||||
return `${index + 1}. ${title}${project}`;
|
||||
})
|
||||
.join("\n");
|
||||
} catch {
|
||||
return "Failed to parse search results.";
|
||||
}
|
||||
},
|
||||
} satisfies ToolDefinition,
|
||||
},
|
||||
|
||||
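The restructuring in this hunk follows one pattern repeated throughout the release: declare the `response` binding before the `try`, guard only the network call, and handle HTTP status checks and body parsing outside the catch. A minimal standalone sketch of that shape — `fetchLike` is a hypothetical injection point added here for testability; the real code calls the global `fetch` directly:

```typescript
// Pattern: scope the try/catch to the single fallible operation (the network
// call), so HTTP-level handling is not accidentally swallowed by the catch.
type FetchLike = (url: string) => Promise<{ ok: boolean; json(): Promise<unknown> }>;

async function postOnce(url: string, fetchLike: FetchLike): Promise<unknown | null> {
  let response: Awaited<ReturnType<FetchLike>>;
  try {
    response = await fetchLike(url); // only the network call is guarded
  } catch {
    return null; // e.g. ECONNREFUSED — worker not running
  }
  if (!response.ok) return null; // HTTP errors handled outside the try
  return response.json();
}
```

The payoff is that a JSON-parse failure or a thrown status handler no longer masquerades as a connection error.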
@@ -38,7 +38,11 @@ function isCommandInPath(command: string): boolean {
     const whichCommand = IS_WINDOWS ? 'where' : 'which';
     execSync(`${whichCommand} ${command}`, { stdio: 'pipe' });
     return true;
-  } catch {
+  } catch (error: unknown) {
+    // Command not found in PATH — expected for non-installed IDEs
+    if (process.env.DEBUG) {
+      console.error(`[ide-detection] ${command} not in PATH:`, error instanceof Error ? error.message : String(error));
+    }
     return false;
   }
 }
@@ -53,7 +57,8 @@ function hasVscodeExtension(extensionNameFragment: string): boolean {
   try {
     const entries = readdirSync(extensionsDirectory);
     return entries.some((entry) => entry.toLowerCase().includes(extensionNameFragment.toLowerCase()));
-  } catch {
+  } catch (error: unknown) {
+    console.warn('[ide-detection] Failed to read VS Code extensions directory:', error instanceof Error ? error.message : String(error));
     return false;
   }
 }

@@ -128,7 +128,8 @@ async function setupIDEs(selectedIDEs: string[]): Promise<string[]> {
       { stdio: 'inherit' },
     );
     log.success('Claude Code: plugin installed via CLI.');
-  } catch {
+  } catch (error: unknown) {
+    console.error('[install] Claude Code plugin install error:', error instanceof Error ? error.message : String(error));
     log.error('Claude Code: plugin install failed. Is `claude` CLI on your PATH?');
     failedIDEs.push(ideId);
   }
@@ -372,7 +373,8 @@ function runSmartInstall(): boolean {
     ...(IS_WINDOWS ? { shell: true as const } : {}),
   });
   return true;
-  } catch {
+  } catch (error: unknown) {
+    console.warn('[install] smart-install error:', error instanceof Error ? error.message : String(error));
     log.warn('smart-install encountered an issue. You may need to install Bun/uv manually.');
     return false;
   }
@@ -409,7 +411,8 @@ export async function runInstallCommand(options: InstallOptions = {}): Promise<v
       readFileSync(join(marketplaceDir, 'plugin', '.claude-plugin', 'plugin.json'), 'utf-8'),
     );
     log.warn(`Existing installation detected (v${existingPluginJson.version ?? 'unknown'}).`);
-  } catch {
+  } catch (error: unknown) {
+    console.warn('[install] Failed to read existing plugin version:', error instanceof Error ? error.message : String(error));
     log.warn('Existing installation detected.');
   }

@@ -498,7 +501,8 @@ export async function runInstallCommand(options: InstallOptions = {}): Promise<v
   try {
     runNpmInstallInMarketplace();
     return `Dependencies installed ${pc.green('OK')}`;
-  } catch {
+  } catch (error: unknown) {
+    console.warn('[install] npm install error:', error instanceof Error ? error.message : String(error));
     return `Dependencies may need manual install ${pc.yellow('!')}`;
   }
 },

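The expression `error instanceof Error ? error.message : String(error)` recurs in every hunk above (and `error instanceof Error ? error : new Error(String(error))` in the logger-based files below). A sketch of factoring the pair into helpers — `toError`/`toMessage` are hypothetical names for illustration, not helpers these diffs define:

```typescript
// Normalize an `unknown` catch binding into an Error (or its message) without
// the unsafe `error as Error` cast the release is removing.
function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value));
}

function toMessage(value: unknown): string {
  return value instanceof Error ? value.message : String(value);
}
```

With helpers like these, each catch block shrinks back to one line while keeping the `unknown`-safe narrowing.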
@@ -154,8 +154,20 @@ export async function runSearchCommand(queryParts: string[]): Promise<void> {
   const workerPort = process.env.CLAUDE_MEM_WORKER_PORT || '37777';
   const searchUrl = `http://127.0.0.1:${workerPort}/api/search?query=${encodeURIComponent(query)}`;

+  let response: Response;
   try {
-    const response = await fetch(searchUrl);
+    response = await fetch(searchUrl);
+  } catch (error: unknown) {
+    const message = error instanceof Error ? error.message : String(error);
+    const cause = error instanceof Error ? (error as any).cause : undefined;
+    if (cause?.code === 'ECONNREFUSED' || message.includes('ECONNREFUSED')) {
+      console.error(pc.red('Worker is not running.'));
+      console.error(`Start it with: ${pc.bold('npx claude-mem start')}`);
+      process.exit(1);
+    }
+    console.error(pc.red(`Search failed: ${message}`));
+    process.exit(1);
+  }

   if (!response.ok) {
     if (response.status === 404) {
@@ -167,22 +179,20 @@ export async function runSearchCommand(queryParts: string[]): Promise<void> {
     process.exit(1);
   }

-  const data = await response.json();
+  let data: unknown;
+  try {
+    data = await response.json();
+  } catch (error: unknown) {
+    const message = error instanceof Error ? error.message : String(error);
+    console.error(pc.red(`Search failed: invalid JSON response (${message})`));
+    process.exit(1);
+  }

   if (typeof data === 'object' && data !== null) {
     console.log(JSON.stringify(data, null, 2));
   } else {
     console.log(data);
   }
-  } catch (error: any) {
-    if (error?.cause?.code === 'ECONNREFUSED' || error?.message?.includes('ECONNREFUSED')) {
-      console.error(pc.red('Worker is not running.'));
-      console.error(`Start it with: ${pc.bold('npx claude-mem start')}`);
-      process.exit(1);
-    }
-    console.error(pc.red(`Search failed: ${error.message}`));
-    process.exit(1);
-  }
 }

 /**

@@ -120,8 +120,10 @@ export async function runUninstallCommand(): Promise<void> {
         signal: AbortSignal.timeout(1000),
       });
       // Still alive — keep waiting
-    } catch {
-      break; // Connection refused = worker is gone
+    } catch (error: unknown) {
+      // Connection refused = worker is gone (expected shutdown behavior)
+      console.error('[uninstall] Worker health check failed (worker stopped):', error instanceof Error ? error.message : String(error));
+      break;
     }
   }
   p.log.info('Worker service stopped.');
@@ -201,8 +203,9 @@ export async function runUninstallCommand(): Promise<void> {
       if (result === 0) {
         p.log.info(`${label}: removed.`);
       }
-    } catch {
-      // IDE not configured or uninstaller errored — skip silently
+    } catch (error: unknown) {
+      // IDE not configured or uninstaller errored — log and continue
+      console.warn(`[uninstall] ${label} cleanup failed:`, error instanceof Error ? error.message : String(error));
     }
   }

@@ -79,7 +79,8 @@ export function getBunVersionString(): string | null {
     shell: IS_WINDOWS,
   });
   return result.status === 0 ? result.stdout.trim() : null;
-  } catch {
+  } catch (error: unknown) {
+    console.error('[bun-resolver] Failed to get Bun version:', error instanceof Error ? error.message : String(error));
     return null;
   }
 }

+78 -5
@@ -113,8 +113,13 @@ export function parseObservations(text: string, correlationId?: string): ParsedO
 /**
  * Parse summary XML block from SDK response
  * Returns null if no valid summary found or if summary was skipped
+ *
+ * @param coerceFromObservation - When true, attempts to convert <observation> tags
+ * into summary fields if no <summary> tags are found. Only set this when the
+ * response was expected to be a summary (i.e., a summarize message was sent).
+ * Prevents the infinite retry loop described in #1633.
  */
-export function parseSummary(text: string, sessionId?: number): ParsedSummary | null {
+export function parseSummary(text: string, sessionId?: number, coerceFromObservation: boolean = false): ParsedSummary | null {
   // Check for skip_summary first
   const skipRegex = /<skip_summary\s+reason="([^"]+)"\s*\/>/;
   const skipMatch = skipRegex.exec(text);
@@ -132,10 +137,23 @@ export function parseSummary(text: string, sessionId?: number): ParsedSummary |
   const summaryMatch = summaryRegex.exec(text);

   if (!summaryMatch) {
-    // Log when the response contains <observation> instead of <summary>
-    // to help diagnose prompt conditioning issues (see #1312)
-    if (/<observation>/.test(text)) {
-      logger.warn('PARSER', 'Summary response contained <observation> tags instead of <summary> — prompt conditioning may need strengthening', { sessionId });
+    // When the LLM returns <observation> tags instead of <summary> tags on a
+    // summary turn, coerce the observation content into summary fields rather
+    // than discarding it. This breaks the infinite retry loop described in
+    // #1633: without coercion, the summary is silently dropped, the session
+    // completes without a summary, a new session is spawned with an ever-growing
+    // prompt, and the cycle repeats.
+    //
+    // parseSummary is called on every response (see ResponseProcessor), not just
+    // summary turns — so the absence of <summary> in an observation response is
+    // expected, not a prompt-conditioning failure. Only act when the caller
+    // actually expected a summary (coerceFromObservation=true).
+    if (coerceFromObservation && /<observation>/.test(text)) {
+      const coerced = coerceObservationToSummary(text, sessionId);
+      if (coerced) {
+        return coerced;
+      }
+      logger.warn('PARSER', 'Summary response contained <observation> tags instead of <summary> — coercion failed, no usable content', { sessionId });
     }
     return null;
   }
@@ -171,6 +189,17 @@ export function parseSummary(text: string, sessionId?: number): ParsedSummary |
   // This is NOT the same as missing some fields (which we intentionally allow above).
   // Fix for #1360.
   if (!request && !investigated && !learned && !completed && !next_steps) {
+    // If the response also contains <observation> tags with real content, fall
+    // back to coercion rather than discarding the response entirely — this covers
+    // the case where the LLM wraps empty <summary></summary> around observation
+    // content, which would otherwise resurrect the #1633 retry loop.
+    if (coerceFromObservation && /<observation>/.test(text)) {
+      const coerced = coerceObservationToSummary(text, sessionId);
+      if (coerced) {
+        logger.warn('PARSER', 'Empty <summary> match rejected — coerced from <observation> fallback (#1633)', { sessionId });
+        return coerced;
+      }
+    }
     logger.warn('PARSER', 'Summary match has no sub-tags — skipping false positive', { sessionId });
     return null;
   }
@@ -185,6 +214,50 @@ export function parseSummary(text: string, sessionId?: number): ParsedSummary |
   };
 }

+/**
+ * Coerce <observation> response into a ParsedSummary when <summary> tags are missing.
+ * Maps observation fields to the closest summary equivalents so that a usable
+ * summary is stored instead of nothing — breaking the retry loop (#1633).
+ */
+function coerceObservationToSummary(text: string, sessionId?: number): ParsedSummary | null {
+  // Iterate all <observation> blocks — if the LLM emits multiple and the first is
+  // empty, we still want to salvage the first one that has usable content.
+  const obsRegex = /<observation>([\s\S]*?)<\/observation>/g;
+  let obsMatch: RegExpExecArray | null;
+  let blockIndex = 0;
+
+  while ((obsMatch = obsRegex.exec(text)) !== null) {
+    const obsContent = obsMatch[1];
+    const title = extractField(obsContent, 'title');
+    const subtitle = extractField(obsContent, 'subtitle');
+    const narrative = extractField(obsContent, 'narrative');
+    const facts = extractArrayElements(obsContent, 'facts', 'fact');
+
+    if (title || narrative || facts.length > 0) {
+      // Map observation fields → summary fields (best-effort)
+      const request = title || subtitle || null;
+      const investigated = narrative || null;
+      const learned = facts.length > 0 ? facts.join('; ') : null;
+      const completed = title ? `${title}${subtitle ? ' — ' + subtitle : ''}` : null;
+      const next_steps = null; // No direct observation equivalent
+
+      logger.warn('PARSER', 'Coerced <observation> response into <summary> to prevent retry loop (#1633)', {
+        sessionId,
+        blockIndex,
+        hasTitle: !!title,
+        hasNarrative: !!narrative,
+        factCount: facts.length,
+      });
+
+      return { request, investigated, learned, completed, next_steps, notes: null };
+    }
+
+    blockIndex++;
+  }
+
+  return null;
+}
+
 /**
  * Extract a simple field value from XML content
  * Returns null for missing or empty/whitespace-only fields

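The coercion added above can be condensed into a standalone sketch. `extractField` here is a simplified stand-in for the parser's real helper (whose exact implementation isn't shown in the hunk): it pulls the first non-empty `<tag>…</tag>` value out of an observation block.

```typescript
// Simplified stand-in for the parser's extractField: first <tag>…</tag> value,
// null when missing or whitespace-only.
function extractField(content: string, tag: string): string | null {
  const match = new RegExp(`<${tag}>([\\s\\S]*?)</${tag}>`).exec(content);
  const value = match?.[1].trim();
  return value ? value : null;
}

// Condensed coercion: scan every <observation> block and salvage the first one
// with usable content, mapping observation fields onto summary-like fields.
function coerceFirstObservation(text: string): { request: string | null; investigated: string | null } | null {
  const obsRegex = /<observation>([\s\S]*?)<\/observation>/g;
  let match: RegExpExecArray | null;
  while ((match = obsRegex.exec(text)) !== null) {
    const title = extractField(match[1], "title");
    const narrative = extractField(match[1], "narrative");
    if (title || narrative) {
      return { request: title, investigated: narrative }; // best-effort field mapping
    }
  }
  return null; // no observation block had usable content
}
```

The loop (rather than a single match) mirrors the comment in the hunk: if the model emits several blocks and the first is empty, a later block can still be salvaged.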
+24 -7
@@ -6,6 +6,20 @@
 import { logger } from '../utils/logger.js';
 import type { ModeConfig } from '../services/domain/types.js';

+/**
+ * Marker string embedded in summary prompts — used by ResponseProcessor to detect
+ * whether the most recent user message was a summary request (enables observation→summary
+ * coercion for #1633). Keep in sync with buildSummaryPrompt below.
+ */
+export const SUMMARY_MODE_MARKER = 'MODE SWITCH: PROGRESS SUMMARY';
+
+/**
+ * Maximum consecutive summary failures before the circuit breaker opens.
+ * After this many failures, SessionManager.queueSummarize will skip further
+ * summarize requests to prevent the infinite retry loop (#1633).
+ */
+export const MAX_CONSECUTIVE_SUMMARY_FAILURES = 3;
+
 export interface Observation {
   id: number;
   tool_name: string;
@@ -95,19 +109,19 @@ export function buildObservationPrompt(obs: Observation): string {

   try {
     toolInput = typeof obs.tool_input === 'string' ? JSON.parse(obs.tool_input) : obs.tool_input;
-  } catch (error) {
+  } catch (error: unknown) {
     logger.debug('SDK', 'Tool input is plain string, using as-is', {
       toolName: obs.tool_name
-    }, error as Error);
+    }, error instanceof Error ? error : new Error(String(error)));
     toolInput = obs.tool_input;
   }

   try {
     toolOutput = typeof obs.tool_output === 'string' ? JSON.parse(obs.tool_output) : obs.tool_output;
-  } catch (error) {
+  } catch (error: unknown) {
     logger.debug('SDK', 'Tool output is plain string, using as-is', {
       toolName: obs.tool_name
-    }, error as Error);
+    }, error instanceof Error ? error : new Error(String(error)));
     toolOutput = obs.tool_output;
   }

@@ -134,9 +148,11 @@ export function buildSummaryPrompt(session: SDKSession, mode: ModeConfig): strin
     return '';
   })();

-  return `--- MODE SWITCH: PROGRESS SUMMARY ---
-Do NOT output <observation> tags. This is a summary request, not an observation request.
-Your response MUST use <summary> tags ONLY. Any <observation> output will be discarded.
+  return `--- ${SUMMARY_MODE_MARKER} ---
+⚠️ CRITICAL TAG REQUIREMENT — READ CAREFULLY:
+• You MUST wrap your ENTIRE response in <summary>...</summary> tags.
+• Do NOT use <observation> tags. <observation> output will be DISCARDED and cause a system error.
+• The ONLY accepted root tag is <summary>. Any other root tag is a protocol violation.

 ${mode.prompts.header_summary_checkpoint}
 ${mode.prompts.summary_instruction}
@@ -154,6 +170,7 @@ ${mode.prompts.summary_format_instruction}
 <notes>${mode.prompts.xml_summary_notes_placeholder}</notes>
 </summary>

+REMINDER: Your response MUST use <summary> as the root tag, NOT <observation>.
 ${mode.prompts.summary_footer}`;
 }

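A sketch of the circuit-breaker behaviour that `MAX_CONSECUTIVE_SUMMARY_FAILURES` implies: after N consecutive failures, further summarize requests are skipped until a success resets the counter. The class and method names below are hypothetical — per the new doc comment, the real gating lives in `SessionManager.queueSummarize`, which this diff does not show:

```typescript
const MAX_CONSECUTIVE_SUMMARY_FAILURES = 3;

// Hypothetical illustration of the gating: the circuit opens after three
// consecutive failures and closes again on any success.
class SummaryCircuitBreaker {
  private failures = 0;

  shouldAttempt(): boolean {
    return this.failures < MAX_CONSECUTIVE_SUMMARY_FAILURES;
  }

  recordFailure(): void {
    this.failures++;
  }

  recordSuccess(): void {
    this.failures = 0; // any success closes the circuit again
  }
}
```

Combined with the observation→summary coercion in the parser, this is a second, independent guard against the #1633 retry loop: even if coercion never yields a usable summary, retries stop after the threshold.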
+27 -20
@@ -108,7 +108,6 @@ async function callWorkerAPI(
 ): Promise<{ content: Array<{ type: 'text'; text: string }>; isError?: boolean }> {
   logger.debug('SYSTEM', '→ Worker API', undefined, { endpoint, params });

-  try {
   const searchParams = new URLSearchParams();

   // Convert params to query string
@@ -119,6 +118,8 @@ async function callWorkerAPI(
   }

   const apiPath = `${endpoint}?${searchParams}`;

+  try {
     const response = await workerHttpRequest(apiPath);

     if (!response.ok) {
@@ -132,8 +133,8 @@ async function callWorkerAPI(

     // Worker returns { content: [...] } format directly
     return data;
-  } catch (error) {
-    logger.error('SYSTEM', '← Worker API error', { endpoint }, error as Error);
+  } catch (error: unknown) {
+    logger.error('SYSTEM', '← Worker API error', { endpoint }, error instanceof Error ? error : new Error(String(error)));
     return {
       content: [{
         type: 'text' as const,
@@ -144,16 +145,10 @@ async function callWorkerAPI(
   }
 }

-/**
- * Call Worker HTTP API with POST body
- */
-async function callWorkerAPIPost(
+async function executeWorkerPostRequest(
   endpoint: string,
   body: Record<string, any>
-): Promise<{ content: Array<{ type: 'text'; text: string }>; isError?: boolean }> {
-  logger.debug('HTTP', 'Worker API request (POST)', undefined, { endpoint });
-
-  try {
+): Promise<{ content: Array<{ type: 'text'; text: string }> }> {
   const response = await workerHttpRequest(endpoint, {
     method: 'POST',
     headers: { 'Content-Type': 'application/json' },
@@ -169,15 +164,27 @@ async function callWorkerAPIPost(

   logger.debug('HTTP', 'Worker API success (POST)', undefined, { endpoint });

   // Wrap raw data in MCP format
   return {
     content: [{
       type: 'text' as const,
       text: JSON.stringify(data, null, 2)
     }]
   };
-  } catch (error) {
-    logger.error('HTTP', 'Worker API error (POST)', { endpoint }, error as Error);
+}
+
+/**
+ * Call Worker HTTP API with POST body
+ */
+async function callWorkerAPIPost(
+  endpoint: string,
+  body: Record<string, any>
+): Promise<{ content: Array<{ type: 'text'; text: string }>; isError?: boolean }> {
+  logger.debug('HTTP', 'Worker API request (POST)', undefined, { endpoint });
+
+  try {
+    return await executeWorkerPostRequest(endpoint, body);
+  } catch (error: unknown) {
+    logger.error('HTTP', 'Worker API error (POST)', { endpoint }, error instanceof Error ? error : new Error(String(error)));
     return {
       content: [{
         type: 'text' as const,
@@ -195,9 +202,9 @@ async function verifyWorkerConnection(): Promise<boolean> {
   try {
     const response = await workerHttpRequest('/api/health');
     return response.ok;
-  } catch (error) {
+  } catch (error: unknown) {
     // Expected during worker startup or if worker is down
-    logger.debug('SYSTEM', 'Worker health check failed', {}, error as Error);
+    logger.debug('SYSTEM', 'Worker health check failed', {}, error instanceof Error ? error : new Error(String(error)));
     return false;
   }
 }
@@ -229,12 +236,12 @@ async function ensureWorkerConnection(): Promise<boolean> {
       );
     }
     return started;
-  } catch (error) {
+  } catch (error: unknown) {
     logger.error(
       'SYSTEM',
       'Worker auto-start threw — MCP tools that require the worker (search, timeline, get_observations) will fail until the worker is running.',
       undefined,
-      error as Error
+      error instanceof Error ? error : new Error(String(error))
     );
     return false;
   }
@@ -593,8 +600,8 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {

   try {
     return await tool.handler(request.params.arguments || {});
-  } catch (error) {
-    logger.error('SYSTEM', 'Tool execution failed', { tool: request.params.name }, error as Error);
+  } catch (error: unknown) {
+    logger.error('SYSTEM', 'Tool execution failed', { tool: request.params.name }, error instanceof Error ? error : new Error(String(error)));
     return {
       content: [{
         type: 'text' as const,

@@ -49,14 +49,18 @@ const VERSION_MARKER_PATH = path.join(
 function initializeDatabase(): SessionStore | null {
   try {
     return new SessionStore();
-  } catch (error: any) {
-    if (error.code === 'ERR_DLOPEN_FAILED') {
+  } catch (error: unknown) {
+    if (error instanceof Error && (error as NodeJS.ErrnoException).code === 'ERR_DLOPEN_FAILED') {
       try {
         unlinkSync(VERSION_MARKER_PATH);
       } catch (unlinkError) {
-        logger.debug('SYSTEM', 'Marker file cleanup failed (may not exist)', {}, unlinkError as Error);
+        if (unlinkError instanceof Error) {
+          logger.debug('WORKER', 'Marker file cleanup failed (may not exist)', {}, unlinkError);
+        } else {
+          logger.debug('WORKER', 'Marker file cleanup failed (may not exist)', { error: String(unlinkError) });
+        }
-        logger.error('SYSTEM', 'Native module rebuild needed - restart Claude Code to auto-fix');
       }
+      logger.error('WORKER', 'Native module rebuild needed - restart Claude Code to auto-fix');
       return null;
     }
     throw error;

@@ -208,52 +208,58 @@ function cwdToDashed(cwd: string): string {
 }

 /**
- * Extract prior messages from transcript file
+ * Find the last assistant message text from parsed transcript lines.
  */
-export function extractPriorMessages(transcriptPath: string): PriorMessages {
-  try {
-    if (!existsSync(transcriptPath)) {
-      return { userMessage: '', assistantMessage: '' };
-    }
-
-    const content = readFileSync(transcriptPath, 'utf-8').trim();
-    if (!content) {
-      return { userMessage: '', assistantMessage: '' };
-    }
-
-    const lines = content.split('\n').filter(line => line.trim());
-    let lastAssistantMessage = '';
-
-    for (let i = lines.length - 1; i >= 0; i--) {
-      try {
-        const line = lines[i];
-        if (!line.includes('"type":"assistant"')) {
-          continue;
-        }
+function parseAssistantTextFromLine(line: string): string | null {
+  if (!line.includes('"type":"assistant"')) return null;

   const entry = JSON.parse(line);
   if (entry.type === 'assistant' && entry.message?.content && Array.isArray(entry.message.content)) {
     let text = '';
     for (const block of entry.message.content) {
-      if (block.type === 'text') {
-        text += block.text;
-      }
+      if (block.type === 'text') text += block.text;
     }
     text = text.replace(SYSTEM_REMINDER_REGEX, '').trim();
-    if (text) {
-      lastAssistantMessage = text;
-      break;
-    }
+    if (text) return text;
   }
+  return null;
+}
+
+function findLastAssistantMessage(lines: string[]): string {
+  for (let i = lines.length - 1; i >= 0; i--) {
+    try {
+      const result = parseAssistantTextFromLine(lines[i]);
+      if (result) return result;
     } catch (parseError) {
-      logger.debug('PARSER', 'Skipping malformed transcript line', { lineIndex: i }, parseError as Error);
+      if (parseError instanceof Error) {
+        logger.debug('WORKER', 'Skipping malformed transcript line', { lineIndex: i }, parseError);
+      } else {
+        logger.debug('WORKER', 'Skipping malformed transcript line', { lineIndex: i, error: String(parseError) });
+      }
       continue;
     }
   }
+  return '';
+}
+
+/**
+ * Extract prior messages from transcript file
+ */
+export function extractPriorMessages(transcriptPath: string): PriorMessages {
+  try {
+    if (!existsSync(transcriptPath)) return { userMessage: '', assistantMessage: '' };
+    const content = readFileSync(transcriptPath, 'utf-8').trim();
+    if (!content) return { userMessage: '', assistantMessage: '' };
+
+    const lines = content.split('\n').filter(line => line.trim());
+    const lastAssistantMessage = findLastAssistantMessage(lines);
+    return { userMessage: '', assistantMessage: lastAssistantMessage };
   } catch (error) {
-    logger.failure('WORKER', `Failed to extract prior messages from transcript`, { transcriptPath }, error as Error);
+    if (error instanceof Error) {
+      logger.failure('WORKER', 'Failed to extract prior messages from transcript', { transcriptPath }, error);
+    } else {
+      logger.warn('WORKER', 'Failed to extract prior messages from transcript', { transcriptPath, error: String(error) });
+    }
     return { userMessage: '', assistantMessage: '' };
   }
 }

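The refactor above splits the transcript scan into `parseAssistantTextFromLine` and `findLastAssistantMessage`. A condensed, runnable sketch of the same scan — the entry shapes are simplified from what the hunk shows, and the `SYSTEM_REMINDER_REGEX` stripping step and logging are omitted:

```typescript
// Scan transcript JSONL lines from the end and return the first assistant
// message with non-empty text; malformed lines are skipped, not fatal.
function findLastAssistantMessage(lines: string[]): string {
  for (let i = lines.length - 1; i >= 0; i--) {
    if (!lines[i].includes('"type":"assistant"')) continue; // cheap pre-filter before JSON.parse
    try {
      const entry = JSON.parse(lines[i]);
      if (entry.type === "assistant" && Array.isArray(entry.message?.content)) {
        const text = entry.message.content
          .filter((block: { type: string }) => block.type === "text")
          .map((block: { text: string }) => block.text)
          .join("")
          .trim();
        if (text) return text;
      }
    } catch {
      continue; // malformed line — keep scanning earlier lines
    }
  }
  return "";
}
```

The backwards loop plus per-line try/catch is the key design choice: one corrupt transcript line no longer aborts the whole extraction.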
@@ -144,7 +144,11 @@ export class ModeManager {
       });
       return mode;
     } catch (error) {
-      logger.warn('SYSTEM', `Mode file not found: ${modeId}, falling back to 'code'`);
+      if (error instanceof Error) {
+        logger.warn('WORKER', `Mode file not found: ${modeId}, falling back to 'code'`, { message: error.message });
+      } else {
+        logger.warn('WORKER', `Mode file not found: ${modeId}, falling back to 'code'`, { error: String(error) });
+      }
       // If we're already trying to load 'code', throw to prevent infinite recursion
       if (modeId === 'code') {
         throw new Error('Critical: code.json mode file missing');
@@ -161,7 +165,11 @@ export class ModeManager {
       try {
         parentMode = this.loadMode(parentId);
       } catch (error) {
-        logger.warn('SYSTEM', `Parent mode '${parentId}' not found for ${modeId}, falling back to 'code'`);
+        if (error instanceof Error) {
+          logger.warn('WORKER', `Parent mode '${parentId}' not found for ${modeId}, falling back to 'code'`, { message: error.message });
+        } else {
+          logger.warn('WORKER', `Parent mode '${parentId}' not found for ${modeId}, falling back to 'code'`, { error: String(error) });
+        }
         parentMode = this.loadMode('code');
       }

@@ -171,7 +179,11 @@ export class ModeManager {
         overrideConfig = this.loadModeFile(overrideId);
         logger.debug('SYSTEM', `Loaded override file: ${overrideId} for parent ${parentId}`);
       } catch (error) {
-        logger.warn('SYSTEM', `Override file '${overrideId}' not found, using parent mode '${parentId}' only`);
+        if (error instanceof Error) {
+          logger.warn('WORKER', `Override file '${overrideId}' not found, using parent mode '${parentId}' only`, { message: error.message });
+        } else {
+          logger.warn('WORKER', `Override file '${overrideId}' not found, using parent mode '${parentId}' only`, { error: String(error) });
+        }
         this.activeMode = parentMode;
         return parentMode;
       }

@@ -53,7 +53,12 @@ export async function isPortInUse(port: number): Promise<boolean> {
   try {
     const response = await fetch(`http://127.0.0.1:${port}/api/health`);
     return response.ok;
-  } catch {
+  } catch (error) {
+    if (error instanceof Error) {
+      logger.debug('SYSTEM', 'Windows health check failed (port not in use)', {}, error);
+    } else {
+      logger.debug('SYSTEM', 'Windows health check failed (port not in use)', { error: String(error) });
+    }
     return false;
   }
 }
@@ -92,7 +97,11 @@ async function pollEndpointUntilOk(
       if (result.ok) return true;
     } catch (error) {
       // [ANTI-PATTERN IGNORED]: Retry loop - expected failures during startup, will retry
-      logger.debug('SYSTEM', retryLogMessage, {}, error as Error);
+      if (error instanceof Error) {
+        logger.debug('SYSTEM', retryLogMessage, {}, error);
+      } else {
+        logger.debug('SYSTEM', retryLogMessage, { error: String(error) });
+      }
     }
     await new Promise(r => setTimeout(r, 500));
   }
@@ -166,6 +175,7 @@ export function getInstalledPluginVersion(): string {
     const packageJson = JSON.parse(readFileSync(packageJsonPath, 'utf-8'));
     return packageJson.version;
   } catch (error: unknown) {
+    if (error instanceof Error) {
     const code = (error as NodeJS.ErrnoException).code;
     if (code === 'ENOENT' || code === 'EBUSY') {
       logger.debug('SYSTEM', 'Could not read plugin version (shutdown race)', { code });
@@ -173,6 +183,8 @@ export function getInstalledPluginVersion(): string {
     }
     throw error;
+    }
+    throw error;
   }
 }

 /**

@@ -53,12 +53,21 @@ function isBunExecutablePath(executablePath: string | undefined | null): boolean
 function lookupBinaryInPath(binaryName: string, platform: NodeJS.Platform): string | null {
   const command = platform === 'win32' ? `where ${binaryName}` : `which ${binaryName}`;

+  let output: string;
   try {
-    const output = execSync(command, {
+    output = execSync(command, {
       stdio: ['ignore', 'pipe', 'ignore'],
       encoding: 'utf-8',
       windowsHide: true
     });
+  } catch (error: unknown) {
+    if (error instanceof Error) {
+      logger.debug('SYSTEM', `Binary lookup failed for ${binaryName}`, { command }, error);
+    } else {
+      logger.debug('SYSTEM', `Binary lookup failed for ${binaryName}`, { command }, new Error(String(error)));
+    }
+    return null;
+  }

   const firstMatch = output
     .split(/\r?\n/)
@@ -66,9 +75,6 @@ function lookupBinaryInPath(binaryName: string, platform: NodeJS.Platform): stri
     .find(line => line.length > 0);

   return firstMatch || null;
-  } catch {
-    return null;
-  }
 }

 // Memoize the resolved runtime path for the no-options call site (which is
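A trimmed, standalone version of the `lookupBinaryInPath` flow above: probe PATH with `which`/`where`, return the first non-empty output line, and treat a non-zero exit as "not found" rather than an error (the logger calls are omitted here):

```typescript
import { execSync } from "node:child_process";

// Resolve a binary on PATH, or null when `which`/`where` exits non-zero.
function findBinary(binaryName: string, platform: NodeJS.Platform = process.platform): string | null {
  const command = platform === "win32" ? `where ${binaryName}` : `which ${binaryName}`;
  let output: string;
  try {
    output = execSync(command, { stdio: ["ignore", "pipe", "ignore"], encoding: "utf-8" });
  } catch {
    return null; // binary not on PATH — which/where exit non-zero and execSync throws
  }
  // `where` on Windows can return several matches, one per line; take the first.
  return output.split(/\r?\n/).map((line) => line.trim()).find((line) => line.length > 0) ?? null;
}
```

Note the same scoped-try shape as the other hunks in this release: `output` is declared before the `try` so the post-processing of the lookup result stays outside the catch.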
@@ -202,8 +208,12 @@ export function readPidFile(): PidInfo | null {
|
||||
|
||||
try {
|
||||
return JSON.parse(readFileSync(PID_FILE, 'utf-8'));
|
||||
} catch (error) {
|
||||
logger.warn('SYSTEM', 'Failed to parse PID file', { path: PID_FILE }, error as Error);
|
||||
} catch (error: unknown) {
|
||||
if (error instanceof Error) {
|
||||
logger.warn('SYSTEM', 'Failed to parse PID file', { path: PID_FILE }, error);
|
||||
} else {
|
||||
logger.warn('SYSTEM', 'Failed to parse PID file', { path: PID_FILE }, new Error(String(error)));
|
||||
}
|
||||
return null;
|
||||
}
|
||||
}
|
||||
@@ -216,9 +226,13 @@ export function removePidFile(): void {
|
||||
|
||||
try {
|
||||
unlinkSync(PID_FILE);
|
||||
} catch (error) {
|
||||
} catch (error: unknown) {
|
||||
// [ANTI-PATTERN IGNORED]: Cleanup function - PID file removal failure is non-critical
|
||||
logger.warn('SYSTEM', 'Failed to remove PID file', { path: PID_FILE }, error as Error);
|
||||
if (error instanceof Error) {
|
||||
logger.warn('SYSTEM', 'Failed to remove PID file', { path: PID_FILE }, error);
|
||||
} else {
|
||||
logger.warn('SYSTEM', 'Failed to remove PID file', { path: PID_FILE }, new Error(String(error)));
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -260,9 +274,13 @@ export async function getChildProcesses(parentPid: number): Promise<number[]> {
       .filter(line => line.length > 0 && /^\d+$/.test(line))
       .map(line => parseInt(line, 10))
       .filter(pid => pid > 0);
-  } catch (error) {
+  } catch (error: unknown) {
     // Shutdown cleanup - failure is non-critical, continue without child process cleanup
-    logger.error('SYSTEM', 'Failed to enumerate child processes', { parentPid }, error as Error);
+    if (error instanceof Error) {
+      logger.error('SYSTEM', 'Failed to enumerate child processes', { parentPid }, error);
+    } else {
+      logger.error('SYSTEM', 'Failed to enumerate child processes', { parentPid }, new Error(String(error)));
+    }
     return [];
   }
 }
@@ -287,9 +305,13 @@ export async function forceKillProcess(pid: number): Promise<void> {
       process.kill(pid, 'SIGKILL');
     }
     logger.info('SYSTEM', 'Killed process', { pid });
-  } catch (error) {
+  } catch (error: unknown) {
     // [ANTI-PATTERN IGNORED]: Shutdown cleanup - process already exited, continue
-    logger.debug('SYSTEM', 'Process already exited during force kill', { pid }, error as Error);
+    if (error instanceof Error) {
+      logger.debug('SYSTEM', 'Process already exited during force kill', { pid }, error);
+    } else {
+      logger.debug('SYSTEM', 'Process already exited during force kill', { pid }, new Error(String(error)));
+    }
   }
 }
 
@@ -304,7 +326,8 @@ export async function waitForProcessesExit(pids: number[], timeoutMs: number): P
     try {
       process.kill(pid, 0);
       return true;
-    } catch (error) {
+    } catch {
+      // process.kill(pid, 0) throws when PID doesn't exist — expected during cleanup
       // [ANTI-PATTERN IGNORED]: Tight loop checking 100s of PIDs every 100ms during cleanup
       return false;
     }
@@ -358,21 +381,12 @@ export function parseElapsedTime(etime: string): number {
 }
 
 /**
- * Clean up orphaned claude-mem processes from previous worker sessions
- *
- * Targets mcp-server.cjs, worker-service.cjs, and chroma-mcp processes
- * that survived a previous daemon crash. Only kills processes older than
- * ORPHAN_MAX_AGE_MINUTES to avoid killing the current session.
- *
- * The periodic ProcessRegistry reaper handles in-session orphans;
- * this function handles cross-session orphans at startup.
+ * Enumerate orphaned claude-mem processes matching ORPHAN_PROCESS_PATTERNS.
+ * Returns PIDs of processes older than ORPHAN_MAX_AGE_MINUTES.
  */
-export async function cleanupOrphanedProcesses(): Promise<void> {
-  const isWindows = process.platform === 'win32';
-  const currentPid = process.pid;
+async function enumerateOrphanedProcesses(isWindows: boolean, currentPid: number): Promise<number[]> {
   const pidsToKill: number[] = [];
 
   try {
     if (isWindows) {
       // Windows: Use WQL -Filter for server-side filtering (no $_ pipeline syntax).
       // Avoids Git Bash $_ interpretation (#1062) and PowerShell syntax errors (#1024).
@@ -385,7 +399,7 @@ export async function cleanupOrphanedProcesses(): Promise<void> {
 
       if (!stdout.trim() || stdout.trim() === 'null') {
         logger.debug('SYSTEM', 'No orphaned claude-mem processes found (Windows)');
-        return;
+        return [];
       }
 
       const processes = JSON.parse(stdout);
@@ -418,7 +432,7 @@ export async function cleanupOrphanedProcesses(): Promise<void> {
 
       if (!stdout.trim()) {
         logger.debug('SYSTEM', 'No orphaned claude-mem processes found (Unix)');
-        return;
+        return [];
      }
 
      const lines = stdout.trim().split('\n');
@@ -440,9 +454,34 @@ export async function cleanupOrphanedProcesses(): Promise<void> {
         }
       }
     }
-  } catch (error) {
+
+  return pidsToKill;
+}
+
+/**
+ * Clean up orphaned claude-mem processes from previous worker sessions
+ *
+ * Targets mcp-server.cjs, worker-service.cjs, and chroma-mcp processes
+ * that survived a previous daemon crash. Only kills processes older than
+ * ORPHAN_MAX_AGE_MINUTES to avoid killing the current session.
+ *
+ * The periodic ProcessRegistry reaper handles in-session orphans;
+ * this function handles cross-session orphans at startup.
+ */
+export async function cleanupOrphanedProcesses(): Promise<void> {
+  const isWindows = process.platform === 'win32';
+  const currentPid = process.pid;
+  let pidsToKill: number[];
+
+  try {
+    pidsToKill = await enumerateOrphanedProcesses(isWindows, currentPid);
+  } catch (error: unknown) {
     // Orphan cleanup is non-critical - log and continue
-    logger.error('SYSTEM', 'Failed to enumerate orphaned processes', {}, error as Error);
+    if (error instanceof Error) {
+      logger.error('SYSTEM', 'Failed to enumerate orphaned processes', {}, error);
+    } else {
+      logger.error('SYSTEM', 'Failed to enumerate orphaned processes', {}, new Error(String(error)));
+    }
     return;
   }
 
@@ -467,18 +506,26 @@ export async function cleanupOrphanedProcesses(): Promise<void> {
       }
       try {
         execSync(`taskkill /PID ${pid} /T /F`, { timeout: HOOK_TIMEOUTS.POWERSHELL_COMMAND, stdio: 'ignore', windowsHide: true });
-      } catch (error) {
+      } catch (error: unknown) {
         // [ANTI-PATTERN IGNORED]: Cleanup loop - process may have exited, continue to next PID
-        logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, error as Error);
+        if (error instanceof Error) {
+          logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, error);
+        } else {
+          logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, new Error(String(error)));
+        }
       }
     }
   } else {
     for (const pid of pidsToKill) {
       try {
         process.kill(pid, 'SIGKILL');
-      } catch (error) {
+      } catch (error: unknown) {
         // [ANTI-PATTERN IGNORED]: Cleanup loop - process may have exited, continue to next PID
-        logger.debug('SYSTEM', 'Process already exited', { pid }, error as Error);
+        if (error instanceof Error) {
+          logger.debug('SYSTEM', 'Process already exited', { pid }, error);
+        } else {
+          logger.debug('SYSTEM', 'Process already exited', { pid }, new Error(String(error)));
+        }
       }
     }
   }
@@ -494,35 +541,17 @@ const AGGRESSIVE_CLEANUP_PATTERNS = ['worker-service.cjs', 'chroma-mcp'];
 const AGE_GATED_CLEANUP_PATTERNS = ['mcp-server.cjs'];
 
 /**
- * Aggressive startup cleanup for orphaned claude-mem processes.
- *
- * Unlike cleanupOrphanedProcesses() which age-gates everything at 30 minutes,
- * this function kills worker-service.cjs and chroma-mcp processes immediately
- * (they should not outlive their parent worker). Only mcp-server.cjs keeps
- * the age threshold since it may be legitimately running.
- *
- * Called once at daemon startup.
+ * Enumerate processes for aggressive startup cleanup. Aggressive patterns are
+ * killed immediately; age-gated patterns only if older than ORPHAN_MAX_AGE_MINUTES.
  */
-export async function aggressiveStartupCleanup(): Promise<void> {
-  const isWindows = process.platform === 'win32';
-  const currentPid = process.pid;
+async function enumerateAggressiveCleanupProcesses(
+  isWindows: boolean,
+  currentPid: number,
+  protectedPids: Set<number>,
+  allPatterns: string[]
+): Promise<number[]> {
   const pidsToKill: number[] = [];
-  const allPatterns = [...AGGRESSIVE_CLEANUP_PATTERNS, ...AGE_GATED_CLEANUP_PATTERNS];
-
-  // Protect parent process (the hook that spawned us) from being killed.
-  // Without this, a new daemon kills its own parent hook process (#1426).
-  //
-  // Note: readPidFile() is not used here because start() writes the new PID
-  // before initializeBackground() calls this function, so readPidFile() would
-  // just return process.pid (already protected). If a pre-existing worker needs
-  // protection, ensureWorkerStarted() handles that by returning early when a
-  // healthy worker is detected — we never reach this code in that case.
-  const protectedPids = new Set<number>([currentPid]);
-  if (process.ppid && process.ppid > 0) {
-    protectedPids.add(process.ppid);
-  }
-
   try {
     if (isWindows) {
       // Use WQL -Filter for server-side filtering (no $_ pipeline syntax).
       // Avoids Git Bash $_ interpretation (#1062) and PowerShell syntax errors (#1024).
@@ -535,7 +564,7 @@ export async function aggressiveStartupCleanup(): Promise<void> {
 
       if (!stdout.trim() || stdout.trim() === 'null') {
         logger.debug('SYSTEM', 'No orphaned claude-mem processes found (Windows)');
-        return;
+        return [];
       }
 
       const processes = JSON.parse(stdout);
@@ -575,7 +604,7 @@ export async function aggressiveStartupCleanup(): Promise<void> {
 
       if (!stdout.trim()) {
         logger.debug('SYSTEM', 'No orphaned claude-mem processes found (Unix)');
-        return;
+        return [];
       }
 
       const lines = stdout.trim().split('\n');
@@ -605,8 +634,47 @@ export async function aggressiveStartupCleanup(): Promise<void> {
         }
       }
     }
-  } catch (error) {
-    logger.error('SYSTEM', 'Failed to enumerate orphaned processes during aggressive cleanup', {}, error as Error);
+
+  return pidsToKill;
+}
+
+/**
+ * Aggressive startup cleanup for orphaned claude-mem processes.
+ *
+ * Unlike cleanupOrphanedProcesses() which age-gates everything at 30 minutes,
+ * this function kills worker-service.cjs and chroma-mcp processes immediately
+ * (they should not outlive their parent worker). Only mcp-server.cjs keeps
+ * the age threshold since it may be legitimately running.
+ *
+ * Called once at daemon startup.
+ */
+export async function aggressiveStartupCleanup(): Promise<void> {
+  const isWindows = process.platform === 'win32';
+  const currentPid = process.pid;
+  const allPatterns = [...AGGRESSIVE_CLEANUP_PATTERNS, ...AGE_GATED_CLEANUP_PATTERNS];
+
+  // Protect parent process (the hook that spawned us) from being killed.
+  // Without this, a new daemon kills its own parent hook process (#1426).
+  //
+  // Note: readPidFile() is not used here because start() writes the new PID
+  // before initializeBackground() calls this function, so readPidFile() would
+  // just return process.pid (already protected). If a pre-existing worker needs
+  // protection, ensureWorkerStarted() handles that by returning early when a
+  // healthy worker is detected — we never reach this code in that case.
+  const protectedPids = new Set<number>([currentPid]);
+  if (process.ppid && process.ppid > 0) {
+    protectedPids.add(process.ppid);
+  }
+
+  let pidsToKill: number[];
+  try {
+    pidsToKill = await enumerateAggressiveCleanupProcesses(isWindows, currentPid, protectedPids, allPatterns);
+  } catch (error: unknown) {
+    if (error instanceof Error) {
+      logger.error('SYSTEM', 'Failed to enumerate orphaned processes during aggressive cleanup', {}, error);
+    } else {
+      logger.error('SYSTEM', 'Failed to enumerate orphaned processes during aggressive cleanup', {}, new Error(String(error)));
+    }
     return;
   }
 
@@ -625,16 +693,24 @@ export async function aggressiveStartupCleanup(): Promise<void> {
       if (!Number.isInteger(pid) || pid <= 0) continue;
       try {
         execSync(`taskkill /PID ${pid} /T /F`, { timeout: HOOK_TIMEOUTS.POWERSHELL_COMMAND, stdio: 'ignore', windowsHide: true });
-      } catch (error) {
-        logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, error as Error);
+      } catch (error: unknown) {
+        if (error instanceof Error) {
+          logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, error);
+        } else {
+          logger.debug('SYSTEM', 'Failed to kill process, may have already exited', { pid }, new Error(String(error)));
+        }
       }
     }
   } else {
     for (const pid of pidsToKill) {
       try {
         process.kill(pid, 'SIGKILL');
-      } catch (error) {
-        logger.debug('SYSTEM', 'Process already exited', { pid }, error as Error);
+      } catch (error: unknown) {
+        if (error instanceof Error) {
+          logger.debug('SYSTEM', 'Process already exited', { pid }, error);
+        } else {
+          logger.debug('SYSTEM', 'Process already exited', { pid }, new Error(String(error)));
+        }
       }
     }
   }
@@ -747,8 +823,22 @@ export function runOneTimeCwdRemap(dataDirectory?: string): void {
 
   logger.warn('SYSTEM', 'Running one-time cwd-based project remap', { dbPath });
 
-  let db: import('bun:sqlite').Database | null = null;
   try {
+    executeCwdRemap(dbPath, effectiveDataDir, markerPath);
+  } catch (err: unknown) {
+    if (err instanceof Error) {
+      logger.error('SYSTEM', 'cwd-remap failed, marker not written (will retry on next startup)', {}, err);
+    } else {
+      logger.error('SYSTEM', 'cwd-remap failed, marker not written (will retry on next startup)', {}, new Error(String(err)));
+    }
+  }
+}
+
+/**
+ * Execute the cwd-remap DB migration. Extracted to keep the try block small.
+ * Opens, queries, and updates the DB, then writes the marker file on success.
+ */
+function executeCwdRemap(dbPath: string, effectiveDataDir: string, markerPath: string): void {
   const { Database } = require('bun:sqlite') as typeof import('bun:sqlite');
 
   const probe = new Database(dbPath, { readonly: true });
@@ -768,8 +858,8 @@ export function runOneTimeCwdRemap(dataDirectory?: string): void {
   copyFileSync(dbPath, backup);
   logger.info('SYSTEM', 'DB backed up before cwd-remap', { backup });
 
-  db = new Database(dbPath);
-
+  const db = new Database(dbPath);
+  try {
   const cwdRows = db.prepare(`
     SELECT cwd FROM pending_messages
     WHERE cwd IS NOT NULL AND cwd != ''
@@ -825,10 +915,8 @@ export function runOneTimeCwdRemap(dataDirectory?: string): void {
     mkdirSync(effectiveDataDir, { recursive: true });
     writeFileSync(markerPath, new Date().toISOString());
     logger.info('SYSTEM', 'cwd-remap marker written', { markerPath });
-  } catch (err) {
-    logger.error('SYSTEM', 'cwd-remap failed, marker not written (will retry on next startup)', {}, err as Error);
   } finally {
-    db?.close();
+    db.close();
   }
 }
 
@@ -896,9 +984,13 @@ export function spawnDaemon(
     // never falsy checks like `if (!pid)`, which would silently treat
     // success as failure here.
     return 0;
-  } catch (error) {
+  } catch (error: unknown) {
     // APPROVED OVERRIDE: Windows daemon spawn is best-effort; log and let callers fall back to health checks/retry flow.
-    logger.error('SYSTEM', 'Failed to spawn worker daemon on Windows', { runtimePath }, error as Error);
+    if (error instanceof Error) {
+      logger.error('SYSTEM', 'Failed to spawn worker daemon on Windows', { runtimePath }, error);
+    } else {
+      logger.error('SYSTEM', 'Failed to spawn worker daemon on Windows', { runtimePath }, new Error(String(error)));
+    }
     return undefined;
   }
 }
@@ -961,9 +1053,14 @@ export function isProcessAlive(pid: number): boolean {
     process.kill(pid, 0);
     return true;
+  } catch (error: unknown) {
+    if (error instanceof Error) {
+      const code = (error as NodeJS.ErrnoException).code;
+      // EPERM = process exists but different user/session — treat as alive
+      if (code === 'EPERM') return true;
+      logger.debug('SYSTEM', 'Process not alive', { pid, code });
+    } else {
+      logger.debug('SYSTEM', 'Process not alive (non-Error thrown)', { pid }, new Error(String(error)));
+    }
     // ESRCH = no such process — it's dead
     return false;
   }
@@ -983,7 +1080,12 @@ export function isPidFileRecent(thresholdMs: number = 15000): boolean {
   try {
     const stats = statSync(PID_FILE);
     return (Date.now() - stats.mtimeMs) < thresholdMs;
-  } catch {
+  } catch (error: unknown) {
+    if (error instanceof Error) {
+      logger.debug('SYSTEM', 'PID file not accessible for recency check', { path: PID_FILE }, error);
+    } else {
+      logger.debug('SYSTEM', 'PID file not accessible for recency check', { path: PID_FILE }, new Error(String(error)));
+    }
     return false;
   }
 }
@@ -1032,9 +1134,13 @@ export function createSignalHandler(
     try {
       await shutdownFn();
       process.exit(0);
-    } catch (error) {
+    } catch (error: unknown) {
       // Top-level signal handler - log any shutdown error and exit
-      logger.error('SYSTEM', 'Error during shutdown', {}, error as Error);
+      if (error instanceof Error) {
+        logger.error('SYSTEM', 'Error during shutdown', {}, error);
+      } else {
+        logger.error('SYSTEM', 'Error during shutdown', {}, new Error(String(error)));
+      }
       // Exit gracefully: Windows Terminal won't keep tab open on exit 0
       // Even on shutdown errors, exit cleanly to prevent tab accumulation
       process.exit(0);
 
@@ -248,22 +248,24 @@ export async function adoptMergedWorktrees(opts: {
     'UPDATE session_summaries SET merged_into_project = ? WHERE project = ? AND merged_into_project IS NULL'
   );
 
-  const tx = db.transaction(() => {
-    for (const wt of targets) {
-      try {
+  const adoptWorktreeInTransaction = (wt: WorktreeEntry) => {
     const worktreeProject = getProjectContext(wt.path).primary;
     const rows = selectObsForPatch.all(
       worktreeProject,
       parentProject
     ) as Array<{ id: number }>;
+    for (const r of rows) adoptedSqliteIds.push(r.id);
 
     // updateObs/updateSum only touch WHERE merged_into_project IS NULL,
     // so .changes reflects only newly-adopted rows (not the re-patched ones).
     const obsChanges = updateObs.run(parentProject, worktreeProject).changes;
     const sumChanges = updateSum.run(parentProject, worktreeProject).changes;
-    for (const r of rows) adoptedSqliteIds.push(r.id);
     result.adoptedObservations += obsChanges;
     result.adoptedSummaries += sumChanges;
+  };
+
+  const tx = db.transaction(() => {
+    for (const wt of targets) {
+      try {
+        adoptWorktreeInTransaction(wt);
       } catch (err) {
         const message = err instanceof Error ? err.message : String(err);
         logger.warn('SYSTEM', 'Worktree adoption skipped branch', {
@@ -285,7 +287,11 @@ export async function adoptMergedWorktrees(opts: {
   } catch (err) {
     if (err instanceof DryRunRollback) {
       // Rolled back as intended for dry-run — counts are still useful.
+    } else if (err instanceof Error) {
       logger.error('SYSTEM', 'Worktree adoption transaction failed', {}, err);
       throw err;
+    } else {
+      logger.error('SYSTEM', 'Worktree adoption transaction failed with non-Error', { error: String(err) });
+      throw err;
     }
   }
@@ -299,12 +305,20 @@ export async function adoptMergedWorktrees(opts: {
     await chromaSync.updateMergedIntoProject(adoptedSqliteIds, parentProject);
     result.chromaUpdates = adoptedSqliteIds.length;
   } catch (err) {
+    if (err instanceof Error) {
       logger.error(
-        'CHROMA_SYNC',
+        'SYSTEM',
         'Worktree adoption Chroma patch failed (SQL already committed)',
         { parentProject, sqliteIdCount: adoptedSqliteIds.length },
-        err as Error
+        err
       );
+    } else {
+      logger.error(
+        'SYSTEM',
+        'Worktree adoption Chroma patch failed (SQL already committed)',
+        { parentProject, sqliteIdCount: adoptedSqliteIds.length, error: String(err) }
+      );
+    }
     result.chromaFailed = adoptedSqliteIds.length;
   } finally {
     await chromaSync.close();
 
@@ -67,7 +67,11 @@ function loadExistingTranscriptWatchConfig(): TranscriptWatchConfig {
 
     return parsed;
   } catch (parseError) {
-    logger.error('SYSTEM', 'Corrupt transcript-watch.json, creating backup', { path: configPath }, parseError as Error);
+    if (parseError instanceof Error) {
+      logger.error('WORKER', 'Corrupt transcript-watch.json, creating backup', { path: configPath }, parseError);
+    } else {
+      logger.error('WORKER', 'Corrupt transcript-watch.json, creating backup', { path: configPath }, new Error(String(parseError)));
+    }
 
     // Back up corrupt file
     const backupPath = `${configPath}.backup.${Date.now()}`;
@@ -135,13 +139,22 @@ function writeTranscriptWatchConfig(config: TranscriptWatchConfig): void {
  * Preserves any existing user content outside the tags.
  */
 function removeCodexAgentsMdContext(): void {
-  try {
-    if (!existsSync(CODEX_AGENTS_MD_PATH)) return;
-
-    const content = readFileSync(CODEX_AGENTS_MD_PATH, 'utf-8');
-    const startTag = '<claude-mem-context>';
-    const endTag = '</claude-mem-context>';
+  if (!existsSync(CODEX_AGENTS_MD_PATH)) return;
+
+  const startTag = '<claude-mem-context>';
+  const endTag = '</claude-mem-context>';
+
+  try {
+    readAndStripContextTags(startTag, endTag);
+  } catch (error) {
+    const message = error instanceof Error ? error.message : String(error);
+    logger.warn('WORKER', 'Failed to clean AGENTS.md context', { error: message });
+  }
+}
+
+function readAndStripContextTags(startTag: string, endTag: string): void {
+  const content = readFileSync(CODEX_AGENTS_MD_PATH, 'utf-8');
 
   const startIdx = content.indexOf(startTag);
   const endIdx = content.indexOf(endTag);
 
@@ -158,9 +171,6 @@ function removeCodexAgentsMdContext(): void {
   }
 
   console.log(`  Removed legacy global context from ${CODEX_AGENTS_MD_PATH}`);
-  } catch (error) {
-    logger.warn('SYSTEM', 'Failed to clean AGENTS.md context', { error: (error as Error).message });
-  }
 }
 
 /**
@@ -184,16 +194,26 @@ const cleanupLegacyCodexAgentsMdContext = removeCodexAgentsMdContext;
 export async function installCodexCli(): Promise<number> {
   console.log('\nInstalling Claude-Mem for Codex CLI (transcript watching)...\n');
 
-  try {
   // Step 1: Merge transcript-watch config
   const existingConfig = loadExistingTranscriptWatchConfig();
   const mergedConfig = mergeCodexWatchConfig(existingConfig);
 
+  try {
+    writeConfigAndShowCodexInstructions(mergedConfig);
+    return 0;
+  } catch (error) {
+    const message = error instanceof Error ? error.message : String(error);
+    console.error(`\nInstallation failed: ${message}`);
+    return 1;
+  }
+}
+
+function writeConfigAndShowCodexInstructions(mergedConfig: TranscriptWatchConfig): void {
   writeTranscriptWatchConfig(mergedConfig);
   console.log(` Updated ${DEFAULT_CONFIG_PATH}`);
   console.log(` Watch path: ~/.codex/sessions/**/*.jsonl`);
   console.log(` Schema: codex (v${SAMPLE_CONFIG.schemas?.codex?.version ?? '?'})`);
 
   // Step 2: Clean up legacy global AGENTS.md context
   cleanupLegacyCodexAgentsMdContext();
 
   console.log(`
@@ -211,12 +231,6 @@ Next steps:
   1. Start claude-mem worker: npx claude-mem start
   2. Use Codex CLI as usual -- memory capture is automatic!
 `);
-
-  return 0;
-  } catch (error) {
-    console.error(`\nInstallation failed: ${(error as Error).message}`);
-    return 1;
-  }
 }
 
 // ---------------------------------------------------------------------------
@@ -234,23 +248,26 @@ Next steps:
 export function uninstallCodexCli(): number {
   console.log('\nUninstalling Claude-Mem Codex CLI integration...\n');
 
-  try {
   // Step 1: Remove codex watch from transcript-watch.json
   if (existsSync(DEFAULT_CONFIG_PATH)) {
     const config = loadExistingTranscriptWatchConfig();
 
     // Remove codex watch
     config.watches = config.watches.filter(
       (w: WatchTarget) => w.name !== CODEX_WATCH_NAME,
     );
 
     // Remove codex schema
     if (config.schemas) {
       delete config.schemas[CODEX_WATCH_NAME];
     }
 
+    try {
       writeTranscriptWatchConfig(config);
       console.log(` Removed codex watch from ${DEFAULT_CONFIG_PATH}`);
+    } catch (error) {
+      const message = error instanceof Error ? error.message : String(error);
+      console.error(`\nUninstallation failed: ${message}`);
+      return 1;
+    }
   } else {
     console.log(' No transcript-watch.json found -- nothing to remove.');
   }
@@ -262,10 +279,6 @@ export function uninstallCodexCli(): number {
   console.log('Restart claude-mem worker to apply changes.\n');
 
   return 0;
-  } catch (error) {
-    console.error(`\nUninstallation failed: ${(error as Error).message}`);
-    return 1;
-  }
 }
 
 // ---------------------------------------------------------------------------
@@ -288,8 +301,21 @@ export function checkCodexCliStatus(): number {
     return 0;
   }
 
+  let config: TranscriptWatchConfig;
   try {
-    const config = loadExistingTranscriptWatchConfig();
+    config = loadExistingTranscriptWatchConfig();
+  } catch (error) {
+    if (error instanceof Error) {
+      logger.error('WORKER', 'Could not parse transcript-watch.json', { path: DEFAULT_CONFIG_PATH }, error);
+    } else {
+      logger.error('WORKER', 'Could not parse transcript-watch.json', { path: DEFAULT_CONFIG_PATH }, new Error(String(error)));
+    }
+    console.log('Status: Unknown');
+    console.log(' Could not parse transcript-watch.json.');
+    console.log('');
+    return 0;
+  }
 
   const codexWatch = config.watches.find(
     (w: WatchTarget) => w.name === CODEX_WATCH_NAME,
   );
@@ -308,14 +334,12 @@ export function checkCodexCliStatus(): number {
   console.log(` Schema: ${codexSchema ? `codex (v${codexSchema.version ?? '?'})` : 'missing'}`);
   console.log(` Start at end: ${codexWatch.startAtEnd ?? false}`);
 
   // Check context config
   if (codexWatch.context) {
     console.log(` Context mode: ${codexWatch.context.mode}`);
     console.log(` Context path: ${codexWatch.context.path ?? '<workspace>/AGENTS.md (default)'}`);
     console.log(` Context updates on: ${codexWatch.context.updateOn?.join(', ') ?? 'none'}`);
   }
 
   // Check legacy global AGENTS.md usage
   if (existsSync(CODEX_AGENTS_MD_PATH)) {
     const mdContent = readFileSync(CODEX_AGENTS_MD_PATH, 'utf-8');
     if (mdContent.includes('<claude-mem-context>')) {
@@ -327,17 +351,12 @@ export function checkCodexCliStatus(): number {
     console.log(` Legacy global context: None`);
   }
 
   // Check if ~/.codex/sessions exists (indicates Codex has been used)
   const sessionsDir = path.join(CODEX_DIR, 'sessions');
   if (existsSync(sessionsDir)) {
     console.log(` Sessions directory: exists`);
   } else {
     console.log(` Sessions directory: not yet created (use Codex CLI to generate sessions)`);
   }
-  } catch {
-    console.log('Status: Unknown');
-    console.log('  Could not parse transcript-watch.json.');
-  }
 
   console.log('');
   return 0;
 
@@ -117,7 +117,11 @@ export async function updateCursorContextForProject(projectName: string, _port:
     logger.debug('CURSOR', 'Updated context file', { projectName, workspacePath: entry.workspacePath });
   } catch (error) {
     // [ANTI-PATTERN IGNORED]: Background context update - failure is non-critical, user workflow continues
-    logger.error('CURSOR', 'Failed to update context file', { projectName }, error as Error);
+    if (error instanceof Error) {
+      logger.error('WORKER', 'Failed to update context file', { projectName }, error);
+    } else {
+      logger.error('WORKER', 'Failed to update context file', { projectName }, new Error(String(error)));
+    }
   }
 }
 
@@ -259,7 +263,11 @@ export function configureCursorMcp(target: CursorInstallTarget): number {
     }
   } catch (error) {
     // [ANTI-PATTERN IGNORED]: Fallback behavior - corrupt config, continue with empty
-    logger.error('SYSTEM', 'Corrupt mcp.json, creating new config', { path: mcpJsonPath }, error as Error);
+    if (error instanceof Error) {
+      logger.error('WORKER', 'Corrupt mcp.json, creating new config', { path: mcpJsonPath }, error);
+    } else {
+      logger.error('WORKER', 'Corrupt mcp.json, creating new config', { path: mcpJsonPath }, new Error(String(error)));
+    }
     config = { mcpServers: {} };
   }
 }
@@ -308,10 +316,6 @@ export async function installCursorHooks(target: CursorInstallTarget): Promise<n
 
   const workspaceRoot = process.cwd();
 
-  try {
-    // Create target directory
-    mkdirSync(targetDir, { recursive: true });
-
   // Generate hooks.json with unified CLI commands
   const hooksJsonPath = path.join(targetDir, 'hooks.json');
 
@@ -352,6 +356,29 @@ export async function installCursorHooks(target: CursorInstallTarget): Promise<n
     }
   };
 
+  try {
+    // Create target directory inside try to catch EACCES/EPERM
+    mkdirSync(targetDir, { recursive: true });
+    await writeHooksJsonAndSetupProject(hooksJsonPath, hooksJson, workerServicePath, target, targetDir, workspaceRoot);
+    return 0;
+  } catch (error) {
+    const message = error instanceof Error ? error.message : String(error);
+    console.error(`\nInstallation failed: ${message}`);
+    if (target === 'enterprise') {
+      console.error(' Tip: Enterprise installation may require sudo/admin privileges');
+    }
+    return 1;
+  }
+}
+
+async function writeHooksJsonAndSetupProject(
+  hooksJsonPath: string,
+  hooksJson: CursorHooksJson,
+  workerServicePath: string,
+  target: CursorInstallTarget,
+  targetDir: string,
+  workspaceRoot: string,
+): Promise<void> {
   writeFileSync(hooksJsonPath, JSON.stringify(hooksJson, null, 2));
   console.log(` Created hooks.json (unified CLI mode)`);
   console.log(` Worker service: ${workerServicePath}`);
@@ -376,15 +403,6 @@ Context Injection:
   Context from past sessions is stored in .cursor/rules/claude-mem-context.mdc
   and automatically included in every chat. It updates after each session ends.
 `);
-
-  return 0;
-  } catch (error) {
-    console.error(`\nInstallation failed: ${(error as Error).message}`);
-    if (target === 'enterprise') {
-      console.error('  Tip: Enterprise installation may require sudo/admin privileges');
-    }
-    return 1;
-  }
 }
 
 /**
@@ -400,25 +418,14 @@ async function setupProjectContext(targetDir: string, workspaceRoot: string): Pr
   console.log(` Generating initial context...`);
 
   try {
-    // Check if worker is running (uses socket or TCP automatically)
-    const healthResponse = await workerHttpRequest('/api/readiness');
-    if (healthResponse.ok) {
-      // Fetch context
-      const contextResponse = await workerHttpRequest(
-        `/api/context/inject?project=${encodeURIComponent(projectName)}`
-      );
-      if (contextResponse.ok) {
-        const context = await contextResponse.text();
-        if (context && context.trim()) {
-          writeContextFile(workspaceRoot, context);
-          contextGenerated = true;
-          console.log(` Generated initial context from existing memory`);
-        }
-      }
-    }
+    contextGenerated = await fetchInitialContextFromWorker(projectName, workspaceRoot);
   } catch (error) {
     // [ANTI-PATTERN IGNORED]: Fallback behavior - worker not running, use placeholder
-    logger.debug('CURSOR', 'Worker not running during install', {}, error as Error);
+    if (error instanceof Error) {
+      logger.debug('WORKER', 'Worker not running during install', {}, error);
+    } else {
+      logger.debug('WORKER', 'Worker not running during install', {}, new Error(String(error)));
+    }
   }
 
   if (!contextGenerated) {
@@ -444,6 +451,27 @@ Use claude-mem's MCP search tools for manual memory queries.
|
||||
console.log(` Registered for auto-context updates`);
|
||||
}
|
||||
|
||||
async function fetchInitialContextFromWorker(
|
||||
projectName: string,
|
||||
workspaceRoot: string,
|
||||
): Promise<boolean> {
|
||||
const healthResponse = await workerHttpRequest('/api/readiness');
|
||||
if (!healthResponse.ok) return false;
|
||||
|
||||
const contextResponse = await workerHttpRequest(
|
||||
`/api/context/inject?project=${encodeURIComponent(projectName)}`,
|
||||
);
|
||||
if (!contextResponse.ok) return false;
|
||||
|
||||
const context = await contextResponse.text();
|
||||
if (context && context.trim()) {
|
||||
writeContextFile(workspaceRoot, context);
|
||||
console.log(` Generated initial context from existing memory`);
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Uninstall Cursor hooks
|
||||
*/
|
||||
@@ -456,7 +484,6 @@ export function uninstallCursorHooks(target: CursorInstallTarget): number {
|
||||
return 1;
|
||||
}
|
||||
|
||||
try {
|
||||
const hooksDir = path.join(targetDir, 'hooks');
|
||||
const hooksJsonPath = path.join(targetDir, 'hooks.json');
|
||||
|
||||
@@ -468,6 +495,23 @@ export function uninstallCursorHooks(target: CursorInstallTarget): number {
|
||||
|
||||
const allScripts = [...bashScripts, ...psScripts];
|
||||
|
||||
try {
|
||||
removeCursorHooksFiles(hooksDir, allScripts, hooksJsonPath, target, targetDir);
|
||||
return 0;
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error);
|
||||
console.error(`\nUninstallation failed: ${message}`);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
function removeCursorHooksFiles(
|
||||
hooksDir: string,
|
||||
allScripts: string[],
|
||||
hooksJsonPath: string,
|
||||
target: CursorInstallTarget,
|
||||
targetDir: string,
|
||||
): void {
|
||||
for (const script of allScripts) {
|
||||
const scriptPath = path.join(hooksDir, script);
|
||||
if (existsSync(scriptPath)) {
|
||||
@@ -476,13 +520,11 @@ export function uninstallCursorHooks(target: CursorInstallTarget): number {
|
||||
}
|
||||
}
|
||||
|
||||
// Remove hooks.json
|
||||
if (existsSync(hooksJsonPath)) {
|
||||
unlinkSync(hooksJsonPath);
|
||||
console.log(` Removed hooks.json`);
|
||||
}
|
||||
|
||||
// Remove context file and unregister if project-level
|
||||
if (target === 'project') {
|
||||
const contextFile = path.join(targetDir, 'rules', 'claude-mem-context.mdc');
|
||||
if (existsSync(contextFile)) {
|
||||
@@ -490,7 +532,6 @@ export function uninstallCursorHooks(target: CursorInstallTarget): number {
|
||||
console.log(` Removed context file`);
|
||||
}
|
||||
|
||||
// Unregister from auto-context updates
|
||||
const projectName = path.basename(process.cwd());
|
||||
unregisterCursorProject(projectName);
|
||||
console.log(` Unregistered from auto-context updates`);
|
||||
@@ -498,12 +539,6 @@ export function uninstallCursorHooks(target: CursorInstallTarget): number {
|
||||
|
||||
console.log(`\nUninstallation complete!\n`);
|
||||
console.log('Restart Cursor to apply changes.');
|
||||
|
||||
return 0;
|
||||
} catch (error) {
|
||||
console.error(`\nUninstallation failed: ${(error as Error).message}`);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -535,8 +570,19 @@ export function checkCursorHooksStatus(): number {
|
||||
console.log(` Config: ${hooksJson}`);
|
||||
|
||||
// Check if using unified CLI mode or legacy shell scripts
|
||||
let hooksContent: any = null;
|
||||
try {
|
||||
const hooksContent = JSON.parse(readFileSync(hooksJson, 'utf-8'));
|
||||
hooksContent = JSON.parse(readFileSync(hooksJson, 'utf-8'));
|
||||
} catch (error) {
|
||||
if (error instanceof Error) {
|
||||
logger.error('WORKER', 'Unable to parse hooks.json', { path: hooksJson }, error);
|
||||
} else {
|
||||
logger.error('WORKER', 'Unable to parse hooks.json', { path: hooksJson }, new Error(String(error)));
|
||||
}
|
||||
console.log(` Mode: Unable to parse hooks.json`);
|
||||
}
|
||||
|
||||
if (hooksContent) {
|
||||
const firstCommand = hooksContent?.hooks?.beforeSubmitPrompt?.[0]?.command || '';
|
||||
|
||||
if (firstCommand.includes('worker-service.cjs') && firstCommand.includes('hook cursor')) {
|
||||
@@ -562,8 +608,6 @@ export function checkCursorHooksStatus(): number {
|
||||
console.log(` Mode: Unknown configuration`);
|
||||
}
|
||||
}
|
||||
} catch {
|
||||
console.log(` Mode: Unable to parse hooks.json`);
|
||||
}
|
||||
|
||||
// Check for context file (project only)
|
||||
@@ -601,7 +645,11 @@ export async function detectClaudeCode(): Promise<boolean> {
|
||||
}
|
||||
} catch (error) {
|
||||
// [ANTI-PATTERN IGNORED]: Fallback behavior - CLI not found, continue to directory check
|
||||
logger.debug('SYSTEM', 'Claude CLI not in PATH', {}, error as Error);
|
||||
if (error instanceof Error) {
|
||||
logger.debug('WORKER', 'Claude CLI not in PATH', {}, error);
|
||||
} else {
|
||||
logger.debug('WORKER', 'Claude CLI not in PATH', {}, new Error(String(error)));
|
||||
}
|
||||
}
|
||||
|
||||
// Check for Claude Code plugin directory (respects CLAUDE_CONFIG_DIR)
|
||||
|
||||
@@ -162,6 +162,11 @@ function readGeminiSettings(): GeminiSettingsJson {
try {
return JSON.parse(content) as GeminiSettingsJson;
} catch (error) {
if (error instanceof Error) {
logger.error('WORKER', 'Corrupt JSON in Gemini settings', { path: GEMINI_SETTINGS_PATH }, error);
} else {
logger.error('WORKER', 'Corrupt JSON in Gemini settings', { path: GEMINI_SETTINGS_PATH }, new Error(String(error)));
}
throw new Error(`Corrupt JSON in ${GEMINI_SETTINGS_PATH}, refusing to overwrite user settings`);
}
}
@@ -298,15 +303,22 @@ export async function installGeminiCliHooks(): Promise<number> {
const existingSettings = readGeminiSettings();
const mergedSettings = mergeHooksIntoSettings(existingSettings, hooksConfig);

// Write back
writeGeminiHooksAndSetupContext(mergedSettings);
return 0;
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
console.error(`\nInstallation failed: ${message}`);
return 1;
}
}

function writeGeminiHooksAndSetupContext(mergedSettings: GeminiSettingsJson): void {
writeGeminiSettings(mergedSettings);
console.log(` Merged hooks into ${GEMINI_SETTINGS_PATH}`);

// Setup GEMINI.md context injection
setupGeminiMdContextSection();
console.log(` Setup context injection in ${GEMINI_MD_PATH}`);

// List installed events
const eventNames = Object.keys(GEMINI_EVENT_TO_INTERNAL_EVENT);
console.log(` Registered ${eventNames.length} hook events:`);
for (const event of eventNames) {
@@ -329,12 +341,6 @@ Context Injection:
Context from past sessions is injected via ~/.gemini/GEMINI.md
and automatically included in Gemini CLI conversations.
`);

return 0;
} catch (error) {
console.error(`\nInstallation failed: ${(error as Error).message}`);
return 1;
}
}

/**
@@ -347,12 +353,12 @@ Context Injection:
export function uninstallGeminiCliHooks(): number {
console.log('\nUninstalling Claude-Mem Gemini CLI hooks...\n');

try {
if (!existsSync(GEMINI_SETTINGS_PATH)) {
console.log(' No Gemini CLI settings found — nothing to uninstall.');
return 0;
}

try {
const settings = readGeminiSettings();
if (!settings.hooks) {
console.log(' No hooks found in Gemini CLI settings — nothing to uninstall.');
@@ -383,10 +389,22 @@ export function uninstallGeminiCliHooks(): number {
delete settings.hooks;
}

writeSettingsAndCleanupGeminiContext(settings, removedCount);
return 0;
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
console.error(`\nUninstallation failed: ${message}`);
return 1;
}
}

function writeSettingsAndCleanupGeminiContext(
settings: GeminiSettingsJson,
removedCount: number,
): void {
writeGeminiSettings(settings);
console.log(` Removed ${removedCount} claude-mem hook(s) from ${GEMINI_SETTINGS_PATH}`);

// Remove claude-mem context section from GEMINI.md
if (existsSync(GEMINI_MD_PATH)) {
let mdContent = readFileSync(GEMINI_MD_PATH, 'utf-8');
const contextRegex = /\n?<claude-mem-context>[\s\S]*?<\/claude-mem-context>\n?/;
@@ -399,11 +417,6 @@ export function uninstallGeminiCliHooks(): number {

console.log('\nUninstallation complete!\n');
console.log('Restart Gemini CLI to apply changes.');
return 0;
} catch (error) {
console.error(`\nUninstallation failed: ${(error as Error).message}`);
return 1;
}
}

/**
@@ -425,7 +438,13 @@ export function checkGeminiCliHooksStatus(): number {
try {
settings = readGeminiSettings();
} catch (error) {
console.log(`Gemini CLI settings: ${(error as Error).message}\n`);
const message = error instanceof Error ? error.message : String(error);
if (error instanceof Error) {
logger.error('WORKER', 'Failed to read Gemini CLI settings', { path: GEMINI_SETTINGS_PATH }, error);
} else {
logger.error('WORKER', 'Failed to read Gemini CLI settings', { path: GEMINI_SETTINGS_PATH }, new Error(String(error)));
}
console.log(`Gemini CLI settings: ${message}\n`);
return 0;
}


@@ -105,27 +105,46 @@ function installMcpIntegration(config: McpInstallerConfig): () => Promise<number
return 1;
}

try {
// Write MCP config
const configPath = config.configPath;

// Warp special case: skip config write if ~/.warp/ doesn't exist
if (config.ideId === 'warp' && !existsSync(path.dirname(configPath))) {
const skipWarpConfigWrite = config.ideId === 'warp' && !existsSync(path.dirname(configPath));

let contextPath: string | undefined;
if (config.contextFile) {
contextPath = config.contextFile.path;
}

try {
writeMcpConfigAndContext(config, configPath, mcpServerPath, skipWarpConfigWrite, contextPath);
return 0;
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
console.error(`\nInstallation failed: ${message}`);
return 1;
}
};
}

function writeMcpConfigAndContext(
config: McpInstallerConfig,
configPath: string,
mcpServerPath: string,
skipWarpConfigWrite: boolean,
contextPath: string | undefined,
): void {
if (skipWarpConfigWrite) {
console.log(` Note: ~/.warp/ not found. MCP may need to be configured via Warp Drive UI.`);
} else {
writeMcpJsonConfig(configPath, mcpServerPath, config.configKey);
console.log(` MCP config written to: ${configPath}`);
}

// Inject context if configured
let contextPath: string | undefined;
if (config.contextFile) {
contextPath = config.contextFile.path;
if (contextPath) {
injectContextIntoMarkdownFile(contextPath, PLACEHOLDER_CONTEXT);
console.log(` Context placeholder written to: ${contextPath}`);
}

// Print summary
const summaryLines = [`\nInstallation complete!\n`];
summaryLines.push(`MCP config: ${configPath}`);
if (contextPath) {
@@ -143,13 +162,6 @@ function installMcpIntegration(config: McpInstallerConfig): () => Promise<number
summaryLines.push(` 2. Restart ${config.ideLabel} to pick up the MCP server`);
summaryLines.push('');
console.log(summaryLines.join('\n'));

return 0;
} catch (error) {
console.error(`\nInstallation failed: ${(error as Error).message}`);
return 1;
}
};
}

// ============================================================================
@@ -274,27 +286,35 @@ export async function installGooseMcpIntegration(): Promise<number> {
return 1;
}

try {
const configPath = getGooseConfigPath();
const configDirectory = path.dirname(configPath);
mkdirSync(configDirectory, { recursive: true });

try {
mkdirSync(configDirectory, { recursive: true });
mergeGooseYamlConfig(configPath, mcpServerPath);
return 0;
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
console.error(`\nInstallation failed: ${message}`);
return 1;
}
}

function mergeGooseYamlConfig(configPath: string, mcpServerPath: string): void {
if (existsSync(configPath)) {
let yamlContent = readFileSync(configPath, 'utf-8');

if (gooseConfigHasClaudeMemEntry(yamlContent)) {
// Already configured — replace the claude-mem block
// Find the claude-mem entry and replace it
const claudeMemPattern = /( {2}claude-mem:\n(?:.*\n)*?(?= {2}\S|\n\n|^\S|$))/m;
const newEntry = buildGooseClaudeMemEntryYaml(mcpServerPath) + '\n';

if (claudeMemPattern.test(yamlContent)) {
yamlContent = yamlContent.replace(claudeMemPattern, newEntry);
if (!claudeMemPattern.test(yamlContent)) {
throw new Error('Found mcpServers/claude-mem markers but could not locate a replaceable claude-mem block');
}
yamlContent = yamlContent.replace(claudeMemPattern, newEntry);
writeFileSync(configPath, yamlContent);
console.log(` Updated existing claude-mem entry in: ${configPath}`);
} else if (yamlContent.includes('mcpServers:')) {
// mcpServers section exists but no claude-mem entry — append under it
const mcpServersIndex = yamlContent.indexOf('mcpServers:');
const insertionPoint = mcpServersIndex + 'mcpServers:'.length;
const newEntry = '\n' + buildGooseClaudeMemEntryYaml(mcpServerPath);
@@ -307,14 +327,12 @@ export async function installGooseMcpIntegration(): Promise<number> {
writeFileSync(configPath, yamlContent);
console.log(` Added claude-mem to existing mcpServers in: ${configPath}`);
} else {
// No mcpServers section — append the entire block
const mcpBlock = '\n' + buildGooseMcpYamlBlock(mcpServerPath) + '\n';
yamlContent = yamlContent.trimEnd() + '\n' + mcpBlock;
writeFileSync(configPath, yamlContent);
console.log(` Appended mcpServers section to: ${configPath}`);
}
} else {
// File doesn't exist — create from template
const templateContent = buildGooseMcpYamlBlock(mcpServerPath) + '\n';
writeFileSync(configPath, templateContent);
console.log(` Created config with MCP server: ${configPath}`);
@@ -332,12 +350,6 @@ Next steps:
1. Start claude-mem worker: npx claude-mem start
2. Restart Goose to pick up the MCP server
`);

return 0;
} catch (error) {
console.error(`\nInstallation failed: ${(error as Error).message}`);
return 1;
}
}

// ============================================================================

@@ -146,8 +146,10 @@ function readOpenClawConfig(): Record<string, any> {
if (!existsSync(configFilePath)) return {};
try {
return JSON.parse(readFileSync(configFilePath, 'utf-8'));
} catch {
return {};
} catch (error) {
const normalizedError = error instanceof Error ? error : new Error(String(error));
logger.error('WORKER', 'Failed to parse openclaw.json', { path: configFilePath }, normalizedError);
throw normalizedError;
}
}

@@ -250,31 +252,10 @@ export function installOpenClawPlugin(): number {
const extensionDirectory = getOpenClawClaudeMemExtensionDirectory();
const destinationDistDirectory = path.join(extensionDirectory, 'dist');

try {
// Create the extension directory structure
mkdirSync(destinationDistDirectory, { recursive: true });

// Copy pre-built dist files
cpSync(preBuiltDistDirectory, destinationDistDirectory, { recursive: true, force: true });
console.log(` Plugin dist copied to: ${destinationDistDirectory}`);

// Copy openclaw.plugin.json if available
// Locate optional assets before entering the try block
const manifestPath = findPluginManifestPath();
if (manifestPath) {
const destinationManifest = path.join(extensionDirectory, 'openclaw.plugin.json');
cpSync(manifestPath, destinationManifest, { force: true });
console.log(` Plugin manifest copied to: ${destinationManifest}`);
}

// Copy skills directory if available
const skillsDirectory = findPluginSkillsDirectory();
if (skillsDirectory) {
const destinationSkills = path.join(extensionDirectory, 'skills');
cpSync(skillsDirectory, destinationSkills, { recursive: true, force: true });
console.log(` Skills copied to: ${destinationSkills}`);
}

// Create a minimal package.json for the extension (OpenClaw expects this)
const extensionPackageJson = {
name: 'claude-mem',
version: '1.0.0',
@@ -282,17 +263,11 @@ export function installOpenClawPlugin(): number {
main: 'dist/index.js',
openclaw: { extensions: ['./dist/index.js'] },
};
writeFileSync(
path.join(extensionDirectory, 'package.json'),
JSON.stringify(extensionPackageJson, null, 2) + '\n',
'utf-8',
);

// Register in openclaw.json (merge, not overwrite)
registerPluginInOpenClawConfig();
console.log(` Registered in openclaw.json`);

logger.info('OPENCLAW', 'Plugin installed', { destination: extensionDirectory });
try {
// Create the extension directory structure inside try to catch EACCES/ENOSPC
mkdirSync(destinationDistDirectory, { recursive: true });
copyPluginFilesAndRegister(preBuiltDistDirectory, destinationDistDirectory, extensionDirectory, manifestPath, skillsDirectory, extensionPackageJson);
return 0;
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
@@ -301,6 +276,41 @@
}
}

function copyPluginFilesAndRegister(
preBuiltDistDirectory: string,
destinationDistDirectory: string,
extensionDirectory: string,
manifestPath: string | null,
skillsDirectory: string | null,
extensionPackageJson: Record<string, unknown>,
): void {
cpSync(preBuiltDistDirectory, destinationDistDirectory, { recursive: true, force: true });
console.log(` Plugin dist copied to: ${destinationDistDirectory}`);

if (manifestPath) {
const destinationManifest = path.join(extensionDirectory, 'openclaw.plugin.json');
cpSync(manifestPath, destinationManifest, { force: true });
console.log(` Plugin manifest copied to: ${destinationManifest}`);
}

if (skillsDirectory) {
const destinationSkills = path.join(extensionDirectory, 'skills');
cpSync(skillsDirectory, destinationSkills, { recursive: true, force: true });
console.log(` Skills copied to: ${destinationSkills}`);
}

writeFileSync(
path.join(extensionDirectory, 'package.json'),
JSON.stringify(extensionPackageJson, null, 2) + '\n',
'utf-8',
);

registerPluginInOpenClawConfig();
console.log(` Registered in openclaw.json`);

logger.info('OPENCLAW', 'Plugin installed', { destination: extensionDirectory });
}

// ============================================================================
// Uninstallation
// ============================================================================

@@ -164,10 +164,35 @@ export async function syncContextToAgentsMd(
project: string,
): Promise<void> {
try {
await fetchAndInjectOpenCodeContext(port, project);
} catch (error) {
// Worker not available — non-critical
if (error instanceof Error) {
logger.debug('WORKER', 'Worker not available during context sync', {}, error);
} else {
logger.debug('WORKER', 'Worker not available during context sync', {}, new Error(String(error)));
}
}
}

async function fetchRealContextFromWorker(): Promise<string | null> {
const workerPort = getWorkerPort();
const healthResponse = await fetch(`http://127.0.0.1:${workerPort}/api/readiness`);
if (!healthResponse.ok) return null;

const contextResponse = await fetch(
`http://127.0.0.1:${workerPort}/api/context/inject?project=opencode`,
);
if (!contextResponse.ok) return null;

const realContext = await contextResponse.text();
return realContext && realContext.trim() ? realContext : null;
}

async function fetchAndInjectOpenCodeContext(port: number, project: string): Promise<void> {
const response = await fetch(
`http://127.0.0.1:${port}/api/context/inject?project=${encodeURIComponent(project)}`,
);

if (!response.ok) return;

const contextText = await response.text();
@@ -177,15 +202,25 @@ export async function syncContextToAgentsMd(
logger.warn('OPENCODE', 'Failed to inject context into AGENTS.md during sync');
}
}
} catch {
// Worker not available — non-critical
}
}

// ============================================================================
// Uninstallation
// ============================================================================

function writeOrRemoveCleanedAgentsMd(agentsMdPath: string, trimmedContent: string): void {
if (
trimmedContent.length === 0 ||
trimmedContent === '# Claude-Mem Memory Context'
) {
unlinkSync(agentsMdPath);
console.log(` Removed empty AGENTS.md`);
} else {
writeFileSync(agentsMdPath, trimmedContent + '\n', 'utf-8');
console.log(` Cleaned context from AGENTS.md`);
}
}

/**
* Remove the claude-mem plugin from OpenCode.
* Removes the plugin file and cleans up the AGENTS.md context section.
@@ -211,8 +246,16 @@ export function uninstallOpenCodePlugin(): number {
// Remove context section from AGENTS.md
const agentsMdPath = getOpenCodeAgentsMdPath();
if (existsSync(agentsMdPath)) {
let content: string;
try {
let content = readFileSync(agentsMdPath, 'utf-8');
content = readFileSync(agentsMdPath, 'utf-8');
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
console.error(` Failed to read AGENTS.md: ${message}`);
hasErrors = true;
content = '';
}

const tagStartIndex = content.indexOf(CONTEXT_TAG_OPEN);
const tagEndIndex = content.indexOf(CONTEXT_TAG_CLOSE);

@@ -222,25 +265,16 @@ export function uninstallOpenCodePlugin(): number {
'\n' +
content.slice(tagEndIndex + CONTEXT_TAG_CLOSE.length).trimStart();

// If the file is now essentially empty or only has our header, remove it
const trimmedContent = content.trim();
if (
trimmedContent.length === 0 ||
trimmedContent === '# Claude-Mem Memory Context'
) {
unlinkSync(agentsMdPath);
console.log(` Removed empty AGENTS.md`);
} else {
writeFileSync(agentsMdPath, trimmedContent + '\n', 'utf-8');
console.log(` Cleaned context from AGENTS.md`);
}
}
try {
writeOrRemoveCleanedAgentsMd(agentsMdPath, trimmedContent);
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
console.error(` Failed to clean AGENTS.md: ${message}`);
hasErrors = true;
}
}
}

return hasErrors ? 1 : 0;
}
@@ -309,48 +343,29 @@ export async function installOpenCodeIntegration(): Promise<number> {
Use claude-mem search tools for manual memory queries.`;

// Try to fetch real context from worker first
let contextToInject = placeholderContext;
let contextSource = 'placeholder';
try {
const workerPort = getWorkerPort();
const healthResponse = await fetch(`http://127.0.0.1:${workerPort}/api/readiness`);
if (healthResponse.ok) {
const contextResponse = await fetch(
`http://127.0.0.1:${workerPort}/api/context/inject?project=opencode`,
);
if (contextResponse.ok) {
const realContext = await contextResponse.text();
if (realContext && realContext.trim()) {
const injectResult = injectContextIntoAgentsMd(realContext);
if (injectResult !== 0) {
logger.warn('OPENCODE', 'Failed to inject real context into AGENTS.md during install');
const realContext = await fetchRealContextFromWorker();
if (realContext) {
contextToInject = realContext;
contextSource = 'existing memory';
}
} catch (error) {
// Worker not available — use placeholder
if (error instanceof Error) {
logger.debug('WORKER', 'Worker not available during OpenCode install', {}, error);
} else {
logger.debug('WORKER', 'Worker not available during OpenCode install', {}, new Error(String(error)));
}
}

const injectResult = injectContextIntoAgentsMd(contextToInject);
if (injectResult !== 0) {
logger.warn('OPENCODE', `Failed to inject ${contextSource} context into AGENTS.md during install`);
} else {
if (contextSource === 'existing memory') {
console.log(' Context injected from existing memory');
}
} else {
const injectResult = injectContextIntoAgentsMd(placeholderContext);
if (injectResult !== 0) {
logger.warn('OPENCODE', 'Failed to inject placeholder context into AGENTS.md during install');
} else {
console.log(' Placeholder context created (will populate after first session)');
}
}
} else {
const injectResult = injectContextIntoAgentsMd(placeholderContext);
if (injectResult !== 0) {
logger.warn('OPENCODE', 'Failed to inject placeholder context into AGENTS.md during install');
}
}
} else {
const injectResult = injectContextIntoAgentsMd(placeholderContext);
if (injectResult !== 0) {
logger.warn('OPENCODE', 'Failed to inject placeholder context into AGENTS.md during install');
} else {
console.log(' Placeholder context created (worker not running)');
}
}
} catch {
const injectResult = injectContextIntoAgentsMd(placeholderContext);
if (injectResult !== 0) {
logger.warn('OPENCODE', 'Failed to inject placeholder context into AGENTS.md during install');
} else {
console.log(' Placeholder context created (worker not running)');
}

@@ -86,9 +86,11 @@ export function readWindsurfRegistry(): WindsurfProjectRegistry {
if (!existsSync(WINDSURF_REGISTRY_FILE)) return {};
return JSON.parse(readFileSync(WINDSURF_REGISTRY_FILE, 'utf-8'));
} catch (error) {
logger.error('WINDSURF', 'Failed to read registry, using empty', {
file: WINDSURF_REGISTRY_FILE,
}, error as Error);
if (error instanceof Error) {
logger.error('WORKER', 'Failed to read registry, using empty', { file: WINDSURF_REGISTRY_FILE }, error);
} else {
logger.error('WORKER', 'Failed to read registry, using empty', { file: WINDSURF_REGISTRY_FILE }, new Error(String(error)));
}
return {};
}
}
@@ -151,7 +153,11 @@ export async function updateWindsurfContextForProject(projectName: string, works
logger.debug('WINDSURF', 'Updated context file', { projectName, workspacePath });
} catch (error) {
// Background context update — failure is non-critical
logger.error('WINDSURF', 'Failed to update context file', { projectName, workspacePath }, error as Error);
if (error instanceof Error) {
logger.error('WORKER', 'Failed to update context file', { projectName, workspacePath }, error);
} else {
logger.error('WORKER', 'Failed to update context file', { projectName, workspacePath }, new Error(String(error)));
}
}
}

@@ -235,6 +241,11 @@ function mergeAndWriteHooksJson(
existingConfig.hooks = {};
}
} catch (error) {
if (error instanceof Error) {
logger.error('WORKER', 'Corrupt hooks.json, refusing to overwrite', { path: WINDSURF_HOOKS_JSON_PATH }, error);
} else {
logger.error('WORKER', 'Corrupt hooks.json, refusing to overwrite', { path: WINDSURF_HOOKS_JSON_PATH }, new Error(String(error)));
}
throw new Error(`Corrupt hooks.json at ${WINDSURF_HOOKS_JSON_PATH}, refusing to overwrite`);
}
}
@@ -286,16 +297,30 @@ export async function installWindsurfHooks(): Promise<number> {
// IMPORTANT: Tilde expansion is NOT supported in working_directory — use absolute paths
const workingDirectory = path.dirname(workerServicePath);

try {
console.log(` Using Bun runtime: ${bunPath}`);
console.log(` Worker service: ${workerServicePath}`);

// Merge our hooks into the existing hooks.json
const workspaceRoot = process.cwd();

try {
await writeWindsurfHooksAndSetupContext(bunPath, workerServicePath, workingDirectory, workspaceRoot);
return 0;
} catch (error) {
const message = error instanceof Error ? error.message : String(error);
console.error(`\nInstallation failed: ${message}`);
return 1;
}
}

async function writeWindsurfHooksAndSetupContext(
bunPath: string,
workerServicePath: string,
workingDirectory: string,
workspaceRoot: string,
): Promise<void> {
mergeAndWriteHooksJson(bunPath, workerServicePath, workingDirectory);
console.log(` Created/merged hooks.json`);

// Set up initial context for the current workspace
const workspaceRoot = process.cwd();
await setupWindsurfProjectContext(workspaceRoot);

console.log(`
@@ -316,12 +341,6 @@ Next steps:
2. Restart Windsurf to load the hooks
3. Context is injected via .windsurf/rules/claude-mem-context.md (workspace-level)
`);

return 0;
} catch (error) {
console.error(`\nInstallation failed: ${(error as Error).message}`);
return 1;
}
}

/**
@@ -335,23 +354,14 @@ async function setupWindsurfProjectContext(workspaceRoot: string): Promise<void>
console.log(` Generating initial context...`);

try {
const healthResponse = await fetch(`http://127.0.0.1:${port}/api/readiness`);
if (healthResponse.ok) {
const contextResponse = await fetch(
`http://127.0.0.1:${port}/api/context/inject?project=${encodeURIComponent(projectName)}`
);
if (contextResponse.ok) {
const context = await contextResponse.text();
if (context && context.trim()) {
writeWindsurfContextFile(workspaceRoot, context);
contextGenerated = true;
console.log(` Generated initial context from existing memory`);
}
}
}
contextGenerated = await fetchWindsurfContextFromWorker(port, projectName, workspaceRoot);
} catch (error) {
// Worker not running during install — non-critical
logger.debug('WINDSURF', 'Worker not running during install', {}, error as Error);
if (error instanceof Error) {
logger.debug('WORKER', 'Worker not running during install', {}, error);
} else {
logger.debug('WORKER', 'Worker not running during install', {}, new Error(String(error)));
}
}

if (!contextGenerated) {
@@ -374,31 +384,78 @@ Use claude-mem's MCP search tools for manual memory queries.
console.log(` Registered for auto-context updates`);
}

async function fetchWindsurfContextFromWorker(
port: number,
projectName: string,
workspaceRoot: string,
): Promise<boolean> {
const healthResponse = await fetch(`http://127.0.0.1:${port}/api/readiness`);
|
||||
if (!healthResponse.ok) return false;
|
||||
|
||||
const contextResponse = await fetch(
|
||||
`http://127.0.0.1:${port}/api/context/inject?project=${encodeURIComponent(projectName)}`,
|
||||
);
|
||||
if (!contextResponse.ok) return false;
|
||||
|
||||
const context = await contextResponse.text();
|
||||
if (context && context.trim()) {
|
||||
writeWindsurfContextFile(workspaceRoot, context);
|
||||
console.log(` Generated initial context from existing memory`);
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
/**
|
||||
* Uninstall Windsurf hooks — removes claude-mem entries from hooks.json
|
||||
*/
|
||||
export function uninstallWindsurfHooks(): number {
|
||||
console.log('\nUninstalling Claude-Mem Windsurf hooks...\n');
|
||||
|
||||
try {
|
||||
// Remove our entries from hooks.json (preserve other integrations)
|
||||
if (existsSync(WINDSURF_HOOKS_JSON_PATH)) {
|
||||
try {
|
||||
const config: WindsurfHooksJson = JSON.parse(readFileSync(WINDSURF_HOOKS_JSON_PATH, 'utf-8'));
|
||||
removeClaudeMemHookEntries();
|
||||
} catch (error) {
|
||||
if (error instanceof Error) {
|
||||
logger.error('WORKER', 'Could not parse hooks.json during uninstall', { path: WINDSURF_HOOKS_JSON_PATH }, error);
|
||||
} else {
|
||||
logger.error('WORKER', 'Could not parse hooks.json during uninstall', { path: WINDSURF_HOOKS_JSON_PATH }, new Error(String(error)));
|
||||
}
|
||||
console.log(` Warning: could not parse hooks.json — leaving file intact to preserve other hooks`);
|
||||
}
|
||||
} else {
|
||||
console.log(` No hooks.json found`);
|
||||
}
|
||||
|
||||
const workspaceRoot = process.cwd();
|
||||
|
||||
try {
|
||||
removeWindsurfContextAndUnregister(workspaceRoot);
|
||||
return 0;
|
||||
} catch (error) {
|
||||
const message = error instanceof Error ? error.message : String(error);
|
||||
console.error(`\nUninstallation failed: ${message}`);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
function removeClaudeMemHookEntries(): void {
|
||||
const parsed = JSON.parse(readFileSync(WINDSURF_HOOKS_JSON_PATH, 'utf-8')) as Partial<WindsurfHooksJson>;
|
||||
const config: WindsurfHooksJson = { hooks: parsed.hooks ?? {} };
|
||||
|
||||
for (const eventName of WINDSURF_HOOK_EVENTS) {
|
||||
if (config.hooks[eventName]) {
|
||||
config.hooks[eventName] = config.hooks[eventName].filter(
|
||||
(hook) => !hook.command.includes('worker-service') || !hook.command.includes('windsurf')
|
||||
const eventHooks = config.hooks[eventName] ?? [];
|
||||
if (eventHooks.length > 0) {
|
||||
config.hooks[eventName] = eventHooks.filter(
|
||||
(hook) => !hook.command.includes('worker-service') || !hook.command.includes('windsurf'),
|
||||
);
|
||||
// Remove empty arrays
|
||||
if (config.hooks[eventName].length === 0) {
|
||||
delete config.hooks[eventName];
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If no hooks remain, remove the file entirely
|
||||
if (Object.keys(config.hooks).length === 0) {
|
||||
unlinkSync(WINDSURF_HOOKS_JSON_PATH);
|
||||
console.log(` Removed hooks.json (no hooks remaining)`);
|
||||
@@ -406,33 +463,20 @@ export function uninstallWindsurfHooks(): number {
|
||||
writeFileSync(WINDSURF_HOOKS_JSON_PATH, JSON.stringify(config, null, 2));
|
||||
console.log(` Removed claude-mem entries from hooks.json (other hooks preserved)`);
|
||||
}
|
||||
} catch (error) {
|
||||
console.log(` Warning: could not parse hooks.json — leaving file intact to preserve other hooks`);
|
||||
}
|
||||
} else {
|
||||
console.log(` No hooks.json found`);
|
||||
}
|
||||
}
|
||||
|
||||
// Remove context file from the current workspace
|
||||
const workspaceRoot = process.cwd();
|
||||
function removeWindsurfContextAndUnregister(workspaceRoot: string): void {
|
||||
const contextFile = path.join(workspaceRoot, '.windsurf', 'rules', 'claude-mem-context.md');
|
||||
if (existsSync(contextFile)) {
|
||||
unlinkSync(contextFile);
|
||||
console.log(` Removed context file`);
|
||||
}
|
||||
|
||||
// Unregister project
|
||||
unregisterWindsurfProject(workspaceRoot);
|
||||
console.log(` Unregistered from auto-context updates`);
|
||||
|
||||
console.log(`\nUninstallation complete!\n`);
|
||||
console.log('Restart Windsurf to apply changes.');
|
||||
|
||||
return 0;
|
||||
} catch (error) {
|
||||
console.error(`\nUninstallation failed: ${(error as Error).message}`);
|
||||
return 1;
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
@@ -445,10 +489,18 @@ export function checkWindsurfHooksStatus(): number {
|
||||
console.log(`User-level: Installed`);
|
||||
console.log(` Config: ${WINDSURF_HOOKS_JSON_PATH}`);
|
||||
|
||||
let parsedConfig: Partial<WindsurfHooksJson> | null = null;
|
||||
try {
|
||||
const config: WindsurfHooksJson = JSON.parse(readFileSync(WINDSURF_HOOKS_JSON_PATH, 'utf-8'));
|
||||
parsedConfig = JSON.parse(readFileSync(WINDSURF_HOOKS_JSON_PATH, 'utf-8'));
|
||||
} catch (error) {
|
||||
const normalizedError = error instanceof Error ? error : new Error(String(error));
|
||||
logger.error('WORKER', 'Unable to parse hooks.json', { path: WINDSURF_HOOKS_JSON_PATH }, normalizedError);
|
||||
console.log(` Mode: Unable to parse hooks.json`);
|
||||
}
|
||||
|
||||
if (parsedConfig) {
|
||||
const registeredEvents = WINDSURF_HOOK_EVENTS.filter(
|
||||
(event) => config.hooks[event]?.some(
|
||||
(event) => (parsedConfig?.hooks?.[event] ?? []).some(
|
||||
(hook) => hook.command.includes('worker-service') && hook.command.includes('windsurf')
|
||||
)
|
||||
);
|
||||
@@ -456,8 +508,6 @@ export function checkWindsurfHooksStatus(): number {
|
||||
for (const event of registeredEvents) {
|
||||
console.log(` - ${event}`);
|
||||
}
|
||||
} catch {
|
||||
console.log(` Mode: Unable to parse hooks.json`);
|
||||
}
|
||||
|
||||
// Check for context file in current workspace
|
||||
|
||||
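The catch blocks above repeat the `error instanceof Error ? error : new Error(String(error))` normalization by hand at each call site. A minimal sketch of how that pattern can be factored into a helper follows; the `toError` name is my own and is not part of this codebase:

```typescript
// Hypothetical helper (not in the diff): normalize an unknown catch value so
// downstream loggers always receive a real Error instance.
function toError(value: unknown): Error {
  return value instanceof Error ? value : new Error(String(value));
}

// Usage mirrors the catch blocks above:
try {
  JSON.parse("{not json");
} catch (error) {
  const normalized = toError(error);
  console.log(normalized.message);
}
```

TypeScript types `catch` variables as `unknown`, so a helper like this keeps each call site to one line instead of an if/else pair.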
@@ -34,40 +34,38 @@ export class SessionQueueProcessor {
    let lastActivityTime = Date.now();

    while (!signal.aborted) {
      try {
        // Atomically claim next pending message (marks as 'processing')
        // Claim phase: atomically claim next pending message (marks as 'processing')
        // Self-heals any stale processing messages before claiming
        const persistentMessage = this.store.claimNextMessage(sessionDbId);
        let persistentMessage: PersistentPendingMessage | null = null;
        try {
          persistentMessage = this.store.claimNextMessage(sessionDbId);
        } catch (error) {
          if (signal.aborted) return;
          const normalizedError = error instanceof Error ? error : new Error(String(error));
          logger.error('QUEUE', 'Failed to claim next message', { sessionDbId }, normalizedError);
          await new Promise(resolve => setTimeout(resolve, 1000));
          continue;
        }

        if (persistentMessage) {
          // Reset activity time when we successfully yield a message
          lastActivityTime = Date.now();
          // Yield the message for processing (it's marked as 'processing' in DB)
          yield this.toPendingMessageWithId(persistentMessage);
        } else {
          // Queue empty - wait for wake-up event or timeout
          const receivedMessage = await this.waitForMessage(signal, IDLE_TIMEOUT_MS);
          continue;
        }

        if (!receivedMessage && !signal.aborted) {
          // Timeout occurred - check if we've been idle too long
          const idleDuration = Date.now() - lastActivityTime;
          if (idleDuration >= IDLE_TIMEOUT_MS) {
            logger.info('SESSION', 'Idle timeout reached, triggering abort to kill subprocess', {
              sessionDbId,
              idleDurationMs: idleDuration,
              thresholdMs: IDLE_TIMEOUT_MS
            });
            onIdleTimeout?.();
            return;
          }
          // Reset timer on spurious wakeup - queue is empty but duration check failed
        // Wait phase: queue empty - wait for wake-up event or timeout
        try {
          const idleTimedOut = await this.handleWaitPhase(signal, lastActivityTime, sessionDbId, onIdleTimeout);
          if (idleTimedOut) return;
          // Reset timer on spurious wakeup if not timed out
          lastActivityTime = Date.now();
        }
      }
      } catch (error) {
        if (signal.aborted) return;
        logger.error('SESSION', 'Error in queue processor loop', { sessionDbId }, error as Error);
        // Small backoff to prevent tight loop on DB error
        const normalizedError = error instanceof Error ? error : new Error(String(error));
        logger.error('QUEUE', 'Error waiting for message', { sessionDbId }, normalizedError);
        // Small backoff to prevent tight loop on error
        await new Promise(resolve => setTimeout(resolve, 1000));
      }
    }
@@ -82,6 +80,33 @@ export class SessionQueueProcessor {
    };
  }

  /**
   * Handle the wait phase: wait for a message or check idle timeout.
   * @returns true if idle timeout was reached (caller should return/exit iterator)
   */
  private async handleWaitPhase(
    signal: AbortSignal,
    lastActivityTime: number,
    sessionDbId: number,
    onIdleTimeout?: () => void
  ): Promise<boolean> {
    const receivedMessage = await this.waitForMessage(signal, IDLE_TIMEOUT_MS);

    if (!receivedMessage && !signal.aborted) {
      const idleDuration = Date.now() - lastActivityTime;
      if (idleDuration >= IDLE_TIMEOUT_MS) {
        logger.info('SESSION', 'Idle timeout reached, triggering abort to kill subprocess', {
          sessionDbId,
          idleDurationMs: idleDuration,
          thresholdMs: IDLE_TIMEOUT_MS
        });
        onIdleTimeout?.();
        return true;
      }
    }
    return false;
  }

  /**
   * Wait for a message event or timeout.
   * @param signal - AbortSignal to cancel waiting
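The heart of `handleWaitPhase` above is a pure duration comparison, which can be isolated from the generator machinery. A sketch under assumptions: the threshold value below is illustrative, not the project's actual `IDLE_TIMEOUT_MS`:

```typescript
// Illustrative threshold; the real IDLE_TIMEOUT_MS lives elsewhere in the module.
const IDLE_TIMEOUT_MS = 5 * 60 * 1000;

// The decision handleWaitPhase makes after a timed-out wait: has the session
// been idle (no message yielded) for at least the threshold?
function idleTimedOut(lastActivityTime: number, now: number): boolean {
  return now - lastActivityTime >= IDLE_TIMEOUT_MS;
}
```

Keeping the comparison pure makes the "spurious wakeup resets the timer only when not timed out" behavior straightforward to unit-test without an event loop.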
@@ -208,31 +208,27 @@ export class Server {
        return res.status(400).json({ error: 'Invalid topic' });
      }

      try {
        let content: string;

        if (operation) {
          // Validate operation
          if (!ALLOWED_OPERATIONS.includes(operation)) {
      if (operation && !ALLOWED_OPERATIONS.includes(operation)) {
        return res.status(400).json({ error: 'Invalid operation' });
      }
          // Path boundary check

      if (operation) {
        const OPERATIONS_BASE_DIR = path.resolve(__dirname, '../skills/mem-search/operations');
        const operationPath = path.resolve(OPERATIONS_BASE_DIR, `${operation}.md`);
        if (!operationPath.startsWith(OPERATIONS_BASE_DIR + path.sep)) {
          return res.status(400).json({ error: 'Invalid request' });
        }
          content = await fs.promises.readFile(operationPath, 'utf-8');
        } else {
          const skillPath = path.join(__dirname, '../skills/mem-search/SKILL.md');
          const fullContent = await fs.promises.readFile(skillPath, 'utf-8');
          content = this.extractInstructionSection(fullContent, topic);
        }

        res.json({
          content: [{ type: 'text', text: content }]
        });
      try {
        const content = await this.loadInstructionContent(operation, topic);
        res.json({ content: [{ type: 'text', text: content }] });
      } catch (error) {
        if (error instanceof Error) {
          logger.debug('HTTP', 'Instruction file not found', { topic, operation, message: error.message });
        } else {
          logger.debug('HTTP', 'Instruction file not found', { topic, operation, error: String(error) });
        }
        res.status(404).json({ error: 'Instruction not found' });
      }
    });
@@ -334,6 +330,20 @@ export class Server {
    });
  }

  /**
   * Load instruction content from disk for the /api/instructions endpoint.
   * Caller must validate operation/topic before calling.
   */
  private async loadInstructionContent(operation: string | undefined, topic: string): Promise<string> {
    if (operation) {
      const operationPath = path.resolve(__dirname, '../skills/mem-search/operations', `${operation}.md`);
      return fs.promises.readFile(operationPath, 'utf-8');
    }
    const skillPath = path.join(__dirname, '../skills/mem-search/SKILL.md');
    const fullContent = await fs.promises.readFile(skillPath, 'utf-8');
    return this.extractInstructionSection(fullContent, topic);
  }

  /**
   * Extract a specific section from instruction content
   */

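The path-boundary check in this endpoint (resolve the candidate against a base directory, then require the result to stay under `base + path.sep`) is the standard guard against `../` traversal. A self-contained sketch, with hypothetical directory names of my own:

```typescript
import * as path from "node:path";

// Traversal guard in the same shape as the endpoint's check: resolve the
// requested name under the base directory, then require the resolved path to
// still start with the base plus a separator.
function isInsideBase(baseDir: string, name: string): boolean {
  const base = path.resolve(baseDir);
  const resolved = path.resolve(base, `${name}.md`);
  return resolved.startsWith(base + path.sep);
}
```

Appending `path.sep` matters: a plain `startsWith(base)` would wrongly accept a sibling directory such as `/srv/ops-secrets` when the base is `/srv/ops`.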
@@ -15,6 +15,7 @@ import { writeFileSync, readFileSync, mkdtempSync, rmSync, existsSync } from "no
import { join, dirname } from "node:path";
import { tmpdir } from "node:os";
import { createRequire } from "node:module";
import { logger } from "../../utils/logger.js";

// CJS-safe require for resolving external packages at runtime.
// In ESM: import.meta.url works. In CJS bundle (esbuild): __filename works.
@@ -160,6 +161,7 @@ export function loadUserGrammars(projectRoot: string): UserGrammarConfig {
    const content = readFileSync(configPath, "utf-8");
    rawConfig = JSON.parse(content);
  } catch {
    // [ANTI-PATTERN IGNORED]: .claude-mem.json missing is the normal case for most projects
    userGrammarCache.set(projectRoot, EMPTY_USER_GRAMMAR_CONFIG);
    return EMPTY_USER_GRAMMAR_CONFIG;
  }
@@ -274,7 +276,9 @@ function resolveGrammarPath(language: string): string | null {
      const rootPkgPath = _require.resolve(pkg + "/package.json");
      const resolved = join(dirname(rootPkgPath), subdir);
      if (existsSync(join(resolved, "src"))) return resolved;
    } catch { /* fall through */ }
    } catch {
      // [ANTI-PATTERN IGNORED]: grammar package not installed is expected for unsupported languages
    }
  return null;
}

@@ -282,6 +286,7 @@ function resolveGrammarPath(language: string): string | null {
    const packageJsonPath = _require.resolve(pkg + "/package.json");
    return dirname(packageJsonPath);
  } catch {
    // [ANTI-PATTERN IGNORED]: grammar package not installed is expected for unsupported languages
    return null;
  }
}
@@ -550,7 +555,9 @@ function getTreeSitterBin(): string {
      cachedBinPath = binPath;
      return binPath;
    }
  } catch { /* fall through */ }
  } catch {
    // [ANTI-PATTERN IGNORED]: tree-sitter-cli not in node_modules is expected; falls back to PATH
  }

  // Fallback: assume it's on PATH
  cachedBinPath = "tree-sitter";
@@ -585,7 +592,8 @@ function runBatchQuery(queryFile: string, sourceFiles: string[], grammarPath: st
  let output: string;
  try {
    output = execFileSync(bin, execArgs, { encoding: "utf-8", timeout: 30000, stdio: ["pipe", "pipe", "pipe"] });
  } catch {
  } catch {
  } catch (error) {
    logger.debug('WORKER', `tree-sitter query failed for ${sourceFiles.length} file(s)`, undefined, error instanceof Error ? error : undefined);
    return new Map();
  }

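The resolver hunks above all share one shape: try a candidate, treat a throw as the expected "not installed" case, and move on to a fallback. A generic sketch of that shape, with names of my own invention:

```typescript
// Generic form of the fallback used by resolveGrammarPath and getTreeSitterBin:
// try each resolver in order, treating a throw as "candidate unavailable",
// and return the fallback when every candidate fails.
function resolveFirst(resolvers: Array<() => string>, fallback: string): string {
  for (const resolve of resolvers) {
    try {
      return resolve();
    } catch {
      // expected when a package is not installed; try the next candidate
    }
  }
  return fallback;
}
```

This is why the `[ANTI-PATTERN IGNORED]` comments exist in the diff: the empty catch is deliberate, and the comment records that the swallowed error is an expected platform condition rather than a silenced bug.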
@@ -13,6 +13,7 @@
import { readFile, readdir, stat } from "node:fs/promises";
import { join, relative } from "node:path";
import { parseFilesBatch, formatFoldedView, loadUserGrammars, type FoldedFile } from "./parser.js";
import { logger } from "../../utils/logger.js";

const CODE_EXTENSIONS = new Set([
  ".js", ".jsx", ".ts", ".tsx", ".mjs", ".cjs",
@@ -78,7 +79,8 @@ async function* walkDir(dir: string, rootDir: string, maxDepth: number = 20, ext
  let entries;
  try {
    entries = await readdir(dir, { withFileTypes: true });
  } catch {
  } catch (error) {
    logger.debug('WORKER', `walkDir: failed to read directory ${dir}`, undefined, error instanceof Error ? error : undefined);
    return; // permission denied, etc.
  }

@@ -114,7 +116,8 @@ async function safeReadFile(filePath: string): Promise<string | null> {
  if (content.slice(0, 1000).includes("\0")) return null;

  return content;
  } catch {
  } catch (error) {
    logger.debug('WORKER', `safeReadFile: failed to read ${filePath}`, undefined, error instanceof Error ? error : undefined);
    return null;
  }
}
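`safeReadFile` above rejects a file as non-text when the first 1000 characters contain a NUL byte. That heuristic is small enough to sketch in isolation:

```typescript
// Binary sniff in the same shape as safeReadFile's check: a NUL character in
// the first 1000 characters of the decoded content marks the file as binary,
// so the caller skips it instead of feeding it to the parser.
function looksBinary(content: string): boolean {
  return content.slice(0, 1000).includes("\0");
}
```

It is a heuristic, not a guarantee: a binary file whose first kilobyte happens to decode without NUL bytes would pass, which is an acceptable trade-off for a fast pre-filter.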
@@ -24,6 +24,9 @@ export interface PersistentPendingMessage {
  created_at_epoch: number;
  started_processing_at_epoch: number | null;
  completed_at_epoch: number | null;
  // Claude Code subagent identity — NULL for main-session messages.
  agent_type: string | null;
  agent_id: string | null;
}

/**
@@ -64,8 +67,9 @@ export class PendingMessageStore {
        session_db_id, content_session_id, message_type,
        tool_name, tool_input, tool_response, cwd,
        last_assistant_message,
        prompt_number, status, retry_count, created_at_epoch
      ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'pending', 0, ?)
        prompt_number, status, retry_count, created_at_epoch,
        agent_type, agent_id
      ) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, 'pending', 0, ?, ?, ?)
    `);

    const result = stmt.run(
@@ -78,7 +82,9 @@ export class PendingMessageStore {
      message.cwd || null,
      message.last_assistant_message || null,
      message.prompt_number || null,
      now
      now,
      message.agentType ?? null,
      message.agentId ?? null
    );

    return result.lastInsertRowid as number;
@@ -496,7 +502,9 @@ export class PendingMessageStore {
      tool_response: persistent.tool_response ? JSON.parse(persistent.tool_response) : undefined,
      prompt_number: persistent.prompt_number || undefined,
      cwd: persistent.cwd || undefined,
      last_assistant_message: persistent.last_assistant_message || undefined
      last_assistant_message: persistent.last_assistant_message || undefined,
      agentId: persistent.agent_id ?? undefined,
      agentType: persistent.agent_type ?? undefined
    };
  }
}
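Note that the new `agent_type`/`agent_id` mappings use `??` where the older columns use `||`. The difference is observable with falsy-but-valid values, which a short sketch makes concrete (function names here are mine, for illustration only):

```typescript
// Why the new agent columns coalesce with ?? rather than ||:
// || replaces every falsy value, while ?? only replaces null/undefined.
function withOr(v: string | null): string | undefined {
  return v || undefined;   // "" collapses to undefined
}
function withNullish(v: string | null): string | undefined {
  return v ?? undefined;   // "" is preserved
}
```

For a nullable TEXT column, `??` is the faithful round-trip: an empty string stored in SQLite comes back as an empty string instead of silently turning into `undefined`.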
@@ -75,6 +75,34 @@ export class SessionSearch {
    logger.info('DB', 'Creating FTS5 tables');

    try {
      this.createFTSTablesAndTriggers();
      logger.info('DB', 'FTS5 tables created successfully');
    } catch (error) {
      // FTS5 creation failed at runtime despite probe succeeding — degrade gracefully
      logger.warn('DB', 'FTS5 table creation failed — search will use ChromaDB and LIKE queries', {}, error instanceof Error ? error : undefined);
    }
  }

  /**
   * Probe whether the FTS5 extension is available in the current SQLite build.
   * Creates and immediately drops a temporary FTS5 table.
   */
  private isFts5Available(): boolean {
    try {
      this.db.run('CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)');
      this.db.run('DROP TABLE _fts5_probe');
      return true;
    } catch {
      // [ANTI-PATTERN IGNORED]: FTS5 unavailability is an expected platform condition, not an error
      return false;
    }
  }

  /**
   * Create FTS5 virtual tables and sync triggers for observations and session_summaries.
   * Extracted from ensureFTSTables to keep try block small.
   */
  private createFTSTablesAndTriggers(): void {
    // Create observations_fts virtual table
    this.db.run(`
      CREATE VIRTUAL TABLE IF NOT EXISTS observations_fts USING fts5(
@@ -156,28 +184,7 @@ export class SessionSearch {
        VALUES (new.id, new.request, new.investigated, new.learned, new.completed, new.next_steps, new.notes);
      END;
    `);

    logger.info('DB', 'FTS5 tables created successfully');
    } catch (error) {
      // FTS5 creation failed at runtime despite probe succeeding — degrade gracefully
      logger.warn('DB', 'FTS5 table creation failed — search will use ChromaDB and LIKE queries', {}, error as Error);
    }
  }

  /**
   * Probe whether the FTS5 extension is available in the current SQLite build.
   * Creates and immediately drops a temporary FTS5 table.
   */
  private isFts5Available(): boolean {
    try {
      this.db.run('CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)');
      this.db.run('DROP TABLE _fts5_probe');
      return true;
    } catch {
      return false;
    }
  }


  /**
   * Build WHERE clause for structured filters
@@ -381,7 +388,9 @@ export class SessionSearch {
        if (Array.isArray(files)) {
          return files.some(f => isDirectChild(f, folderPath));
        }
      } catch {}
      } catch (error) {
        logger.debug('DB', `Failed to parse files JSON for observation ${obs.id}`, undefined, error instanceof Error ? error : undefined);
      }
      return false;
    };

@@ -399,7 +408,9 @@ export class SessionSearch {
        if (Array.isArray(files)) {
          return files.some(f => isDirectChild(f, folderPath));
        }
      } catch {}
      } catch (error) {
        logger.debug('DB', `Failed to parse files JSON for session summary ${session.id}`, undefined, error instanceof Error ? error : undefined);
      }
      return false;
    };

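`isFts5Available` above is a capability probe: attempt a throwaway DDL statement, and let success or failure answer the question. The shape generalizes beyond FTS5; a sketch with an injected `run` function standing in for the SQLite driver call:

```typescript
// Capability probe in the shape of isFts5Available: create and immediately
// drop a throwaway FTS5 table; a throw means the extension is not compiled in.
function probeCapability(run: (sql: string) => void): boolean {
  try {
    run("CREATE VIRTUAL TABLE _probe USING fts5(c)");
    run("DROP TABLE _probe");
    return true;
  } catch {
    // unavailability is an expected platform condition, not an error
    return false;
  }
}
```

Injecting `run` keeps the probe testable without a database, and mirrors why the runtime code still wraps the real table creation in its own try/catch: the probe can succeed while the full DDL later fails.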
@@ -66,6 +66,7 @@ export class SessionStore {
    this.addSessionPlatformSourceColumn();
    this.addObservationModelColumns();
    this.ensureMergedIntoProjectColumns();
    this.addObservationSubagentColumns();
  }

  /**
@@ -445,17 +446,14 @@ export class SessionStore {

    // Create FTS5 virtual table — skip if FTS5 is unavailable (e.g., Bun on Windows #791).
    // The user_prompts table itself is still created; only FTS indexing is skipped.
    try {
      this.db.run(`
    const ftsCreateSQL = `
      CREATE VIRTUAL TABLE user_prompts_fts USING fts5(
        prompt_text,
        content='user_prompts',
        content_rowid='id'
      );
      `);

      // Create triggers to sync FTS5
      this.db.run(`
    `;
    const ftsTriggersSQL = `
      CREATE TRIGGER user_prompts_ai AFTER INSERT ON user_prompts BEGIN
        INSERT INTO user_prompts_fts(rowid, prompt_text)
        VALUES (new.id, new.prompt_text);
@@ -472,9 +470,22 @@ export class SessionStore {
        INSERT INTO user_prompts_fts(rowid, prompt_text)
        VALUES (new.id, new.prompt_text);
      END;
      `);
    `;

    try {
      this.db.run(ftsCreateSQL);
      this.db.run(ftsTriggersSQL);
    } catch (ftsError) {
      logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, ftsError as Error);
      if (ftsError instanceof Error) {
        logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, ftsError);
      } else {
        logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, new Error(String(ftsError)));
      }
      // FTS is optional — commit the main table and indexes, then return
      this.db.run('COMMIT');
      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(10, new Date().toISOString());
      logger.debug('DB', 'Created user_prompts table (without FTS5)');
      return;
    }

    // Commit transaction
@@ -685,7 +696,6 @@ export class SessionStore {
    this.db.run('PRAGMA foreign_keys = OFF');
    this.db.run('BEGIN TRANSACTION');

    try {
    // ==========================================
    // 1. Recreate observations table
    // ==========================================
@@ -698,7 +708,7 @@ export class SessionStore {
    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS observations_new');

    this.db.run(`
    const observationsNewSQL = `
      CREATE TABLE observations_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        memory_session_id TEXT NOT NULL,
@@ -718,32 +728,21 @@ export class SessionStore {
        created_at_epoch INTEGER NOT NULL,
        FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
      )
    `);

    this.db.run(`
    `;
    const observationsCopySQL = `
      INSERT INTO observations_new
      SELECT id, memory_session_id, project, text, type, title, subtitle, facts,
             narrative, concepts, files_read, files_modified, prompt_number,
             discovery_tokens, created_at, created_at_epoch
      FROM observations
    `);

    this.db.run('DROP TABLE observations');
    this.db.run('ALTER TABLE observations_new RENAME TO observations');

    // Recreate indexes
    this.db.run(`
    `;
    const observationsIndexesSQL = `
      CREATE INDEX idx_observations_sdk_session ON observations(memory_session_id);
      CREATE INDEX idx_observations_project ON observations(project);
      CREATE INDEX idx_observations_type ON observations(type);
      CREATE INDEX idx_observations_created ON observations(created_at_epoch DESC);
    `);

    // Recreate FTS triggers only if observations_fts exists
    // (SessionSearch.ensureFTSTables creates it on first use with IF NOT EXISTS)
    const hasFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='observations_fts'").all() as { name: string }[]).length > 0;
    if (hasFTS) {
      this.db.run(`
    `;
    const observationsFTSTriggersSQL = `
      CREATE TRIGGER IF NOT EXISTS observations_ai AFTER INSERT ON observations BEGIN
        INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
        VALUES (new.id, new.title, new.subtitle, new.narrative, new.text, new.facts, new.concepts);
@@ -760,17 +759,21 @@ export class SessionStore {
        INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
        VALUES (new.id, new.title, new.subtitle, new.narrative, new.text, new.facts, new.concepts);
      END;
      `);
    }
    `;

    // ==========================================
    // 2. Recreate session_summaries table
    // ==========================================

    // Drop session_summaries FTS triggers before dropping the table
    this.db.run('DROP TRIGGER IF EXISTS session_summaries_ai');
    this.db.run('DROP TRIGGER IF EXISTS session_summaries_ad');
    this.db.run('DROP TRIGGER IF EXISTS session_summaries_au');

    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS session_summaries_new');

    this.db.run(`
    const summariesNewSQL = `
      CREATE TABLE session_summaries_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        memory_session_id TEXT NOT NULL,
@@ -789,35 +792,20 @@ export class SessionStore {
        created_at_epoch INTEGER NOT NULL,
        FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
      )
    `);

    this.db.run(`
    `;
    const summariesCopySQL = `
      INSERT INTO session_summaries_new
      SELECT id, memory_session_id, project, request, investigated, learned,
             completed, next_steps, files_read, files_edited, notes,
             prompt_number, discovery_tokens, created_at, created_at_epoch
      FROM session_summaries
    `);

    // Drop session_summaries FTS triggers before dropping the table
    this.db.run('DROP TRIGGER IF EXISTS session_summaries_ai');
    this.db.run('DROP TRIGGER IF EXISTS session_summaries_ad');
    this.db.run('DROP TRIGGER IF EXISTS session_summaries_au');

    this.db.run('DROP TABLE session_summaries');
    this.db.run('ALTER TABLE session_summaries_new RENAME TO session_summaries');

    // Recreate indexes
    this.db.run(`
    `;
    const summariesIndexesSQL = `
      CREATE INDEX idx_session_summaries_sdk_session ON session_summaries(memory_session_id);
      CREATE INDEX idx_session_summaries_project ON session_summaries(project);
      CREATE INDEX idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
    `);

    // Recreate session_summaries FTS triggers if FTS table exists
    const hasSummariesFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='session_summaries_fts'").all() as { name: string }[]).length > 0;
    if (hasSummariesFTS) {
      this.db.run(`
    `;
    const summariesFTSTriggersSQL = `
      CREATE TRIGGER IF NOT EXISTS session_summaries_ai AFTER INSERT ON session_summaries BEGIN
        INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
        VALUES (new.id, new.request, new.investigated, new.learned, new.completed, new.next_steps, new.notes);
@@ -834,21 +822,52 @@ export class SessionStore {
        INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
        VALUES (new.id, new.request, new.investigated, new.learned, new.completed, new.next_steps, new.notes);
      END;
      `);
    }
    `;

    try {
      this.recreateObservationsWithCascade(observationsNewSQL, observationsCopySQL, observationsIndexesSQL, observationsFTSTriggersSQL);
      this.recreateSessionSummariesWithCascade(summariesNewSQL, summariesCopySQL, summariesIndexesSQL, summariesFTSTriggersSQL);

      // Record migration
      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(21, new Date().toISOString());

      this.db.run('COMMIT');
      this.db.run('PRAGMA foreign_keys = ON');

      logger.debug('DB', 'Successfully added ON UPDATE CASCADE to FK constraints');
    } catch (error) {
      this.db.run('ROLLBACK');
      this.db.run('PRAGMA foreign_keys = ON');
      if (error instanceof Error) {
        throw error;
      }
      throw new Error(String(error));
    }
  }

  /** Recreate observations table with ON UPDATE CASCADE FK (used by migration 21) */
  private recreateObservationsWithCascade(createSQL: string, copySQL: string, indexesSQL: string, ftsTriggersSQL: string): void {
    this.db.run(createSQL);
    this.db.run(copySQL);
    this.db.run('DROP TABLE observations');
    this.db.run('ALTER TABLE observations_new RENAME TO observations');
    this.db.run(indexesSQL);

    const hasFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='observations_fts'").all() as { name: string }[]).length > 0;
    if (hasFTS) {
      this.db.run(ftsTriggersSQL);
    }
  }

  /** Recreate session_summaries table with ON UPDATE CASCADE FK (used by migration 21) */
  private recreateSessionSummariesWithCascade(createSQL: string, copySQL: string, indexesSQL: string, ftsTriggersSQL: string): void {
    this.db.run(createSQL);
    this.db.run(copySQL);
    this.db.run('DROP TABLE session_summaries');
    this.db.run('ALTER TABLE session_summaries_new RENAME TO session_summaries');
    this.db.run(indexesSQL);

    const hasSummariesFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='session_summaries_fts'").all() as { name: string }[]).length > 0;
    if (hasSummariesFTS) {
      this.db.run(ftsTriggersSQL);
    }
  }

  /**
@@ -975,6 +994,44 @@ export class SessionStore {
    );
  }

  /**
   * Add agent_type and agent_id columns to observations and pending_messages (migration 27).
   * Mirrors MigrationRunner.addObservationSubagentColumns so bundled artifacts that embed
   * SessionStore (e.g. context-generator.cjs) stay schema-consistent.
|
||||
*/
|
||||
private addObservationSubagentColumns(): void {
|
||||
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(27) as SchemaVersion | undefined;
|
||||
|
||||
const obsCols = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
|
||||
const obsHasAgentType = obsCols.some(col => col.name === 'agent_type');
|
||||
const obsHasAgentId = obsCols.some(col => col.name === 'agent_id');
|
||||
|
||||
if (!obsHasAgentType) {
|
||||
this.db.run('ALTER TABLE observations ADD COLUMN agent_type TEXT');
|
||||
}
|
||||
if (!obsHasAgentId) {
|
||||
this.db.run('ALTER TABLE observations ADD COLUMN agent_id TEXT');
|
||||
}
|
||||
this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_type ON observations(agent_type)');
|
||||
this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_id ON observations(agent_id)');
|
||||
|
||||
const pendingCols = this.db.query('PRAGMA table_info(pending_messages)').all() as TableColumnInfo[];
|
||||
if (pendingCols.length > 0) {
|
||||
const pendingHasAgentType = pendingCols.some(col => col.name === 'agent_type');
|
||||
const pendingHasAgentId = pendingCols.some(col => col.name === 'agent_id');
|
||||
if (!pendingHasAgentType) {
|
||||
this.db.run('ALTER TABLE pending_messages ADD COLUMN agent_type TEXT');
|
||||
}
|
||||
if (!pendingHasAgentId) {
|
||||
this.db.run('ALTER TABLE pending_messages ADD COLUMN agent_id TEXT');
|
||||
}
|
||||
}
|
||||
|
||||
if (!applied) {
|
||||
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(27, new Date().toISOString());
|
||||
}
|
||||
}
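The migration above guards every ALTER TABLE behind a PRAGMA table_info check, so re-running it against an already-migrated database is a no-op. A minimal sketch of that guard as a pure function (the `columnsToAdd` helper is illustrative, not part of the codebase):

```typescript
// Hypothetical helper illustrating the idempotent-migration guard used above:
// diff the columns reported by PRAGMA table_info against the columns the
// migration wants, and only run ALTER TABLE for the ones that are missing.
interface ColumnInfo { name: string }

function columnsToAdd(existing: ColumnInfo[], wanted: string[]): string[] {
  const present = new Set(existing.map(c => c.name));
  return wanted.filter(name => !present.has(name));
}

// First run: both columns are missing, both get added.
const firstRun = columnsToAdd([{ name: "id" }], ["agent_type", "agent_id"]);
// Second run: PRAGMA reports both, so nothing is altered.
const secondRun = columnsToAdd(
  [{ name: "id" }, { name: "agent_type" }, { name: "agent_id" }],
  ["agent_type", "agent_id"]
);
```

The same check-then-alter shape is what lets migration 27 run from both SessionStore and MigrationRunner without conflicting.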

/**
* Update the memory session ID for a session
* Called by SDKAgent when it captures the session ID from the first SDK message
@@ -1755,6 +1812,8 @@ export class SessionStore {
concepts: string[];
files_read: string[];
files_modified: string[];
agent_type?: string | null;
agent_id?: string | null;
},
promptNumber?: number,
discoveryTokens: number = 0,
@@ -1775,9 +1834,9 @@ export class SessionStore {
const stmt = this.db.prepare(`
INSERT INTO observations
(memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch,
files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch,
generated_by_model)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);
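Each of these INSERT hunks widens the column list by two (agent_type, agent_id) and the VALUES clause from 16 to 18 placeholders; the two must move in lockstep or the statement fails at prepare time. A hedged sketch of one way to keep them in sync (the `insertSQL` helper is illustrative, not something the codebase defines):

```typescript
// Illustrative only: derive the placeholder list from the column list so the
// two can never drift apart when a migration adds columns.
function insertSQL(table: string, columns: string[]): string {
  const placeholders = columns.map(() => "?").join(", ");
  return `INSERT INTO ${table} (${columns.join(", ")}) VALUES (${placeholders})`;
}

// The 18-column shape from the hunk above yields exactly 18 placeholders.
const sql = insertSQL("observations", [
  "memory_session_id", "project", "type", "title", "subtitle", "facts",
  "narrative", "concepts", "files_read", "files_modified", "prompt_number",
  "discovery_tokens", "agent_type", "agent_id", "content_hash",
  "created_at", "created_at_epoch", "generated_by_model"
]);
```

The repo instead edits the literal SQL in four call sites, which is why the same two-line change repeats below.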

const result = stmt.run(
@@ -1793,6 +1852,8 @@ export class SessionStore {
JSON.stringify(observation.files_modified),
promptNumber || null,
discoveryTokens,
observation.agent_type ?? null,
observation.agent_id ?? null,
contentHash,
timestampIso,
timestampEpoch,
@@ -1884,6 +1945,8 @@ export class SessionStore {
concepts: string[];
files_read: string[];
files_modified: string[];
agent_type?: string | null;
agent_id?: string | null;
}>,
summary: {
request: string;
@@ -1910,9 +1973,9 @@ export class SessionStore {
const obsStmt = this.db.prepare(`
INSERT INTO observations
(memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch,
files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch,
generated_by_model)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);

for (const observation of observations) {
@@ -1937,6 +2000,8 @@ export class SessionStore {
JSON.stringify(observation.files_modified),
promptNumber || null,
discoveryTokens,
observation.agent_type ?? null,
observation.agent_id ?? null,
contentHash,
timestampIso,
timestampEpoch,
@@ -2014,6 +2079,8 @@ export class SessionStore {
concepts: string[];
files_read: string[];
files_modified: string[];
agent_type?: string | null;
agent_id?: string | null;
}>,
summary: {
request: string;
@@ -2042,9 +2109,9 @@ export class SessionStore {
const obsStmt = this.db.prepare(`
INSERT INTO observations
(memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch,
files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch,
generated_by_model)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);

for (const observation of observations) {
@@ -2069,6 +2136,8 @@ export class SessionStore {
JSON.stringify(observation.files_modified),
promptNumber || null,
discoveryTokens,
observation.agent_type ?? null,
observation.agent_id ?? null,
contentHash,
timestampIso,
timestampEpoch,
@@ -2269,8 +2338,12 @@ export class SessionStore {

startEpoch = beforeRecords.length > 0 ? beforeRecords[beforeRecords.length - 1].created_at_epoch : anchorEpoch;
endEpoch = afterRecords.length > 0 ? afterRecords[afterRecords.length - 1].created_at_epoch : anchorEpoch;
} catch (err: any) {
logger.error('DB', 'Error getting boundary observations', undefined, { error: err, project });
} catch (err) {
if (err instanceof Error) {
logger.error('DB', 'Error getting boundary observations', { project }, err);
} else {
logger.error('DB', 'Error getting boundary observations with non-Error', {}, new Error(String(err)));
}
return { observations: [], sessions: [], prompts: [] };
}
} else {
@@ -2301,8 +2374,12 @@ export class SessionStore {

startEpoch = beforeRecords.length > 0 ? beforeRecords[beforeRecords.length - 1].created_at_epoch : anchorEpoch;
endEpoch = afterRecords.length > 0 ? afterRecords[afterRecords.length - 1].created_at_epoch : anchorEpoch;
} catch (err: any) {
logger.error('DB', 'Error getting boundary timestamps', undefined, { error: err, project });
} catch (err) {
if (err instanceof Error) {
logger.error('DB', 'Error getting boundary timestamps', { project }, err);
} else {
logger.error('DB', 'Error getting boundary timestamps with non-Error', {}, new Error(String(err)));
}
return { observations: [], sessions: [], prompts: [] };
}
}
@@ -2629,6 +2706,8 @@ export class SessionStore {
discovery_tokens: number;
created_at: string;
created_at_epoch: number;
agent_type?: string | null;
agent_id?: string | null;
}): { imported: boolean; id: number } {
// Check if observation already exists
const existing = this.db.prepare(`
@@ -2644,8 +2723,9 @@ export class SessionStore {
INSERT INTO observations (
memory_session_id, project, text, type, title, subtitle,
facts, narrative, concepts, files_read, files_modified,
prompt_number, discovery_tokens, created_at, created_at_epoch
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
prompt_number, discovery_tokens, agent_type, agent_id,
created_at, created_at_epoch
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);

const result = stmt.run(
@@ -2662,6 +2742,8 @@ export class SessionStore {
obs.files_modified,
obs.prompt_number,
obs.discovery_tokens || 0,
obs.agent_type ?? null,
obs.agent_id ?? null,
obs.created_at,
obs.created_at_epoch
);

@@ -141,6 +141,8 @@ export function importObservation(
discovery_tokens: number;
created_at: string;
created_at_epoch: number;
agent_type?: string | null;
agent_id?: string | null;
}
): ImportResult {
// Check if observation already exists
@@ -163,8 +165,9 @@ export function importObservation(
INSERT INTO observations (
memory_session_id, project, text, type, title, subtitle,
facts, narrative, concepts, files_read, files_modified,
prompt_number, discovery_tokens, created_at, created_at_epoch
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
prompt_number, discovery_tokens, agent_type, agent_id,
created_at, created_at_epoch
) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);

const result = stmt.run(
@@ -181,6 +184,8 @@ export function importObservation(
obs.files_modified,
obs.prompt_number,
obs.discovery_tokens || 0,
obs.agent_type ?? null,
obs.agent_id ?? null,
obs.created_at,
obs.created_at_epoch
);

@@ -1,5 +1,6 @@
import { Database } from 'bun:sqlite';
import { Migration } from './Database.js';
import { logger } from '../../utils/logger.js';

// Re-export MigrationRunner for SessionStore migration extraction
export { MigrationRunner } from './migrations/runner.js';
@@ -377,8 +378,8 @@ export const migration006: Migration = {
try {
db.run('CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)');
db.run('DROP TABLE _fts5_probe');
} catch {
console.log('⚠️ FTS5 not available on this platform — skipping FTS migration (search uses ChromaDB)');
} catch (error) {
logger.warn('DB', 'FTS5 not available on this platform — skipping FTS migration (search uses ChromaDB)', {}, error instanceof Error ? error : undefined);
return;
}

@@ -572,6 +573,61 @@ export const migration009: Migration = {
}
};

/**
* Migration 010: Label observations (and their queue rows) with the subagent identity.
*
* Claude Code hooks that fire inside a subagent carry agent_id and agent_type on the
* stdin payload. These flow hook → worker → pending_messages → SDK storage so that
* observation rows can be attributed to the originating subagent. Main-session rows
* keep NULL for both columns.
*/
export const migration010: Migration = {
version: 27,
up: (db: Database) => {
const added: string[] = [];

const obsColumns = db.prepare('PRAGMA table_info(observations)').all() as Array<{ name: string }>;
const obsHasAgentType = obsColumns.some(c => c.name === 'agent_type');
const obsHasAgentId = obsColumns.some(c => c.name === 'agent_id');
if (!obsHasAgentType) {
db.run('ALTER TABLE observations ADD COLUMN agent_type TEXT');
added.push('observations.agent_type');
}
if (!obsHasAgentId) {
db.run('ALTER TABLE observations ADD COLUMN agent_id TEXT');
added.push('observations.agent_id');
}
db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_type ON observations(agent_type)');
db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_id ON observations(agent_id)');

// Also thread the same fields through the pending_messages queue so the label
// survives worker restarts between enqueue and SDK-agent processing.
const pendingColumns = db.prepare('PRAGMA table_info(pending_messages)').all() as Array<{ name: string }>;
if (pendingColumns.length > 0) {
const pendingHasAgentType = pendingColumns.some(c => c.name === 'agent_type');
const pendingHasAgentId = pendingColumns.some(c => c.name === 'agent_id');
if (!pendingHasAgentType) {
db.run('ALTER TABLE pending_messages ADD COLUMN agent_type TEXT');
added.push('pending_messages.agent_type');
}
if (!pendingHasAgentId) {
db.run('ALTER TABLE pending_messages ADD COLUMN agent_id TEXT');
added.push('pending_messages.agent_id');
}
}

logger.debug(
'DB',
added.length > 0
? `[migration010] Added columns: ${added.join(', ')}`
: '[migration010] Subagent identity columns already present; ensured indexes'
);
},
down: (_db: Database) => {
// SQLite DROP COLUMN not fully supported; no-op
}
};

/**
* All migrations in order
*/
@@ -584,5 +640,6 @@ export const migrations: Migration[] = [
migration006,
migration007,
migration008,
migration009
migration009,
migration010
];
@@ -38,6 +38,7 @@ export class MigrationRunner {
this.createObservationFeedbackTable();
this.addSessionPlatformSourceColumn();
this.ensureMergedIntoProjectColumns();
this.addObservationSubagentColumns();
}

/**
@@ -418,6 +419,25 @@ export class MigrationRunner {
// Create FTS5 virtual table — skip if FTS5 is unavailable (e.g., Bun on Windows #791).
// The user_prompts table itself is still created; only FTS indexing is skipped.
try {
this.createUserPromptsFTS();
} catch (ftsError) {
logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, ftsError instanceof Error ? ftsError : new Error(String(ftsError)));
}

// Commit transaction
this.db.run('COMMIT');

// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(10, new Date().toISOString());

logger.debug('DB', 'Successfully created user_prompts table');
}

/**
* Create FTS5 virtual table and sync triggers for user_prompts.
* Extracted from createUserPromptsTable to keep try block small.
*/
private createUserPromptsFTS(): void {
this.db.run(`
CREATE VIRTUAL TABLE user_prompts_fts USING fts5(
prompt_text,
@@ -426,7 +446,6 @@ export class MigrationRunner {
);
`);

// Create triggers to sync FTS5
this.db.run(`
CREATE TRIGGER user_prompts_ai AFTER INSERT ON user_prompts BEGIN
INSERT INTO user_prompts_fts(rowid, prompt_text)
@@ -445,17 +464,6 @@ export class MigrationRunner {
VALUES (new.id, new.prompt_text);
END;
`);
} catch (ftsError) {
logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, ftsError as Error);
}

// Commit transaction
this.db.run('COMMIT');

// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(10, new Date().toISOString());

logger.debug('DB', 'Successfully created user_prompts table');
}

/**
@@ -658,9 +666,29 @@ export class MigrationRunner {
this.db.run('BEGIN TRANSACTION');

try {
// ===================================
// 1. Recreate observations table
// ===================================
this.recreateObservationsWithUpdateCascade();
this.recreateSessionSummariesWithUpdateCascade();

this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(21, new Date().toISOString());
this.db.run('COMMIT');
this.db.run('PRAGMA foreign_keys = ON');

logger.debug('DB', 'Successfully added ON UPDATE CASCADE to FK constraints');
} catch (error) {
this.db.run('ROLLBACK');
this.db.run('PRAGMA foreign_keys = ON');
if (error instanceof Error) {
throw error;
}
throw new Error(`Migration 21 failed: ${String(error)}`);
}
}

/**
* Recreate observations table with ON UPDATE CASCADE FK constraint.
* Called within a transaction by addOnUpdateCascadeToForeignKeys.
*/
private recreateObservationsWithUpdateCascade(): void {
// Drop FTS triggers first (they reference the observations table)
this.db.run('DROP TRIGGER IF EXISTS observations_ai');
this.db.run('DROP TRIGGER IF EXISTS observations_ad');
@@ -702,7 +730,6 @@ export class MigrationRunner {
this.db.run('DROP TABLE observations');
this.db.run('ALTER TABLE observations_new RENAME TO observations');

// Recreate indexes
this.db.run(`
CREATE INDEX idx_observations_sdk_session ON observations(memory_session_id);
CREATE INDEX idx_observations_project ON observations(project);
@@ -732,10 +759,13 @@ export class MigrationRunner {
END;
`);
}
}

// ===================================
// 2. Recreate session_summaries table
// ===================================
/**
* Recreate session_summaries table with ON UPDATE CASCADE FK constraint.
* Called within a transaction by addOnUpdateCascadeToForeignKeys.
*/
private recreateSessionSummariesWithUpdateCascade(): void {
// Clean up leftover temp table from a previously-crashed run
this.db.run('DROP TABLE IF EXISTS session_summaries_new');

@@ -776,7 +806,6 @@ export class MigrationRunner {
this.db.run('DROP TABLE session_summaries');
this.db.run('ALTER TABLE session_summaries_new RENAME TO session_summaries');

// Recreate indexes
this.db.run(`
CREATE INDEX idx_session_summaries_sdk_session ON session_summaries(memory_session_id);
CREATE INDEX idx_session_summaries_project ON session_summaries(project);
@@ -805,19 +834,6 @@ export class MigrationRunner {
END;
`);
}

// Record migration
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(21, new Date().toISOString());

this.db.run('COMMIT');
this.db.run('PRAGMA foreign_keys = ON');

logger.debug('DB', 'Successfully added ON UPDATE CASCADE to FK constraints');
} catch (error) {
this.db.run('ROLLBACK');
this.db.run('PRAGMA foreign_keys = ON');
throw error;
}
}

/**
@@ -952,4 +968,51 @@ export class MigrationRunner {
'CREATE INDEX IF NOT EXISTS idx_summaries_merged_into ON session_summaries(merged_into_project)'
);
}

/**
* Add agent_type and agent_id columns to observations and pending_messages (migration 27).
*
* Labels observation rows with the originating Claude Code subagent identity so
* downstream queries can distinguish main-session work from subagent work.
* Main-session rows keep NULL for both columns.
*
* Also threads the same columns through pending_messages so the label survives
* between enqueue (hook) and SDK-agent processing (which re-inserts into observations).
*/
private addObservationSubagentColumns(): void {
const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(27) as SchemaVersion | undefined;

const obsCols = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
const obsHasAgentType = obsCols.some(c => c.name === 'agent_type');
const obsHasAgentId = obsCols.some(c => c.name === 'agent_id');

if (!obsHasAgentType) {
this.db.run('ALTER TABLE observations ADD COLUMN agent_type TEXT');
logger.debug('DB', 'Added agent_type column to observations table');
}
if (!obsHasAgentId) {
this.db.run('ALTER TABLE observations ADD COLUMN agent_id TEXT');
logger.debug('DB', 'Added agent_id column to observations table');
}
this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_type ON observations(agent_type)');
this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_agent_id ON observations(agent_id)');

const pendingCols = this.db.query('PRAGMA table_info(pending_messages)').all() as TableColumnInfo[];
if (pendingCols.length > 0) {
const pendingHasAgentType = pendingCols.some(c => c.name === 'agent_type');
const pendingHasAgentId = pendingCols.some(c => c.name === 'agent_id');
if (!pendingHasAgentType) {
this.db.run('ALTER TABLE pending_messages ADD COLUMN agent_type TEXT');
logger.debug('DB', 'Added agent_type column to pending_messages table');
}
if (!pendingHasAgentId) {
this.db.run('ALTER TABLE pending_messages ADD COLUMN agent_id TEXT');
logger.debug('DB', 'Added agent_id column to pending_messages table');
}
}

if (!applied) {
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(27, new Date().toISOString());
}
}
}

@@ -18,6 +18,7 @@ export function parseFileList(value: string | null | undefined): string[] {
const parsed = JSON.parse(value);
return Array.isArray(parsed) ? parsed : [String(parsed)];
} catch {
// [ANTI-PATTERN IGNORED]: legacy bare-path strings are expected input, not errors
return [value];
}
}

@@ -15,6 +15,8 @@ const DEDUP_WINDOW_MS = 30_000;
/**
* Compute a short content hash for deduplication.
* Uses (memory_session_id, title, narrative) as the semantic identity of an observation.
* Subagent fields (agent_type, agent_id) are intentionally excluded so the same work
* described once by a subagent and once by its parent deduplicates across contexts.
*/
export function computeObservationContentHash(
memorySessionId: string,
@@ -75,8 +77,8 @@ export function storeObservation(
const stmt = db.prepare(`
INSERT INTO observations
(memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
`);

const result = stmt.run(
@@ -92,6 +94,8 @@ export function storeObservation(
JSON.stringify(observation.files_modified),
promptNumber || null,
discoveryTokens,
observation.agent_type ?? null,
observation.agent_id ?? null,
contentHash,
timestampIso,
timestampEpoch

@@ -16,6 +16,9 @@ export interface ObservationInput {
concepts: string[];
files_read: string[];
files_modified: string[];
// Claude Code subagent identity — NULL for main-session rows.
agent_type?: string | null;
agent_id?: string | null;
}
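The dedup identity that computeObservationContentHash's comment describes can be sketched as follows. The sha256 algorithm, the NUL field separator, and the 16-hex-character truncation are assumptions for illustration, not a copy of the real implementation:

```typescript
import { createHash } from "node:crypto";

// Sketch of the dedup identity: only (memory_session_id, title, narrative)
// feed the hash. agent_type/agent_id are deliberately left out, so the same
// work reported once by a subagent and once by its parent produces the same
// hash and deduplicates within the dedup window.
function observationContentHash(
  memorySessionId: string,
  title: string,
  narrative: string
): string {
  return createHash("sha256")
    .update([memorySessionId, title, narrative].join("\u0000"))
    .digest("hex")
    .slice(0, 16); // short hash; assumed length
}
```

Because the hash is a pure function of those three fields, two inserts with the same triple collide regardless of which agent produced them.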

/**

@@ -111,8 +111,9 @@ export function getTimelineAroundObservation(

startEpoch = beforeRecords.length > 0 ? beforeRecords[beforeRecords.length - 1].created_at_epoch : anchorEpoch;
endEpoch = afterRecords.length > 0 ? afterRecords[afterRecords.length - 1].created_at_epoch : anchorEpoch;
} catch (err: any) {
logger.error('DB', 'Error getting boundary observations', undefined, { error: err, project });
} catch (err) {
const normalizedError = err instanceof Error ? err : new Error(String(err));
logger.error('DB', 'Error getting boundary observations', { project }, normalizedError);
return { observations: [], sessions: [], prompts: [] };
}
} else {
@@ -143,8 +144,9 @@ export function getTimelineAroundObservation(

startEpoch = beforeRecords.length > 0 ? beforeRecords[beforeRecords.length - 1].created_at_epoch : anchorEpoch;
endEpoch = afterRecords.length > 0 ? afterRecords[afterRecords.length - 1].created_at_epoch : anchorEpoch;
} catch (err: any) {
logger.error('DB', 'Error getting boundary timestamps', undefined, { error: err, project });
} catch (err) {
const normalizedError = err instanceof Error ? err : new Error(String(err));
logger.error('DB', 'Error getting boundary timestamps', { project }, normalizedError);
return { observations: [], sessions: [], prompts: [] };
}
}
||||
|
||||
@@ -68,8 +68,8 @@ export function storeObservationsAndMarkComplete(
|
||||
const obsStmt = db.prepare(`
|
||||
INSERT INTO observations
|
||||
(memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
|
||||
files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
`);
|
||||
|
||||
for (const observation of observations) {
|
||||
@@ -93,6 +93,8 @@ export function storeObservationsAndMarkComplete(
|
||||
JSON.stringify(observation.files_modified),
|
||||
promptNumber || null,
|
||||
discoveryTokens,
|
||||
observation.agent_type ?? null,
|
||||
observation.agent_id ?? null,
|
||||
contentHash,
|
||||
timestampIso,
|
||||
timestampEpoch
|
||||
@@ -187,8 +189,8 @@ export function storeObservations(
|
||||
const obsStmt = db.prepare(`
|
||||
INSERT INTO observations
|
||||
(memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
|
||||
files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
files_read, files_modified, prompt_number, discovery_tokens, agent_type, agent_id, content_hash, created_at, created_at_epoch)
|
||||
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
|
||||
`);
|
||||
|
||||
for (const observation of observations) {
|
||||
@@ -212,6 +214,8 @@ export function storeObservations(
|
||||
JSON.stringify(observation.files_modified),
|
||||
promptNumber || null,
|
||||
discoveryTokens,
|
||||
observation.agent_type ?? null,
|
||||
observation.agent_id ?? null,
|
||||
contentHash,
|
||||
timestampIso,
|
||||
timestampEpoch
|
||||
|
||||
@@ -78,6 +78,11 @@ export class ChromaMcpManager {
|
||||
await this.connecting;
|
||||
} catch (error) {
|
||||
this.lastConnectionFailureTimestamp = Date.now();
|
||||
if (error instanceof Error) {
|
||||
logger.error('CHROMA_MCP', 'Connection attempt failed', {}, error);
|
||||
} else {
|
||||
logger.error('CHROMA_MCP', 'Connection attempt failed with non-Error value', { error: String(error) });
|
||||
}
|
||||
throw error;
|
||||
} finally {
|
||||
this.connecting = null;
|
||||
@@ -307,9 +312,15 @@ export class ChromaMcpManager {
|
||||
// Try JSON parse first; if it fails, return the raw text for non-error responses.
|
||||
try {
|
||||
return JSON.parse(firstTextContent.text);
|
||||
} catch {
|
||||
} catch (parseError: unknown) {
|
||||
// Plain text response (e.g. "Successfully created collection cm__foo")
// Return null for void-like success messages, callers don't need the text
if (parseError instanceof Error) {
logger.debug('CHROMA_MCP', 'Non-JSON response from tool, returning null', {
toolName,
textPreview: firstTextContent.text.slice(0, 100)
});
}
return null;
}
}
@@ -322,7 +333,10 @@ export class ChromaMcpManager {
try {
await this.callTool('chroma_list_collections', { limit: 1 });
return true;
} catch {
} catch (error) {
logger.warn('CHROMA_MCP', 'Health check failed', {
error: error instanceof Error ? error.message : String(error)
});
return false;
}
}
@@ -342,7 +356,11 @@ export class ChromaMcpManager {
try {
await this.client.close();
} catch (error) {
logger.debug('CHROMA_MCP', 'Error during client close (subprocess may already be dead)', {}, error as Error);
if (error instanceof Error) {
logger.debug('CHROMA_MCP', 'Error during client close (subprocess may already be dead)', {}, error);
} else {
logger.debug('CHROMA_MCP', 'Error during client close (subprocess may already be dead)', { error: String(error) });
}
}

getSupervisor().unregisterProcess(CHROMA_SUPERVISOR_ID);
@@ -394,7 +412,10 @@ export class ChromaMcpManager {
'uvx --with certifi python -c "import certifi; print(certifi.where())"',
{ encoding: 'utf8', stdio: ['pipe', 'pipe', 'pipe'], timeout: 10000 }
).trim();
} catch {
} catch (error) {
logger.debug('CHROMA_MCP', 'Failed to resolve certifi path via uvx', {
error: error instanceof Error ? error.message : String(error)
});
return undefined;
}

@@ -408,7 +429,10 @@ export class ChromaMcpManager {
'security find-certificate -a -c "Zscaler" -p /Library/Keychains/System.keychain',
{ encoding: 'utf8', stdio: ['pipe', 'pipe', 'pipe'], timeout: 5000 }
);
} catch {
} catch (error) {
logger.debug('CHROMA_MCP', 'No Zscaler certificate found in system keychain', {
error: error instanceof Error ? error.message : String(error)
});
return undefined;
}

+123
-70
@@ -563,14 +563,53 @@ export class ChromaSync {
const db = new SessionStore();

try {
// Build exclusion list for observations
// Filter to validated positive integers before interpolating into SQL
const existingObsIds = Array.from(existing.observations).filter(id => Number.isInteger(id) && id > 0);
await this.runBackfillPipeline(db, backfillProject, existing);
} catch (error) {
logger.error('CHROMA_SYNC', 'Backfill failed', { project: backfillProject }, error instanceof Error ? error : new Error(String(error)));
throw new Error(`Backfill failed: ${error instanceof Error ? error.message : String(error)}`);
} finally {
db.close();
}
}

private async runBackfillPipeline(
db: SessionStore,
backfillProject: string,
existing: { observations: Set<number>; summaries: Set<number>; prompts: Set<number> }
): Promise<void> {
const allDocs = await this.backfillObservations(db, backfillProject, existing.observations);
const summaryDocs = await this.backfillSummaries(db, backfillProject, existing.summaries);
const promptDocs = await this.backfillPrompts(db, backfillProject, existing.prompts);

logger.info('CHROMA_SYNC', 'Smart backfill complete', {
project: backfillProject,
synced: {
observationDocs: allDocs.length,
summaryDocs: summaryDocs.length,
promptDocs: promptDocs.length
},
skipped: {
observations: existing.observations.size,
summaries: existing.summaries.size,
prompts: existing.prompts.size
}
});
}

/**
* Backfill observations missing from Chroma for a given project.
* Returns the formatted documents that were synced.
*/
private async backfillObservations(
db: SessionStore,
backfillProject: string,
existingObservationIds: Set<number>
): Promise<ChromaDocument[]> {
const existingObsIds = Array.from(existingObservationIds).filter(id => Number.isInteger(id) && id > 0);
const obsExclusionClause = existingObsIds.length > 0
? `AND id NOT IN (${existingObsIds.join(',')})`
: '';

// Get only observations missing from Chroma
const observations = db.db.prepare(`
SELECT * FROM observations
WHERE project = ? ${obsExclusionClause}
@@ -584,17 +623,15 @@ export class ChromaSync {
logger.info('CHROMA_SYNC', 'Backfilling observations', {
project: backfillProject,
missing: observations.length,
existing: existing.observations.size,
existing: existingObservationIds.size,
total: totalObsCount.count
});

// Format all observation documents
const allDocs: ChromaDocument[] = [];
for (const obs of observations) {
allDocs.push(...this.formatObservationDocs(obs));
}

// Sync in batches
for (let i = 0; i < allDocs.length; i += this.BATCH_SIZE) {
const batch = allDocs.slice(i, i + this.BATCH_SIZE);
await this.addDocuments(batch);
@@ -605,13 +642,23 @@ export class ChromaSync {
});
}

// Build exclusion list for summaries
const existingSummaryIds = Array.from(existing.summaries).filter(id => Number.isInteger(id) && id > 0);
return allDocs;
}

/**
* Backfill summaries missing from Chroma for a given project.
* Returns the formatted documents that were synced.
*/
private async backfillSummaries(
db: SessionStore,
backfillProject: string,
existingSummaryIdSet: Set<number>
): Promise<ChromaDocument[]> {
const existingSummaryIds = Array.from(existingSummaryIdSet).filter(id => Number.isInteger(id) && id > 0);
const summaryExclusionClause = existingSummaryIds.length > 0
? `AND id NOT IN (${existingSummaryIds.join(',')})`
: '';

// Get only summaries missing from Chroma
const summaries = db.db.prepare(`
SELECT * FROM session_summaries
WHERE project = ? ${summaryExclusionClause}
@@ -625,17 +672,15 @@ export class ChromaSync {
logger.info('CHROMA_SYNC', 'Backfilling summaries', {
project: backfillProject,
missing: summaries.length,
existing: existing.summaries.size,
existing: existingSummaryIdSet.size,
total: totalSummaryCount.count
});

// Format all summary documents
const summaryDocs: ChromaDocument[] = [];
for (const summary of summaries) {
summaryDocs.push(...this.formatSummaryDocs(summary));
}

// Sync in batches
for (let i = 0; i < summaryDocs.length; i += this.BATCH_SIZE) {
const batch = summaryDocs.slice(i, i + this.BATCH_SIZE);
await this.addDocuments(batch);
@@ -646,13 +691,23 @@ export class ChromaSync {
});
}

// Build exclusion list for prompts
const existingPromptIds = Array.from(existing.prompts).filter(id => Number.isInteger(id) && id > 0);
return summaryDocs;
}

/**
* Backfill user prompts missing from Chroma for a given project.
* Returns the formatted documents that were synced.
*/
private async backfillPrompts(
db: SessionStore,
backfillProject: string,
existingPromptIdSet: Set<number>
): Promise<ChromaDocument[]> {
const existingPromptIds = Array.from(existingPromptIdSet).filter(id => Number.isInteger(id) && id > 0);
const promptExclusionClause = existingPromptIds.length > 0
? `AND up.id NOT IN (${existingPromptIds.join(',')})`
: '';

// Get only user prompts missing from Chroma
const prompts = db.db.prepare(`
SELECT
up.*,
@@ -674,17 +729,15 @@ export class ChromaSync {
logger.info('CHROMA_SYNC', 'Backfilling user prompts', {
project: backfillProject,
missing: prompts.length,
existing: existing.prompts.size,
existing: existingPromptIdSet.size,
total: totalPromptCount.count
});

// Format all prompt documents
const promptDocs: ChromaDocument[] = [];
for (const prompt of prompts) {
promptDocs.push(this.formatUserPromptDoc(prompt));
}

// Sync in batches
for (let i = 0; i < promptDocs.length; i += this.BATCH_SIZE) {
const batch = promptDocs.slice(i, i + this.BATCH_SIZE);
await this.addDocuments(batch);
@@ -695,26 +748,7 @@ export class ChromaSync {
});
}

logger.info('CHROMA_SYNC', 'Smart backfill complete', {
project: backfillProject,
synced: {
observationDocs: allDocs.length,
summaryDocs: summaryDocs.length,
promptDocs: promptDocs.length
},
skipped: {
observations: existing.observations.size,
summaries: existing.summaries.size,
prompts: existing.prompts.size
}
});

} catch (error) {
logger.error('CHROMA_SYNC', 'Backfill failed', { project: backfillProject }, error as Error);
throw new Error(`Backfill failed: ${error instanceof Error ? error.message : String(error)}`);
} finally {
db.close();
}
return promptDocs;
}

/**
@@ -728,27 +762,58 @@ export class ChromaSync {
): Promise<{ ids: number[]; distances: number[]; metadatas: any[] }> {
await this.ensureCollectionExists();

let results: any;
try {
const chromaMcp = ChromaMcpManager.getInstance();
const results = await chromaMcp.callTool('chroma_query_documents', {
results = await chromaMcp.callTool('chroma_query_documents', {
collection_name: this.collectionName,
query_texts: [query],
n_results: limit,
...(whereFilter && { where: whereFilter }),
include: ['documents', 'metadatas', 'distances']
}) as any;
});
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);

// chroma-mcp surfaces connection failures as Error messages with no structured
// error codes or typed error classes. String matching is the only way to distinguish
// transient connection errors (which need collection state reset) from semantic query errors.
const isConnectionError =
errorMessage.includes('ECONNREFUSED') || // [ANTI-PATTERN IGNORED]: chroma-mcp has no typed error classes, string matching is the only option
errorMessage.includes('ENOTFOUND') || // [ANTI-PATTERN IGNORED]: chroma-mcp has no typed error classes, string matching is the only option
errorMessage.includes('fetch failed') || // [ANTI-PATTERN IGNORED]: chroma-mcp has no typed error classes, string matching is the only option
errorMessage.includes('subprocess closed') || // [ANTI-PATTERN IGNORED]: chroma-mcp has no typed error classes, string matching is the only option
errorMessage.includes('timed out'); // [ANTI-PATTERN IGNORED]: chroma-mcp has no typed error classes, string matching is the only option

if (isConnectionError) {
// Reset collection state so next call attempts reconnect
this.collectionCreated = false;
logger.error('CHROMA_SYNC', 'Connection lost during query',
{ project: this.project, query }, error as Error);
throw new Error(`Chroma query failed - connection lost: ${errorMessage}`);
}

logger.error('CHROMA_SYNC', 'Query failed', { project: this.project, query }, error as Error);
throw error;
}

return this.deduplicateQueryResults(results);
}

/**
* Deduplicate Chroma query results by SQLite ID.
* Multiple Chroma docs map to the same SQLite ID (one per field).
* Keeps the first (best-ranked) distance and metadata per SQLite ID.
*/
private deduplicateQueryResults(results: any): { ids: number[]; distances: number[]; metadatas: any[] } {
// chroma_query_documents returns nested arrays (one per query text)
// We always pass a single query text, so we access [0]
const ids: number[] = [];
const seen = new Set<number>();
const seen = new Set<string>();
const docIds = results?.ids?.[0] || [];
const rawMetadatas = results?.metadatas?.[0] || [];
const rawDistances = results?.distances?.[0] || [];

// Build deduplicated arrays that stay index-aligned:
// Multiple Chroma docs map to the same SQLite ID (one per field).
// Keep the first (best-ranked) distance and metadata per SQLite ID.
const metadatas: any[] = [];
const distances: number[] = [];

@@ -763,16 +828,22 @@ export class ChromaSync {
const promptMatch = docId.match(/prompt_(\d+)/);

let sqliteId: number | null = null;
let entityType: string | null = null;
if (obsMatch) {
sqliteId = parseInt(obsMatch[1], 10);
entityType = 'observation';
} else if (summaryMatch) {
sqliteId = parseInt(summaryMatch[1], 10);
entityType = 'session_summary';
} else if (promptMatch) {
sqliteId = parseInt(promptMatch[1], 10);
entityType = 'user_prompt';
}

if (sqliteId !== null && !seen.has(sqliteId)) {
seen.add(sqliteId);
if (sqliteId !== null && entityType) {
const dedupeKey = `${entityType}:${sqliteId}`;
if (seen.has(dedupeKey)) continue;
seen.add(dedupeKey);
ids.push(sqliteId);
metadatas.push(rawMetadatas[i] ?? null);
distances.push(rawDistances[i] ?? 0);
@@ -780,28 +851,6 @@ export class ChromaSync {
}

return { ids, distances, metadatas };
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);

// Check for connection errors
const isConnectionError =
errorMessage.includes('ECONNREFUSED') ||
errorMessage.includes('ENOTFOUND') ||
errorMessage.includes('fetch failed') ||
errorMessage.includes('subprocess closed') ||
errorMessage.includes('timed out');

if (isConnectionError) {
// Reset collection state so next call attempts reconnect
this.collectionCreated = false;
logger.error('CHROMA_SYNC', 'Connection lost during query',
{ project: this.project, query }, error as Error);
throw new Error(`Chroma query failed - connection lost: ${errorMessage}`);
}

logger.error('CHROMA_SYNC', 'Query failed', { project: this.project, query }, error as Error);
throw error;
}
}

/**
@@ -826,7 +875,11 @@ export class ChromaSync {
try {
await sync.ensureBackfilled(project);
} catch (error) {
logger.error('CHROMA_SYNC', `Backfill failed for project: ${project}`, {}, error as Error);
if (error instanceof Error) {
logger.error('CHROMA_SYNC', `Backfill failed for project: ${project}`, {}, error);
} else {
logger.error('CHROMA_SYNC', `Backfill failed for project: ${project}`, { error: String(error) });
}
// Continue to next project — don't let one failure stop others
}
}

@@ -1,3 +1,4 @@
import { logger } from '../../utils/logger.js';
import type { FieldSpec, MatchRule, TranscriptSchema, WatchTarget } from './types.js';

interface ResolveContext {
@@ -142,7 +143,8 @@ export function matchesRule(
try {
const regex = new RegExp(rule.regex);
return regex.test(String(value ?? ''));
} catch {
} catch (error: unknown) {
logger.debug('WORKER', 'Invalid regex in match rule', { regex: rule.regex }, error instanceof Error ? error : undefined);
return false;
}
}

@@ -277,7 +277,8 @@ export class TranscriptEventProcessor {
if (!(trimmed.startsWith('{') || trimmed.startsWith('['))) return value;
try {
return JSON.parse(trimmed);
} catch {
} catch (error: unknown) {
logger.debug('WORKER', 'Failed to parse JSON string', { length: trimmed.length }, error instanceof Error ? error : undefined);
return value;
}
}
@@ -321,18 +322,19 @@ export class TranscriptEventProcessor {
if (!workerReady) return;

const lastAssistantMessage = session.lastAssistantMessage ?? '';
const requestBody = JSON.stringify({
contentSessionId: session.sessionId,
last_assistant_message: lastAssistantMessage,
platformSource: session.platformSource
});

try {
await workerHttpRequest('/api/sessions/summarize', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({
contentSessionId: session.sessionId,
last_assistant_message: lastAssistantMessage,
platformSource: session.platformSource
})
body: requestBody
});
} catch (error) {
} catch (error: unknown) {
logger.warn('TRANSCRIPT', 'Summary request failed', {
error: error instanceof Error ? error.message : String(error)
});
@@ -352,22 +354,25 @@ export class TranscriptEventProcessor {
const context = getProjectContext(cwd);
const projectsParam = context.allProjects.join(',');

const contextUrl = `/api/context/inject?projects=${encodeURIComponent(projectsParam)}&platformSource=${encodeURIComponent(session.platformSource)}`;
const agentsPath = expandHomePath(watch.context.path ?? `${cwd}/AGENTS.md`);

let response: Awaited<ReturnType<typeof workerHttpRequest>>;
try {
const response = await workerHttpRequest(
`/api/context/inject?projects=${encodeURIComponent(projectsParam)}&platformSource=${encodeURIComponent(session.platformSource)}`
);
response = await workerHttpRequest(contextUrl);
} catch (error: unknown) {
logger.warn('TRANSCRIPT', 'Failed to fetch AGENTS.md context', {
error: error instanceof Error ? error.message : String(error)
});
return;
}

if (!response.ok) return;

const content = (await response.text()).trim();
if (!content) return;

const agentsPath = expandHomePath(watch.context.path ?? `${cwd}/AGENTS.md`);
writeAgentsMd(agentsPath, content);
logger.debug('TRANSCRIPT', 'Updated AGENTS.md context', { agentsPath, watch: watch.name });
} catch (error) {
logger.warn('TRANSCRIPT', 'Failed to update AGENTS.md context', {
error: error instanceof Error ? error.message : String(error)
});
}
}
}

@@ -43,7 +43,8 @@ class FileTailer {
let size = 0;
try {
size = statSync(this.filePath).size;
} catch {
} catch (error: unknown) {
logger.debug('WORKER', 'Failed to stat transcript file', { file: this.filePath }, error instanceof Error ? error : undefined);
return;
}

@@ -152,7 +153,8 @@ export class TranscriptWatcher {
return globSync(pattern, { nodir: true, absolute: true });
}
return [inputPath];
} catch {
} catch (error: unknown) {
logger.debug('WORKER', 'Failed to stat watch path', { path: inputPath }, error instanceof Error ? error : undefined);
return [];
}
}
@@ -180,7 +182,8 @@ export class TranscriptWatcher {
if (offset === 0 && watch.startAtEnd && initialDiscovery) {
try {
offset = statSync(filePath).size;
} catch {
} catch (error: unknown) {
logger.debug('WORKER', 'Failed to stat file for startAtEnd offset', { file: filePath }, error instanceof Error ? error : undefined);
offset = 0;
}
}
@@ -216,11 +219,19 @@ export class TranscriptWatcher {
try {
const entry = JSON.parse(line);
await this.processor.processEntry(entry, watch, schema, sessionIdOverride ?? undefined);
} catch (error) {
} catch (error: unknown) {
if (error instanceof Error) {
logger.debug('TRANSCRIPT', 'Failed to parse transcript line', {
watch: watch.name,
file: basename(filePath)
}, error as Error);
}, error);
} else {
logger.warn('TRANSCRIPT', 'Failed to parse transcript line (non-Error thrown)', {
watch: watch.name,
file: basename(filePath),
error: String(error)
});
}
}
}

+116
-45
@@ -289,11 +289,16 @@ export class WorkerService {
await Promise.race([this.initializationComplete, timeoutPromise]);
next();
} catch (error) {
logger.error('HTTP', `Request to ${req.method} ${req.path} rejected — DB not initialized`, {}, error as Error);
if (error instanceof Error) {
logger.error('WORKER', `Request to ${req.method} ${req.path} rejected — DB not initialized`, {}, error);
} else {
logger.error('WORKER', `Request to ${req.method} ${req.path} rejected — DB not initialized with non-Error`, {}, new Error(String(error)));
}
res.status(503).json({
error: 'Service initializing',
message: 'Database is still initializing, please retry'
});
return;
}
});

@@ -372,8 +377,18 @@ export class WorkerService {
// The worker daemon is spawned with cwd=marketplace-plugin-dir (not a git
// repo), so we can't seed adoption with process.cwd(). Instead, discover
// parent repos from recorded pending_messages.cwd values.
let adoptions: Awaited<ReturnType<typeof adoptMergedWorktreesForAllKnownRepos>> | null = null;
try {
const adoptions = await adoptMergedWorktreesForAllKnownRepos({});
adoptions = await adoptMergedWorktreesForAllKnownRepos({});
} catch (err) {
// [ANTI-PATTERN IGNORED]: Worktree adoption is best-effort on startup; failure must not block worker initialization
if (err instanceof Error) {
logger.error('WORKER', 'Worktree adoption failed (non-fatal)', {}, err);
} else {
logger.error('WORKER', 'Worktree adoption failed (non-fatal) with non-Error', {}, new Error(String(err)));
}
}
if (adoptions) {
for (const adoption of adoptions) {
if (adoption.adoptedObservations > 0 || adoption.adoptedSummaries > 0 || adoption.chromaUpdates > 0) {
logger.info('SYSTEM', 'Merged worktrees adopted on startup', adoption);
@@ -385,8 +400,6 @@ export class WorkerService {
});
}
}
} catch (err) {
logger.error('SYSTEM', 'Worktree adoption failed (non-fatal)', {}, err as Error);
}

// Initialize ChromaMcpManager only if Chroma is enabled
@@ -493,8 +506,11 @@ export class WorkerService {
});
try {
await transport.close();
} catch {
// Best effort: the supervisor handles later process cleanup for survivors.
} catch (transportCloseError) {
// [ANTI-PATTERN IGNORED]: transport.close() is best-effort cleanup after MCP connection already failed; supervisor handles orphan processes
logger.debug('WORKER', 'transport.close() failed during MCP cleanup', {
error: transportCloseError instanceof Error ? transportCloseError.message : String(transportCloseError)
});
}
logger.info('WORKER', 'Bundled MCP server remains available for external stdio clients', {
path: mcpServerPath
@@ -534,7 +550,12 @@ export class WorkerService {
logger.info('SYSTEM', `Reaped ${reaped} stale sessions`);
}
} catch (e) {
logger.error('SYSTEM', 'Stale session reaper error', { error: e instanceof Error ? e.message : String(e) });
// [ANTI-PATTERN IGNORED]: setInterval callback cannot throw; reaper retries on next tick (every 2 min)
if (e instanceof Error) {
logger.error('WORKER', 'Stale session reaper error', {}, e);
} else {
logger.error('WORKER', 'Stale session reaper error with non-Error', {}, new Error(String(e)));
}
}
}, 2 * 60 * 1000);

@@ -571,7 +592,7 @@ export class WorkerService {
const configPath = settings.CLAUDE_MEM_TRANSCRIPTS_CONFIG_PATH || DEFAULT_CONFIG_PATH;
const resolvedConfigPath = expandHomePath(configPath);

try {
// Ensure sample config exists (setup, outside try)
if (!existsSync(resolvedConfigPath)) {
writeSampleConfig(configPath);
logger.info('TRANSCRIPT', 'Created default transcript watch config', {
@@ -582,20 +603,29 @@ export class WorkerService {
const transcriptConfig = loadTranscriptWatchConfig(configPath);
const statePath = expandHomePath(transcriptConfig.stateFile ?? DEFAULT_STATE_PATH);

try {
this.transcriptWatcher = new TranscriptWatcher(transcriptConfig, statePath);
await this.transcriptWatcher.start();
} catch (error) {
this.transcriptWatcher?.stop();
this.transcriptWatcher = null;
if (error instanceof Error) {
logger.error('WORKER', 'Failed to start transcript watcher (continuing without Codex ingestion)', {
configPath: resolvedConfigPath
}, error);
} else {
logger.error('WORKER', 'Failed to start transcript watcher with non-Error (continuing without Codex ingestion)', {
configPath: resolvedConfigPath
}, new Error(String(error)));
}
// [ANTI-PATTERN IGNORED]: Transcript watcher is intentionally non-fatal so Claude hooks remain usable even if transcript ingestion is misconfigured
return;
}
logger.info('TRANSCRIPT', 'Transcript watcher started', {
configPath: resolvedConfigPath,
statePath,
watches: transcriptConfig.watches.length
});
} catch (error) {
this.transcriptWatcher?.stop();
this.transcriptWatcher = null;
logger.error('TRANSCRIPT', 'Failed to start transcript watcher (continuing without Codex ingestion)', {
configPath: resolvedConfigPath
}, error as Error);
}
}

/**
@@ -693,7 +723,8 @@ export class WorkerService {
}

// Detect stale resume failures - SDK session context was lost
if ((errorMessage.includes('aborted by user') || errorMessage.includes('No conversation found'))
const staleResumePatterns = ['aborted by user', 'No conversation found'];
if (staleResumePatterns.some(p => errorMessage.includes(p))
&& session.memorySessionId) {
logger.warn('SDK', 'Detected stale resume failure, clearing memorySessionId for fresh start', {
sessionId: session.sessionDbId,
@@ -798,16 +829,30 @@ export class WorkerService {
/**
* Match errors that indicate the Claude Code process/session is gone (resume impossible).
* Used to trigger graceful fallback instead of leaving pending messages stuck forever.
*
* These patterns come from the Claude SDK's ProcessTransport and related internals.
* The SDK does not export typed error classes, so string matching on normalized
* messages is the only reliable detection method. Each pattern corresponds to a
* specific SDK failure mode:
* - 'process aborted by user': user cancelled the Claude Code session
* - 'processtransport': transport layer disconnected
* - 'not ready for writing': stdio pipe to Claude process is closed
* - 'session generator failed': wrapper error from our own agent layer
* - 'claude code process': process exited or was killed
*/
private static readonly SESSION_TERMINATED_PATTERNS = [
'process aborted by user',
'processtransport',
'not ready for writing',
'session generator failed',
'claude code process',
] as const;

private isSessionTerminatedError(error: unknown): boolean {
const msg = error instanceof Error ? error.message : String(error);
const normalized = msg.toLowerCase();
return (
normalized.includes('process aborted by user') ||
normalized.includes('processtransport') ||
normalized.includes('not ready for writing') ||
normalized.includes('session generator failed') ||
normalized.includes('claude code process')
return WorkerService.SESSION_TERMINATED_PATTERNS.some(
pattern => normalized.includes(pattern)
);
}

@@ -835,10 +880,15 @@ export class WorkerService {
await this.geminiAgent.startSession(session, this);
return;
} catch (e) {
logger.warn('SDK', 'Fallback Gemini failed, trying OpenRouter', {
// [ANTI-PATTERN IGNORED]: Fallback chain by design — Gemini failure falls through to OpenRouter attempt
if (e instanceof Error) {
logger.warn('WORKER', 'Fallback Gemini failed, trying OpenRouter', {
sessionId: sessionDbId,
error: e instanceof Error ? e.message : String(e)
});
logger.error('WORKER', 'Gemini fallback error detail', { sessionId: sessionDbId }, e);
} else {
logger.error('WORKER', 'Gemini fallback failed with non-Error', { sessionId: sessionDbId }, new Error(String(e)));
}
}
}

@@ -847,10 +897,12 @@ export class WorkerService {
await this.openRouterAgent.startSession(session, this);
return;
} catch (e) {
logger.warn('SDK', 'Fallback OpenRouter failed', {
sessionId: sessionDbId,
error: e instanceof Error ? e.message : String(e)
});
// [ANTI-PATTERN IGNORED]: Last fallback in chain — failure falls through to message abandonment, which is the designed terminal behavior
if (e instanceof Error) {
logger.error('WORKER', 'Fallback OpenRouter failed, will abandon messages', { sessionId: sessionDbId }, e);
} else {
logger.error('WORKER', 'Fallback OpenRouter failed with non-Error, will abandon messages', { sessionId: sessionDbId }, new Error(String(e)));
}
}
}

@@ -909,7 +961,6 @@ export class WorkerService {
const STALE_SESSION_THRESHOLD_MS = 6 * 60 * 60 * 1000;
const staleThreshold = Date.now() - STALE_SESSION_THRESHOLD_MS;

try {
const staleSessionIds = sessionStore.db.prepare(`
SELECT id FROM sdk_sessions
WHERE status = 'active' AND started_at_epoch < ?
@@ -918,28 +969,42 @@ export class WorkerService {
if (staleSessionIds.length > 0) {
const ids = staleSessionIds.map(r => r.id);
const placeholders = ids.map(() => '?').join(',');
const now = Date.now();

try {
sessionStore.db.prepare(`
UPDATE sdk_sessions
SET status = 'failed', completed_at_epoch = ?
WHERE id IN (${placeholders})
`).run(Date.now(), ...ids);

`).run(now, ...ids);
logger.info('SYSTEM', `Marked ${ids.length} stale sessions as failed`);
} catch (error) {
// [ANTI-PATTERN IGNORED]: Stale session cleanup is best-effort; pending queue processing below must still proceed
if (error instanceof Error) {
logger.error('WORKER', 'Failed to mark stale sessions as failed', { staleCount: ids.length }, error);
} else {
logger.error('WORKER', 'Failed to mark stale sessions as failed with non-Error', { staleCount: ids.length }, new Error(String(error)));
}
}

try {
const msgResult = sessionStore.db.prepare(`
UPDATE pending_messages
SET status = 'failed', failed_at_epoch = ?
WHERE status = 'pending'
AND session_db_id IN (${placeholders})
`).run(Date.now(), ...ids);

`).run(now, ...ids);
if (msgResult.changes > 0) {
logger.info('SYSTEM', `Marked ${msgResult.changes} pending messages from stale sessions as failed`);
}
}
} catch (error) {
logger.error('SYSTEM', 'Failed to clean up stale sessions', {}, error as Error);
// [ANTI-PATTERN IGNORED]: Pending message cleanup is best-effort; queue processing below must still proceed
if (error instanceof Error) {
logger.error('WORKER', 'Failed to clean up stale pending messages', { staleCount: ids.length }, error);
} else {
logger.error('WORKER', 'Failed to clean up stale pending messages with non-Error', { staleCount: ids.length }, new Error(String(error)));
}
}
}

const orphanedSessionIds = pendingStore.getSessionsWithPendingMessages();
|
||||
@@ -958,28 +1023,34 @@ export class WorkerService {
for (const sessionDbId of orphanedSessionIds) {
if (result.sessionsStarted >= sessionLimit) break;

try {
const existingSession = this.sessionManager.getSession(sessionDbId);
if (existingSession?.generatorPromise) {
result.sessionsSkipped++;
continue;
}

try {
const session = this.sessionManager.initializeSession(sessionDbId);
logger.info('SYSTEM', `Starting processor for session ${sessionDbId}`, {
project: session.project,
pendingCount: pendingStore.getPendingCount(sessionDbId)
});

this.startSessionProcessor(session, 'startup-recovery');
result.sessionsStarted++;
result.startedSessionIds.push(sessionDbId);
} catch (error) {
if (error instanceof Error) {
logger.error('WORKER', `Failed to initialize/start session ${sessionDbId}`, { sessionDbId }, error);
} else {
logger.error('WORKER', `Failed to initialize/start session ${sessionDbId} with non-Error`, { sessionDbId }, new Error(String(error)));
}
result.sessionsSkipped++;
// [ANTI-PATTERN IGNORED]: Per-session failure must not abort the loop; other sessions may still be recoverable
continue;
}

logger.info('SYSTEM', `Starting processor for session ${sessionDbId}`, {
project: this.sessionManager.getSession(sessionDbId)?.project,
pendingCount: pendingStore.getPendingCount(sessionDbId)
});

await new Promise(resolve => setTimeout(resolve, 100));
} catch (error) {
logger.error('SYSTEM', `Failed to process session ${sessionDbId}`, {}, error as Error);
result.sessionsSkipped++;
}
}

return result;

@@ -53,7 +53,12 @@ function shouldSkipSpawnOnWindows(): boolean {
try {
const modifiedTimeMs = statSync(lockPath).mtimeMs;
return Date.now() - modifiedTimeMs < WINDOWS_SPAWN_COOLDOWN_MS;
} catch {
} catch (error) {
if (error instanceof Error) {
logger.debug('SYSTEM', 'Could not stat worker spawn lock file', {}, error);
} else {
logger.debug('SYSTEM', 'Could not stat worker spawn lock file', { error: String(error) });
}
return false;
}
}

@@ -46,6 +46,14 @@ export interface ActiveSession {
// Track whether the most recent storage operation persisted a summary record.
// Used by the status endpoint so the Stop hook can detect silent summary loss (#1633).
lastSummaryStored?: boolean;
// Circuit breaker: track consecutive summary failures to prevent infinite retry loops (#1633).
// When this reaches MAX_CONSECUTIVE_SUMMARY_FAILURES, further summarize requests are skipped.
consecutiveSummaryFailures: number;
// Subagent identity carried forward from the most recent claimed pending message.
// When observations are parsed and stored, these fields label the resulting rows
// so subagent work is attributable. NULL / undefined means the batch came from the main session.
pendingAgentId?: string | null;
pendingAgentType?: string | null;
}

export interface PendingMessage {
@@ -56,6 +64,9 @@ export interface PendingMessage {
prompt_number?: number;
cwd?: string;
last_assistant_message?: string;
// Claude Code subagent identity — present only when the hook fired inside a subagent.
agentId?: string;
agentType?: string;
}

/**
@@ -74,6 +85,9 @@ export interface ObservationData {
tool_response: any;
prompt_number: number;
cwd?: string;
// Claude Code subagent identity — present only when the hook fired inside a subagent.
agentId?: string;
agentType?: string;
}

// ============================================================================

@@ -118,15 +118,27 @@ export function getBranchInfo(): BranchInfo {
};
}

try {
// Get current branch
const branch = execGit(['rev-parse', '--abbrev-ref', 'HEAD']);
let branch: string;
let status: string;
try {
branch = execGit(['rev-parse', '--abbrev-ref', 'HEAD']);
status = execGit(['status', '--porcelain']);
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
logger.error('WORKER', 'Failed to get branch info', {}, error instanceof Error ? error : new Error(errorMessage));
return {
branch: null,
isBeta: false,
isGitRepo: true,
isDirty: false,
canSwitch: false,
error: errorMessage
};
}

// Check if dirty (has uncommitted changes)
const status = execGit(['status', '--porcelain']);
// Determine branch state from git results
const isDirty = status.length > 0;

// Determine if on beta branch
const isBeta = branch.startsWith('beta');

return {
@@ -136,17 +148,6 @@ export function getBranchInfo(): BranchInfo {
isDirty,
canSwitch: true // We can always switch (will discard local changes)
};
} catch (error) {
logger.error('BRANCH', 'Failed to get branch info', {}, error as Error);
return {
branch: null,
isBeta: false,
isGitRepo: true,
isDirty: false,
canSwitch: false,
error: (error as Error).message
};
}
}

/**
@@ -243,7 +244,8 @@ export async function switchBranch(targetBranch: string): Promise<SwitchResult>
}
} catch (recoveryError) {
// [POSSIBLY RELEVANT]: Recovery checkout failed, user needs manual intervention - already logging main error above
logger.error('BRANCH', 'Recovery checkout also failed', { originalBranch: info.branch }, recoveryError as Error);
const recoveryErrorMessage = recoveryError instanceof Error ? recoveryError.message : String(recoveryError);
logger.error('WORKER', 'Recovery checkout also failed', { originalBranch: info.branch }, recoveryError instanceof Error ? recoveryError : new Error(recoveryErrorMessage));
}

return {
@@ -266,7 +268,6 @@ export async function pullUpdates(): Promise<SwitchResult> {
};
}

try {
// SECURITY: Validate branch name before use
if (!isValidBranchName(info.branch)) {
return {
@@ -277,6 +278,10 @@ export async function pullUpdates(): Promise<SwitchResult> {

logger.info('BRANCH', 'Pulling updates', { branch: info.branch });

// Prepare install marker path
const installMarker = join(INSTALLED_PLUGIN_PATH, '.install-version');

try {
// Discard local changes first
execGit(['checkout', '--', '.']);

@@ -285,11 +290,18 @@ export async function pullUpdates(): Promise<SwitchResult> {
execGit(['pull', 'origin', info.branch]);

// Clear install marker and reinstall
const installMarker = join(INSTALLED_PLUGIN_PATH, '.install-version');
if (existsSync(installMarker)) {
unlinkSync(installMarker);
}
execNpm(['install'], NPM_INSTALL_TIMEOUT_MS);
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error);
logger.error('WORKER', 'Pull failed', {}, error instanceof Error ? error : new Error(errorMessage));
return {
success: false,
error: `Pull failed: ${errorMessage}`
};
}

logger.success('BRANCH', 'Updates pulled', { branch: info.branch });

@@ -298,13 +310,6 @@ export async function pullUpdates(): Promise<SwitchResult> {
branch: info.branch,
message: `Updated ${info.branch}. Worker will restart automatically.`
};
} catch (error) {
logger.error('BRANCH', 'Pull failed', {}, error as Error);
return {
success: false,
error: `Pull failed: ${(error as Error).message}`
};
}
}

/**

@@ -22,6 +22,7 @@ import { USER_SETTINGS_PATH } from '../../shared/paths.js';
import { estimateTokens } from '../../shared/timeline-formatting.js';
import type { ActiveSession, ConversationMessage } from '../worker-types.js';
import { ModeManager } from '../domain/ModeManager.js';
import type { ModeConfig } from '../domain/types.js';
import {
processAgentResponse,
shouldFallbackToClaude,
@@ -135,8 +136,7 @@ export class GeminiAgent {
* Uses multi-turn conversation to maintain context across messages
*/
async startSession(session: ActiveSession, worker?: WorkerRef): Promise<void> {
try {
// Get Gemini configuration
// --- Configuration & validation (no try needed - throws clear errors) ---
const { apiKey, model, rateLimitingEnabled } = this.getGeminiConfig();

if (!apiKey) {
@@ -151,48 +151,69 @@ export class GeminiAgent {
logger.info('SESSION', `MEMORY_ID_GENERATED | sessionDbId=${session.sessionDbId} | provider=Gemini`);
}

// Load active mode
// Load active mode and build initial prompt
const mode = ModeManager.getInstance().getActiveMode();

// Build initial prompt
const initPrompt = session.lastPromptNumber === 1
? buildInitPrompt(session.project, session.contentSessionId, session.userPrompt, mode)
: buildContinuationPrompt(session.userPrompt, session.lastPromptNumber, session.contentSessionId, mode);

// Add to conversation history and query Gemini with full context
// --- Init query: API call + response processing ---
session.conversationHistory.push({ role: 'user', content: initPrompt });
const initResponse = await this.queryGeminiMultiTurn(session.conversationHistory, apiKey, model, rateLimitingEnabled);
let initResponse: { content: string; tokensUsed?: number };
try {
initResponse = await this.queryGeminiMultiTurn(session.conversationHistory, apiKey, model, rateLimitingEnabled);
} catch (error: unknown) {
if (error instanceof Error) {
logger.error('SDK', 'Gemini init query failed', { sessionId: session.sessionDbId, model }, error);
} else {
logger.error('SDK', 'Gemini init query failed with non-Error', { sessionId: session.sessionDbId, model }, new Error(String(error)));
}
return this.handleGeminiError(error, session, worker);
}

if (initResponse.content) {
// Add response to conversation history
session.conversationHistory.push({ role: 'assistant', content: initResponse.content });

// Track token usage
const tokensUsed = initResponse.tokensUsed || 0;
session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7); // Rough estimate
session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);

// Process response using shared ResponseProcessor (no original timestamp for init - not from queue)
await processAgentResponse(
initResponse.content,
session,
this.dbManager,
this.sessionManager,
worker,
tokensUsed,
null,
'Gemini',
undefined,
model
);
await processAgentResponse(initResponse.content, session, this.dbManager, this.sessionManager, worker, tokensUsed, null, 'Gemini', undefined, model);
} else {
logger.error('SDK', 'Empty Gemini init response - session may lack context', {
logger.error('SDK', 'Empty Gemini init response - session may lack context', { sessionId: session.sessionDbId, model });
}

// --- Message processing loop: iterate pending messages ---
try {
await this.processMessageLoop(session, worker, apiKey, model, rateLimitingEnabled, mode);
} catch (error: unknown) {
if (error instanceof Error) {
logger.error('SDK', 'Gemini message loop failed', { sessionId: session.sessionDbId, model }, error);
} else {
logger.error('SDK', 'Gemini message loop failed with non-Error', { sessionId: session.sessionDbId, model }, new Error(String(error)));
}
return this.handleGeminiError(error, session, worker);
}

// Mark session complete
const sessionDuration = Date.now() - session.startTime;
logger.success('SDK', 'Gemini agent completed', {
sessionId: session.sessionDbId,
model
duration: `${(sessionDuration / 1000).toFixed(1)}s`,
historyLength: session.conversationHistory.length
});
}

// Process pending messages
/**
* Process pending messages from the session queue.
* Extracted from startSession to keep try blocks focused.
*/
private async processMessageLoop(
session: ActiveSession,
worker: WorkerRef | undefined,
apiKey: string,
model: GeminiModel,
rateLimitingEnabled: boolean,
mode: ModeConfig
): Promise<void> {
// Track cwd from messages for CLAUDE.md generation
let lastCwd: string | undefined;

@@ -201,6 +222,13 @@ export class GeminiAgent {
// The message is now in 'processing' status in DB until ResponseProcessor calls confirmProcessed()
session.processingMessageIds.push(message._persistentId);

// Capture subagent identity from the claimed message so ResponseProcessor
// can label observation rows with the originating Claude Code subagent.
// Always overwrite (even with null) so a main-session message after a subagent
// message clears the stale identity; otherwise mixed batches could mislabel.
session.pendingAgentId = message.agentId ?? null;
session.pendingAgentType = message.agentType ?? null;

// Capture cwd from each message for worktree support
if (message.cwd) {
lastCwd = message.cwd;
@@ -210,6 +238,26 @@ export class GeminiAgent {
const originalTimestamp = session.earliestPendingTimestamp;

if (message.type === 'observation') {
await this.processObservationMessage(session, message, worker, apiKey, model, rateLimitingEnabled, originalTimestamp, lastCwd);
} else if (message.type === 'summarize') {
await this.processSummaryMessage(session, message, worker, apiKey, model, rateLimitingEnabled, mode, originalTimestamp, lastCwd);
}
}
}

/**
* Process a single observation message via Gemini API.
*/
private async processObservationMessage(
session: ActiveSession,
message: { type: string; prompt_number?: number; tool_name?: string; tool_input?: unknown; tool_response?: unknown; cwd?: string },
worker: WorkerRef | undefined,
apiKey: string,
model: GeminiModel,
rateLimitingEnabled: boolean,
originalTimestamp: number | null,
lastCwd: string | undefined
): Promise<void> {
// Update last prompt number
if (message.prompt_number !== undefined) {
session.lastPromptNumber = message.prompt_number;
@@ -231,34 +279,19 @@ export class GeminiAgent {
cwd: message.cwd
});

// Add to conversation history and query Gemini with full context
session.conversationHistory.push({ role: 'user', content: obsPrompt });
const obsResponse = await this.queryGeminiMultiTurn(session.conversationHistory, apiKey, model, rateLimitingEnabled);

let tokensUsed = 0;
if (obsResponse.content) {
// Add response to conversation history
session.conversationHistory.push({ role: 'assistant', content: obsResponse.content });

tokensUsed = obsResponse.tokensUsed || 0;
session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7);
session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
}

// Process response using shared ResponseProcessor
if (obsResponse.content) {
await processAgentResponse(
obsResponse.content,
session,
this.dbManager,
this.sessionManager,
worker,
tokensUsed,
originalTimestamp,
'Gemini',
lastCwd,
model
);
await processAgentResponse(obsResponse.content, session, this.dbManager, this.sessionManager, worker, tokensUsed, originalTimestamp, 'Gemini', lastCwd, model);
} else {
logger.warn('SDK', 'Empty Gemini observation response, skipping processing to preserve message', {
sessionId: session.sessionDbId,
@@ -266,8 +299,22 @@ export class GeminiAgent {
});
// Don't confirm - leave message for stale recovery
}
}

} else if (message.type === 'summarize') {
/**
* Process a single summary message via Gemini API.
*/
private async processSummaryMessage(
session: ActiveSession,
message: { type: string; last_assistant_message?: string },
worker: WorkerRef | undefined,
apiKey: string,
model: GeminiModel,
rateLimitingEnabled: boolean,
mode: ModeConfig,
originalTimestamp: number | null,
lastCwd: string | undefined
): Promise<void> {
// CRITICAL: Check memorySessionId BEFORE making expensive LLM call
if (!session.memorySessionId) {
throw new Error('Cannot process summary: memorySessionId not yet captured. This session may need to be reinitialized.');
@@ -282,34 +329,19 @@ export class GeminiAgent {
last_assistant_message: message.last_assistant_message || ''
}, mode);

// Add to conversation history and query Gemini with full context
session.conversationHistory.push({ role: 'user', content: summaryPrompt });
const summaryResponse = await this.queryGeminiMultiTurn(session.conversationHistory, apiKey, model, rateLimitingEnabled);

let tokensUsed = 0;
if (summaryResponse.content) {
// Add response to conversation history
session.conversationHistory.push({ role: 'assistant', content: summaryResponse.content });

tokensUsed = summaryResponse.tokensUsed || 0;
session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7);
session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
}

// Process response using shared ResponseProcessor
if (summaryResponse.content) {
await processAgentResponse(
summaryResponse.content,
session,
this.dbManager,
this.sessionManager,
worker,
tokensUsed,
originalTimestamp,
'Gemini',
lastCwd,
model
);
await processAgentResponse(summaryResponse.content, session, this.dbManager, this.sessionManager, worker, tokensUsed, originalTimestamp, 'Gemini', lastCwd, model);
} else {
logger.warn('SDK', 'Empty Gemini summary response, skipping processing to preserve message', {
sessionId: session.sessionDbId,
@@ -318,17 +350,12 @@ export class GeminiAgent {
// Don't confirm - leave message for stale recovery
}
}
}

// Mark session complete
const sessionDuration = Date.now() - session.startTime;
logger.success('SDK', 'Gemini agent completed', {
sessionId: session.sessionDbId,
duration: `${(sessionDuration / 1000).toFixed(1)}s`,
historyLength: session.conversationHistory.length
});

} catch (error: unknown) {
/**
* Handle errors from Gemini API calls with abort detection and Claude fallback.
* Shared by init query and message processing try blocks.
*/
private handleGeminiError(error: unknown, session: ActiveSession, worker?: WorkerRef): Promise<void> | never {
if (isAbortError(error)) {
logger.warn('SDK', 'Gemini agent aborted', { sessionId: session.sessionDbId });
throw error;
@@ -347,10 +374,9 @@ export class GeminiAgent {
return this.fallbackAgent.startSession(session, worker);
}

logger.failure('SDK', 'Gemini agent error', { sessionDbId: session.sessionDbId }, error as Error);
logger.failure('SDK', 'Gemini agent error', { sessionDbId: session.sessionDbId }, error instanceof Error ? error : new Error(String(error)));
throw error;
}
}

/**
* Truncate conversation history to prevent runaway context costs.

@@ -17,6 +17,7 @@ import { SettingsDefaultsManager } from '../../shared/SettingsDefaultsManager.js
|
||||
import { USER_SETTINGS_PATH } from '../../shared/paths.js';
|
||||
import { logger } from '../../utils/logger.js';
|
||||
import { ModeManager } from '../domain/ModeManager.js';
|
||||
import type { ModeConfig } from '../domain/types.js';
|
||||
import type { ActiveSession, ConversationMessage } from '../worker-types.js';
|
||||
import { DatabaseManager } from './DatabaseManager.js';
|
||||
import { SessionManager } from './SessionManager.js';
|
||||
@@ -84,8 +85,7 @@ export class OpenRouterAgent {
|
||||
* Uses multi-turn conversation to maintain context across messages
|
||||
*/
|
||||
async startSession(session: ActiveSession, worker?: WorkerRef): Promise<void> {
|
||||
try {
|
||||
// Get OpenRouter configuration
|
||||
// Get OpenRouter configuration (pure lookup, no external I/O)
|
||||
const { apiKey, model, siteUrl, appName } = this.getOpenRouterConfig();
|
||||
|
||||
if (!apiKey) {
|
||||
@@ -108,148 +108,38 @@ export class OpenRouterAgent {
|
||||
? buildInitPrompt(session.project, session.contentSessionId, session.userPrompt, mode)
|
||||
: buildContinuationPrompt(session.userPrompt, session.lastPromptNumber, session.contentSessionId, mode);
|
||||
|
||||
// Add to conversation history and query OpenRouter with full context
|
||||
// Send init prompt to OpenRouter
|
||||
session.conversationHistory.push({ role: 'user', content: initPrompt });
|
||||
|
||||
try {
|
||||
const initResponse = await this.queryOpenRouterMultiTurn(session.conversationHistory, apiKey, model, siteUrl, appName);
|
||||
|
||||
if (initResponse.content) {
|
||||
// Add response to conversation history
|
||||
// session.conversationHistory.push({ role: 'assistant', content: initResponse.content });
|
||||
|
||||
// Track token usage
|
||||
const tokensUsed = initResponse.tokensUsed || 0;
|
||||
session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7); // Rough estimate
|
||||
session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
|
||||
|
||||
// Process response using shared ResponseProcessor (no original timestamp for init - not from queue)
|
||||
await processAgentResponse(
|
||||
initResponse.content,
|
||||
session,
|
||||
this.dbManager,
|
||||
this.sessionManager,
|
||||
worker,
|
||||
tokensUsed,
|
||||
null,
|
||||
'OpenRouter',
|
||||
undefined, // No lastCwd yet - before message processing
|
||||
model
|
||||
);
|
||||
await this.handleInitResponse(initResponse, session, worker, model);
|
||||
} catch (error: unknown) {
|
||||
if (error instanceof Error) {
|
||||
logger.error('SDK', 'OpenRouter init failed', { sessionId: session.sessionDbId, model }, error);
|
||||
} else {
|
||||
logger.error('SDK', 'Empty OpenRouter init response - session may lack context', {
|
||||
sessionId: session.sessionDbId,
|
||||
model
|
||||
});
|
||||
logger.error('SDK', 'OpenRouter init failed with non-Error', { sessionId: session.sessionDbId, model }, new Error(String(error)));
|
||||
}
|
||||
await this.handleSessionError(error, session, worker);
|
||||
return;
|
||||
}
|
||||
|
||||
// Track lastCwd from messages for CLAUDE.md generation
|
||||
let lastCwd: string | undefined;
|
||||
|
||||
// Process pending messages
|
||||
try {
|
||||
for await (const message of this.sessionManager.getMessageIterator(session.sessionDbId)) {
|
||||
// CLAIM-CONFIRM: Track message ID for confirmProcessed() after successful storage
|
||||
// The message is now in 'processing' status in DB until ResponseProcessor calls confirmProcessed()
|
||||
session.processingMessageIds.push(message._persistentId);
|
||||
|
||||
// Capture cwd from messages for proper worktree support
|
||||
if (message.cwd) {
|
||||
lastCwd = message.cwd;
|
||||
lastCwd = await this.processOneMessage(session, message, lastCwd, apiKey, model, siteUrl, appName, worker, mode);
|
||||
}
|
||||
// Capture earliest timestamp BEFORE processing (will be cleared after)
|
||||
const originalTimestamp = session.earliestPendingTimestamp;
|
||||
|
||||
if (message.type === 'observation') {
|
||||
// Update last prompt number
|
||||
if (message.prompt_number !== undefined) {
|
||||
session.lastPromptNumber = message.prompt_number;
|
||||
}
|
||||
|
||||
// CRITICAL: Check memorySessionId BEFORE making expensive LLM call
|
||||
// This prevents wasting tokens when we won't be able to store the result anyway
|
||||
if (!session.memorySessionId) {
|
||||
throw new Error('Cannot process observations: memorySessionId not yet captured. This session may need to be reinitialized.');
|
||||
}
|
||||
|
||||
// Build observation prompt
|
||||
const obsPrompt = buildObservationPrompt({
|
||||
id: 0,
|
||||
tool_name: message.tool_name!,
|
||||
tool_input: JSON.stringify(message.tool_input),
|
||||
tool_output: JSON.stringify(message.tool_response),
|
||||
created_at_epoch: originalTimestamp ?? Date.now(),
|
||||
cwd: message.cwd
|
||||
});
|
||||
|
||||
// Add to conversation history and query OpenRouter with full context
|
||||
session.conversationHistory.push({ role: 'user', content: obsPrompt });
|
||||
const obsResponse = await this.queryOpenRouterMultiTurn(session.conversationHistory, apiKey, model, siteUrl, appName);
|
||||
|
||||
let tokensUsed = 0;
|
||||
if (obsResponse.content) {
|
||||
// Add response to conversation history
|
||||
// session.conversationHistory.push({ role: 'assistant', content: obsResponse.content });
|
||||
|
||||
tokensUsed = obsResponse.tokensUsed || 0;
|
||||
session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7);
|
||||
session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
|
||||
}
|
||||
|
||||
// Process response using shared ResponseProcessor
|
||||
await processAgentResponse(
|
||||
obsResponse.content || '',
|
||||
session,
|
||||
this.dbManager,
|
||||
this.sessionManager,
|
||||
worker,
|
||||
tokensUsed,
|
||||
originalTimestamp,
|
||||
'OpenRouter',
|
||||
lastCwd,
|
||||
model
|
||||
);
|
||||
|
||||
} else if (message.type === 'summarize') {
|
||||
// CRITICAL: Check memorySessionId BEFORE making expensive LLM call
|
||||
if (!session.memorySessionId) {
|
||||
throw new Error('Cannot process summary: memorySessionId not yet captured. This session may need to be reinitialized.');
|
||||
}
|
||||
|
||||
// Build summary prompt
|
||||
const summaryPrompt = buildSummaryPrompt({
|
||||
id: session.sessionDbId,
|
||||
memory_session_id: session.memorySessionId,
|
||||
project: session.project,
|
||||
user_prompt: session.userPrompt,
|
||||
last_assistant_message: message.last_assistant_message || ''
|
||||
}, mode);
|
||||
|
||||
// Add to conversation history and query OpenRouter with full context
|
||||
session.conversationHistory.push({ role: 'user', content: summaryPrompt });
|
||||
const summaryResponse = await this.queryOpenRouterMultiTurn(session.conversationHistory, apiKey, model, siteUrl, appName);
|
||||
|
||||
let tokensUsed = 0;
|
||||
if (summaryResponse.content) {
|
||||
// Add response to conversation history
|
||||
// session.conversationHistory.push({ role: 'assistant', content: summaryResponse.content });
|
||||
|
||||
tokensUsed = summaryResponse.tokensUsed || 0;
|
||||
session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7);
|
||||
session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
|
||||
}
|
||||
|
||||
// Process response using shared ResponseProcessor
|
||||
await processAgentResponse(
|
||||
summaryResponse.content || '',
|
||||
session,
|
||||
this.dbManager,
|
||||
this.sessionManager,
|
||||
worker,
|
||||
tokensUsed,
|
||||
originalTimestamp,
|
||||
'OpenRouter',
|
||||
lastCwd,
|
||||
model
|
||||
);
|
||||
} catch (error: unknown) {
|
||||
if (error instanceof Error) {
|
||||
logger.error('SDK', 'OpenRouter message processing failed', { sessionId: session.sessionDbId, model }, error);
|
||||
} else {
|
||||
logger.error('SDK', 'OpenRouter message processing failed with non-Error', { sessionId: session.sessionDbId, model }, new Error(String(error)));
|
||||
}
|
||||
await this.handleSessionError(error, session, worker);
|
||||
return;
|
||||
}
|
||||
|
||||
// Mark session complete
|
||||
@@ -260,14 +150,191 @@ export class OpenRouterAgent {
|
||||
historyLength: session.conversationHistory.length,
|
||||
model
|
||||
});
|
||||
}
|
||||
|
||||
} catch (error: unknown) {
|
||||
/**
|
||||
* Prepare common message metadata before processing.
|
||||
* Tracks message IDs and captures subagent identity.
|
||||
*/
|
||||
private prepareMessageMetadata(session: ActiveSession, message: { _persistentId: number; agentId?: string | null; agentType?: string | null }): void {
|
||||
// CLAIM-CONFIRM: Track message ID for confirmProcessed() after successful storage
|
||||
session.processingMessageIds.push(message._persistentId);
|
||||
|
||||
// Capture subagent identity from the claimed message so ResponseProcessor
|
||||
// can label observation rows with the originating Claude Code subagent.
|
||||
// Always overwrite (even with null) so a main-session message after a subagent
|
||||
// message clears the stale identity; otherwise mixed batches could mislabel.
|
||||
session.pendingAgentId = message.agentId ?? null;
|
||||
session.pendingAgentType = message.agentType ?? null;
|
||||
}
|
||||
|
||||
/**
|
||||
* Handle the init response from OpenRouter: update token counts and process or log empty.
|
||||
*/
|
||||
private async handleInitResponse(
|
||||
initResponse: { content: string; tokensUsed?: number },
|
||||
session: ActiveSession,
|
||||
worker: WorkerRef | undefined,
|
||||
model: string
|
||||
): Promise<void> {
|
||||
if (initResponse.content) {
|
||||
session.conversationHistory.push({ role: 'assistant', content: initResponse.content });
|
||||
const tokensUsed = initResponse.tokensUsed || 0;
|
||||
session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7);
|
||||
session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
|
||||
|
||||
await processAgentResponse(
|
||||
initResponse.content, session, this.dbManager, this.sessionManager,
|
||||
worker, tokensUsed, null, 'OpenRouter', undefined, model
|
||||
);
|
||||
} else {
|
||||
logger.error('SDK', 'Empty OpenRouter init response - session may lack context', {
|
||||
sessionId: session.sessionDbId, model
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
/**
|
||||
* Process one message from the iterator: prepare metadata, dispatch to observation or summary handler.
|
||||
* Returns the updated lastCwd value.
|
||||
*/
|
||||
private async processOneMessage(
|
||||
session: ActiveSession,
|
||||
message: { _persistentId: number; agentId?: string | null; agentType?: string | null; type?: string; cwd?: string; prompt_number?: number; tool_name?: string; tool_input?: unknown; tool_response?: unknown; last_assistant_message?: string },
|
||||
lastCwd: string | undefined,
|
||||
apiKey: string,
|
||||
model: string,
|
||||
siteUrl: string | undefined,
|
||||
appName: string | undefined,
|
||||
worker: WorkerRef | undefined,
|
||||
mode: ModeConfig
|
||||
): Promise<string | undefined> {
|
||||
this.prepareMessageMetadata(session, message);
|
||||
|
||||
if (message.cwd) {
|
||||
lastCwd = message.cwd;
|
||||
}
|
||||
const originalTimestamp = session.earliestPendingTimestamp;
|
||||
|
||||
if (message.type === 'observation') {
|
||||
await this.processObservationMessage(
|
||||
session, message, originalTimestamp, lastCwd,
|
||||
apiKey, model, siteUrl, appName, worker, mode
|
||||
);
|
||||
} else if (message.type === 'summarize') {
|
||||
await this.processSummaryMessage(
|
||||
session, message, originalTimestamp, lastCwd,
|
||||
apiKey, model, siteUrl, appName, worker, mode
|
||||
);
|
||||
}
|
||||
|
||||
return lastCwd;
|
||||
}
|
||||
|
||||
  /**
   * Process a single observation message: build prompt, call OpenRouter, store result.
   */
  private async processObservationMessage(
    session: ActiveSession,
    message: { prompt_number?: number; tool_name?: string; tool_input?: unknown; tool_response?: unknown; cwd?: string },
    originalTimestamp: number | null,
    lastCwd: string | undefined,
    apiKey: string,
    model: string,
    siteUrl: string | undefined,
    appName: string | undefined,
    worker: WorkerRef | undefined,
    _mode: ModeConfig
  ): Promise<void> {
    if (message.prompt_number !== undefined) {
      session.lastPromptNumber = message.prompt_number;
    }

    // CRITICAL: Check memorySessionId BEFORE making expensive LLM call
    if (!session.memorySessionId) {
      throw new Error('Cannot process observations: memorySessionId not yet captured. This session may need to be reinitialized.');
    }

    const obsPrompt = buildObservationPrompt({
      id: 0,
      tool_name: message.tool_name!,
      tool_input: JSON.stringify(message.tool_input),
      tool_output: JSON.stringify(message.tool_response),
      created_at_epoch: originalTimestamp ?? Date.now(),
      cwd: message.cwd
    });

    session.conversationHistory.push({ role: 'user', content: obsPrompt });
    const obsResponse = await this.queryOpenRouterMultiTurn(session.conversationHistory, apiKey, model, siteUrl, appName);

    let tokensUsed = 0;
    if (obsResponse.content) {
      session.conversationHistory.push({ role: 'assistant', content: obsResponse.content });
      tokensUsed = obsResponse.tokensUsed || 0;
      session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7);
      session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
    }

    await processAgentResponse(
      obsResponse.content || '', session, this.dbManager, this.sessionManager,
      worker, tokensUsed, originalTimestamp, 'OpenRouter', lastCwd, model
    );
  }

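The 70/30 split above is a heuristic: OpenRouter reports only a combined `tokensUsed` figure, so the agent apportions it into estimated input and output counts. A minimal sketch of that accounting (the helper name is illustrative, not from the codebase):

```typescript
// Hypothetical helper illustrating the 70/30 apportioning used above.
// Math.floor keeps both counters integral; the rounded-off remainder is
// dropped, so cumulative totals may undercount by up to one token per call.
function splitTokenUsage(tokensUsed: number): { input: number; output: number } {
  return {
    input: Math.floor(tokensUsed * 0.7),
    output: Math.floor(tokensUsed * 0.3)
  };
}

const usage = splitTokenUsage(1000);
// usage.input === 700, usage.output === 300
```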
  /**
   * Process a single summary message: build prompt, call OpenRouter, store result.
   */
  private async processSummaryMessage(
    session: ActiveSession,
    message: { last_assistant_message?: string },
    originalTimestamp: number | null,
    lastCwd: string | undefined,
    apiKey: string,
    model: string,
    siteUrl: string | undefined,
    appName: string | undefined,
    worker: WorkerRef | undefined,
    mode: ModeConfig
  ): Promise<void> {
    // CRITICAL: Check memorySessionId BEFORE making expensive LLM call
    if (!session.memorySessionId) {
      throw new Error('Cannot process summary: memorySessionId not yet captured. This session may need to be reinitialized.');
    }

    const summaryPrompt = buildSummaryPrompt({
      id: session.sessionDbId,
      memory_session_id: session.memorySessionId,
      project: session.project,
      user_prompt: session.userPrompt,
      last_assistant_message: message.last_assistant_message || ''
    }, mode);

    session.conversationHistory.push({ role: 'user', content: summaryPrompt });
    const summaryResponse = await this.queryOpenRouterMultiTurn(session.conversationHistory, apiKey, model, siteUrl, appName);

    let tokensUsed = 0;
    if (summaryResponse.content) {
      session.conversationHistory.push({ role: 'assistant', content: summaryResponse.content });
      tokensUsed = summaryResponse.tokensUsed || 0;
      session.cumulativeInputTokens += Math.floor(tokensUsed * 0.7);
      session.cumulativeOutputTokens += Math.floor(tokensUsed * 0.3);
    }

    await processAgentResponse(
      summaryResponse.content || '', session, this.dbManager, this.sessionManager,
      worker, tokensUsed, originalTimestamp, 'OpenRouter', lastCwd, model
    );
  }

  /**
   * Handle errors from session processing: abort re-throw, fallback to Claude, or log and re-throw.
   */
  private async handleSessionError(error: unknown, session: ActiveSession, worker?: WorkerRef): Promise<never | void> {
    if (isAbortError(error)) {
      logger.warn('SDK', 'OpenRouter agent aborted', { sessionId: session.sessionDbId });
      throw error;
    }

    // Check if we should fall back to Claude
    if (shouldFallbackToClaude(error) && this.fallbackAgent) {
      logger.warn('SDK', 'OpenRouter API failed, falling back to Claude SDK', {
        sessionDbId: session.sessionDbId,

@@ -277,13 +344,13 @@ export class OpenRouterAgent {

      // Fall back to Claude - it will use the same session with shared conversationHistory
      // Note: With claim-and-delete queue pattern, messages are already deleted on claim
-      return this.fallbackAgent.startSession(session, worker);
+      await this.fallbackAgent.startSession(session, worker);
+      return;
    }

-    logger.failure('SDK', 'OpenRouter agent error', { sessionDbId: session.sessionDbId }, error as Error);
+    logger.failure('SDK', 'OpenRouter agent error', { sessionDbId: session.sessionDbId }, error instanceof Error ? error : new Error(String(error)));
    throw error;
  }
}

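The expression `error instanceof Error ? error : new Error(String(error))` recurs throughout this diff; it is the core of the "error handling anti-pattern cleanup" from 12.3.1. A `catch` binding is `unknown` in strict TypeScript, so non-Error throwables must be normalized before being handed to the logger. The pattern as a standalone helper (the codebase inlines the expression; `toError` is a hypothetical name):

```typescript
// Hypothetical helper: normalize an unknown throwable into an Error without
// masking real Error instances (which keep their original stack traces).
function toError(thrown: unknown): Error {
  return thrown instanceof Error ? thrown : new Error(String(thrown));
}

// Real errors pass through untouched; strings, numbers, etc. get wrapped.
const e1 = toError(new RangeError('boom'));
const e2 = toError('plain string throw');
```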
/**
 * Estimate token count from text (conservative estimate)

@@ -55,7 +55,11 @@ export class PaginationHelper {
      // Return as JSON string
      return JSON.stringify(strippedPaths);
    } catch (err) {
-      logger.debug('WORKER', 'File paths is plain string, using as-is', {}, err as Error);
+      if (err instanceof Error) {
+        logger.debug('WORKER', 'File paths is plain string, using as-is', {}, err);
+      } else {
+        logger.debug('WORKER', 'File paths is plain string, using as-is', { rawError: String(err) });
+      }
      return filePathsStr;
    }
  }

@@ -395,8 +395,11 @@ export function createPidCapturingSpawn(sessionDbId: number) {
    try {
      existing.process.kill('SIGTERM');
      exited = existing.process.exitCode !== null;
-    } catch {
+    } catch (error: unknown) {
      // Already dead — safe to unregister immediately
+      if (error instanceof Error) {
+        logger.warn('WORKER', `Failed to kill duplicate process PID ${existing.pid}, likely already dead`, { existingPid: existing.pid, sessionDbId }, error);
+      }
      exited = true;
    }

@@ -495,7 +498,11 @@ export function startOrphanReaper(getActiveSessionIds: () => Set<number>, interv
        logger.info('PROCESS', `Reaper cleaned up ${killed} orphaned processes`, { killed });
      }
    } catch (error) {
-      logger.error('PROCESS', 'Reaper error', {}, error as Error);
+      if (error instanceof Error) {
+        logger.error('WORKER', 'Reaper error', {}, error);
+      } else {
+        logger.error('WORKER', 'Reaper error', { rawError: String(error) });
+      }
    }
  }, intervalMs);

@@ -374,6 +374,13 @@ export class SDKAgent {
      // The message is now in 'processing' status in DB until ResponseProcessor calls confirmProcessed()
      session.processingMessageIds.push(message._persistentId);

+      // Capture subagent identity from the claimed message so ResponseProcessor
+      // can label observation rows with the originating Claude Code subagent.
+      // Always overwrite (even with null) so a main-session message after a subagent
+      // message clears the stale identity; otherwise mixed batches could mislabel.
+      session.pendingAgentId = message.agentId ?? null;
+      session.pendingAgentType = message.agentType ?? null;
+
      // Capture cwd from each message for worktree support
      if (message.cwd) {
        cwdTracker.lastCwd = message.cwd;

@@ -473,7 +480,11 @@ export class SDKAgent {
      if (claudePath) return claudePath;
    } catch (error) {
      // [ANTI-PATTERN IGNORED]: Fallback behavior - which/where failed, continue to throw clear error
-      logger.debug('SDK', 'Claude executable auto-detection failed', {}, error as Error);
+      if (error instanceof Error) {
+        logger.debug('SDK', 'Claude executable auto-detection failed', {}, error);
+      } else {
+        logger.debug('SDK', 'Claude executable auto-detection failed with non-Error', {}, new Error(String(error)));
+      }
    }

    throw new Error('Claude executable not found. Please either:\n1. Add "claude" to your system PATH, or\n2. Set CLAUDE_CODE_PATH in ~/.claude-mem/settings.json');

@@ -67,6 +67,23 @@ export class SearchManager {
    return await this.chromaSync.queryChroma(query, limit, whereFilter);
  }

+  private async searchChromaForTimeline(query: string, ninetyDaysAgo: number): Promise<ObservationSearchResult[]> {
+    const chromaResults = await this.queryChroma(query, 100);
+    logger.debug('SEARCH', 'Chroma returned semantic matches for timeline', { matchCount: chromaResults?.ids?.length ?? 0 });
+
+    if (chromaResults?.ids && chromaResults.ids.length > 0) {
+      const recentIds = chromaResults.ids.filter((_id, idx) => {
+        const meta = chromaResults.metadatas[idx];
+        return meta && meta.created_at_epoch > ninetyDaysAgo;
+      });
+
+      if (recentIds.length > 0) {
+        return this.sessionStore.getObservationsByIds(recentIds, { orderBy: 'date_desc', limit: 1 });
+      }
+    }
+    return [];
+  }

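The extracted helper above filters Chroma's parallel `ids`/`metadatas` arrays by a recency cutoff before touching SQLite. The index-based pairing can be sketched in isolation (types simplified here; the real result shape comes from ChromaSync):

```typescript
// Simplified sketch of the recency filter: ids[i] pairs with metadatas[i],
// and only ids whose metadata timestamp falls inside the window survive.
interface ChromaSlice {
  ids: number[];
  metadatas: Array<{ created_at_epoch: number } | null>;
}

function filterRecentIds(results: ChromaSlice, cutoffEpoch: number): number[] {
  return results.ids.filter((_id, idx) => {
    const meta = results.metadatas[idx];
    return meta !== null && meta.created_at_epoch > cutoffEpoch;
  });
}

const slice: ChromaSlice = {
  ids: [1, 2, 3],
  metadatas: [{ created_at_epoch: 50 }, null, { created_at_epoch: 200 }]
};
// With cutoff 100, only id 3 survives: id 1 is too old, id 2 lacks metadata.
```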
/**
 * Helper to normalize query parameters from URL-friendly format
 * Converts comma-separated strings to arrays and flattens date params

@@ -439,24 +456,13 @@ export class SearchManager {
    let results: ObservationSearchResult[] = [];

    if (this.chromaSync) {
-      try {
-        logger.debug('SEARCH', 'Using hybrid semantic search for timeline query', {});
-        const chromaResults = await this.queryChroma(query, 100);
-        logger.debug('SEARCH', 'Chroma returned semantic matches for timeline', { matchCount: chromaResults?.ids?.length ?? 0 });
-
-        if (chromaResults?.ids && chromaResults.ids.length > 0) {
-          const ninetyDaysAgo = Date.now() - SEARCH_CONSTANTS.RECENCY_WINDOW_MS;
-          const recentIds = chromaResults.ids.filter((_id, idx) => {
-            const meta = chromaResults.metadatas[idx];
-            return meta && meta.created_at_epoch > ninetyDaysAgo;
-          });
-
-          if (recentIds.length > 0) {
-            results = this.sessionStore.getObservationsByIds(recentIds, { orderBy: 'date_desc', limit: 1 });
-          }
-        }
+      try {
+        results = await this.searchChromaForTimeline(query, ninetyDaysAgo);
      } catch (chromaError) {
-        logger.error('SEARCH', 'Chroma search failed for timeline, continuing without semantic results', {}, chromaError as Error);
+        const errorObject = chromaError instanceof Error ? chromaError : new Error(String(chromaError));
+        logger.error('WORKER', 'Chroma search failed for timeline, continuing without semantic results', {}, errorObject);
      }
    }

@@ -689,18 +695,21 @@ export class SearchManager {

    // Search for decision-type observations
    if (this.chromaSync) {
-      try {
      if (query) {
        // Semantic search filtered to decision type
        logger.debug('SEARCH', 'Using Chroma semantic search with type=decision filter', {});
+        try {
          const chromaResults = await this.queryChroma(query, Math.min((filters.limit || 20) * 2, 100), { type: 'decision' });
          const obsIds = chromaResults.ids;

          if (obsIds.length > 0) {
            results = this.sessionStore.getObservationsByIds(obsIds, { ...filters, type: 'decision' });
            // Preserve Chroma ranking order
            results.sort((a, b) => obsIds.indexOf(a.id) - obsIds.indexOf(b.id));
          }
+        } catch (chromaError) {
+          const errorObject = chromaError instanceof Error ? chromaError : new Error(String(chromaError));
+          logger.error('WORKER', 'Chroma search failed for decisions, falling back to metadata search', {}, errorObject);
+        }
      } else {
        // No query: get all decisions, rank by "decision" keyword
        logger.debug('SEARCH', 'Using metadata-first + semantic ranking for decisions', {});

@@ -708,6 +717,7 @@ export class SearchManager {

        if (metadataResults.length > 0) {
          const ids = metadataResults.map(obs => obs.id);
+          try {
            const chromaResults = await this.queryChroma('decision', Math.min(ids.length, 100));

            const rankedIds: number[] = [];

@@ -721,10 +731,11 @@ export class SearchManager {
            results = this.sessionStore.getObservationsByIds(rankedIds, { limit: filters.limit || 20 });
            results.sort((a, b) => rankedIds.indexOf(a.id) - rankedIds.indexOf(b.id));
          }
        }
      }
      } catch (chromaError) {
-        logger.error('SEARCH', 'Chroma search failed for decisions, falling back to metadata search', {}, chromaError as Error);
+        const errorObject = chromaError instanceof Error ? chromaError : new Error(String(chromaError));
+        logger.error('WORKER', 'Chroma semantic ranking failed for decisions, falling back to metadata search', {}, errorObject);
      }
    }
  }
}

@@ -763,7 +774,6 @@ export class SearchManager {

    // Search for change-type observations and change-related concepts
    if (this.chromaSync) {
-      try {
      logger.debug('SEARCH', 'Using hybrid search for change-related observations', {});

      // Get all observations with type="change" or concepts containing change

@@ -777,6 +787,7 @@ export class SearchManager {

      if (allIds.size > 0) {
        const idsArray = Array.from(allIds);
+        try {
          const chromaResults = await this.queryChroma('what changed', Math.min(idsArray.length, 100));

          const rankedIds: number[] = [];

@@ -790,9 +801,10 @@ export class SearchManager {
          results = this.sessionStore.getObservationsByIds(rankedIds, { limit: filters.limit || 20 });
          results.sort((a, b) => rankedIds.indexOf(a.id) - rankedIds.indexOf(b.id));
        }
      }
      } catch (chromaError) {
-        logger.error('SEARCH', 'Chroma search failed for changes, falling back to metadata search', {}, chromaError as Error);
+        const errorObject = chromaError instanceof Error ? chromaError : new Error(String(chromaError));
+        logger.error('WORKER', 'Chroma search failed for changes, falling back to metadata search', {}, errorObject);
      }
    }
  }

@@ -1373,7 +1385,8 @@ export class SearchManager {
        lines.push(`**Files Read:** ${filesRead.join(', ')}`);
      }
    } catch (error) {
-      logger.debug('WORKER', 'files_read is plain string, using as-is', {}, error as Error);
+      const errorObject = error instanceof Error ? error : new Error(String(error));
+      logger.debug('WORKER', 'files_read is plain string, using as-is', {}, errorObject);
      if (summary.files_read.trim()) {
        lines.push(`**Files Read:** ${summary.files_read}`);
      }

@@ -1388,7 +1401,8 @@ export class SearchManager {
        lines.push(`**Files Edited:** ${filesEdited.join(', ')}`);
      }
    } catch (error) {
-      logger.debug('WORKER', 'files_edited is plain string, using as-is', {}, error as Error);
+      const errorObject = error instanceof Error ? error : new Error(String(error));
+      logger.debug('WORKER', 'files_edited is plain string, using as-is', {}, errorObject);
      if (summary.files_edited.trim()) {
        lines.push(`**Files Edited:** ${summary.files_edited}`);
      }

@@ -16,6 +16,7 @@ import { PendingMessageStore } from '../sqlite/PendingMessageStore.js';
import { SessionQueueProcessor } from '../queue/SessionQueueProcessor.js';
import { getProcessBySession, ensureProcessExit } from './ProcessRegistry.js';
import { getSupervisor } from '../../supervisor/index.js';
+import { MAX_CONSECUTIVE_SUMMARY_FAILURES } from '../../sdk/prompts.js';

/** Idle threshold before a stuck generator (zombie subprocess) is force-killed. */
export const MAX_GENERATOR_IDLE_MS = 5 * 60 * 1000; // 5 minutes

@@ -68,7 +69,13 @@ export function detectStaleGenerator(
  if (proc && proc.exitCode === null) {
    try {
      proc.kill('SIGKILL');
-    } catch {}
+    } catch (error) {
+      if (error instanceof Error) {
+        logger.warn('SESSION', 'Failed to SIGKILL stale generator subprocess', {}, error);
+      } else {
+        logger.warn('SESSION', 'Failed to SIGKILL stale generator subprocess with non-Error', {}, new Error(String(error)));
+      }
+    }
  }
  // Signal the SDK agent loop to exit
  session.abortController.abort();

@@ -219,7 +226,10 @@ export class SessionManager {
      currentProvider: null, // Will be set when generator starts
      consecutiveRestarts: 0, // Track consecutive restart attempts to prevent infinite loops
      processingMessageIds: [], // CLAIM-CONFIRM: Track message IDs for confirmProcessed()
-      lastGeneratorActivity: Date.now() // Initialize for stale detection (Issue #1099)
+      lastGeneratorActivity: Date.now(), // Initialize for stale detection (Issue #1099)
+      consecutiveSummaryFailures: 0, // Circuit breaker for summary retry loop (#1633)
+      pendingAgentId: null, // Subagent identity carried from the most recent claimed message
+      pendingAgentType: null // (null for main-session messages)
    };

    logger.debug('SESSION', 'Creating new session object (memorySessionId cleared to prevent stale resume)', {

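The `MAX_GENERATOR_IDLE_MS` constant above drives a plain wall-clock staleness check: a generator with no recorded activity for five minutes is treated as a zombie and force-killed. The comparison itself reduces to the following (function and constant names here are illustrative, not the actual `detectStaleGenerator` signature):

```typescript
// Illustrative staleness predicate matching the 5-minute idle threshold above.
const IDLE_LIMIT_MS = 5 * 60 * 1000;

function isGeneratorStale(lastActivityEpochMs: number, nowEpochMs: number): boolean {
  return nowEpochMs - lastActivityEpochMs > IDLE_LIMIT_MS;
}

const now = 10_000_000;
// 4 minutes idle: still healthy; 6 minutes idle: stale and eligible for SIGKILL.
```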
@@ -275,7 +285,9 @@ export class SessionManager {
      tool_input: data.tool_input,
      tool_response: data.tool_response,
      prompt_number: data.prompt_number,
-      cwd: data.cwd
+      cwd: data.cwd,
+      agentId: data.agentId,
+      agentType: data.agentType
    };

    try {

@@ -286,10 +298,17 @@ export class SessionManager {
        sessionId: sessionDbId
      });
    } catch (error) {
      if (error instanceof Error) {
        logger.error('SESSION', 'Failed to persist observation to DB', {
          sessionId: sessionDbId,
          tool: data.tool_name
        }, error);
      } else {
        logger.error('SESSION', 'Failed to persist observation to DB with non-Error', {
          sessionId: sessionDbId,
          tool: data.tool_name
        }, new Error(String(error)));
      }
      throw error; // Don't continue if we can't persist
    }

@@ -312,6 +331,18 @@ export class SessionManager {
      session = this.initializeSession(sessionDbId);
    }

+    // Circuit breaker: skip summarize if too many consecutive failures (#1633).
+    // This prevents the infinite loop where each failed summary spawns a new session
+    // with an ever-growing prompt. Counter is in-memory per ActiveSession — it resets
+    // on worker restart, which is acceptable because session state is already ephemeral.
+    if (session.consecutiveSummaryFailures >= MAX_CONSECUTIVE_SUMMARY_FAILURES) {
+      logger.warn('SESSION', `Circuit breaker OPEN: skipping summarize after ${session.consecutiveSummaryFailures} consecutive failures (#1633)`, {
+        sessionId: sessionDbId,
+        contentSessionId: session.contentSessionId
+      });
+      return;
+    }
+
    // CRITICAL: Persist to database FIRST
    const message: PendingMessage = {
      type: 'summarize',

@@ -325,9 +356,15 @@ export class SessionManager {
        sessionId: sessionDbId
      });
    } catch (error) {
      if (error instanceof Error) {
        logger.error('SESSION', 'Failed to persist summarize to DB', {
          sessionId: sessionDbId
        }, error);
      } else {
        logger.error('SESSION', 'Failed to persist summarize to DB with non-Error', {
          sessionId: sessionDbId
        }, new Error(String(error)));
      }
      throw error; // Don't continue if we can't persist
    }

@@ -379,9 +416,15 @@ export class SessionManager {
    try {
      await getSupervisor().getRegistry().reapSession(sessionDbId);
    } catch (error) {
      if (error instanceof Error) {
        logger.warn('SESSION', 'Supervisor reapSession failed (non-blocking)', {
          sessionId: sessionDbId
-        }, error as Error);
+        }, error);
      } else {
        logger.warn('SESSION', 'Supervisor reapSession failed (non-blocking) with non-Error', {
          sessionId: sessionDbId
        }, new Error(String(error)));
      }
    }

    // 4. Cleanup

@@ -451,7 +494,11 @@ export class SessionManager {
    try {
      trackedProcess.process.kill('SIGKILL');
    } catch (err) {
-      logger.warn('SESSION', 'Failed to SIGKILL subprocess for stale generator', { sessionDbId }, err as Error);
+      if (err instanceof Error) {
+        logger.warn('SESSION', 'Failed to SIGKILL subprocess for stale generator', { sessionDbId }, err);
+      } else {
+        logger.warn('SESSION', 'Failed to SIGKILL subprocess for stale generator with non-Error', { sessionDbId }, new Error(String(err)));
+      }
    }
  }
  // Signal the SDK agent loop to exit after the subprocess dies

@@ -43,7 +43,11 @@ export class SettingsManager {

    return settings;
  } catch (error) {
-    logger.debug('WORKER', 'Failed to load settings, using defaults', {}, error as Error);
+    if (error instanceof Error) {
+      logger.debug('WORKER', 'Failed to load settings, using defaults', {}, error);
+    } else {
+      logger.debug('WORKER', 'Failed to load settings, using defaults', { rawError: String(error) });
+    }
    return { ...this.defaultSettings };
  }
}

@@ -13,6 +13,7 @@

import { logger } from '../../../utils/logger.js';
import { parseObservations, parseSummary, type ParsedObservation, type ParsedSummary } from '../../../sdk/parser.js';
import { SUMMARY_MODE_MARKER, MAX_CONSECUTIVE_SUMMARY_FAILURES } from '../../../sdk/prompts.js';
import { updateCursorContextForProject } from '../../integrations/CursorHooksInstaller.js';
import { updateFolderClaudeMdFiles } from '../../../utils/claude-md-utils.js';
import { getWorkerPort } from '../../../shared/worker-utils.js';

@@ -67,7 +68,17 @@ export async function processAgentResponse(

  // Parse observations and summary
  const observations = parseObservations(text, session.contentSessionId);
-  const summary = parseSummary(text, session.sessionDbId);
+
+  // Detect whether the most recent prompt was a summary request.
+  // If so, enable observation-to-summary coercion to prevent the infinite
+  // retry loop described in #1633.
+  const lastMessage = session.conversationHistory.at(-1);
+  const lastUserMessage = lastMessage?.role === 'user'
+    ? lastMessage
+    : session.conversationHistory.findLast(m => m.role === 'user') ?? null;
+  const summaryExpected = lastUserMessage?.content?.includes(SUMMARY_MODE_MARKER) ?? false;
+
+  const summary = parseSummary(text, session.sessionDbId, summaryExpected);

  if (
    text.trim() &&

@@ -107,18 +118,36 @@ export async function processAgentResponse(
      memorySessionId: session.memorySessionId
    });

+  // Label observations with the subagent identity captured from the claimed messages.
+  // Main-session messages leave these null, so main-session rows stay NULL in the DB.
+  const labeledObservations = observations.map(obs => ({
+    ...obs,
+    agent_type: session.pendingAgentType ?? null,
+    agent_id: session.pendingAgentId ?? null
+  }));

  // ATOMIC TRANSACTION: Store observations + summary ONCE
-  // Messages are already deleted from queue on claim, so no completion tracking needed
-  const result = sessionStore.storeObservations(
+  // Messages are already deleted from queue on claim, so no completion tracking needed.
+  // Wrap in try/finally so the subagent tracker clears even if storage throws —
+  // otherwise stale identity could leak into the next batch and mislabel rows.
+  // Expected invariant: all observations in a batch share the same agent context,
+  // because ResponseProcessor runs after a single agent-response cycle.
+  let result: ReturnType<typeof sessionStore.storeObservations>;
+  try {
+    result = sessionStore.storeObservations(
      session.memorySessionId,
      session.project,
-      observations,
+      labeledObservations,
      summaryForStore,
      session.lastPromptNumber,
      discoveryTokens,
      originalTimestamp ?? undefined,
      modelId
    );
+  } finally {
+    session.pendingAgentId = null;
+    session.pendingAgentType = null;
+  }

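The try/finally above guarantees the pending subagent identity is consumed exactly once, even when storage throws. The same pattern in miniature, with a stubbed store (names here are illustrative, not the real SessionStore API):

```typescript
// Minimal sketch: per-batch labels must be cleared even on a failed store,
// so a stale label cannot leak into the next batch.
interface LabelCarrier { pendingAgentId: string | null; }

function storeWithLabelReset(
  carrier: LabelCarrier,
  store: (agentId: string | null) => void
): void {
  try {
    store(carrier.pendingAgentId);
  } finally {
    // Clear unconditionally: runs whether store() returned or threw.
    carrier.pendingAgentId = null;
  }
}

const session: LabelCarrier = { pendingAgentId: 'researcher-1' };
try {
  storeWithLabelReset(session, () => { throw new Error('db down'); });
} catch { /* storage failed, but the label is still cleared */ }
// session.pendingAgentId === null
```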
  // Log storage result with IDs for end-to-end traceability
  logger.info('DB', `STORED | sessionDbId=${session.sessionDbId} | memorySessionId=${session.memorySessionId} | obsCount=${result.observationIds.length} | obsIds=[${result.observationIds.join(',')}] | summaryId=${result.summaryId || 'none'}`, {

@@ -130,6 +159,32 @@ export async function processAgentResponse(
  // to the Stop hook for silent-summary-loss detection (#1633)
  session.lastSummaryStored = result.summaryId !== null;

+  // Circuit breaker: track consecutive summary failures (#1633).
+  // Only evaluate when a summary was actually expected (summarize message was sent).
+  // Without this guard, the counter would increment on every normal observation
+  // response, tripping the breaker after 3 observations and permanently blocking
+  // summarization — reproducing the data-loss scenario this fix is meant to prevent.
+  if (summaryExpected) {
+    const skippedIntentionally = /<skip_summary\b/.test(text);
+    if (summaryForStore !== null) {
+      // Summary was present in the response — reset the failure counter
+      session.consecutiveSummaryFailures = 0;
+    } else if (skippedIntentionally) {
+      // Explicit <skip_summary/> is a valid protocol response — neither success
+      // nor failure. Leave the counter unchanged so we don't mask a bad run that
+      // happens to end on a skip, but also don't punish intentional skips.
+    } else {
+      // Summary was expected but none was stored — count as failure
+      session.consecutiveSummaryFailures += 1;
+      if (session.consecutiveSummaryFailures >= MAX_CONSECUTIVE_SUMMARY_FAILURES) {
+        logger.error('SESSION', `Circuit breaker: ${session.consecutiveSummaryFailures} consecutive summary failures — further summarize requests will be skipped (#1633)`, {
+          sessionId: session.sessionDbId,
+          contentSessionId: session.contentSessionId
+        });
+      }
+    }
+  }

  // CLAIM-CONFIRM: Now that storage succeeded, confirm all processing messages (delete from queue)
  // This is the critical step that prevents message loss on generator crash
  const pendingStore = sessionManager.getPendingMessageStore();

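The two halves of the #1633 circuit breaker, the counter update here and the skip-check in SessionManager, compose into a small state machine: success resets, an intentional skip holds, a failure increments, and at the threshold further summarize requests are refused. A self-contained sketch of that logic (names are illustrative; the real counter lives on ActiveSession and the real limit is MAX_CONSECUTIVE_SUMMARY_FAILURES):

```typescript
// Illustrative circuit-breaker state machine for summary outcomes.
const FAILURE_LIMIT = 3; // stand-in for MAX_CONSECUTIVE_SUMMARY_FAILURES

type SummaryOutcome = 'stored' | 'skipped' | 'missing';

function updateFailures(count: number, outcome: SummaryOutcome): number {
  if (outcome === 'stored') return 0;      // success resets the breaker
  if (outcome === 'skipped') return count; // explicit <skip_summary/> is neutral
  return count + 1;                        // missing summary counts as a failure
}

function breakerOpen(count: number): boolean {
  return count >= FAILURE_LIMIT;
}

let failures = 0;
for (const o of ['missing', 'missing', 'skipped', 'missing'] as SummaryOutcome[]) {
  failures = updateFailures(failures, o);
}
// failures === 3, so the breaker is open; a single 'stored' outcome would reset it.
```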
@@ -27,8 +27,9 @@ export abstract class BaseRouteHandler {
        result.catch(error => this.handleError(res, error as Error));
      }
    } catch (error) {
-      logger.error('HTTP', 'Route handler error', { path: req.path }, error as Error);
-      this.handleError(res, error as Error);
+      const normalizedError = error instanceof Error ? error : new Error(String(error));
+      logger.error('HTTP', 'Route handler error', { path: req.path }, normalizedError);
+      this.handleError(res, normalizedError);
    }
  };
}

@@ -7,6 +7,7 @@

import express, { Request, Response } from 'express';
import { BaseRouteHandler } from '../BaseRouteHandler.js';
+import { logger } from '../../../../utils/logger.js';
import { CorpusStore } from '../../knowledge/CorpusStore.js';
import { CorpusBuilder } from '../../knowledge/CorpusBuilder.js';
import { KnowledgeAgent } from '../../knowledge/KnowledgeAgent.js';

@@ -93,7 +94,10 @@ export class CorpusRoutes extends BaseRouteHandler {
    if (typeof value === 'string') {
      try {
        parsed = JSON.parse(value);
-      } catch {
+      } catch (parseError: unknown) {
+        if (parseError instanceof Error) {
+          logger.debug('HTTP', `${fieldName} is not valid JSON, treating as comma-separated string`, { value });
+        }
        parsed = value.split(',').map(part => part.trim()).filter(Boolean);
      }
    }
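The catch branch in CorpusRoutes above is a parse-with-fallback: treat the field as JSON first, and on failure degrade to a comma-separated list. The same behavior in isolation (the helper name is hypothetical):

```typescript
// Hypothetical helper mirroring the JSON-first, comma-separated-fallback parse.
function parseListField(value: string): unknown {
  try {
    return JSON.parse(value);
  } catch {
    // Not valid JSON: split on commas, trim each part, drop empty entries.
    return value.split(',').map(part => part.trim()).filter(Boolean);
  }
}

// '["a","b"]' parses as JSON; 'a, b, ,c' falls back to ['a', 'b', 'c'].
```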
Some files were not shown because too many files have changed in this diff.