c6f932988a
* MAESTRO: fix ChromaDB core issues — Python pinning, Windows paths, disable toggle, metadata sanitization, transport errors

  - Add --python version pinning to uvx args in both local and remote mode (fixes #1196, #1206, #1208)
  - Convert backslash paths to forward slashes for --data-dir on Windows (fixes #1199)
  - Add CLAUDE_MEM_CHROMA_ENABLED setting for SQLite-only fallback mode (fixes #707)
  - Sanitize metadata in addDocuments() to filter null/undefined/empty values (fixes #1183, #1188)
  - Wrap callTool() in try/catch for transport errors with auto-reconnect (fixes #1162)

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix data integrity — content-hash deduplication, project name collision, empty project guard, stuck isProcessing

  - Add SHA-256 content-hash deduplication to observations INSERT (store.ts, transactions.ts, SessionStore.ts)
  - Add content_hash column via migration 22 with backfill and index
  - Fix project name collision: getCurrentProjectName() now returns parent/basename
  - Guard against empty project string with cwd-derived fallback
  - Fix stuck isProcessing: hasAnyPendingWork() resets processing messages older than 5 minutes
  - Add 12 new tests covering all four fixes

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix hook lifecycle — stderr suppression, output isolation, conversation pollution prevention

  - Suppress process.stderr.write in hookCommand() to prevent Claude Code showing diagnostic output as error UI (#1181). Restores stderr in finally block for worker-continues case.
  - Convert console.error() to logger.warn()/error() in hook-command.ts and handlers/index.ts so all diagnostics route to log file instead of stderr.
  - Verified all 7 handlers return suppressOutput: true (prevents conversation pollution #598, #784).
  - Verified session-complete is a recognized event type (fixes #984).
  - Verified unknown event types return no-op handler with exit 0 (graceful degradation).
  - Added 10 new tests in tests/hook-lifecycle.test.ts covering event dispatch, adapter defaults, stderr suppression, and standard response constants.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix worker lifecycle — restart loop coordination, stale transport retry, ENOENT shutdown race

  - Add PID file mtime guard to prevent concurrent restart storms (#1145): isPidFileRecent() + touchPidFile() coordinate across sessions
  - Add transparent retry in ChromaMcpManager.callTool() on transport error — reconnects and retries once instead of failing (#1131)
  - Wrap getInstalledPluginVersion() with ENOENT/EBUSY handling (#1042)
  - Verified ChromaMcpManager.stop() already called on all shutdown paths

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Windows platform support — uvx.cmd spawn, PowerShell $_ elimination, windowsHide, FTS5 fallback

  - Route uvx spawn through cmd.exe /c on Windows since MCP SDK lacks shell:true (#1190, #1192, #1199)
  - Replace all PowerShell Where-Object {$_} pipelines with WQL -Filter server-side filtering (#1024, #1062)
  - Add windowsHide: true to all exec/spawn calls missing it to prevent console popups (#1048)
  - Add FTS5 runtime probe with graceful fallback when unavailable on Windows (#791)
  - Guard FTS5 table creation in migrations, SessionSearch, and SessionStore with try/catch

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix skills/ distribution — build-time verification and regression tests (#1187)

  Add post-build verification in build-hooks.js that fails if critical distribution files (skills, hooks, plugin manifest) are missing. Add 10 regression tests covering skill file presence, YAML frontmatter, hooks.json integrity, and package.json files field.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix MigrationRunner schema initialization (#979) — version conflict between parallel migration systems

  Root cause: old DatabaseManager migrations 1-7 shared the schema_versions table with MigrationRunner's 4-22, causing version number collisions (5=drop tables vs add column, 6=FTS5 vs prompt tracking, 7=discovery_tokens vs remove UNIQUE). initializeSchema() was gated behind maxApplied===0, so core tables were never created when old versions were present.

  Fixes:
  - initializeSchema() always creates core tables via CREATE TABLE IF NOT EXISTS
  - Migrations 5-7 check actual DB state (columns/constraints), not just version tracking
  - Crash-safe temp table rebuilds (DROP IF EXISTS _new before CREATE)
  - Added missing migration 21 (ON UPDATE CASCADE) to MigrationRunner
  - Added ON UPDATE CASCADE to FK definitions in initializeSchema()
  - All changes applied to both runner.ts and SessionStore.ts

  Tests: 13 new tests in migration-runner.test.ts covering fresh DB, idempotency, version conflicts, crash recovery, FK constraints, and data integrity.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix 21 test failures — stale mocks, outdated assertions, missing OpenClaw guards

  - Server tests (12): Added missing workerPath and getAiStatus to ServerOptions mocks after interface expansion.
  - ChromaSync tests (3): Updated to verify transport cleanup in ChromaMcpManager after architecture refactor.
  - OpenClaw (2): Added memory_ tool skipping and response truncation to prevent recursive loops and oversized payloads.
  - MarkdownFormatter (2): Updated assertions to match current output.
  - SettingsDefaultsManager (1): Used correct default key for getBool test.
  - Logger standards (1): Excluded CLI transcript command from background service check.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Codex CLI compatibility (#744) — session_id fallbacks, unknown platform tolerance, undefined guard

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Cursor IDE integration (#838, #1049) — adapter field fallbacks, tolerant session-init validation

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix /api/logs OOM (#1203) — tail-read replaces full-file readFileSync

  Replace readFileSync (loads entire file into memory) with readLastLines(), which reads only from the end of the file in expanding chunks (64KB → 10MB cap). Prevents OOM on large log files while preserving the same API response shape.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Settings CORS error (#1029) — explicit methods and allowedHeaders in CORS config

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: add session custom_title for agent attribution (#1213) — migration 23, endpoint + store support

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: prevent CLAUDE.md/AGENTS.md writes inside .git/ directories (#1165)

  Add .git path guard to all 4 write sites to prevent ref corruption when paths resolve inside .git internals.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix plugin disabled state not respected (#781) — early exit check in all hook entry points

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix UserPromptSubmit context re-injection on every turn (#1079) — contextInjected session flag

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix stale AbortController queue stall (#1099) — lastGeneratorActivity tracking + 30s timeout

  Three-layer fix:
  1. Added lastGeneratorActivity timestamp to ActiveSession, updated by processAgentResponse (all agents), getMessageIterator (queue yields), and startGeneratorWithProvider (generator launch)
  2. Added stale generator detection in ensureGeneratorRunning — if no activity for >30s, aborts stale controller, resets state, restarts
  3. Added AbortSignal.timeout(30000) in deleteSession to prevent indefinite hang when awaiting a stuck generator promise

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
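The /api/logs fix above describes a tail-read that scans from the end of the file in expanding chunks. A minimal sketch of that technique follows; the function name `readLastLines` and the 64KB/10MB bounds come from the commit description, but this exact signature and behavior are assumptions, not the repository's implementation:

```typescript
import { openSync, readSync, fstatSync, closeSync } from "node:fs";

// Sketch of an expanding-chunk tail read: start with a small window at the
// end of the file and double it until enough lines are found or a cap is hit.
// Never loads the whole file unless the file is smaller than the window.
function readLastLines(path: string, maxLines: number, cap = 10 * 1024 * 1024): string[] {
  const fd = openSync(path, "r");
  try {
    const size = fstatSync(fd).size;
    let chunk = 64 * 1024; // initial tail window
    while (true) {
      const readLen = Math.min(chunk, cap, size);
      const buf = Buffer.alloc(readLen);
      readSync(fd, buf, 0, readLen, size - readLen);
      // Drop empty strings; also discards a possibly partial first line below.
      const lines = buf.toString("utf8").split("\n").filter((l) => l.length > 0);
      // Stop when we have more than enough lines (the extra first line may be
      // cut mid-line, so slice discards it), or the window covers the whole
      // file, or the cap is reached (best effort).
      if (lines.length > maxLines || readLen === size || readLen === cap) {
        return lines.slice(-maxLines);
      }
      chunk *= 2; // expand the tail window and retry
    }
  } finally {
    closeSync(fd);
  }
}
```

The key property, matching the commit's claim, is that memory use is bounded by the cap rather than by the log file's size.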
255 lines
8.7 KiB
TypeScript
/**
 * Cross-boundary database transactions
 *
 * This module contains atomic transactions that span multiple domains
 * (observations, summaries, pending messages). These functions ensure
 * data consistency across domain boundaries.
 */

import { Database } from 'bun:sqlite';
import { logger } from '../../utils/logger.js';
import type { ObservationInput } from './observations/types.js';
import type { SummaryInput } from './summaries/types.js';
import { computeObservationContentHash, findDuplicateObservation } from './observations/store.js';

/**
 * Result from storeObservations / storeObservationsAndMarkComplete transaction
 */
export interface StoreObservationsResult {
  observationIds: number[];
  summaryId: number | null;
  createdAtEpoch: number;
}

// Legacy alias for backwards compatibility
export type StoreAndMarkCompleteResult = StoreObservationsResult;

/**
 * ATOMIC: Store observations + summary + mark pending message as processed
 *
 * This function wraps observation storage, summary storage, and message completion
 * in a single database transaction to prevent race conditions. If the worker crashes
 * during processing, either all operations succeed together or all fail together.
 *
 * This fixes the observation duplication bug where observations were stored but
 * the message wasn't marked complete, causing reprocessing on crash recovery.
 *
 * @param db - Database instance
 * @param memorySessionId - SDK memory session ID
 * @param project - Project name
 * @param observations - Array of observations to store (can be empty)
 * @param summary - Optional summary to store
 * @param messageId - Pending message ID to mark as processed
 * @param promptNumber - Optional prompt number
 * @param discoveryTokens - Discovery tokens count
 * @param overrideTimestampEpoch - Optional override timestamp
 * @returns Object with observation IDs, optional summary ID, and timestamp
 */
export function storeObservationsAndMarkComplete(
  db: Database,
  memorySessionId: string,
  project: string,
  observations: ObservationInput[],
  summary: SummaryInput | null,
  messageId: number,
  promptNumber?: number,
  discoveryTokens: number = 0,
  overrideTimestampEpoch?: number
): StoreAndMarkCompleteResult {
  // Use override timestamp if provided
  const timestampEpoch = overrideTimestampEpoch ?? Date.now();
  const timestampIso = new Date(timestampEpoch).toISOString();

  // Create transaction that wraps all operations
  const storeAndMarkTx = db.transaction(() => {
    const observationIds: number[] = [];

    // 1. Store all observations (with content-hash deduplication)
    const obsStmt = db.prepare(`
      INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
         files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    `);

    for (const observation of observations) {
      const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
      const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
      if (existing) {
        observationIds.push(existing.id);
        continue;
      }

      const result = obsStmt.run(
        memorySessionId,
        project,
        observation.type,
        observation.title,
        observation.subtitle,
        JSON.stringify(observation.facts),
        observation.narrative,
        JSON.stringify(observation.concepts),
        JSON.stringify(observation.files_read),
        JSON.stringify(observation.files_modified),
        promptNumber || null,
        discoveryTokens,
        contentHash,
        timestampIso,
        timestampEpoch
      );
      observationIds.push(Number(result.lastInsertRowid));
    }

    // 2. Store summary if provided
    let summaryId: number | null = null;
    if (summary) {
      const summaryStmt = db.prepare(`
        INSERT INTO session_summaries
          (memory_session_id, project, request, investigated, learned, completed,
           next_steps, notes, prompt_number, discovery_tokens, created_at, created_at_epoch)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
      `);

      const result = summaryStmt.run(
        memorySessionId,
        project,
        summary.request,
        summary.investigated,
        summary.learned,
        summary.completed,
        summary.next_steps,
        summary.notes,
        promptNumber || null,
        discoveryTokens,
        timestampIso,
        timestampEpoch
      );
      summaryId = Number(result.lastInsertRowid);
    }

    // 3. Mark pending message as processed
    // This UPDATE is part of the same transaction, so if it fails,
    // observations and summary will be rolled back
    const updateStmt = db.prepare(`
      UPDATE pending_messages
      SET
        status = 'processed',
        completed_at_epoch = ?,
        tool_input = NULL,
        tool_response = NULL
      WHERE id = ? AND status = 'processing'
    `);
    updateStmt.run(timestampEpoch, messageId);

    return { observationIds, summaryId, createdAtEpoch: timestampEpoch };
  });

  // Execute the transaction and return results
  return storeAndMarkTx();
}

/**
 * ATOMIC: Store observations + summary (no message tracking)
 *
 * Simplified version for use with the claim-and-delete queue pattern.
 * Messages are deleted from the queue immediately on claim, so there's no
 * message completion to track. This just stores observations and summary.
 *
 * @param db - Database instance
 * @param memorySessionId - SDK memory session ID
 * @param project - Project name
 * @param observations - Array of observations to store (can be empty)
 * @param summary - Optional summary to store
 * @param promptNumber - Optional prompt number
 * @param discoveryTokens - Discovery tokens count
 * @param overrideTimestampEpoch - Optional override timestamp
 * @returns Object with observation IDs, optional summary ID, and timestamp
 */
export function storeObservations(
  db: Database,
  memorySessionId: string,
  project: string,
  observations: ObservationInput[],
  summary: SummaryInput | null,
  promptNumber?: number,
  discoveryTokens: number = 0,
  overrideTimestampEpoch?: number
): StoreObservationsResult {
  // Use override timestamp if provided
  const timestampEpoch = overrideTimestampEpoch ?? Date.now();
  const timestampIso = new Date(timestampEpoch).toISOString();

  // Create transaction that wraps all operations
  const storeTx = db.transaction(() => {
    const observationIds: number[] = [];

    // 1. Store all observations (with content-hash deduplication)
    const obsStmt = db.prepare(`
      INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
         files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    `);

    for (const observation of observations) {
      const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
      const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
      if (existing) {
        observationIds.push(existing.id);
        continue;
      }

      const result = obsStmt.run(
        memorySessionId,
        project,
        observation.type,
        observation.title,
        observation.subtitle,
        JSON.stringify(observation.facts),
        observation.narrative,
        JSON.stringify(observation.concepts),
        JSON.stringify(observation.files_read),
        JSON.stringify(observation.files_modified),
        promptNumber || null,
        discoveryTokens,
        contentHash,
        timestampIso,
        timestampEpoch
      );
      observationIds.push(Number(result.lastInsertRowid));
    }

    // 2. Store summary if provided
    let summaryId: number | null = null;
    if (summary) {
      const summaryStmt = db.prepare(`
        INSERT INTO session_summaries
          (memory_session_id, project, request, investigated, learned, completed,
           next_steps, notes, prompt_number, discovery_tokens, created_at, created_at_epoch)
        VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
      `);

      const result = summaryStmt.run(
        memorySessionId,
        project,
        summary.request,
        summary.investigated,
        summary.learned,
        summary.completed,
        summary.next_steps,
        summary.notes,
        promptNumber || null,
        discoveryTokens,
        timestampIso,
        timestampEpoch
      );
      summaryId = Number(result.lastInsertRowid);
    }

    return { observationIds, summaryId, createdAtEpoch: timestampEpoch };
  });

  // Execute the transaction and return results
  return storeTx();
}
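Both transactions above dedupe via `computeObservationContentHash`, which is imported from `./observations/store.js` but not shown here. The commit message only says it is a SHA-256 content hash over observation fields, so the following is a hypothetical sketch of what such a helper could look like; the real helper may hash different fields or use a different separator:

```typescript
import { createHash } from "node:crypto";

// Hypothetical sketch of a content-hash helper for observation dedup:
// SHA-256 over the session ID, title, and narrative, joined with a NUL
// separator so field boundaries cannot collide. The actual implementation
// in observations/store.ts is not reproduced here.
function computeObservationContentHashSketch(
  memorySessionId: string,
  title: string,
  narrative: string | null
): string {
  return createHash("sha256")
    .update(`${memorySessionId}\u0000${title}\u0000${narrative ?? ""}`)
    .digest("hex"); // 64 lowercase hex characters
}
```

With a hash like this, the INSERT loop can cheaply detect that an identical observation was already stored (e.g. after crash-recovery reprocessing) and reuse its row ID instead of inserting a duplicate.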