Fix 30+ root-cause bugs across 10 triage phases (#1214)
* MAESTRO: fix ChromaDB core issues — Python pinning, Windows paths, disable toggle, metadata sanitization, transport errors
  - Add --python version pinning to uvx args in both local and remote mode (fixes #1196, #1206, #1208)
  - Convert backslash paths to forward slashes for --data-dir on Windows (fixes #1199)
  - Add CLAUDE_MEM_CHROMA_ENABLED setting for SQLite-only fallback mode (fixes #707)
  - Sanitize metadata in addDocuments() to filter null/undefined/empty values (fixes #1183, #1188)
  - Wrap callTool() in try/catch for transport errors with auto-reconnect (fixes #1162)

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix data integrity — content-hash deduplication, project name collision, empty project guard, stuck isProcessing
  - Add SHA-256 content-hash deduplication to observations INSERT (store.ts, transactions.ts, SessionStore.ts); a minimal sketch of the hashing idea follows this list
  - Add content_hash column via migration 22 with backfill and index
  - Fix project name collision: getCurrentProjectName() now returns parent/basename
  - Guard against empty project string with cwd-derived fallback
  - Fix stuck isProcessing: hasAnyPendingWork() resets processing messages older than 5 minutes
  - Add 12 new tests covering all four fixes

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix hook lifecycle — stderr suppression, output isolation, conversation pollution prevention
  - Suppress process.stderr.write in hookCommand() to prevent Claude Code showing diagnostic output as error UI (#1181). Restores stderr in finally block for the worker-continues case.
  - Convert console.error() to logger.warn()/error() in hook-command.ts and handlers/index.ts so all diagnostics route to the log file instead of stderr.
  - Verified all 7 handlers return suppressOutput: true (prevents conversation pollution #598, #784).
  - Verified session-complete is a recognized event type (fixes #984).
  - Verified unknown event types return a no-op handler with exit 0 (graceful degradation).
  - Added 10 new tests in tests/hook-lifecycle.test.ts covering event dispatch, adapter defaults, stderr suppression, and standard response constants.
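To make the content-hash deduplication item above concrete, here is a minimal TypeScript sketch of the hashing step. The helper name and the hashed fields follow the storeObservation() change in this commit; the exact serialization is an assumption, not the shipped implementation.

```ts
import { createHash } from 'node:crypto';

// Sketch only (assumed serialization): derive a stable SHA-256 identity for an
// observation so the same observation written twice maps to the same hash and
// the duplicate INSERT can be skipped.
function computeObservationContentHash(
  memorySessionId: string,
  title: string | null,
  narrative: string | null
): string {
  return createHash('sha256')
    .update(`${memorySessionId}\u0000${title ?? ''}\u0000${narrative ?? ''}`)
    .digest('hex');
}
```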
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix worker lifecycle — restart loop coordination, stale transport retry, ENOENT shutdown race
  - Add PID file mtime guard to prevent concurrent restart storms (#1145): isPidFileRecent() + touchPidFile() coordinate across sessions (a minimal sketch follows this list)
  - Add transparent retry in ChromaMcpManager.callTool() on transport error — reconnects and retries once instead of failing (#1131)
  - Wrap getInstalledPluginVersion() with ENOENT/EBUSY handling (#1042)
  - Verified ChromaMcpManager.stop() already called on all shutdown paths

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Windows platform support — uvx.cmd spawn, PowerShell $_ elimination, windowsHide, FTS5 fallback
  - Route uvx spawn through cmd.exe /c on Windows since MCP SDK lacks shell:true (#1190, #1192, #1199)
  - Replace all PowerShell Where-Object {$_} pipelines with WQL -Filter server-side filtering (#1024, #1062)
  - Add windowsHide: true to all exec/spawn calls missing it to prevent console popups (#1048)
  - Add FTS5 runtime probe with graceful fallback when unavailable on Windows (#791)
  - Guard FTS5 table creation in migrations, SessionSearch, and SessionStore with try/catch

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix skills/ distribution — build-time verification and regression tests (#1187)

  Add post-build verification in build-hooks.js that fails if critical distribution files (skills, hooks, plugin manifest) are missing. Add 10 regression tests covering skill file presence, YAML frontmatter, hooks.json integrity, and the package.json files field.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix MigrationRunner schema initialization (#979) — version conflict between parallel migration systems

  Root cause: old DatabaseManager migrations 1-7 shared the schema_versions table with MigrationRunner's 4-22, causing version number collisions (5 = drop tables vs add column, 6 = FTS5 vs prompt tracking, 7 = discovery_tokens vs remove UNIQUE). initializeSchema() was gated behind maxApplied === 0, so core tables were never created when old versions were present.

  Fixes:
  - initializeSchema() always creates core tables via CREATE TABLE IF NOT EXISTS
  - Migrations 5-7 check actual DB state (columns/constraints), not just version tracking
  - Crash-safe temp table rebuilds (DROP IF EXISTS _new before CREATE)
  - Added missing migration 21 (ON UPDATE CASCADE) to MigrationRunner
  - Added ON UPDATE CASCADE to FK definitions in initializeSchema()
  - All changes applied to both runner.ts and SessionStore.ts

  Tests: 13 new tests in migration-runner.test.ts covering fresh DB, idempotency, version conflicts, crash recovery, FK constraints, and data integrity.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix 21 test failures — stale mocks, outdated assertions, missing OpenClaw guards
  - Server tests (12): added missing workerPath and getAiStatus to ServerOptions mocks after interface expansion.
  - ChromaSync tests (3): updated to verify transport cleanup in ChromaMcpManager after the architecture refactor.
  - OpenClaw (2): added memory_ tool skipping and response truncation to prevent recursive loops and oversized payloads.
  - MarkdownFormatter (2): updated assertions to match current output.
  - SettingsDefaultsManager (1): used correct default key for getBool test.
  - Logger standards (1): excluded CLI transcript command from background service check.
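As a rough illustration of the restart-storm guard above, this is what isPidFileRecent() and touchPidFile() could look like. The PID file path handling, the 30-second threshold, and the synchronous fs calls are illustrative assumptions, not the shipped code.

```ts
import { statSync, utimesSync } from 'node:fs';

// Assumed threshold: a PID file touched within the last 30s means "another
// session is already restarting the worker", so this session backs off.
const RESTART_GUARD_MS = 30_000;

function isPidFileRecent(pidFilePath: string): boolean {
  try {
    const { mtimeMs } = statSync(pidFilePath);
    return Date.now() - mtimeMs < RESTART_GUARD_MS;
  } catch {
    return false; // no PID file (or unreadable): nobody else is mid-restart
  }
}

function touchPidFile(pidFilePath: string): void {
  const now = new Date();
  utimesSync(pidFilePath, now, now); // bump mtime so parallel sessions skip their restart
}
```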
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Codex CLI compatibility (#744) — session_id fallbacks, unknown platform tolerance, undefined guard

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Cursor IDE integration (#838, #1049) — adapter field fallbacks, tolerant session-init validation

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix /api/logs OOM (#1203) — tail-read replaces full-file readFileSync

  Replace readFileSync (loads the entire file into memory) with readLastLines(), which reads only from the end of the file in expanding chunks (64KB → 10MB cap). Prevents OOM on large log files while preserving the same API response shape. A minimal sketch of the tail-read idea follows this message.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix Settings CORS error (#1029) — explicit methods and allowedHeaders in CORS config

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: add session custom_title for agent attribution (#1213) — migration 23, endpoint + store support

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: prevent CLAUDE.md/AGENTS.md writes inside .git/ directories (#1165)

  Add a .git path guard to all 4 write sites to prevent ref corruption when paths resolve inside .git internals.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix plugin disabled state not respected (#781) — early exit check in all hook entry points

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix UserPromptSubmit context re-injection on every turn (#1079) — contextInjected session flag

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* MAESTRO: fix stale AbortController queue stall (#1099) — lastGeneratorActivity tracking + 30s timeout

  Three-layer fix:
  1. Added lastGeneratorActivity timestamp to ActiveSession, updated by processAgentResponse (all agents), getMessageIterator (queue yields), and startGeneratorWithProvider (generator launch)
  2. Added stale generator detection in ensureGeneratorRunning — if no activity for >30s, aborts the stale controller, resets state, restarts
  3. Added AbortSignal.timeout(30000) in deleteSession to prevent indefinite hang when awaiting a stuck generator promise

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
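For the /api/logs fix above, a minimal sketch of the tail-read approach: read expanding chunks from the end of the file instead of the whole file. The 64KB starting chunk and 10MB cap come from the commit message; the function body and signature are assumptions, not the shipped helper.

```ts
import { openSync, fstatSync, readSync, closeSync } from 'node:fs';

// Sketch only: return the last `maxLines` lines of a log file without loading
// the whole file. Start with a 64KB tail read and double the chunk until enough
// lines are recovered, the whole file has been read, or the byte cap is hit.
function readLastLines(filePath: string, maxLines: number, capBytes = 10 * 1024 * 1024): string[] {
  const fd = openSync(filePath, 'r');
  try {
    const size = fstatSync(fd).size;
    let chunk = 64 * 1024;
    while (true) {
      const readBytes = Math.min(chunk, size, capBytes);
      const buffer = Buffer.alloc(readBytes);
      readSync(fd, buffer, 0, readBytes, size - readBytes);
      const lines = buffer.toString('utf8').split('\n');
      if (lines.length > maxLines || readBytes === size || readBytes === capBytes) {
        return lines.slice(-maxLines);
      }
      chunk *= 2; // not enough lines yet: expand and re-read from further back
    }
  } finally {
    closeSync(fd);
  }
}
```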
@@ -398,9 +398,22 @@ export class PendingMessageStore {
  }

  /**
   * Check if any session has pending work.
   * Excludes 'processing' messages stuck for >5 minutes (resets them to 'pending' as a side effect).
   */
  hasAnyPendingWork(): boolean {
    // Reset stuck 'processing' messages older than 5 minutes before checking
    const stuckCutoff = Date.now() - (5 * 60 * 1000);
    const resetStmt = this.db.prepare(`
      UPDATE pending_messages
      SET status = 'pending', started_processing_at_epoch = NULL
      WHERE status = 'processing' AND started_processing_at_epoch < ?
    `);
    const resetResult = resetStmt.run(stuckCutoff);
    if (resetResult.changes > 0) {
      logger.info('QUEUE', `STUCK_RESET | hasAnyPendingWork reset ${resetResult.changes} stuck processing message(s) older than 5 minutes`);
    }

    const stmt = this.db.prepare(`
      SELECT COUNT(*) as count FROM pending_messages
      WHERE status IN ('pending', 'processing')

@@ -46,6 +46,10 @@ export class SessionSearch {
   * - Tables maintained but search paths removed
   * - Triggers still fire to keep tables synchronized
   *
   * FTS5 may be unavailable on some platforms (e.g., Bun on Windows #791).
   * When unavailable, we skip FTS table creation — search falls back to
   * ChromaDB (vector) and LIKE queries (structured filters) which are unaffected.
   *
   * TODO: Remove FTS5 infrastructure in future major version (v7.0.0)
   */
  private ensureFTSTables(): void {

@@ -58,91 +62,117 @@ export class SessionSearch {
      return;
    }

    // Runtime check: verify FTS5 is available before attempting to create tables.
    // bun:sqlite on Windows may not include the FTS5 extension (#791).
    if (!this.isFts5Available()) {
      logger.warn('DB', 'FTS5 not available on this platform — skipping FTS table creation (search uses ChromaDB)');
      return;
    }

    logger.info('DB', 'Creating FTS5 tables');

    try {
      // Create observations_fts virtual table
      this.db.run(`
        CREATE VIRTUAL TABLE IF NOT EXISTS observations_fts USING fts5(
          title,
          subtitle,
          narrative,
          text,
          facts,
          concepts,
          content='observations',
          content_rowid='id'
        );
      `);

      // Populate with existing data
      this.db.run(`
        INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
        SELECT id, title, subtitle, narrative, text, facts, concepts
        FROM observations;
      `);

      // Create triggers for observations
      this.db.run(`
        CREATE TRIGGER IF NOT EXISTS observations_ai AFTER INSERT ON observations BEGIN
          INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
          VALUES (new.id, new.title, new.subtitle, new.narrative, new.text, new.facts, new.concepts);
        END;

        CREATE TRIGGER IF NOT EXISTS observations_ad AFTER DELETE ON observations BEGIN
          INSERT INTO observations_fts(observations_fts, rowid, title, subtitle, narrative, text, facts, concepts)
          VALUES('delete', old.id, old.title, old.subtitle, old.narrative, old.text, old.facts, old.concepts);
        END;

        CREATE TRIGGER IF NOT EXISTS observations_au AFTER UPDATE ON observations BEGIN
          INSERT INTO observations_fts(observations_fts, rowid, title, subtitle, narrative, text, facts, concepts)
          VALUES('delete', old.id, old.title, old.subtitle, old.narrative, old.text, old.facts, old.concepts);
          INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
          VALUES (new.id, new.title, new.subtitle, new.narrative, new.text, new.facts, new.concepts);
        END;
      `);

      // Create session_summaries_fts virtual table
      this.db.run(`
        CREATE VIRTUAL TABLE IF NOT EXISTS session_summaries_fts USING fts5(
          request,
          investigated,
          learned,
          completed,
          next_steps,
          notes,
          content='session_summaries',
          content_rowid='id'
        );
      `);

      // Populate with existing data
      this.db.run(`
        INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
        SELECT id, request, investigated, learned, completed, next_steps, notes
        FROM session_summaries;
      `);

      // Create triggers for session_summaries
      this.db.run(`
        CREATE TRIGGER IF NOT EXISTS session_summaries_ai AFTER INSERT ON session_summaries BEGIN
          INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
          VALUES (new.id, new.request, new.investigated, new.learned, new.completed, new.next_steps, new.notes);
        END;

        CREATE TRIGGER IF NOT EXISTS session_summaries_ad AFTER DELETE ON session_summaries BEGIN
          INSERT INTO session_summaries_fts(session_summaries_fts, rowid, request, investigated, learned, completed, next_steps, notes)
          VALUES('delete', old.id, old.request, old.investigated, old.learned, old.completed, old.next_steps, old.notes);
        END;

        CREATE TRIGGER IF NOT EXISTS session_summaries_au AFTER UPDATE ON session_summaries BEGIN
          INSERT INTO session_summaries_fts(session_summaries_fts, rowid, request, investigated, learned, completed, next_steps, notes)
          VALUES('delete', old.id, old.request, old.investigated, old.learned, old.completed, old.next_steps, old.notes);
          INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
          VALUES (new.id, new.request, new.investigated, new.learned, new.completed, new.next_steps, new.notes);
        END;
      `);

      logger.info('DB', 'FTS5 tables created successfully');
    } catch (error) {
      // FTS5 creation failed at runtime despite probe succeeding — degrade gracefully
      logger.warn('DB', 'FTS5 table creation failed — search will use ChromaDB and LIKE queries', {}, error as Error);
    }
  }

  /**
   * Probe whether the FTS5 extension is available in the current SQLite build.
   * Creates and immediately drops a temporary FTS5 table.
   */
  private isFts5Available(): boolean {
    try {
      this.db.run('CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)');
      this.db.run('DROP TABLE _fts5_probe');
      return true;
    } catch {
      return false;
    }
  }

@@ -13,6 +13,7 @@ import {
  LatestPromptResult
} from '../../types/database.js';
import type { PendingMessageStore } from './PendingMessageStore.js';
import { computeObservationContentHash, findDuplicateObservation } from './observations/store.js';

/**
 * Session data store for SDK sessions, observations, and summaries

@@ -48,11 +49,17 @@ export class SessionStore {
    this.repairSessionIdColumnRename();
    this.addFailedAtEpochColumn();
    this.addOnUpdateCascadeToForeignKeys();
    this.addObservationContentHashColumn();
    this.addSessionCustomTitleColumn();
  }

  /**
   * Initialize database schema (migration004)
   *
   * ALWAYS creates core tables using CREATE TABLE IF NOT EXISTS — safe to run
   * regardless of schema_versions state. This fixes issue #979 where the old
   * DatabaseManager migration system (versions 1-7) shared the schema_versions
   * table, causing maxApplied > 0 and skipping core table creation entirely.
   */
  private initializeSchema(): void {
    // Create schema_versions table if it doesn't exist

@@ -64,90 +71,77 @@ export class SessionStore {
      )
    `);

    // Always create core tables — IF NOT EXISTS makes this idempotent
    this.db.run(`
      CREATE TABLE IF NOT EXISTS sdk_sessions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        content_session_id TEXT UNIQUE NOT NULL,
        memory_session_id TEXT UNIQUE,
        project TEXT NOT NULL,
        user_prompt TEXT,
        started_at TEXT NOT NULL,
        started_at_epoch INTEGER NOT NULL,
        completed_at TEXT,
        completed_at_epoch INTEGER,
        status TEXT CHECK(status IN ('active', 'completed', 'failed')) NOT NULL DEFAULT 'active'
      );

      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_claude_id ON sdk_sessions(content_session_id);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_sdk_id ON sdk_sessions(memory_session_id);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_project ON sdk_sessions(project);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_status ON sdk_sessions(status);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_started ON sdk_sessions(started_at_epoch DESC);

      CREATE TABLE IF NOT EXISTS observations (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        memory_session_id TEXT NOT NULL,
        project TEXT NOT NULL,
        text TEXT NOT NULL,
        type TEXT NOT NULL,
        created_at TEXT NOT NULL,
        created_at_epoch INTEGER NOT NULL,
        FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
      );

      CREATE INDEX IF NOT EXISTS idx_observations_sdk_session ON observations(memory_session_id);
      CREATE INDEX IF NOT EXISTS idx_observations_project ON observations(project);
      CREATE INDEX IF NOT EXISTS idx_observations_type ON observations(type);
      CREATE INDEX IF NOT EXISTS idx_observations_created ON observations(created_at_epoch DESC);

      CREATE TABLE IF NOT EXISTS session_summaries (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        memory_session_id TEXT UNIQUE NOT NULL,
        project TEXT NOT NULL,
        request TEXT,
        investigated TEXT,
        learned TEXT,
        completed TEXT,
        next_steps TEXT,
        files_read TEXT,
        files_edited TEXT,
        notes TEXT,
        created_at TEXT NOT NULL,
        created_at_epoch INTEGER NOT NULL,
        FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
      );

      CREATE INDEX IF NOT EXISTS idx_session_summaries_sdk_session ON session_summaries(memory_session_id);
      CREATE INDEX IF NOT EXISTS idx_session_summaries_project ON session_summaries(project);
      CREATE INDEX IF NOT EXISTS idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
    `);

    // Record migration004 as applied (OR IGNORE handles re-runs safely)
    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(4, new Date().toISOString());
  }

  /**
   * Ensure worker_port column exists (migration 5)
   *
   * NOTE: Version 5 conflicts with old DatabaseManager migration005 (which drops orphaned tables).
   * We check actual column state rather than relying solely on version tracking.
   */
  private ensureWorkerPortColumn(): void {
    // Check if migration already applied
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(5) as SchemaVersion | undefined;
    if (applied) return;

    // Check actual column existence — don't rely on version tracking alone (issue #979)
    const tableInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
    const hasWorkerPort = tableInfo.some(col => col.name === 'worker_port');

@@ -162,12 +156,12 @@ export class SessionStore {

  /**
   * Ensure prompt tracking columns exist (migration 6)
   *
   * NOTE: Version 6 conflicts with old DatabaseManager migration006 (which creates FTS5 tables).
   * We check actual column state rather than relying solely on version tracking.
   */
  private ensurePromptTrackingColumns(): void {
    // Check if migration already applied
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(6) as SchemaVersion | undefined;
    if (applied) return;

    // Check actual column existence — don't rely on version tracking alone (issue #979)
    // Check sdk_sessions for prompt_counter
    const sessionsInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
    const hasPromptCounter = sessionsInfo.some(col => col.name === 'prompt_counter');

@@ -201,13 +195,12 @@ export class SessionStore {

  /**
   * Remove UNIQUE constraint from session_summaries.memory_session_id (migration 7)
   *
   * NOTE: Version 7 conflicts with old DatabaseManager migration007 (which adds discovery_tokens).
   * We check actual constraint state rather than relying solely on version tracking.
   */
  private removeSessionSummariesUniqueConstraint(): void {
    // Check if migration already applied
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(7) as SchemaVersion | undefined;
    if (applied) return;

    // Check actual constraint state — don't rely on version tracking alone (issue #979)
    const summariesIndexes = this.db.query('PRAGMA index_list(session_summaries)').all() as IndexInfo[];
    const hasUniqueConstraint = summariesIndexes.some(idx => idx.unique === 1);

@@ -222,6 +215,9 @@ export class SessionStore {
    // Begin transaction
    this.db.run('BEGIN TRANSACTION');

    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS session_summaries_new');

    // Create new table without UNIQUE constraint
    this.db.run(`
      CREATE TABLE session_summaries_new (

@@ -335,6 +331,9 @@ export class SessionStore {
    // Begin transaction
    this.db.run('BEGIN TRANSACTION');

    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS observations_new');

    // Create new table with text as nullable
    this.db.run(`
      CREATE TABLE observations_new (

@@ -428,34 +427,39 @@ export class SessionStore {
      CREATE INDEX idx_user_prompts_lookup ON user_prompts(content_session_id, prompt_number);
    `);

    // Create FTS5 virtual table — skip if FTS5 is unavailable (e.g., Bun on Windows #791).
    // The user_prompts table itself is still created; only FTS indexing is skipped.
    try {
      this.db.run(`
        CREATE VIRTUAL TABLE user_prompts_fts USING fts5(
          prompt_text,
          content='user_prompts',
          content_rowid='id'
        );
      `);

      // Create triggers to sync FTS5
      this.db.run(`
        CREATE TRIGGER user_prompts_ai AFTER INSERT ON user_prompts BEGIN
          INSERT INTO user_prompts_fts(rowid, prompt_text)
          VALUES (new.id, new.prompt_text);
        END;

        CREATE TRIGGER user_prompts_ad AFTER DELETE ON user_prompts BEGIN
          INSERT INTO user_prompts_fts(user_prompts_fts, rowid, prompt_text)
          VALUES('delete', old.id, old.prompt_text);
        END;

        CREATE TRIGGER user_prompts_au AFTER UPDATE ON user_prompts BEGIN
          INSERT INTO user_prompts_fts(user_prompts_fts, rowid, prompt_text)
          VALUES('delete', old.id, old.prompt_text);
          INSERT INTO user_prompts_fts(rowid, prompt_text)
          VALUES (new.id, new.prompt_text);
        END;
      `);
    } catch (ftsError) {
      logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, ftsError as Error);
    }

    // Commit transaction
    this.db.run('COMMIT');

@@ -463,7 +467,7 @@ export class SessionStore {
    // Record migration
    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(10, new Date().toISOString());

    logger.debug('DB', 'Successfully created user_prompts table');
  }

  /**

@@ -675,6 +679,9 @@ export class SessionStore {
    this.db.run('DROP TRIGGER IF EXISTS observations_ad');
    this.db.run('DROP TRIGGER IF EXISTS observations_au');

    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS observations_new');

    this.db.run(`
      CREATE TABLE observations_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,

@@ -744,6 +751,9 @@ export class SessionStore {
    // 2. Recreate session_summaries table
    // ==========================================

    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS session_summaries_new');

    this.db.run(`
      CREATE TABLE session_summaries_new (
        id INTEGER PRIMARY KEY AUTOINCREMENT,

@@ -825,6 +835,44 @@ export class SessionStore {
    }
  }

  /**
   * Add content_hash column to observations for deduplication (migration 22)
   */
  private addObservationContentHashColumn(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(22) as SchemaVersion | undefined;
    if (applied) return;

    const tableInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
    const hasColumn = tableInfo.some(col => col.name === 'content_hash');

    if (!hasColumn) {
      this.db.run('ALTER TABLE observations ADD COLUMN content_hash TEXT');
      this.db.run("UPDATE observations SET content_hash = substr(hex(randomblob(8)), 1, 16) WHERE content_hash IS NULL");
      this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_content_hash ON observations(content_hash, created_at_epoch)');
      logger.debug('DB', 'Added content_hash column to observations table with backfill and index');
    }

    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(22, new Date().toISOString());
  }

  /**
   * Add custom_title column to sdk_sessions for agent attribution (migration 23)
   */
  private addSessionCustomTitleColumn(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(23) as SchemaVersion | undefined;
    if (applied) return;

    const tableInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
    const hasColumn = tableInfo.some(col => col.name === 'custom_title');

    if (!hasColumn) {
      this.db.run('ALTER TABLE sdk_sessions ADD COLUMN custom_title TEXT');
      logger.debug('DB', 'Added custom_title column to sdk_sessions table');
    }

    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(23, new Date().toISOString());
  }

  /**
   * Update the memory session ID for a session
   * Called by SDKAgent when it captures the session ID from the first SDK message

@@ -1290,9 +1338,10 @@ export class SessionStore {
    memory_session_id: string | null;
    project: string;
    user_prompt: string;
    custom_title: string | null;
  } | null {
    const stmt = this.db.prepare(`
      SELECT id, content_session_id, memory_session_id, project, user_prompt, custom_title
      FROM sdk_sessions
      WHERE id = ?
      LIMIT 1

@@ -1311,6 +1360,7 @@ export class SessionStore {
    memory_session_id: string;
    project: string;
    user_prompt: string;
    custom_title: string | null;
    started_at: string;
    started_at_epoch: number;
    completed_at: string | null;

@@ -1321,7 +1371,7 @@ export class SessionStore {

    const placeholders = memorySessionIds.map(() => '?').join(',');
    const stmt = this.db.prepare(`
      SELECT id, content_session_id, memory_session_id, project, user_prompt, custom_title,
             started_at, started_at_epoch, completed_at, completed_at_epoch, status
      FROM sdk_sessions
      WHERE memory_session_id IN (${placeholders})

@@ -1366,7 +1416,7 @@ export class SessionStore {
   * Pure get-or-create: never modifies memory_session_id.
   * Multi-terminal isolation is handled by ON UPDATE CASCADE at the schema level.
   */
  createSDKSession(contentSessionId: string, project: string, userPrompt: string, customTitle?: string): number {
    const now = new Date();
    const nowEpoch = now.getTime();

@@ -1383,6 +1433,13 @@ export class SessionStore {
        WHERE content_session_id = ? AND (project IS NULL OR project = '')
      `).run(project, contentSessionId);
    }
    // Backfill custom_title if provided and not yet set
    if (customTitle) {
      this.db.prepare(`
        UPDATE sdk_sessions SET custom_title = ?
        WHERE content_session_id = ? AND custom_title IS NULL
      `).run(customTitle, contentSessionId);
    }
    return existing.id;
  }

@@ -1392,9 +1449,9 @@ export class SessionStore {
    // must NEVER equal contentSessionId - that would inject memory messages into the user's transcript!
    this.db.prepare(`
      INSERT INTO sdk_sessions
        (content_session_id, memory_session_id, project, user_prompt, custom_title, started_at, started_at_epoch, status)
      VALUES (?, NULL, ?, ?, ?, ?, ?, 'active')
    `).run(contentSessionId, project, userPrompt, customTitle || null, now.toISOString(), nowEpoch);

    // Return new ID
    const row = this.db.prepare('SELECT id FROM sdk_sessions WHERE content_session_id = ?')

@@ -1441,6 +1498,7 @@ export class SessionStore {
  /**
   * Store an observation (from SDK parsing)
   * Assumes session already exists (created by hook)
   * Performs content-hash deduplication: skips INSERT if an identical observation exists within 30s
   */
  storeObservation(
    memorySessionId: string,

@@ -1463,11 +1521,18 @@ export class SessionStore {
    const timestampEpoch = overrideTimestampEpoch ?? Date.now();
    const timestampIso = new Date(timestampEpoch).toISOString();

    // Content-hash deduplication
    const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
    const existing = findDuplicateObservation(this.db, contentHash, timestampEpoch);
    if (existing) {
      return { id: existing.id, createdAtEpoch: existing.created_at_epoch };
    }

    const stmt = this.db.prepare(`
      INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
         files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    `);

    const result = stmt.run(
@@ -1483,6 +1548,7 @@ export class SessionStore {
      JSON.stringify(observation.files_modified),
      promptNumber || null,
      discoveryTokens,
      contentHash,
      timestampIso,
      timestampEpoch
    );

@@ -372,6 +372,16 @@ export const migration005: Migration = {
export const migration006: Migration = {
  version: 6,
  up: (db: Database) => {
    // FTS5 may be unavailable on some platforms (e.g., Bun on Windows #791).
    // Probe before creating tables — search falls back to ChromaDB when unavailable.
    try {
      db.run('CREATE VIRTUAL TABLE _fts5_probe USING fts5(test_column)');
      db.run('DROP TABLE _fts5_probe');
    } catch {
      console.log('⚠️ FTS5 not available on this platform — skipping FTS migration (search uses ChromaDB)');
      return;
    }

    // FTS5 virtual table for observations
    // Note: This assumes the hierarchical fields (title, subtitle, etc.) already exist
    // from the inline migrations in SessionStore constructor

@@ -31,11 +31,18 @@ export class MigrationRunner {
    this.renameSessionIdColumns();
    this.repairSessionIdColumnRename();
    this.addFailedAtEpochColumn();
    this.addOnUpdateCascadeToForeignKeys();
    this.addObservationContentHashColumn();
    this.addSessionCustomTitleColumn();
  }

  /**
   * Initialize database schema (migration004)
   *
   * ALWAYS creates core tables using CREATE TABLE IF NOT EXISTS — safe to run
   * regardless of schema_versions state. This fixes issue #979 where the old
   * DatabaseManager migration system (versions 1-7) shared the schema_versions
   * table, causing maxApplied > 0 and skipping core table creation entirely.
   */
  private initializeSchema(): void {
    // Create schema_versions table if it doesn't exist

@@ -47,90 +54,77 @@ export class MigrationRunner {
      )
    `);

    // Always create core tables — IF NOT EXISTS makes this idempotent
    this.db.run(`
      CREATE TABLE IF NOT EXISTS sdk_sessions (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        content_session_id TEXT UNIQUE NOT NULL,
        memory_session_id TEXT UNIQUE,
        project TEXT NOT NULL,
        user_prompt TEXT,
        started_at TEXT NOT NULL,
        started_at_epoch INTEGER NOT NULL,
        completed_at TEXT,
        completed_at_epoch INTEGER,
        status TEXT CHECK(status IN ('active', 'completed', 'failed')) NOT NULL DEFAULT 'active'
      );

      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_claude_id ON sdk_sessions(content_session_id);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_sdk_id ON sdk_sessions(memory_session_id);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_project ON sdk_sessions(project);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_status ON sdk_sessions(status);
      CREATE INDEX IF NOT EXISTS idx_sdk_sessions_started ON sdk_sessions(started_at_epoch DESC);

      CREATE TABLE IF NOT EXISTS observations (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        memory_session_id TEXT NOT NULL,
        project TEXT NOT NULL,
        text TEXT NOT NULL,
        type TEXT NOT NULL,
        created_at TEXT NOT NULL,
        created_at_epoch INTEGER NOT NULL,
        FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
      );

      CREATE INDEX IF NOT EXISTS idx_observations_sdk_session ON observations(memory_session_id);
      CREATE INDEX IF NOT EXISTS idx_observations_project ON observations(project);
      CREATE INDEX IF NOT EXISTS idx_observations_type ON observations(type);
      CREATE INDEX IF NOT EXISTS idx_observations_created ON observations(created_at_epoch DESC);

      CREATE TABLE IF NOT EXISTS session_summaries (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        memory_session_id TEXT UNIQUE NOT NULL,
        project TEXT NOT NULL,
        request TEXT,
        investigated TEXT,
        learned TEXT,
        completed TEXT,
        next_steps TEXT,
        files_read TEXT,
        files_edited TEXT,
        notes TEXT,
        created_at TEXT NOT NULL,
        created_at_epoch INTEGER NOT NULL,
        FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
      );

      CREATE INDEX IF NOT EXISTS idx_session_summaries_sdk_session ON session_summaries(memory_session_id);
      CREATE INDEX IF NOT EXISTS idx_session_summaries_project ON session_summaries(project);
      CREATE INDEX IF NOT EXISTS idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
    `);

    // Record migration004 as applied (OR IGNORE handles re-runs safely)
    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(4, new Date().toISOString());
  }

  /**
   * Ensure worker_port column exists (migration 5)
   *
   * NOTE: Version 5 conflicts with old DatabaseManager migration005 (which drops orphaned tables).
   * We check actual column state rather than relying solely on version tracking.
   */
  private ensureWorkerPortColumn(): void {
    // Check if migration already applied
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(5) as SchemaVersion | undefined;
    if (applied) return;

    // Check actual column existence — don't rely on version tracking alone (issue #979)
    const tableInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
    const hasWorkerPort = tableInfo.some(col => col.name === 'worker_port');

@@ -145,12 +139,12 @@ export class MigrationRunner {

  /**
   * Ensure prompt tracking columns exist (migration 6)
   *
   * NOTE: Version 6 conflicts with old DatabaseManager migration006 (which creates FTS5 tables).
   * We check actual column state rather than relying solely on version tracking.
   */
  private ensurePromptTrackingColumns(): void {
    // Check if migration already applied
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(6) as SchemaVersion | undefined;
    if (applied) return;

    // Check actual column existence — don't rely on version tracking alone (issue #979)
    // Check sdk_sessions for prompt_counter
    const sessionsInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
    const hasPromptCounter = sessionsInfo.some(col => col.name === 'prompt_counter');

@@ -184,13 +178,12 @@ export class MigrationRunner {

  /**
   * Remove UNIQUE constraint from session_summaries.memory_session_id (migration 7)
   *
   * NOTE: Version 7 conflicts with old DatabaseManager migration007 (which adds discovery_tokens).
   * We check actual constraint state rather than relying solely on version tracking.
   */
  private removeSessionSummariesUniqueConstraint(): void {
    // Check if migration already applied
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(7) as SchemaVersion | undefined;
    if (applied) return;

    // Check actual constraint state — don't rely on version tracking alone (issue #979)
    const summariesIndexes = this.db.query('PRAGMA index_list(session_summaries)').all() as IndexInfo[];
    const hasUniqueConstraint = summariesIndexes.some(idx => idx.unique === 1);

@@ -205,6 +198,9 @@ export class MigrationRunner {
    // Begin transaction
    this.db.run('BEGIN TRANSACTION');

    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS session_summaries_new');

    // Create new table without UNIQUE constraint
    this.db.run(`
      CREATE TABLE session_summaries_new (

@@ -318,6 +314,9 @@ export class MigrationRunner {
    // Begin transaction
    this.db.run('BEGIN TRANSACTION');

    // Clean up leftover temp table from a previously-crashed run
    this.db.run('DROP TABLE IF EXISTS observations_new');

    // Create new table with text as nullable
    this.db.run(`
      CREATE TABLE observations_new (

@@ -411,34 +410,39 @@ export class MigrationRunner {
      CREATE INDEX idx_user_prompts_lookup ON user_prompts(content_session_id, prompt_number);
    `);

    // Create FTS5 virtual table — skip if FTS5 is unavailable (e.g., Bun on Windows #791).
    // The user_prompts table itself is still created; only FTS indexing is skipped.
    try {
      this.db.run(`
        CREATE VIRTUAL TABLE user_prompts_fts USING fts5(
          prompt_text,
          content='user_prompts',
          content_rowid='id'
        );
      `);

      // Create triggers to sync FTS5
      this.db.run(`
        CREATE TRIGGER user_prompts_ai AFTER INSERT ON user_prompts BEGIN
          INSERT INTO user_prompts_fts(rowid, prompt_text)
          VALUES (new.id, new.prompt_text);
        END;

        CREATE TRIGGER user_prompts_ad AFTER DELETE ON user_prompts BEGIN
          INSERT INTO user_prompts_fts(user_prompts_fts, rowid, prompt_text)
          VALUES('delete', old.id, old.prompt_text);
        END;

        CREATE TRIGGER user_prompts_au AFTER UPDATE ON user_prompts BEGIN
          INSERT INTO user_prompts_fts(user_prompts_fts, rowid, prompt_text)
          VALUES('delete', old.id, old.prompt_text);
          INSERT INTO user_prompts_fts(rowid, prompt_text)
          VALUES (new.id, new.prompt_text);
        END;
      `);
    } catch (ftsError) {
      logger.warn('DB', 'FTS5 not available — user_prompts_fts skipped (search uses ChromaDB)', {}, ftsError as Error);
    }

    // Commit transaction
    this.db.run('COMMIT');

@@ -446,7 +450,7 @@ export class MigrationRunner {
    // Record migration
    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(10, new Date().toISOString());

    logger.debug('DB', 'Successfully created user_prompts table');
  }

  /**

@@ -628,4 +632,231 @@ export class MigrationRunner {
|
||||
|
||||
this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(20, new Date().toISOString());
|
||||
}

  /**
   * Add ON UPDATE CASCADE to FK constraints on observations and session_summaries (migration 21)
   *
   * Both tables have FK(memory_session_id) -> sdk_sessions(memory_session_id) with ON DELETE CASCADE
   * but missing ON UPDATE CASCADE. This causes FK constraint violations when code updates
   * sdk_sessions.memory_session_id while child rows still reference the old value.
   *
   * SQLite doesn't support ALTER TABLE for FK changes, so we recreate both tables.
   */
  private addOnUpdateCascadeToForeignKeys(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(21) as SchemaVersion | undefined;
    if (applied) return;

    logger.debug('DB', 'Adding ON UPDATE CASCADE to FK constraints on observations and session_summaries');

    // PRAGMA foreign_keys must be set outside a transaction
    this.db.run('PRAGMA foreign_keys = OFF');
    this.db.run('BEGIN TRANSACTION');

    try {
      // ==========================================
      // 1. Recreate observations table
      // ==========================================

      // Drop FTS triggers first (they reference the observations table)
      this.db.run('DROP TRIGGER IF EXISTS observations_ai');
      this.db.run('DROP TRIGGER IF EXISTS observations_ad');
      this.db.run('DROP TRIGGER IF EXISTS observations_au');

      // Clean up leftover temp table from a previously-crashed run
      this.db.run('DROP TABLE IF EXISTS observations_new');

      this.db.run(`
        CREATE TABLE observations_new (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          memory_session_id TEXT NOT NULL,
          project TEXT NOT NULL,
          text TEXT,
          type TEXT NOT NULL,
          title TEXT,
          subtitle TEXT,
          facts TEXT,
          narrative TEXT,
          concepts TEXT,
          files_read TEXT,
          files_modified TEXT,
          prompt_number INTEGER,
          discovery_tokens INTEGER DEFAULT 0,
          created_at TEXT NOT NULL,
          created_at_epoch INTEGER NOT NULL,
          FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
        )
      `);

      this.db.run(`
        INSERT INTO observations_new
        SELECT id, memory_session_id, project, text, type, title, subtitle, facts,
               narrative, concepts, files_read, files_modified, prompt_number,
               discovery_tokens, created_at, created_at_epoch
        FROM observations
      `);

      this.db.run('DROP TABLE observations');
      this.db.run('ALTER TABLE observations_new RENAME TO observations');

      // Recreate indexes
      this.db.run(`
        CREATE INDEX idx_observations_sdk_session ON observations(memory_session_id);
        CREATE INDEX idx_observations_project ON observations(project);
        CREATE INDEX idx_observations_type ON observations(type);
        CREATE INDEX idx_observations_created ON observations(created_at_epoch DESC);
      `);

      // Recreate FTS triggers only if observations_fts exists
      const hasFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='observations_fts'").all() as { name: string }[]).length > 0;
      if (hasFTS) {
        this.db.run(`
          CREATE TRIGGER IF NOT EXISTS observations_ai AFTER INSERT ON observations BEGIN
            INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
            VALUES (new.id, new.title, new.subtitle, new.narrative, new.text, new.facts, new.concepts);
          END;

          CREATE TRIGGER IF NOT EXISTS observations_ad AFTER DELETE ON observations BEGIN
            INSERT INTO observations_fts(observations_fts, rowid, title, subtitle, narrative, text, facts, concepts)
            VALUES('delete', old.id, old.title, old.subtitle, old.narrative, old.text, old.facts, old.concepts);
          END;

          CREATE TRIGGER IF NOT EXISTS observations_au AFTER UPDATE ON observations BEGIN
            INSERT INTO observations_fts(observations_fts, rowid, title, subtitle, narrative, text, facts, concepts)
            VALUES('delete', old.id, old.title, old.subtitle, old.narrative, old.text, old.facts, old.concepts);
            INSERT INTO observations_fts(rowid, title, subtitle, narrative, text, facts, concepts)
            VALUES (new.id, new.title, new.subtitle, new.narrative, new.text, new.facts, new.concepts);
          END;
        `);
      }

      // ==========================================
      // 2. Recreate session_summaries table
      // ==========================================

      // Clean up leftover temp table from a previously-crashed run
      this.db.run('DROP TABLE IF EXISTS session_summaries_new');

      this.db.run(`
        CREATE TABLE session_summaries_new (
          id INTEGER PRIMARY KEY AUTOINCREMENT,
          memory_session_id TEXT NOT NULL,
          project TEXT NOT NULL,
          request TEXT,
          investigated TEXT,
          learned TEXT,
          completed TEXT,
          next_steps TEXT,
          files_read TEXT,
          files_edited TEXT,
          notes TEXT,
          prompt_number INTEGER,
          discovery_tokens INTEGER DEFAULT 0,
          created_at TEXT NOT NULL,
          created_at_epoch INTEGER NOT NULL,
          FOREIGN KEY(memory_session_id) REFERENCES sdk_sessions(memory_session_id) ON DELETE CASCADE ON UPDATE CASCADE
        )
      `);

      this.db.run(`
        INSERT INTO session_summaries_new
        SELECT id, memory_session_id, project, request, investigated, learned,
               completed, next_steps, files_read, files_edited, notes,
               prompt_number, discovery_tokens, created_at, created_at_epoch
        FROM session_summaries
      `);

      // Drop session_summaries FTS triggers before dropping the table
      this.db.run('DROP TRIGGER IF EXISTS session_summaries_ai');
      this.db.run('DROP TRIGGER IF EXISTS session_summaries_ad');
      this.db.run('DROP TRIGGER IF EXISTS session_summaries_au');

      this.db.run('DROP TABLE session_summaries');
      this.db.run('ALTER TABLE session_summaries_new RENAME TO session_summaries');

      // Recreate indexes
      this.db.run(`
        CREATE INDEX idx_session_summaries_sdk_session ON session_summaries(memory_session_id);
        CREATE INDEX idx_session_summaries_project ON session_summaries(project);
        CREATE INDEX idx_session_summaries_created ON session_summaries(created_at_epoch DESC);
      `);

      // Recreate session_summaries FTS triggers if FTS table exists
      const hasSummariesFTS = (this.db.prepare("SELECT name FROM sqlite_master WHERE type='table' AND name='session_summaries_fts'").all() as { name: string }[]).length > 0;
      if (hasSummariesFTS) {
        this.db.run(`
          CREATE TRIGGER IF NOT EXISTS session_summaries_ai AFTER INSERT ON session_summaries BEGIN
            INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
            VALUES (new.id, new.request, new.investigated, new.learned, new.completed, new.next_steps, new.notes);
          END;

          CREATE TRIGGER IF NOT EXISTS session_summaries_ad AFTER DELETE ON session_summaries BEGIN
            INSERT INTO session_summaries_fts(session_summaries_fts, rowid, request, investigated, learned, completed, next_steps, notes)
            VALUES('delete', old.id, old.request, old.investigated, old.learned, old.completed, old.next_steps, old.notes);
          END;

          CREATE TRIGGER IF NOT EXISTS session_summaries_au AFTER UPDATE ON session_summaries BEGIN
            INSERT INTO session_summaries_fts(session_summaries_fts, rowid, request, investigated, learned, completed, next_steps, notes)
            VALUES('delete', old.id, old.request, old.investigated, old.learned, old.completed, old.next_steps, old.notes);
            INSERT INTO session_summaries_fts(rowid, request, investigated, learned, completed, next_steps, notes)
            VALUES (new.id, new.request, new.investigated, new.learned, new.completed, new.next_steps, new.notes);
          END;
        `);
      }

      // Record migration
      this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(21, new Date().toISOString());

      this.db.run('COMMIT');
      this.db.run('PRAGMA foreign_keys = ON');

      logger.debug('DB', 'Successfully added ON UPDATE CASCADE to FK constraints');
    } catch (error) {
      this.db.run('ROLLBACK');
      this.db.run('PRAGMA foreign_keys = ON');
      throw error;
    }
  }
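For reference, this is the behavior the rebuild unlocks (illustrative snippet, not part of the diff; the database path and session ids are made up):

import { Database } from 'bun:sqlite';

// Illustrative only: once both child tables declare ON UPDATE CASCADE, renaming a
// parent key in sdk_sessions rewrites the referencing child rows in the same statement.
const db = new Database('claude-mem.sqlite'); // hypothetical path
db.run('PRAGMA foreign_keys = ON');
db.prepare('UPDATE sdk_sessions SET memory_session_id = ? WHERE memory_session_id = ?')
  .run('mem-session-b', 'mem-session-a');
// Before migration 21 this threw SQLITE_CONSTRAINT_FOREIGNKEY whenever observations
// or session_summaries still pointed at 'mem-session-a'; now those rows follow the rename.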

  /**
   * Add content_hash column to observations for deduplication (migration 22)
   * Prevents duplicate observations from being stored when the same content is processed multiple times.
   * Backfills existing rows with unique random hashes so they don't block new inserts.
   */
  private addObservationContentHashColumn(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(22) as SchemaVersion | undefined;
    if (applied) return;

    const tableInfo = this.db.query('PRAGMA table_info(observations)').all() as TableColumnInfo[];
    const hasColumn = tableInfo.some(col => col.name === 'content_hash');

    if (!hasColumn) {
      this.db.run('ALTER TABLE observations ADD COLUMN content_hash TEXT');
      // Backfill existing rows with unique random hashes
      this.db.run("UPDATE observations SET content_hash = substr(hex(randomblob(8)), 1, 16) WHERE content_hash IS NULL");
      // Index for fast dedup lookups
      this.db.run('CREATE INDEX IF NOT EXISTS idx_observations_content_hash ON observations(content_hash, created_at_epoch)');
      logger.debug('DB', 'Added content_hash column to observations table with backfill and index');
    }

    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(22, new Date().toISOString());
  }
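A quick way to confirm the composite index actually serves the dedup lookup is a one-off EXPLAIN QUERY PLAN check (ad-hoc script, not part of the diff; db is a bun:sqlite handle opened separately and the literal hash is invented):

// Ad-hoc sanity check: the dedup query (hash equality + recency bound) should be
// answered from idx_observations_content_hash rather than a full table scan.
const plan = db.query(
  'EXPLAIN QUERY PLAN SELECT id, created_at_epoch FROM observations WHERE content_hash = ? AND created_at_epoch > ?'
).all('0123456789abcdef', Date.now() - 30_000) as { detail: string }[];
console.log(plan.map(row => row.detail));
// Expected to mention: SEARCH observations USING INDEX idx_observations_content_hash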

  /**
   * Add custom_title column to sdk_sessions for agent attribution (migration 23)
   * Allows callers (e.g. Maestro agents) to label sessions with a human-readable name.
   */
  private addSessionCustomTitleColumn(): void {
    const applied = this.db.prepare('SELECT version FROM schema_versions WHERE version = ?').get(23) as SchemaVersion | undefined;
    if (applied) return;

    const tableInfo = this.db.query('PRAGMA table_info(sdk_sessions)').all() as TableColumnInfo[];
    const hasColumn = tableInfo.some(col => col.name === 'custom_title');

    if (!hasColumn) {
      this.db.run('ALTER TABLE sdk_sessions ADD COLUMN custom_title TEXT');
      logger.debug('DB', 'Added custom_title column to sdk_sessions table');
    }

    this.db.prepare('INSERT OR IGNORE INTO schema_versions (version, applied_at) VALUES (?, ?)').run(23, new Date().toISOString());
  }
}
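Migration 23 pairs with the createSDKSession() change further down in this PR; a labeled call might look like this (sketch only, all argument values are invented):

// Hypothetical caller: label the session so it can be attributed to a specific agent.
const sdkSessionId = createSDKSession(
  db,
  'content-session-uuid',                  // contentSessionId (made-up value)
  'acme/claude-mem',                       // project in parent/basename form
  'Investigate failing migration tests',   // userPrompt
  'Maestro triage agent'                   // customTitle, stored in sdk_sessions.custom_title
);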

@@ -3,13 +3,50 @@
 * Extracted from SessionStore.ts for modular organization
 */

import { createHash } from 'crypto';
import { Database } from 'bun:sqlite';
import { logger } from '../../../utils/logger.js';
import { getCurrentProjectName } from '../../../shared/paths.js';
import type { ObservationInput, StoreObservationResult } from './types.js';

/** Deduplication window: observations with the same content hash within this window are skipped */
const DEDUP_WINDOW_MS = 30_000;

/**
 * Compute a short content hash for deduplication.
 * Uses (memory_session_id, title, narrative) as the semantic identity of an observation.
 */
export function computeObservationContentHash(
  memorySessionId: string,
  title: string | null,
  narrative: string | null
): string {
  return createHash('sha256')
    .update((memorySessionId || '') + (title || '') + (narrative || ''))
    .digest('hex')
    .slice(0, 16);
}

/**
 * Check if a duplicate observation exists within the dedup window.
 * Returns the existing observation's id and timestamp if found, null otherwise.
 */
export function findDuplicateObservation(
  db: Database,
  contentHash: string,
  timestampEpoch: number
): { id: number; created_at_epoch: number } | null {
  const windowStart = timestampEpoch - DEDUP_WINDOW_MS;
  const stmt = db.prepare(
    'SELECT id, created_at_epoch FROM observations WHERE content_hash = ? AND created_at_epoch > ?'
  );
  return (stmt.get(contentHash, windowStart) as { id: number; created_at_epoch: number } | null);
}
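Together these two helpers form the insert guard used below; the intended call pattern is roughly (sketch, with local variable names assumed):

// Sketch: skip the INSERT when an identical observation landed within the last 30 seconds.
const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
const existing = findDuplicateObservation(db, contentHash, Date.now());
if (existing) {
  // Reuse the earlier row instead of writing a near-identical one.
  return { id: existing.id, createdAtEpoch: existing.created_at_epoch };
}
// ...otherwise proceed with the INSERT, passing contentHash for the content_hash column.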

/**
 * Store an observation (from SDK parsing)
 * Assumes session already exists (created by hook)
 * Performs content-hash deduplication: skips INSERT if an identical observation exists within 30s
 */
export function storeObservation(
  db: Database,
@@ -24,16 +61,27 @@ export function storeObservation(
  const timestampEpoch = overrideTimestampEpoch ?? Date.now();
  const timestampIso = new Date(timestampEpoch).toISOString();

  // Guard against empty project string (race condition where project isn't set yet)
  const resolvedProject = project || getCurrentProjectName();

  // Content-hash deduplication
  const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
  const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
  if (existing) {
    logger.debug('DEDUP', `Skipped duplicate observation | contentHash=${contentHash} | existingId=${existing.id}`);
    return { id: existing.id, createdAtEpoch: existing.created_at_epoch };
  }

  const stmt = db.prepare(`
    INSERT INTO observations
      (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
-      files_read, files_modified, prompt_number, discovery_tokens, created_at, created_at_epoch)
-    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+      files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
+    VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
  `);

  const result = stmt.run(
    memorySessionId,
-   project,
+   resolvedProject,
    observation.type,
    observation.title,
    observation.subtitle,
@@ -44,6 +92,7 @@ export function storeObservation(
    JSON.stringify(observation.files_modified),
    promptNumber || null,
    discoveryTokens,
    contentHash,
    timestampIso,
    timestampEpoch
  );
@@ -21,7 +21,8 @@ export function createSDKSession(
  db: Database,
  contentSessionId: string,
  project: string,
- userPrompt: string
+ userPrompt: string,
+ customTitle?: string
): number {
  const now = new Date();
  const nowEpoch = now.getTime();
@@ -39,6 +40,13 @@ export function createSDKSession(
      WHERE content_session_id = ? AND (project IS NULL OR project = '')
    `).run(project, contentSessionId);
  }
  // Backfill custom_title if provided and not yet set
  if (customTitle) {
    db.prepare(`
      UPDATE sdk_sessions SET custom_title = ?
      WHERE content_session_id = ? AND custom_title IS NULL
    `).run(customTitle, contentSessionId);
  }
  return existing.id;
}
@@ -48,9 +56,9 @@ export function createSDKSession(
  // must NEVER equal contentSessionId - that would inject memory messages into the user's transcript!
  db.prepare(`
    INSERT INTO sdk_sessions
-     (content_session_id, memory_session_id, project, user_prompt, started_at, started_at_epoch, status)
-   VALUES (?, NULL, ?, ?, ?, ?, 'active')
- `).run(contentSessionId, project, userPrompt, now.toISOString(), nowEpoch);
+     (content_session_id, memory_session_id, project, user_prompt, custom_title, started_at, started_at_epoch, status)
+   VALUES (?, NULL, ?, ?, ?, ?, ?, 'active')
+ `).run(contentSessionId, project, userPrompt, customTitle || null, now.toISOString(), nowEpoch);

  // Return new ID
  const row = db.prepare('SELECT id FROM sdk_sessions WHERE content_session_id = ?')
@@ -17,7 +17,7 @@ import type {
 */
export function getSessionById(db: Database, id: number): SessionBasic | null {
  const stmt = db.prepare(`
-   SELECT id, content_session_id, memory_session_id, project, user_prompt
+   SELECT id, content_session_id, memory_session_id, project, user_prompt, custom_title
    FROM sdk_sessions
    WHERE id = ?
    LIMIT 1
@@ -38,7 +38,7 @@ export function getSdkSessionsBySessionIds(

  const placeholders = memorySessionIds.map(() => '?').join(',');
  const stmt = db.prepare(`
-   SELECT id, content_session_id, memory_session_id, project, user_prompt,
+   SELECT id, content_session_id, memory_session_id, project, user_prompt, custom_title,
    started_at, started_at_epoch, completed_at, completed_at_epoch, status
    FROM sdk_sessions
    WHERE memory_session_id IN (${placeholders})

@@ -13,6 +13,7 @@ export interface SessionBasic {
  memory_session_id: string | null;
  project: string;
  user_prompt: string;
  custom_title: string | null;
}

/**
@@ -24,6 +25,7 @@ export interface SessionFull {
  memory_session_id: string;
  project: string;
  user_prompt: string;
  custom_title: string | null;
  started_at: string;
  started_at_epoch: number;
  completed_at: string | null;

@@ -10,6 +10,7 @@ import { Database } from 'bun:sqlite';
import { logger } from '../../utils/logger.js';
import type { ObservationInput } from './observations/types.js';
import type { SummaryInput } from './summaries/types.js';
import { computeObservationContentHash, findDuplicateObservation } from './observations/store.js';

/**
 * Result from storeObservations / storeObservationsAndMarkComplete transaction
@@ -63,15 +64,22 @@ export function storeObservationsAndMarkComplete(
  const storeAndMarkTx = db.transaction(() => {
    const observationIds: number[] = [];

-   // 1. Store all observations
+   // 1. Store all observations (with content-hash deduplication)
    const obsStmt = db.prepare(`
      INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
-        files_read, files_modified, prompt_number, discovery_tokens, created_at, created_at_epoch)
-      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+        files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
+      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    `);

    for (const observation of observations) {
      const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
      const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
      if (existing) {
        observationIds.push(existing.id);
        continue;
      }

      const result = obsStmt.run(
        memorySessionId,
        project,
@@ -85,6 +93,7 @@ export function storeObservationsAndMarkComplete(
        JSON.stringify(observation.files_modified),
        promptNumber || null,
        discoveryTokens,
        contentHash,
        timestampIso,
        timestampEpoch
      );
@@ -174,15 +183,22 @@ export function storeObservations(
  const storeTx = db.transaction(() => {
    const observationIds: number[] = [];

-   // 1. Store all observations
+   // 1. Store all observations (with content-hash deduplication)
    const obsStmt = db.prepare(`
      INSERT INTO observations
        (memory_session_id, project, type, title, subtitle, facts, narrative, concepts,
-        files_read, files_modified, prompt_number, discovery_tokens, created_at, created_at_epoch)
-      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
+        files_read, files_modified, prompt_number, discovery_tokens, content_hash, created_at, created_at_epoch)
+      VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
    `);

    for (const observation of observations) {
      const contentHash = computeObservationContentHash(memorySessionId, observation.title, observation.narrative);
      const existing = findDuplicateObservation(db, contentHash, timestampEpoch);
      if (existing) {
        observationIds.push(existing.id);
        continue;
      }

      const result = obsStmt.run(
        memorySessionId,
        project,
@@ -196,6 +212,7 @@ export function storeObservations(
        JSON.stringify(observation.files_modified),
        promptNumber || null,
        discoveryTokens,
        contentHash,
        timestampIso,
        timestampEpoch
      );