claude-mem/src/services/infrastructure/HealthMonitor.ts
Alex Newman 80a8c90a1a feat: add embedded Process Supervisor for unified process lifecycle (#1370)
* feat: add embedded Process Supervisor for unified process lifecycle management

Consolidates scattered process management (ProcessManager, GracefulShutdown,
HealthMonitor, ProcessRegistry) into a unified src/supervisor/ module.

New: ProcessRegistry with JSON persistence, env sanitizer (strips CLAUDECODE_*
vars), graceful shutdown cascade (SIGTERM → 5s wait → SIGKILL with tree-kill
on Windows), PID file liveness validation, and singleton Supervisor API.

Fixes #1352 (worker inherits CLAUDECODE env causing nested sessions)
Fixes #1356 (zombie TCP socket after Windows reboot)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
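
The env sanitizer described above can be sketched as a pure function. Names and the exact-match list here are assumptions for illustration; the commit only says the sanitizer strips `CLAUDECODE_*` vars and (per a later commit) exports `ENV_PREFIXES`/`ENV_EXACT_MATCHES`.

```typescript
// Sketch of the env sanitizer: illustrative names and values, not the
// module's actual exports. Stripping these variables prevents a spawned
// worker from believing it runs inside a Claude Code session (issue #1352).
const ENV_PREFIXES = ['CLAUDECODE'];                 // assumed prefix list
const ENV_EXACT_MATCHES = ['CLAUDE_CODE_ENTRYPOINT']; // assumed exact keys

function sanitizeEnv(
  env: Record<string, string | undefined>
): Record<string, string | undefined> {
  const clean: Record<string, string | undefined> = {};
  for (const [key, value] of Object.entries(env)) {
    if (ENV_PREFIXES.some(prefix => key.startsWith(prefix))) continue;
    if (ENV_EXACT_MATCHES.includes(key)) continue;
    clean[key] = value;
  }
  return clean;
}
```

A copy-and-filter approach (rather than deleting keys in place) keeps the parent's own `process.env` untouched.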

* feat: add session-scoped process reaping to supervisor

Adds reapSession(sessionId) to ProcessRegistry for killing session-tagged
processes on session end. SessionManager.deleteSession() now triggers reaping.
Tightens orphan reaper interval from 60s to 30s.

Fixes #1351 (MCP server processes leak on session end)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
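
Session-scoped reaping amounts to filtering the registry by session tag and killing each match. A minimal in-memory sketch, with hypothetical shapes (the real ProcessRegistry persists entries to JSON):

```typescript
// Illustrative sketch of reapSession: shapes and names are assumptions.
interface TrackedProcess {
  pid: number;
  sessionId?: string;
}

function reapSession(
  registry: TrackedProcess[],
  sessionId: string,
  kill: (pid: number) => void = pid => process.kill(pid, 'SIGTERM')
): TrackedProcess[] {
  for (const entry of registry) {
    if (entry.sessionId === sessionId) kill(entry.pid);
  }
  // Return the surviving entries so the caller can persist them.
  return registry.filter(entry => entry.sessionId !== sessionId);
}
```

Injecting the `kill` function keeps the logic testable without signaling real processes.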

* feat: add Unix domain socket support for worker communication

Introduces socket-manager.ts for UDS-based worker communication, eliminating
port 37777 collisions between concurrent sessions. Worker listens on
~/.claude-mem/sockets/worker.sock by default with TCP fallback.

All hook handlers, MCP server, health checks, and admin commands updated to
use socket-aware workerHttpRequest(). Backwards compatible — settings can
force TCP mode via CLAUDE_MEM_WORKER_TRANSPORT=tcp.

Fixes #1346 (port 37777 collision across concurrent sessions)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: remove in-process worker fallback from hook command

Removes the fallback path where hook scripts started WorkerService in-process,
making the worker a grandchild of Claude Code (killed by sandbox). Hooks now
always delegate to ensureWorkerStarted() which spawns a fully detached daemon.

Fixes #1249 (grandchild process killed by sandbox)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
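
The key to surviving the sandbox is spawning the worker fully detached so it is not a grandchild of Claude Code. A sketch using Node's standard `child_process` API (function name and argument handling are illustrative):

```typescript
import { spawn } from 'child_process';

// Spawn a daemon in its own process group, with no inherited stdio, and
// unref() it so the parent's event loop can exit immediately.
function spawnDetachedDaemon(args: string[]): number | undefined {
  const child = spawn(process.execPath, args, {
    detached: true,  // new process group, not killed with the parent
    stdio: 'ignore', // no pipes tying the daemon's lifetime to the parent
  });
  child.unref();     // drop the parent's reference to the child
  return child.pid;
}
```

All three pieces matter: `detached` alone is not enough if an inherited stdio pipe or the parent's event-loop reference keeps the processes coupled.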

* feat: add health checker and /api/admin/doctor endpoint

Adds 30-second periodic health sweep that prunes dead processes from the
supervisor registry and cleans stale socket files. Adds /api/admin/doctor
endpoint exposing supervisor state, process liveness, and environment health.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
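
The commit names the categories a doctor report covers (supervisor state, process liveness, environment health) but not the schema, so the field names below are purely assumptions, sketched to show how such a report might be consumed:

```typescript
// Hypothetical /api/admin/doctor response shape. Every field name here is
// an assumption; only the three top-level categories come from the commit.
interface DoctorReport {
  supervisor: { running: boolean; registeredPids: number[] };
  processes: Array<{ pid: number; alive: boolean }>;
  environment: { leakedClaudeCodeVars: string[] };
}

function summarize(report: DoctorReport): string {
  const dead = report.processes.filter(p => !p.alive).length;
  return `${report.processes.length} tracked, ${dead} dead, ` +
    `${report.environment.leakedClaudeCodeVars.length} leaked env vars`;
}
```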

* test: add comprehensive supervisor test suite

64 tests covering all supervisor modules: process registry (18 tests),
env sanitizer (8), shutdown cascade (10), socket manager (15), health
checker (5), and supervisor API (6). Includes persistence, isolation,
edge cases, and cross-module integration scenarios.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: revert Unix domain socket transport, restore TCP on port 37777

The socket-manager introduced UDS as default transport, but this broke
the HTTP server's TCP accessibility (viewer UI, curl, external monitoring).
Since there's only ever one worker process handling all sessions, the
port collision rationale for UDS doesn't apply. Reverts to TCP-only,
removing ~900 lines of unnecessary complexity.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* chore: remove dead code found in pre-landing review

Remove unused `acceptingSpawns` field from Supervisor class (written but
never read — assertCanSpawn uses stopPromise instead) and unused
`buildWorkerUrl` import from context handler.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* updated gitignore

* fix: address PR review feedback - downgrade HTTP logging, clean up gitignore, harden supervisor

- Downgrade request/response HTTP logging from info to debug to reduce noise
- Remove unused getWorkerPort imports, use buildWorkerUrl helper
- Export ENV_PREFIXES/ENV_EXACT_MATCHES from env-sanitizer, reuse in Server.ts
- Fix isPidAlive(0) returning true (should be false)
- Add shutdownInitiated flag to prevent signal handler race condition
- Make validateWorkerPidFile testable with pidFilePath option
- Remove unused dataDir from ShutdownCascadeOptions
- Upgrade reapSession log from debug to warn
- Rename zombiePidFiles to deadProcessPids (returns actual PIDs)
- Clean up gitignore: remove duplicate datasets/, stale ~*/ and http*/ patterns
- Fix tests to use temp directories instead of relying on real PID file

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
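
The `isPidAlive(0)` fix matters because `process.kill(pid, 0)` with pid 0 signals the caller's own process group and therefore never throws, so an unguarded probe reports PID 0 as alive. A guarded sketch of the pattern:

```typescript
// Liveness probe via signal 0 (existence check, no signal delivered).
// Guard pid <= 0: process.kill(0, 0) targets the caller's own process
// group and always succeeds, which is the bug the commit fixes.
function isPidAlive(pid: number): boolean {
  if (!Number.isInteger(pid) || pid <= 0) return false;
  try {
    process.kill(pid, 0);
    return true;
  } catch (err) {
    // EPERM means the process exists but we lack permission to signal it.
    return (err as { code?: string }).code === 'EPERM';
  }
}
```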

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-16 14:49:23 -07:00

190 lines · 6.6 KiB · TypeScript

/**
 * HealthMonitor - Port monitoring, health checks, and version checking
 *
 * Extracted from worker-service.ts monolith to provide centralized health monitoring.
 * Handles:
 * - Port availability checking
 * - Worker health/readiness polling
 * - Version mismatch detection (critical for plugin updates)
 * - HTTP-based shutdown requests
 */
import path from 'path';
import { readFileSync } from 'fs';
import { logger } from '../../utils/logger.js';
import { MARKETPLACE_ROOT } from '../../shared/paths.js';

/**
 * Make an HTTP request to the worker via TCP.
 * Returns { ok, statusCode, body } or throws on transport error.
 */
async function httpRequestToWorker(
  port: number,
  endpointPath: string,
  method: string = 'GET'
): Promise<{ ok: boolean; statusCode: number; body: string }> {
  const response = await fetch(`http://127.0.0.1:${port}${endpointPath}`, { method });
  // Gracefully handle cases where response body isn't available (e.g., test mocks)
  let body = '';
  try {
    body = await response.text();
  } catch {
    // Body unavailable — health/readiness checks only need .ok
  }
  return { ok: response.ok, statusCode: response.status, body };
}
/**
 * Check if a port is in use by querying the health endpoint
 */
export async function isPortInUse(port: number): Promise<boolean> {
  try {
    // Note: Removed AbortSignal.timeout to avoid Windows Bun cleanup issue (libuv assertion)
    const response = await fetch(`http://127.0.0.1:${port}/api/health`);
    return response.ok;
  } catch {
    // [ANTI-PATTERN IGNORED]: Health check polls every 500ms, logging would flood
    return false;
  }
}
/**
 * Poll a worker endpoint until it returns 200 OK or timeout.
 * Shared implementation for liveness and readiness checks.
 */
async function pollEndpointUntilOk(
  port: number,
  endpointPath: string,
  timeoutMs: number,
  retryLogMessage: string
): Promise<boolean> {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    try {
      const result = await httpRequestToWorker(port, endpointPath);
      if (result.ok) return true;
    } catch (error) {
      // [ANTI-PATTERN IGNORED]: Retry loop - expected failures during startup, will retry
      logger.debug('SYSTEM', retryLogMessage, {}, error as Error);
    }
    await new Promise(r => setTimeout(r, 500));
  }
  return false;
}
/**
 * Wait for the worker HTTP server to become responsive (liveness check).
 * Uses /api/health which returns 200 as soon as the HTTP server is listening.
 * For full initialization (DB + search), use waitForReadiness() instead.
 */
export function waitForHealth(port: number, timeoutMs: number = 30000): Promise<boolean> {
  return pollEndpointUntilOk(port, '/api/health', timeoutMs, 'Service not ready yet, will retry');
}

/**
 * Wait for the worker to be fully initialized (DB + search ready).
 * Uses /api/readiness which returns 200 only after core initialization completes.
 * Now that initializationCompleteFlag is set after DB/search init (not MCP),
 * this typically completes in a few seconds.
 */
export function waitForReadiness(port: number, timeoutMs: number = 30000): Promise<boolean> {
  return pollEndpointUntilOk(port, '/api/readiness', timeoutMs, 'Worker not ready yet, will retry');
}

/**
 * Wait for a port to become free (no longer responding to health checks)
 * Used after shutdown to confirm the port is available for restart
 */
export async function waitForPortFree(port: number, timeoutMs: number = 10000): Promise<boolean> {
  const start = Date.now();
  while (Date.now() - start < timeoutMs) {
    if (!(await isPortInUse(port))) return true;
    await new Promise(r => setTimeout(r, 500));
  }
  return false;
}
/**
 * Send HTTP shutdown request to a running worker
 * @returns true if shutdown request was acknowledged, false otherwise
 */
export async function httpShutdown(port: number): Promise<boolean> {
  try {
    const result = await httpRequestToWorker(port, '/api/admin/shutdown', 'POST');
    if (!result.ok) {
      logger.warn('SYSTEM', 'Shutdown request returned error', { status: result.statusCode });
      return false;
    }
    return true;
  } catch (error) {
    // Connection refused is expected if worker already stopped
    if (error instanceof Error && error.message?.includes('ECONNREFUSED')) {
      logger.debug('SYSTEM', 'Worker already stopped', {}, error);
      return false;
    }
    // Unexpected error - log full details
    logger.error('SYSTEM', 'Shutdown request failed unexpectedly', {}, error as Error);
    return false;
  }
}
/**
 * Get the plugin version from the installed marketplace package.json
 * This is the "expected" version that should be running.
 * Returns 'unknown' on ENOENT/EBUSY (shutdown race condition, fix #1042).
 */
export function getInstalledPluginVersion(): string {
  try {
    const packageJsonPath = path.join(MARKETPLACE_ROOT, 'package.json');
    const packageJson = JSON.parse(readFileSync(packageJsonPath, 'utf-8'));
    return packageJson.version;
  } catch (error: unknown) {
    const code = (error as NodeJS.ErrnoException).code;
    if (code === 'ENOENT' || code === 'EBUSY') {
      logger.debug('SYSTEM', 'Could not read plugin version (shutdown race)', { code });
      return 'unknown';
    }
    throw error;
  }
}
/**
 * Get the running worker's version via API
 * This is the "actual" version currently running.
 */
export async function getRunningWorkerVersion(port: number): Promise<string | null> {
  try {
    const result = await httpRequestToWorker(port, '/api/version');
    if (!result.ok) return null;
    const data = JSON.parse(result.body) as { version: string };
    return data.version;
  } catch {
    // Expected: worker not running or version endpoint unavailable
    logger.debug('SYSTEM', 'Could not fetch worker version', {});
    return null;
  }
}

export interface VersionCheckResult {
  matches: boolean;
  pluginVersion: string;
  workerVersion: string | null;
}

/**
 * Check if worker version matches plugin version
 * Critical for detecting when plugin is updated but worker is still running old code
 * Returns true if versions match or if we can't determine (assume match for graceful degradation)
 */
export async function checkVersionMatch(port: number): Promise<VersionCheckResult> {
  const pluginVersion = getInstalledPluginVersion();
  const workerVersion = await getRunningWorkerVersion(port);
  // If either version is unknown/null, assume match (graceful degradation, fix #1042)
  if (!workerVersion || pluginVersion === 'unknown') {
    return { matches: true, pluginVersion, workerVersion };
  }
  return { matches: pluginVersion === workerVersion, pluginVersion, workerVersion };
}
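
The graceful-degradation rule in checkVersionMatch reduces to a pure decision function. A hypothetical extraction (not part of the module) makes its truth table easy to test:

```typescript
// Pure restatement of checkVersionMatch's decision rule (illustrative):
// an 'unknown' plugin version or an unreachable worker counts as a match,
// so a transient read failure never triggers a restart loop.
function versionsMatch(pluginVersion: string, workerVersion: string | null): boolean {
  if (!workerVersion || pluginVersion === 'unknown') return true;
  return pluginVersion === workerVersion;
}
```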