f38b5b85bc
* docs: add investigation reports for 5 open GitHub issues

  Comprehensive analysis of issues #543, #544, #545, #555, and #557:
  - #557: settings.json not generated, module loader error (node/bun mismatch)
  - #555: Windows hooks not executing, hasIpc always false
  - #545: formatTool crashes on non-JSON tool_input strings
  - #544: mem-search skill hint shown incorrectly to Claude Code users
  - #543: /claude-mem slash command unavailable despite installation

  Each report includes root cause analysis, affected files, and proposed fixes.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)
  Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(logger): handle non-JSON tool_input in formatTool (#545)

  Wrap JSON.parse in try-catch to handle raw string inputs (e.g., Bash
  commands) that aren't valid JSON. Falls back to using the string as-is.

* fix(context): update mem-search hint to reference MCP tools (#544)

  Update hint messages to reference MCP tools (search, get_observations)
  instead of the deprecated "mem-search skill" terminology.

* fix(settings): auto-create settings.json on first load (#557, #543)

  When settings.json doesn't exist, create it with defaults instead of
  returning in-memory defaults. Creates parent directory if needed.

* fix(hooks): use bun runtime for hooks except smart-install (#557)

  Change hook commands from node to bun since hooks use bun:sqlite. Keep
  smart-install.js on node since it bootstraps bun installation.
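The formatTool fix for #545 can be sketched as follows. This is a minimal illustration of the try-catch-with-fallback pattern the commit describes; the function name `parseToolInput` and its shape are assumptions, not the actual claude-mem source:

```typescript
// Hypothetical sketch of the #545 fix: tool_input may arrive as a raw
// string (e.g. a Bash command line) rather than serialized JSON, so
// parsing must not throw on non-JSON input.
function parseToolInput(toolInput: string): unknown {
  try {
    return JSON.parse(toolInput);
  } catch {
    // Not valid JSON: fall back to using the raw string as-is.
    return toolInput;
  }
}
```

A raw command like `ls -la` would previously crash `JSON.parse`; with the fallback it is simply passed through unchanged, while well-formed JSON inputs are still parsed into objects.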
* chore: rebuild plugin scripts

* docs: clarify that build artifacts must be committed

* fix(docs): update build artifacts directory reference in CLAUDE.md

* test: add test coverage for PR #558 fixes

  - Fix 2 failing tests: update "mem-search skill" → "MCP tools" expectations
  - Add 56 tests for formatTool() JSON.parse crash fix (Issue #545)
  - Add 27 tests for settings.json auto-creation (Issue #543)

  Test coverage includes:
  - formatTool: JSON parsing, raw strings, objects, null/undefined, all tool types
  - Settings: file creation, directory creation, schema migration, edge cases

* fix(tests): clean up flaky tests and fix circular dependency

  Phase 1 of test quality improvements:

  - Delete 6 harmful/worthless test files that used problematic mock.module()
    patterns or tested implementation details rather than behavior:
    - context-builder.test.ts (tested internal implementation)
    - export-types.test.ts (fragile mock patterns)
    - smart-install.test.ts (shell script testing antipattern)
    - session_id_refactor.test.ts (outdated, tested refactoring itself)
    - validate_sql_update.test.ts (one-time migration validation)
    - observation-broadcaster.test.ts (excessive mocking)
  - Fix circular dependency between logger.ts and SettingsDefaultsManager.ts
    by using a late-binding pattern - logger now lazily loads settings
  - Refactor mock.module() to spyOn() in several test files for more
    maintainable and less brittle tests:
    - observation-compiler.test.ts
    - gemini_agent.test.ts
    - error-handler.test.ts
    - server.test.ts
    - response-processor.test.ts

  All 649 tests pass.
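The late-binding fix for the logger/settings cycle might look roughly like this. It is a sketch of the general pattern only — the names `bindSettings`, `getSettings`, and the `Settings` shape are assumptions, not the actual claude-mem implementation:

```typescript
// Hypothetical sketch of the late-binding pattern: instead of importing
// settings at module load time (which completed a require cycle between
// logger.ts and SettingsDefaultsManager.ts), the logger resolves settings
// lazily on first use.
type Settings = { logLevel: string };

let settingsLoader: (() => Settings) | null = null;
let cachedSettings: Settings | null = null;

// Called by the settings module once it has finished initializing,
// so neither module needs to import the other at load time.
function bindSettings(loader: () => Settings): void {
  settingsLoader = loader;
  cachedSettings = null; // force re-resolution on next use
}

function getSettings(): Settings {
  if (!cachedSettings) {
    // Until the settings module binds itself, fall back to safe defaults.
    cachedSettings = settingsLoader ? settingsLoader() : { logLevel: 'info' };
  }
  return cachedSettings;
}
```

The key design point is that the dependency is injected after both modules have loaded, so module evaluation order no longer matters; the logger stays usable (with defaults) even before settings exist.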
* refactor(tests): phase 2 - reduce mock-heavy tests and improve focus

  - Remove mock-heavy query tests from observation-compiler.test.ts, keep
    real buildTimeline tests
  - Convert session_id_usage_validation.test.ts from 477 to 178 lines of
    focused smoke tests
  - Remove tests for language built-ins from worker-spawn.test.ts
    (JSON.parse, array indexing)
  - Rename logger-coverage.test.ts to logger-usage-standards.test.ts for clarity

* docs(tests): phase 3 - add JSDoc mock justification to test files

  Document mock usage rationale in 5 test files to improve maintainability:

  - error-handler.test.ts: Express req/res mocks, logger spies (~11%)
  - fallback-error-handler.test.ts: zero mocks, pure function tests
  - session-cleanup-helper.test.ts: session fixtures, worker mocks (~19%)
  - hook-constants.test.ts: process.platform mock for Windows tests (~12%)
  - session_store.test.ts: zero mocks, real SQLite :memory: database

  Part of ongoing effort to document mock justifications per TESTING.md
  guidelines.
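The phase-3 JSDoc convention might look like the fragment below. This is a sketch of one plausible format; the exact wording mandated by TESTING.md is not shown in this log:

```typescript
/**
 * Mock usage in this file (~11% of lines):
 * - Express req/res objects are mocked because constructing real HTTP
 *   requests is out of scope for these handler unit tests.
 * - logger methods are spied on only to silence output; assertions are
 *   made against returned values and status codes, not the spies.
 * All other dependencies are real implementations.
 */
```

Placing the justification at the top of each test file makes the mocking budget reviewable at a glance, which is presumably the maintainability gain the commit refers to.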
* test(integration): phase 5 - add 72 tests for critical coverage gaps

  Add comprehensive test coverage for previously untested areas:

  - tests/integration/hook-execution-e2e.test.ts (10 tests)
    Tests lifecycle hooks execution flow and context propagation
  - tests/integration/worker-api-endpoints.test.ts (19 tests)
    Tests all worker service HTTP endpoints without heavy mocking
  - tests/integration/chroma-vector-sync.test.ts (16 tests)
    Tests vector embedding synchronization with ChromaDB
  - tests/utils/tag-stripping.test.ts (27 tests)
    Tests privacy tag stripping utilities for both <private> and
    <meta-observation> tags

  All tests use real implementations where feasible, following the project's
  testing philosophy of preferring integration-style tests over unit tests
  with extensive mocking.

* context update

* docs: add comment linking DEFAULT_DATA_DIR locations

  Added a NOTE comment in logger.ts pointing to the canonical
  DEFAULT_DATA_DIR in SettingsDefaultsManager.ts. This addresses PR reviewer
  feedback about the fragility of having the default defined in two places
  to avoid circular dependencies.

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
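The privacy tag stripping exercised by tag-stripping.test.ts could be implemented along these lines. This is a sketch under the assumption that a regex-based approach is used; the function name `stripPrivacyTags` and the exact whitespace handling are illustrative, not the project's actual utility:

```typescript
// Hypothetical sketch: remove <private> and <meta-observation> blocks
// (tags and their contents) before text is persisted or displayed.
const PRIVACY_TAGS = ['private', 'meta-observation'];

function stripPrivacyTags(text: string): string {
  let result = text;
  for (const tag of PRIVACY_TAGS) {
    // The 's' (dotAll) flag lets '.' match newlines so multi-line blocks
    // are removed; the lazy '*?' stops at the nearest closing tag.
    const re = new RegExp(`<${tag}>.*?</${tag}>`, 'gs');
    result = result.replace(re, '');
  }
  return result.trim();
}
```

For example, `stripPrivacyTags('keep <private>secret</private> this')` removes the whole tagged block and leaves only the surrounding text (modulo the whitespace where the block used to be).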
380 lines
12 KiB
TypeScript
import { describe, it, expect, mock, beforeEach, afterEach, spyOn } from 'bun:test';
import { logger } from '../../src/utils/logger.js';

// Mock middleware to avoid complex dependencies
mock.module('../../src/services/worker/http/middleware.js', () => ({
  createMiddleware: () => [],
  requireLocalhost: (_req: any, _res: any, next: any) => next(),
  summarizeRequestBody: () => 'test body',
}));

// Import after mocks
import { Server } from '../../src/services/server/Server.js';
import type { RouteHandler, ServerOptions } from '../../src/services/server/Server.js';

// Spy on logger methods to suppress output during tests
let loggerSpies: ReturnType<typeof spyOn>[] = [];

describe('Server', () => {
  let server: Server;
  let mockOptions: ServerOptions;

  beforeEach(() => {
    loggerSpies = [
      spyOn(logger, 'info').mockImplementation(() => {}),
      spyOn(logger, 'debug').mockImplementation(() => {}),
      spyOn(logger, 'warn').mockImplementation(() => {}),
      spyOn(logger, 'error').mockImplementation(() => {}),
    ];

    mockOptions = {
      getInitializationComplete: () => true,
      getMcpReady: () => true,
      onShutdown: mock(() => Promise.resolve()),
      onRestart: mock(() => Promise.resolve()),
    };
  });

  afterEach(async () => {
    loggerSpies.forEach(spy => spy.mockRestore());
    // Clean up server if created and still has an active http server
    if (server && server.getHttpServer()) {
      try {
        await server.close();
      } catch {
        // Ignore errors on cleanup
      }
    }
    mock.restore();
  });

  describe('constructor', () => {
    it('should create Express app', () => {
      server = new Server(mockOptions);

      expect(server.app).toBeDefined();
      expect(typeof server.app.get).toBe('function');
      expect(typeof server.app.post).toBe('function');
      expect(typeof server.app.use).toBe('function');
    });

    it('should expose app as readonly property', () => {
      server = new Server(mockOptions);

      // App should be accessible
      expect(server.app).toBeDefined();

      // App should be an Express application
      expect(typeof server.app.listen).toBe('function');
    });
  });

  describe('listen', () => {
    it('should start server on specified port', async () => {
      server = new Server(mockOptions);

      // Use a random high port to avoid conflicts
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      // Server should now be listening
      const httpServer = server.getHttpServer();
      expect(httpServer).not.toBeNull();
      expect(httpServer!.listening).toBe(true);
    });

    it('should reject if port is already in use', async () => {
      server = new Server(mockOptions);
      const server2 = new Server(mockOptions);

      const testPort = 40000 + Math.floor(Math.random() * 10000);

      // Start first server
      await server.listen(testPort, '127.0.0.1');

      // Second server should fail on same port
      await expect(server2.listen(testPort, '127.0.0.1')).rejects.toThrow();

      // The server object was created but not successfully listening
      const httpServer = server2.getHttpServer();
      if (httpServer) {
        expect(httpServer.listening).toBe(false);
      }
    });
  });

  describe('close', () => {
    it('should stop server from listening after close', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      // Server should exist and be listening
      const httpServerBefore = server.getHttpServer();
      expect(httpServerBefore).not.toBeNull();
      expect(httpServerBefore!.listening).toBe(true);

      // Close the server - may throw ERR_SERVER_NOT_RUNNING on some platforms
      // because closeAllConnections() might immediately close the server
      try {
        await server.close();
      } catch (e: any) {
        // ERR_SERVER_NOT_RUNNING is acceptable - closeAllConnections() already closed it
        if (e.code !== 'ERR_SERVER_NOT_RUNNING') {
          throw e;
        }
      }

      // The server should no longer be listening (even if ref is not null due to early throw)
      const httpServerAfter = server.getHttpServer();
      if (httpServerAfter) {
        expect(httpServerAfter.listening).toBe(false);
      }
    });

    it('should handle close when server not started', async () => {
      server = new Server(mockOptions);

      // Should not throw when closing unstarted server
      await expect(server.close()).resolves.toBeUndefined();
    });

    it('should allow starting a new server on same port after close', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      // Close the server
      try {
        await server.close();
      } catch (e: any) {
        // ERR_SERVER_NOT_RUNNING is acceptable
        if (e.code !== 'ERR_SERVER_NOT_RUNNING') {
          throw e;
        }
      }

      // Small delay to ensure port is released
      await new Promise(resolve => setTimeout(resolve, 100));

      // Should be able to listen again on same port with a new server
      const server2 = new Server(mockOptions);
      await server2.listen(testPort, '127.0.0.1');

      expect(server2.getHttpServer()!.listening).toBe(true);

      // Clean up server2
      try {
        await server2.close();
      } catch {
        // Ignore cleanup errors
      }
    });
  });

  describe('getHttpServer', () => {
    it('should return null before listen', () => {
      server = new Server(mockOptions);

      expect(server.getHttpServer()).toBeNull();
    });

    it('should return http.Server after listen', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      const httpServer = server.getHttpServer();
      expect(httpServer).not.toBeNull();
      expect(httpServer!.listening).toBe(true);
    });
  });

  describe('registerRoutes', () => {
    it('should call setupRoutes on route handler', () => {
      server = new Server(mockOptions);

      const setupRoutesMock = mock(() => {});
      const mockRouteHandler: RouteHandler = {
        setupRoutes: setupRoutesMock,
      };

      server.registerRoutes(mockRouteHandler);

      expect(setupRoutesMock).toHaveBeenCalledTimes(1);
      expect(setupRoutesMock).toHaveBeenCalledWith(server.app);
    });

    it('should register multiple route handlers', () => {
      server = new Server(mockOptions);

      const handler1Mock = mock(() => {});
      const handler2Mock = mock(() => {});

      const handler1: RouteHandler = { setupRoutes: handler1Mock };
      const handler2: RouteHandler = { setupRoutes: handler2Mock };

      server.registerRoutes(handler1);
      server.registerRoutes(handler2);

      expect(handler1Mock).toHaveBeenCalledTimes(1);
      expect(handler2Mock).toHaveBeenCalledTimes(1);
    });
  });

  describe('finalizeRoutes', () => {
    it('should not throw when called', () => {
      server = new Server(mockOptions);

      expect(() => server.finalizeRoutes()).not.toThrow();
    });
  });

  describe('health endpoint', () => {
    it('should return 200 with status ok', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      const response = await fetch(`http://127.0.0.1:${testPort}/api/health`);

      expect(response.status).toBe(200);

      const body = await response.json();
      expect(body.status).toBe('ok');
    });

    it('should include initialization status', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      const response = await fetch(`http://127.0.0.1:${testPort}/api/health`);
      const body = await response.json();

      expect(body.initialized).toBe(true);
      expect(body.mcpReady).toBe(true);
    });

    it('should reflect initialization state changes', async () => {
      let isInitialized = false;
      const dynamicOptions: ServerOptions = {
        getInitializationComplete: () => isInitialized,
        getMcpReady: () => true,
        onShutdown: mock(() => Promise.resolve()),
        onRestart: mock(() => Promise.resolve()),
      };

      server = new Server(dynamicOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      // Check when not initialized
      let response = await fetch(`http://127.0.0.1:${testPort}/api/health`);
      let body = await response.json();
      expect(body.initialized).toBe(false);

      // Change state
      isInitialized = true;

      // Check when initialized
      response = await fetch(`http://127.0.0.1:${testPort}/api/health`);
      body = await response.json();
      expect(body.initialized).toBe(true);
    });

    it('should include platform and pid', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      const response = await fetch(`http://127.0.0.1:${testPort}/api/health`);
      const body = await response.json();

      expect(body.platform).toBeDefined();
      expect(body.pid).toBeDefined();
      expect(typeof body.pid).toBe('number');
    });
  });

  describe('readiness endpoint', () => {
    it('should return 200 when initialized', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      const response = await fetch(`http://127.0.0.1:${testPort}/api/readiness`);

      expect(response.status).toBe(200);

      const body = await response.json();
      expect(body.status).toBe('ready');
    });

    it('should return 503 when not initialized', async () => {
      const uninitializedOptions: ServerOptions = {
        getInitializationComplete: () => false,
        getMcpReady: () => false,
        onShutdown: mock(() => Promise.resolve()),
        onRestart: mock(() => Promise.resolve()),
      };

      server = new Server(uninitializedOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      const response = await fetch(`http://127.0.0.1:${testPort}/api/readiness`);

      expect(response.status).toBe(503);

      const body = await response.json();
      expect(body.status).toBe('initializing');
      expect(body.message).toBeDefined();
    });
  });

  describe('version endpoint', () => {
    it('should return 200 with version', async () => {
      server = new Server(mockOptions);
      const testPort = 40000 + Math.floor(Math.random() * 10000);

      await server.listen(testPort, '127.0.0.1');

      const response = await fetch(`http://127.0.0.1:${testPort}/api/version`);

      expect(response.status).toBe(200);

      const body = await response.json();
      expect(body.version).toBeDefined();
      expect(typeof body.version).toBe('string');
    });
  });

  describe('404 handling', () => {
    it('should return 404 for unknown routes after finalizeRoutes', async () => {
      server = new Server(mockOptions);
      server.finalizeRoutes();

      const testPort = 40000 + Math.floor(Math.random() * 10000);
      await server.listen(testPort, '127.0.0.1');

      const response = await fetch(`http://127.0.0.1:${testPort}/api/nonexistent`);

      expect(response.status).toBe(404);

      const body = await response.json();
      expect(body.error).toBe('NotFound');
    });
  });
});