fix: resolve issues #543, #544, #545, #557 (#558)

* docs: add investigation reports for 5 open GitHub issues

Comprehensive analysis of issues #543, #544, #545, #555, and #557:

- #557: settings.json not generated, module loader error (node/bun mismatch)
- #555: Windows hooks not executing, hasIpc always false
- #545: formatTool crashes on non-JSON tool_input strings
- #544: mem-search skill hint shown incorrectly to Claude Code users
- #543: /claude-mem slash command unavailable despite installation

Each report includes root cause analysis, affected files, and proposed fixes.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(logger): handle non-JSON tool_input in formatTool (#545)

Wrap JSON.parse in try-catch to handle raw string inputs (e.g., Bash
commands) that aren't valid JSON. Falls back to using the string as-is.
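
The guarded parse can be sketched as below (the function shape is an assumption for illustration; the real formatTool in the repo handles more cases):

```typescript
// Sketch of the hedged parse: tool_input may arrive as a JSON string
// ('{"file_path":"a.ts"}') or as a raw command string such as a Bash
// invocation, which JSON.parse would reject.
function parseToolInput(toolInput: string): unknown {
  try {
    return JSON.parse(toolInput);
  } catch {
    // Not valid JSON (e.g. `git status`): use the raw string as-is.
    return toolInput;
  }
}
```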

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(context): update mem-search hint to reference MCP tools (#544)

Update hint messages to reference MCP tools (search, get_observations)
instead of the deprecated "mem-search skill" terminology.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(settings): auto-create settings.json on first load (#557, #543)

When settings.json doesn't exist, create it with defaults instead of
returning in-memory defaults. Creates parent directory if needed.
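
A minimal sketch of this behavior (function and default names are illustrative; the real defaults live in SettingsDefaultsManager.ts):

```typescript
import * as fs from 'node:fs';
import * as path from 'node:path';

// Illustrative default; not the project's actual settings schema.
const DEFAULT_SETTINGS: Record<string, string> = { CLAUDE_MEM_MODE: 'code' };

function loadSettings(settingsPath: string): Record<string, string> {
  if (!fs.existsSync(settingsPath)) {
    // Create the parent directory if needed, then persist the defaults
    // to disk instead of returning an in-memory object that never
    // materializes as a file.
    fs.mkdirSync(path.dirname(settingsPath), { recursive: true });
    fs.writeFileSync(settingsPath, JSON.stringify(DEFAULT_SETTINGS, null, 2));
  }
  return JSON.parse(fs.readFileSync(settingsPath, 'utf-8'));
}
```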

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(hooks): use bun runtime for hooks except smart-install (#557)

Change hook commands from node to bun since hooks use bun:sqlite.
Keep smart-install.js on node since it bootstraps bun installation.
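
The resulting hook wiring plausibly looks like the fragment below (event names, paths, and the exact settings schema are illustrative, not copied from the repository):

```json
{
  "hooks": {
    "SessionStart": [
      {
        "hooks": [
          { "type": "command", "command": "bun /path/to/claude-mem/hooks/session-start.js" }
        ]
      }
    ]
  }
}
```

smart-install.js would stay on `node` so it can bootstrap a bun installation on machines that don't have bun yet.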

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* chore: rebuild plugin scripts

* docs: clarify that build artifacts must be committed

* fix(docs): update build artifacts directory reference in CLAUDE.md

* test: add test coverage for PR #558 fixes

- Fix 2 failing tests: update "mem-search skill" → "MCP tools" expectations
- Add 56 tests for formatTool() JSON.parse crash fix (Issue #545)
- Add 27 tests for settings.json auto-creation (Issue #543)

Test coverage includes:
- formatTool: JSON parsing, raw strings, objects, null/undefined, all tool types
- Settings: file creation, directory creation, schema migration, edge cases

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* fix(tests): clean up flaky tests and fix circular dependency

Phase 1 of test quality improvements:

- Delete 6 harmful/worthless test files that used problematic mock.module()
  patterns or tested implementation details rather than behavior:
  - context-builder.test.ts (tested internal implementation)
  - export-types.test.ts (fragile mock patterns)
  - smart-install.test.ts (shell script testing antipattern)
  - session_id_refactor.test.ts (outdated, tested refactoring itself)
  - validate_sql_update.test.ts (one-time migration validation)
  - observation-broadcaster.test.ts (excessive mocking)

- Fix circular dependency between logger.ts and SettingsDefaultsManager.ts
  by using a late-binding pattern: the logger now lazily loads settings

- Refactor mock.module() to spyOn() in several test files for more
  maintainable and less brittle tests:
  - observation-compiler.test.ts
  - gemini_agent.test.ts
  - error-handler.test.ts
  - server.test.ts
  - response-processor.test.ts
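
The late-binding fix for the logger/settings cycle can be sketched as follows (names and the settings shape are illustrative; the real code lives in logger.ts and SettingsDefaultsManager.ts):

```typescript
// Before: the logger imported settings at module-load time while the
// settings module imported the logger, forming a cycle. Deferring the
// settings lookup to first use breaks the cycle: by the time a log call
// happens, both modules have finished initializing.
type Settings = { logLevel: string };

let cachedSettings: Settings | null = null;

// Stand-in for the real SettingsDefaultsManager loader.
function loadSettingsFromFile(): Settings {
  return { logLevel: 'debug' };
}

function getSettings(): Settings {
  if (cachedSettings === null) {
    cachedSettings = loadSettingsFromFile(); // lazy, cached after first call
  }
  return cachedSettings;
}

function debug(msg: string): string | null {
  // Only emit when the lazily loaded settings allow it.
  return getSettings().logLevel === 'debug' ? `[debug] ${msg}` : null;
}
```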

All 649 tests pass.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor(tests): phase 2 - reduce mock-heavy tests and improve focus

- Remove mock-heavy query tests from observation-compiler.test.ts, keep real buildTimeline tests
- Convert session_id_usage_validation.test.ts from 477 to 178 lines of focused smoke tests
- Remove tests for language built-ins from worker-spawn.test.ts (JSON.parse, array indexing)
- Rename logger-coverage.test.ts to logger-usage-standards.test.ts for clarity

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* docs(tests): phase 3 - add JSDoc mock justification to test files

Document mock usage rationale in 5 test files to improve maintainability:
- error-handler.test.ts: Express req/res mocks, logger spies (~11%)
- fallback-error-handler.test.ts: Zero mocks, pure function tests
- session-cleanup-helper.test.ts: Session fixtures, worker mocks (~19%)
- hook-constants.test.ts: process.platform mock for Windows tests (~12%)
- session_store.test.ts: Zero mocks, real SQLite :memory: database

Part of ongoing effort to document mock justifications per TESTING.md guidelines.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* test(integration): phase 5 - add 72 tests for critical coverage gaps

Add comprehensive test coverage for previously untested areas:

- tests/integration/hook-execution-e2e.test.ts (10 tests)
  Tests lifecycle hooks execution flow and context propagation

- tests/integration/worker-api-endpoints.test.ts (19 tests)
  Tests all worker service HTTP endpoints without heavy mocking

- tests/integration/chroma-vector-sync.test.ts (16 tests)
  Tests vector embedding synchronization with ChromaDB

- tests/utils/tag-stripping.test.ts (27 tests)
  Tests privacy tag stripping utilities for both <private> and
  <meta-observation> tags
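
Tag stripping of this sort can be sketched with a regex (the tag names come from the commit; the function name is hypothetical):

```typescript
// Remove <private>…</private> and <meta-observation>…</meta-observation>
// blocks, including their contents, before text is persisted or shared.
function stripPrivacyTags(text: string): string {
  const tags = ['private', 'meta-observation'];
  let result = text;
  for (const tag of tags) {
    // [\s\S]*? matches across newlines, non-greedily, so multiple
    // blocks in one document are each removed separately.
    const re = new RegExp(`<${tag}>[\\s\\S]*?</${tag}>`, 'g');
    result = result.replace(re, '');
  }
  return result;
}
```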

All tests use real implementations where feasible, following the
project's testing philosophy of preferring integration-style tests
over unit tests with extensive mocking.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* context update

* docs: add comment linking DEFAULT_DATA_DIR locations

Added a NOTE comment in logger.ts pointing to the canonical DEFAULT_DATA_DIR
in SettingsDefaultsManager.ts. This addresses PR reviewer feedback about the
fragility of defining the default in two places (a workaround to avoid a
circular dependency).

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
Authored by Alex Newman, 2026-01-05 19:45:09 -05:00, committed via GitHub.
parent f1ccc22593
commit f38b5b85bc
55 changed files with 4712 additions and 2676 deletions
@@ -0,0 +1,7 @@
<claude-mem-context>
# Recent Activity
<!-- This section is auto-generated by claude-mem. Edit content outside the tags. -->
*No recent activity*
</claude-mem-context>
@@ -1,340 +0,0 @@
import { describe, it, expect, mock, beforeEach, afterEach } from 'bun:test';
// Create mock functions that can be accessed
const mockPrepare = mock(() => ({
all: mock(() => []),
run: mock(() => {}),
}));
const mockClose = mock(() => {});
// Mock SessionStore before importing ContextBuilder
mock.module('../../src/services/sqlite/SessionStore.js', () => ({
SessionStore: class MockSessionStore {
db = {
prepare: mockPrepare,
};
close = mockClose;
},
}));
// Mock the logger
mock.module('../../src/utils/logger.js', () => ({
logger: {
debug: mock(() => {}),
failure: mock(() => {}),
error: mock(() => {}),
info: mock(() => {}),
},
}));
// Mock project-name utility
mock.module('../../src/utils/project-name.js', () => ({
getProjectName: mock((cwd: string) => cwd.split('/').pop() || 'unknown'),
}));
// Mock SettingsDefaultsManager
mock.module('../../src/shared/SettingsDefaultsManager.js', () => ({
SettingsDefaultsManager: {
loadFromFile: mock(() => ({
CLAUDE_MEM_MODE: 'code',
CLAUDE_MEM_CONTEXT_OBSERVATIONS: '50',
CLAUDE_MEM_CONTEXT_FULL_COUNT: '5',
CLAUDE_MEM_CONTEXT_SESSION_COUNT: '3',
CLAUDE_MEM_CONTEXT_SHOW_READ_TOKENS: 'true',
CLAUDE_MEM_CONTEXT_SHOW_WORK_TOKENS: 'true',
CLAUDE_MEM_CONTEXT_SHOW_SAVINGS_AMOUNT: 'true',
CLAUDE_MEM_CONTEXT_SHOW_SAVINGS_PERCENT: 'true',
CLAUDE_MEM_CONTEXT_OBSERVATION_TYPES: 'discovery,decision,bugfix',
CLAUDE_MEM_CONTEXT_OBSERVATION_CONCEPTS: 'architecture,testing',
CLAUDE_MEM_CONTEXT_FULL_FIELD: 'narrative',
CLAUDE_MEM_CONTEXT_SHOW_LAST_SUMMARY: 'true',
CLAUDE_MEM_CONTEXT_SHOW_LAST_MESSAGE: 'false',
})),
},
}));
// Mock ModeManager
mock.module('../../src/services/domain/ModeManager.js', () => ({
ModeManager: {
getInstance: () => ({
getActiveMode: () => ({
name: 'code',
prompts: {},
observation_types: [
{ id: 'decision', emoji: 'D' },
{ id: 'bugfix', emoji: 'B' },
{ id: 'discovery', emoji: 'I' },
],
observation_concepts: [
{ id: 'architecture' },
{ id: 'testing' },
],
}),
getTypeIcon: (type: string) => {
const icons: Record<string, string> = { decision: 'D', bugfix: 'B', discovery: 'I' };
return icons[type] || '?';
},
getWorkEmoji: () => 'W',
}),
},
}));
import { generateContext, loadContextConfig } from '../../src/services/context/index.js';
import type { ContextConfig } from '../../src/services/context/types.js';
describe('ContextBuilder', () => {
beforeEach(() => {
mockPrepare.mockClear();
mockClose.mockClear();
});
describe('loadContextConfig', () => {
it('should return valid ContextConfig object', () => {
const config = loadContextConfig();
expect(config).toBeDefined();
expect(typeof config.totalObservationCount).toBe('number');
expect(typeof config.fullObservationCount).toBe('number');
expect(typeof config.sessionCount).toBe('number');
});
it('should parse observation count as number', () => {
const config = loadContextConfig();
expect(config.totalObservationCount).toBe(50);
});
it('should parse full observation count as number', () => {
const config = loadContextConfig();
expect(config.fullObservationCount).toBe(5);
});
it('should parse session count as number', () => {
const config = loadContextConfig();
expect(config.sessionCount).toBe(3);
});
it('should parse boolean flags correctly', () => {
const config = loadContextConfig();
expect(config.showReadTokens).toBe(true);
expect(config.showWorkTokens).toBe(true);
expect(config.showSavingsAmount).toBe(true);
expect(config.showSavingsPercent).toBe(true);
});
it('should parse observation types into Set', () => {
const config = loadContextConfig();
expect(config.observationTypes instanceof Set).toBe(true);
expect(config.observationTypes.has('discovery')).toBe(true);
expect(config.observationTypes.has('decision')).toBe(true);
expect(config.observationTypes.has('bugfix')).toBe(true);
});
it('should parse observation concepts into Set', () => {
const config = loadContextConfig();
expect(config.observationConcepts instanceof Set).toBe(true);
expect(config.observationConcepts.has('architecture')).toBe(true);
expect(config.observationConcepts.has('testing')).toBe(true);
});
it('should set fullObservationField', () => {
const config = loadContextConfig();
expect(config.fullObservationField).toBe('narrative');
});
it('should parse showLastSummary and showLastMessage', () => {
const config = loadContextConfig();
expect(config.showLastSummary).toBe(true);
expect(config.showLastMessage).toBe(false);
});
});
describe('generateContext', () => {
it('should produce non-empty output when data exists', async () => {
// Setup mock to return some observations
mockPrepare.mockImplementation((sql: string) => ({
all: mock((...args: any[]) => {
if (sql.includes('FROM observations')) {
return [{
id: 1,
memory_session_id: 'session-1',
type: 'discovery',
title: 'Test Discovery',
subtitle: null,
narrative: 'Found something interesting',
facts: '["fact1"]',
concepts: '["architecture"]',
files_read: null,
files_modified: null,
discovery_tokens: 100,
created_at: '2025-01-01T12:00:00.000Z',
created_at_epoch: 1735732800000,
}];
}
return [];
}),
}));
const result = await generateContext({ cwd: '/test/project' }, false);
expect(result.length).toBeGreaterThan(0);
});
it('should return empty state message when no data', async () => {
// Setup mock to return empty arrays
mockPrepare.mockImplementation(() => ({
all: mock(() => []),
}));
const result = await generateContext({ cwd: '/test/my-project' }, false);
expect(result).toContain('recent context');
expect(result).toContain('No previous sessions');
});
it('should contain project name in output', async () => {
mockPrepare.mockImplementation((sql: string) => ({
all: mock(() => {
if (sql.includes('FROM observations')) {
return [{
id: 1,
memory_session_id: 'session-1',
type: 'discovery',
title: 'Test',
subtitle: null,
narrative: 'Narrative',
facts: '[]',
concepts: '["architecture"]',
files_read: null,
files_modified: null,
discovery_tokens: 50,
created_at: '2025-01-01T12:00:00.000Z',
created_at_epoch: 1735732800000,
}];
}
return [];
}),
}));
const result = await generateContext({ cwd: '/path/to/awesome-project' }, false);
expect(result).toContain('awesome-project');
});
it('should close database after completion', async () => {
mockPrepare.mockImplementation(() => ({
all: mock(() => []),
}));
await generateContext({ cwd: '/test/project' }, false);
expect(mockClose).toHaveBeenCalled();
});
it('should contain expected markdown sections', async () => {
mockPrepare.mockImplementation((sql: string) => ({
all: mock(() => {
if (sql.includes('FROM observations')) {
return [{
id: 1,
memory_session_id: 'session-1',
type: 'discovery',
title: 'Interesting Finding',
subtitle: null,
narrative: 'Description here',
facts: '["fact"]',
concepts: '["architecture"]',
files_read: null,
files_modified: null,
discovery_tokens: 200,
created_at: '2025-01-01T10:00:00.000Z',
created_at_epoch: 1735725600000,
}];
}
if (sql.includes('FROM session_summaries')) {
return [{
id: 1,
memory_session_id: 'session-1',
request: 'Build feature',
investigated: 'Code review',
learned: 'Best practices',
completed: 'Initial implementation',
next_steps: 'Add tests',
created_at: '2025-01-01T11:00:00.000Z',
created_at_epoch: 1735729200000,
}];
}
return [];
}),
}));
const result = await generateContext({ cwd: '/test/project' }, false);
// Should contain header
expect(result).toContain('recent context');
// Should contain observation data
expect(result).toContain('Interesting Finding');
});
it('should use cwd from input when provided', async () => {
mockPrepare.mockImplementation(() => ({
all: mock(() => []),
}));
const result = await generateContext({ cwd: '/custom/path/special-project' }, false);
expect(result).toContain('special-project');
});
it('should handle undefined input gracefully', async () => {
mockPrepare.mockImplementation(() => ({
all: mock(() => []),
}));
// Should not throw
const result = await generateContext(undefined, false);
expect(typeof result).toBe('string');
});
it('should produce markdown format when useColors is false', async () => {
mockPrepare.mockImplementation((sql: string) => ({
all: mock(() => {
if (sql.includes('FROM observations')) {
return [{
id: 1,
memory_session_id: 'session-1',
type: 'discovery',
title: 'Test',
subtitle: null,
narrative: 'Text',
facts: '[]',
concepts: '["testing"]',
files_read: null,
files_modified: null,
discovery_tokens: 10,
created_at: '2025-01-01T12:00:00.000Z',
created_at_epoch: 1735732800000,
}];
}
return [];
}),
}));
const result = await generateContext({ cwd: '/test/project' }, false);
// Markdown format uses # for headers
expect(result).toContain('#');
// Should not contain ANSI escape codes
expect(result).not.toContain('\x1b[');
});
});
});
@@ -169,11 +169,11 @@ describe('MarkdownFormatter', () => {
       expect(result[0]).toContain('**Context Index:**');
     });
 
-    it('should mention mem-search skill', () => {
+    it('should mention MCP tools', () => {
       const result = renderMarkdownContextIndex();
       const joined = result.join('\n');
-      expect(joined).toContain('mem-search');
+      expect(joined).toContain('MCP tools');
     });
   });
@@ -488,11 +488,11 @@ describe('MarkdownFormatter', () => {
       expect(joined).toContain('500');
     });
 
-    it('should mention mem-search skill', () => {
+    it('should mention MCP', () => {
       const result = renderMarkdownFooter(5000, 100);
       const joined = result.join('\n');
-      expect(joined).toContain('mem-search');
+      expect(joined).toContain('MCP');
     });
 
     it('should round work tokens to nearest thousand', () => {
@@ -1,21 +1,13 @@
-import { describe, it, expect, mock, beforeEach } from 'bun:test';
+import { describe, it, expect } from 'bun:test';
+import { buildTimeline } from '../../src/services/context/index.js';
+import type { Observation, SummaryTimelineItem } from '../../src/services/context/types.js';
-// Mock the logger before importing modules that use it
-mock.module('../../src/utils/logger.js', () => ({
-  logger: {
-    debug: mock(() => {}),
-    failure: mock(() => {}),
-    error: mock(() => {}),
-  },
-}));
-import {
-  queryObservations,
-  querySummaries,
-  buildTimeline,
-  getPriorSessionMessages,
-} from '../../src/services/context/index.js';
-import type { Observation, SessionSummary, SummaryTimelineItem, ContextConfig } from '../../src/services/context/types.js';
+/**
+ * Timeline building tests - validates real sorting and merging logic
+ *
+ * Removed: queryObservations, querySummaries tests (mock database - not testing real behavior)
+ * Kept: buildTimeline tests (tests actual sorting algorithm)
+ */
// Helper to create a minimal observation
function createTestObservation(overrides: Partial<Observation> = {}): Observation {
@@ -56,137 +48,7 @@ function createTestSummaryTimelineItem(overrides: Partial<SummaryTimelineItem> =
};
}
// Helper to create a minimal ContextConfig
function createTestConfig(overrides: Partial<ContextConfig> = {}): ContextConfig {
return {
totalObservationCount: 50,
fullObservationCount: 5,
sessionCount: 3,
showReadTokens: true,
showWorkTokens: true,
showSavingsAmount: true,
showSavingsPercent: true,
observationTypes: new Set(['discovery', 'decision', 'bugfix']),
observationConcepts: new Set(['concept1', 'concept2']),
fullObservationField: 'narrative',
showLastSummary: true,
showLastMessage: false,
...overrides,
};
}
// Mock database that returns specified data
function createMockDb(observations: Observation[] = [], summaries: SessionSummary[] = []) {
return {
db: {
prepare: mock((sql: string) => ({
all: mock((...args: any[]) => {
// Check if query is for observations or summaries
if (sql.includes('FROM observations')) {
return observations;
} else if (sql.includes('FROM session_summaries')) {
return summaries;
}
return [];
}),
})),
},
};
}
describe('ObservationCompiler', () => {
describe('queryObservations', () => {
it('should query observations with correct SQL pattern', () => {
const mockObs = [createTestObservation()];
const mockDb = createMockDb(mockObs);
const config = createTestConfig();
const result = queryObservations(mockDb as any, 'test-project', config);
expect(result).toEqual(mockObs);
expect(mockDb.db.prepare).toHaveBeenCalled();
});
it('should pass observation types from config to query', () => {
const mockDb = createMockDb([]);
const config = createTestConfig({
observationTypes: new Set(['decision', 'bugfix']),
});
queryObservations(mockDb as any, 'test-project', config);
expect(mockDb.db.prepare).toHaveBeenCalled();
});
it('should respect totalObservationCount limit from config', () => {
const mockDb = createMockDb([]);
const config = createTestConfig({ totalObservationCount: 100 });
queryObservations(mockDb as any, 'test-project', config);
expect(mockDb.db.prepare).toHaveBeenCalled();
});
it('should return empty array when no observations match', () => {
const mockDb = createMockDb([]);
const config = createTestConfig();
const result = queryObservations(mockDb as any, 'test-project', config);
expect(result).toEqual([]);
});
it('should handle multiple observation types', () => {
const mockObs = [
createTestObservation({ id: 1, type: 'discovery' }),
createTestObservation({ id: 2, type: 'decision' }),
createTestObservation({ id: 3, type: 'bugfix' }),
];
const mockDb = createMockDb(mockObs);
const config = createTestConfig({
observationTypes: new Set(['discovery', 'decision', 'bugfix']),
});
const result = queryObservations(mockDb as any, 'test-project', config);
expect(result).toHaveLength(3);
});
});
describe('querySummaries', () => {
it('should query summaries with session count from config', () => {
const mockSummaries: SessionSummary[] = [
{
id: 1,
memory_session_id: 'session-1',
request: 'Request 1',
investigated: null,
learned: null,
completed: null,
next_steps: null,
created_at: '2025-01-01T12:00:00.000Z',
created_at_epoch: 1735732800000,
},
];
const mockDb = createMockDb([], mockSummaries);
const config = createTestConfig({ sessionCount: 5 });
const result = querySummaries(mockDb as any, 'test-project', config);
expect(result).toEqual(mockSummaries);
});
it('should return empty array when no summaries exist', () => {
const mockDb = createMockDb([], []);
const config = createTestConfig();
const result = querySummaries(mockDb as any, 'test-project', config);
expect(result).toEqual([]);
});
});
describe('buildTimeline', () => {
describe('buildTimeline', () => {
it('should combine observations and summaries into timeline', () => {
const observations = [
createTestObservation({ id: 1, created_at_epoch: 1000 }),
@@ -281,53 +143,4 @@ describe('ObservationCompiler', () => {
expect(timeline[0].type).toBe('summary');
expect(timeline[1].type).toBe('observation');
});
});
describe('getPriorSessionMessages', () => {
it('should return empty messages when showLastMessage is false', () => {
const observations = [createTestObservation()];
const config = createTestConfig({ showLastMessage: false });
const result = getPriorSessionMessages(observations, config, 'current-session', '/test/cwd');
expect(result.userMessage).toBe('');
expect(result.assistantMessage).toBe('');
});
it('should return empty messages when observations array is empty', () => {
const config = createTestConfig({ showLastMessage: true });
const result = getPriorSessionMessages([], config, 'current-session', '/test/cwd');
expect(result.userMessage).toBe('');
expect(result.assistantMessage).toBe('');
});
it('should return empty messages when no prior session found', () => {
// All observations have same session ID as current
const observations = [
createTestObservation({ memory_session_id: 'current-session' }),
];
const config = createTestConfig({ showLastMessage: true });
const result = getPriorSessionMessages(observations, config, 'current-session', '/test/cwd');
expect(result.userMessage).toBe('');
expect(result.assistantMessage).toBe('');
});
it('should look for prior session when current session differs', () => {
// Has observation from a different session
const observations = [
createTestObservation({ memory_session_id: 'prior-session' }),
];
const config = createTestConfig({ showLastMessage: true });
// Transcript file won't exist, so should return empty strings
const result = getPriorSessionMessages(observations, config, 'current-session', '/nonexistent/path');
expect(result.userMessage).toBe('');
expect(result.assistantMessage).toBe('');
});
});
});