68290a9121

* refactor: Reduce continuation prompt token usage by 95 lines

  Removed redundant instructions from the continuation prompt that were originally added to mitigate a session continuity issue. That issue has since been resolved, making these detailed instructions unnecessary on every continuation.

  Changes:
  - Reduced continuation prompt from ~106 lines to ~11 lines (~95 line reduction)
  - Changed "User's Goal:" to "Next Prompt in Session:" (more accurate framing)
  - Removed redundant WHAT TO RECORD, WHEN TO SKIP, and OUTPUT FORMAT sections
  - Kept concise reminder: "Continue generating observations and progress summaries..."
  - Initial prompt still contains all detailed instructions

  Impact:
  - Significant token savings on every continuation prompt
  - Faster context injection with no loss of functionality
  - Instructions remain comprehensive in initial prompt

  Files modified:
  - src/sdk/prompts.ts (buildContinuationPrompt function)
  - plugin/scripts/worker-service.cjs (compiled output)

  🤖 Generated with [Claude Code](https://claude.com/claude-code)

* refactor: Enhance observation and summary prompts for clarity and token efficiency

* Enhance prompt clarity and instructions in prompts.ts
  - Added a reminder to think about instructions before starting work.
  - Simplified the continuation prompt instruction by removing "for this ongoing session."

* feat: Enhance settings.json with permissions and deny access to sensitive files
  - refactor: Remove PLAN-full-observation-display.md and PR_SUMMARY.md as they are no longer needed
  - chore: Delete SECURITY_SUMMARY.md since it is redundant after recent changes
  - fix: Update worker-service.cjs to streamline observation generation instructions
  - cleanup: Remove src-analysis.md and src-tree.md for a cleaner codebase
  - refactor: Modify prompts.ts to clarify instructions for memory processing

* refactor: Remove legacy worker service implementation

* feat: Enhance summary hook to extract last assistant message and improve logging
  - Added a function to extract the last assistant message from the transcript.
  - Updated the summary hook to include the last assistant message in the summary request.
  - Modified the SDKSession interface to store the last assistant message.
  - Adjusted buildSummaryPrompt to utilize the last assistant message for generating summaries.
  - Updated the worker service and session manager to handle the last assistant message in summarize requests.
  - Introduced a silentDebug utility for improved logging and diagnostics throughout the summary process.

* docs: Add comprehensive implementation plan for ROI metrics feature

  Added a detailed implementation plan covering:
  - Token usage capture from the Agent SDK
  - Database schema changes (migration #8)
  - Discovery cost tracking per observation
  - Context hook display with ROI metrics
  - Testing and rollout strategy

  Timeline: ~20 hours over 4 days
  Goal: Empirical data for YC application amendment

* feat: Add transcript processing scripts for analysis and formatting
  - Implemented `dump-transcript-readable.ts` to generate a readable markdown dump of transcripts, excluding certain entry types.
  - Created `extract-rich-context-examples.ts` to extract and showcase rich context examples from transcripts, highlighting user requests and assistant reasoning.
  - Developed `format-transcript-context.ts` to format transcript context into structured markdown for improved observation generation.
  - Added `test-transcript-parser.ts` for validating data extraction from transcript JSONL files, including statistics and error reporting.
  - Introduced `transcript-to-markdown.ts` for a complete markdown representation of transcript data, showing all context data.
  - Enhanced type definitions in `transcript.ts` to support the new features and ensure type safety.
  - Built `transcript-parser.ts` to handle parsing of transcript JSONL files, including error handling and data-extraction methods.

* Refactor hooks and SDKAgent for improved observation handling
  - Updated `new-hook.ts` to clean user prompts by stripping leading slashes for better semantic clarity.
  - Enhanced `save-hook.ts` to include additional tools in the SKIP_TOOLS set, preventing unnecessary observations from certain command invocations.
  - Modified `prompts.ts` to change the structure of observation prompts, emphasizing the observational role and providing a detailed XML output format for observations.
  - Adjusted `SDKAgent.ts` to enforce stricter tool-usage restrictions, ensuring the memory agent operates solely as an observer without any tool access.

* feat: Enhance session initialization to accept user prompts and prompt numbers
  - Updated `handleSessionInit` in `worker-service.ts` to extract `userPrompt` and `promptNumber` from the request body and pass them to `initializeSession`.
  - Modified `initializeSession` in `SessionManager.ts` to handle optional `currentUserPrompt` and `promptNumber` parameters.
  - Added logic to update the existing session's `userPrompt` and `lastPromptNumber` if a `currentUserPrompt` is provided.
  - Implemented debug logging for session initialization and updates to track user prompts and prompt numbers.

---------

Co-authored-by: Claude <noreply@anthropic.com>
157 lines · 4.4 KiB · Markdown
# Rich Context Examples

This document shows what contextual data is available in transcripts that could improve observation generation quality.
## Statistics

- Total entries: 369
- User messages: 74
- Assistant messages: 133
- Token usage: 67,465 total
- Cache efficiency: 6,979,410 tokens read from cache
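To put the cache figure in perspective, a quick bit of arithmetic on the two numbers above shows cached reads outweigh fresh token usage by roughly 100x:

```typescript
// Quick sanity check on the statistics above: cached reads vs. fresh tokens.
const totalTokens = 67_465; // "Token usage: 67,465 total"
const cachedReads = 6_979_410; // "Cache efficiency: 6,979,410 tokens read from cache"

const ratio = cachedReads / totalTokens;
console.log(ratio.toFixed(1)); // ≈ "103.5"
```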
## Conversation Flow

This shows how user requests, assistant reasoning, and tool executions flow together. This is the rich context currently missing from individual tool observations.

---
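As a rough illustration of that flow, transcript entries can be grouped into per-request exchanges. This is only a sketch: the entry shapes below are simplified assumptions for illustration, not the actual transcript schema handled by `transcript-parser.ts`.

```typescript
// Minimal sketch: group transcript JSONL lines into exchanges.
// Entry fields here (type, text, tool_name) are assumed shapes, not the real schema.
interface TranscriptEntry {
  type: "user" | "assistant" | "tool_use";
  text?: string;       // user request or assistant reasoning
  tool_name?: string;  // present when type === "tool_use"
}

interface Exchange {
  userRequest: string;
  reasoning: string[];
  tools: string[];
}

function groupIntoExchanges(jsonl: string): Exchange[] {
  const exchanges: Exchange[] = [];
  for (const line of jsonl.split("\n").filter(Boolean)) {
    const entry = JSON.parse(line) as TranscriptEntry;
    if (entry.type === "user") {
      // Each user message opens a new exchange.
      exchanges.push({ userRequest: entry.text ?? "", reasoning: [], tools: [] });
    } else if (exchanges.length > 0) {
      // Assistant text and tool uses attach to the most recent exchange.
      const current = exchanges[exchanges.length - 1];
      if (entry.type === "assistant" && entry.text) current.reasoning.push(entry.text);
      if (entry.type === "tool_use" && entry.tool_name) current.tools.push(entry.tool_name);
    }
  }
  return exchanges;
}
```

Grouping this way is what makes the "Data Available for This Exchange" summaries in the examples possible: each tool call carries its originating request.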
### Example 1

#### 👤 User Request

```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```

#### 🔧 Tools Executed (1)

**TodoWrite**

```json
{
  "todos": [
    {
      "content": "Read the ROI implementation plan to understand full scope",
      "status": "in_progress",
      "activeForm": "Reading ROI implementation plan"
    },
    {
```

**📊 Data Available for This Exchange:**

- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌

---
### Example 2

#### 👤 User Request

```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```

#### 🔧 Tools Executed (1)

**Glob**

- Pattern: `**/*roi*`

**📊 Data Available for This Exchange:**

- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌

---
### Example 3

#### 👤 User Request

```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```

#### 🔧 Tools Executed (1)

**Glob**

- Pattern: `**/*implementation*plan*`

**📊 Data Available for This Exchange:**

- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌

---
### Example 4

#### 👤 User Request

```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```

#### 🔧 Tools Executed (1)

**Read**

- Reading: `/Users/alexnewman/Scripts/claude-mem/docs/context/transcript-data-discovery.md`

**📊 Data Available for This Exchange:**

- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌

---
### Example 5

#### 👤 User Request

```
Thank you for that. So now that you have a very deep understanding of what we are doing here, I'd like you to begin working on the enhancements to our prompts that leverage data using the transcript model we discovered
```

#### 🔧 Tools Executed (1)

**Read**

- Reading: `/Users/alexnewman/Scripts/claude-mem/IMPLEMENTATION_PLAN_ROI_METRICS.md`

**📊 Data Available for This Exchange:**

- User intent: ✅ (218 chars)
- Assistant reasoning: ✅ (0 chars)
- Thinking process: ❌
- Tool executions: ✅ (1 tools)
- **Currently sent to memory worker:** Tool inputs/outputs only (no context!) ❌

---
## Key Insight

Currently, the memory worker receives **isolated tool executions** via save-hook:

- tool_name: "Read"
- tool_input: {"file_path": "src/foo.ts"}
- tool_output: {file contents}

But the transcript contains **rich contextual data**:

- WHY the tool was used (user's request)
- WHAT the assistant planned to accomplish
- HOW it fits into the broader task
- The assistant's reasoning/thinking
- Multiple related tools used together

This context would help the memory worker:

1. Understand if a tool use is meaningful or routine
2. Generate observations that capture WHY, not just WHAT
3. Group related tools into coherent actions
4. Avoid "investigating" - the context is already present
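The contrast above can be sketched as two payload shapes. These interfaces are illustrative only: the names are invented for this sketch, not the actual claude-mem types.

```typescript
// What save-hook sends today: an isolated tool execution (illustrative shape).
interface IsolatedToolObservation {
  tool_name: string;                    // e.g. "Read"
  tool_input: Record<string, unknown>;  // e.g. { file_path: "src/foo.ts" }
  tool_output: string;                  // e.g. file contents
}

// What a transcript-aware payload could add (hypothetical fields).
interface EnrichedObservation extends IsolatedToolObservation {
  userRequest: string;     // WHY the tool was used
  assistantPlan?: string;  // WHAT the assistant intended to accomplish
  thinking?: string;       // the assistant's reasoning, when present
  relatedTools: string[];  // other tools used in the same exchange
}

// Enriched form of the isolated Read call shown above.
const enriched: EnrichedObservation = {
  tool_name: "Read",
  tool_input: { file_path: "src/foo.ts" },
  tool_output: "(file contents)",
  userRequest: "begin working on the enhancements to our prompts",
  relatedTools: ["Glob", "TodoWrite"],
};
```

With the enriched shape, the memory worker can answer "was this tool use meaningful?" from the payload itself instead of investigating.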