Fixed: #140 #133 #80
released this
2025-10-25 21:39:15 +00:00 | 1573 commits to main since this release

feat(translator): add token counting functionality for Gemini, Claude, and CLI
- Introduced `TokenCount` handling across various Codex translators (Gemini, Claude, CLI) with respective implementations.
- Added utility methods for token counting and formatting responses.
- Integrated the `tiktoken-go/tokenizer` library for tokenization.
- Updated CodexExecutor with token counting logic to support multiple models, including GPT-5 variants.
- Refined go.mod and go.sum to include new dependencies.
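The shape of the new utilities can be sketched as below. This is a minimal, dependency-free illustration: the names `TokenCountResponse`, `countTokensApprox`, and `formatTokenCount` are hypothetical, and the ~4-characters-per-token heuristic stands in for the real path, which encodes the text with `tiktoken-go/tokenizer` and counts the returned token ids.

```go
package main

import (
	"encoding/json"
	"fmt"
	"unicode/utf8"
)

// TokenCountResponse mirrors the shape of a Gemini-style countTokens
// reply; the field name is illustrative, not the project's exact API.
type TokenCountResponse struct {
	TotalTokens int `json:"totalTokens"`
}

// countTokensApprox is a stand-in for the release's tokenizer path
// (which would call tiktoken-go/tokenizer's Encode); here we use the
// common ~4-characters-per-token heuristic to stay dependency-free.
func countTokensApprox(text string) int {
	n := utf8.RuneCountInString(text)
	return (n + 3) / 4 // round up
}

// formatTokenCount renders the count as the JSON body an executor
// could return to the caller.
func formatTokenCount(text string) (string, error) {
	resp := TokenCountResponse{TotalTokens: countTokensApprox(text)}
	b, err := json.Marshal(resp)
	if err != nil {
		return "", err
	}
	return string(b), nil
}

func main() {
	out, err := formatTokenCount("Hello, how are you today?")
	if err != nil {
		panic(err)
	}
	fmt.Println(out) // {"totalTokens":7}
}
```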
feat(runtime): add token counting functionality across executors
- Implemented token counting in OpenAICompatExecutor, QwenExecutor, and IFlowExecutor.
- Added utilities for token counting and response formatting using `tiktoken-go/tokenizer`.
- Integrated token counting into translators for Gemini, Claude, and Gemini CLI.
- Extended token counting support to multiple models, including GPT-5 variants.
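Supporting many models usually comes down to routing each model id to a tokenizer encoding. The mapping below is a hypothetical sketch (the actual table lives in the executors, and the encoding chosen for GPT-5 variants here is an assumption):

```go
package main

import (
	"fmt"
	"strings"
)

// encodingForModel maps a model id to a tokenizer encoding name.
// The routing is illustrative: GPT-5 variants (gpt-5, gpt-5-mini, ...)
// share one encoding in this sketch, with a conservative default for
// unknown models such as Qwen or Gemini ids.
func encodingForModel(model string) string {
	m := strings.ToLower(model)
	switch {
	case strings.HasPrefix(m, "gpt-5"), strings.HasPrefix(m, "gpt-4o"):
		return "o200k_base"
	case strings.HasPrefix(m, "gpt-4"), strings.HasPrefix(m, "gpt-3.5"):
		return "cl100k_base"
	default:
		return "cl100k_base" // fallback for non-OpenAI model ids
	}
}

func main() {
	for _, m := range []string{"gpt-5", "gpt-5-mini", "gpt-4", "qwen3-coder"} {
		fmt.Printf("%s -> %s\n", m, encodingForModel(m))
	}
}
```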
docs: update environment variable instructions for multi-model support
- Added details for setting `ANTHROPIC_DEFAULT_OPUS_MODEL`, `ANTHROPIC_DEFAULT_SONNET_MODEL`, and `ANTHROPIC_DEFAULT_HAIKU_MODEL` for version 2.x.x.
- Clarified usage of `ANTHROPIC_MODEL` and `ANTHROPIC_SMALL_FAST_MODEL` for version 1.x.x.
- Expanded examples for setting environment variables across different models, including Gemini, GPT-5, Claude, and Qwen3.
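A minimal shell snippet for the two schemes might look as follows; the variable names come from the docs above, but the model id values are placeholders chosen for illustration:

```shell
# Version 2.x.x: per-tier overrides (model ids are illustrative)
export ANTHROPIC_DEFAULT_OPUS_MODEL="gemini-2.5-pro"
export ANTHROPIC_DEFAULT_SONNET_MODEL="gpt-5"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="qwen3-coder-plus"

# Version 1.x.x: the older two-variable scheme
export ANTHROPIC_MODEL="gpt-5"
export ANTHROPIC_SMALL_FAST_MODEL="qwen3-coder-plus"
```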