Compare commits


21 Commits

Author SHA1 Message Date
Luis Pater d1736cb29c Merge pull request #315 from router-for-me/aistudio
docker-image / docker (push) Has been cancelled
goreleaser / goreleaser (push) Has been cancelled
fix(aistudio): strip Gemini generation config overrides
2025-11-23 20:25:59 +08:00
hkfires 62bfd62871 fix(aistudio): strip Gemini generation config overrides
Remove generationConfig.maxOutputTokens, generationConfig.responseMimeType and generationConfig.responseJsonSchema from the Gemini payload in translateRequest so we no longer send unsupported or conflicting response configuration fields. This lets the backend or caller control response formatting and output limits and helps prevent potential API errors caused by these keys.
2025-11-23 19:44:03 +08:00
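The diff for this commit deletes the three keys with `sjson.DeleteBytes`. A minimal standard-library approximation of the same idea (the real code path, `translateRequest`, uses sjson; the function name here is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripGenerationConfigOverrides removes the response-formatting and
// output-limit keys from generationConfig so the backend keeps control
// of them. Stdlib sketch of what the sjson.DeleteBytes calls in the
// diff below accomplish.
func stripGenerationConfigOverrides(payload []byte) ([]byte, error) {
	var doc map[string]any
	if err := json.Unmarshal(payload, &doc); err != nil {
		return nil, err
	}
	if gc, ok := doc["generationConfig"].(map[string]any); ok {
		for _, key := range []string{"maxOutputTokens", "responseMimeType", "responseJsonSchema"} {
			delete(gc, key)
		}
	}
	return json.Marshal(doc)
}

func main() {
	in := []byte(`{"generationConfig":{"maxOutputTokens":1024,"responseMimeType":"application/json","temperature":0.7}}`)
	out, err := stripGenerationConfigOverrides(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```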
Luis Pater 257621c5ed **chore(executor): update default agent version and simplify const formatting**
- Updated `defaultAntigravityAgent` to version `1.11.5`.
- Adjusted const value formatting for improved readability.

**feat(executor): introduce fallback mechanism for Antigravity base URLs**

- Added retry logic with fallback order for Antigravity base URLs to handle request errors and rate limits.
- Refactored base URL handling with `antigravityBaseURLFallbackOrder` and related utilities.
- Enhanced error handling in non-streaming and streaming requests with retry support and improved metadata reporting.
- Updated `buildRequest` to support dynamic base URL assignment.
2025-11-23 17:53:07 +08:00
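The fallback behavior described above can be sketched as a pure function plus a retry loop (constants mirror the diff; `fallbackOrder` and `tryEach` are illustrative names, and the real executor also distinguishes read errors and records metadata):

```go
package main

import "fmt"

const (
	baseURLDaily    = "https://daily-cloudcode-pa.sandbox.googleapis.com"
	baseURLAutopush = "https://autopush-cloudcode-pa.sandbox.googleapis.com"
)

// fallbackOrder sketches antigravityBaseURLFallbackOrder: a per-auth
// custom base URL short-circuits the built-in fallback list.
func fallbackOrder(customBaseURL string) []string {
	if customBaseURL != "" {
		return []string{customBaseURL}
	}
	return []string{baseURLDaily, baseURLAutopush}
}

// tryEach sketches the retry loop: advance to the next base URL only on
// a transport error or HTTP 429; otherwise surface the result.
func tryEach(urls []string, do func(string) (int, error)) (string, error) {
	var lastErr error
	for i, u := range urls {
		status, err := do(u)
		if err != nil || status == 429 {
			lastErr = fmt.Errorf("base url %s failed (status %d, err %v)", u, status, err)
			if i+1 < len(urls) {
				continue
			}
			return "", lastErr
		}
		return u, nil
	}
	return "", lastErr
}

func main() {
	// The first base URL is rate limited; the fallback succeeds.
	winner, err := tryEach(fallbackOrder(""), func(u string) (int, error) {
		if u == baseURLDaily {
			return 429, nil
		}
		return 200, nil
	})
	fmt.Println(winner, err)
}
```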
Luis Pater ac064389ca **feat(executor, translator): enhance token handling and payload processing**
- Improved Antigravity executor to handle `thinkingConfig` adjustments and default `thinkingBudget` when `thinkingLevel` is removed.
- Updated translator response handling to set default values for output token counts when specific token data is missing.
2025-11-23 11:32:37 +08:00
Luis Pater 8d23ffc873 **feat(executor): add model alias mapping and improve Antigravity payload handling**
- Introduced `modelName2Alias` and `alias2ModelName` functions for mapping between model names and aliases.
- Improved Antigravity payload transformation to include alias-to-model name conversion.
- Enhanced processing for Claude Sonnet models to adjust template parameters based on schema presence.
2025-11-23 03:16:14 +08:00
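The two mapping functions land in the antigravity executor diff further down this page; extracted here as a standalone, runnable reference (the switch bodies are copied from the diff):

```go
package main

import "fmt"

// modelName2Alias maps an upstream model ID to its published alias.
// IDs mapped to "" are hidden from the model list.
func modelName2Alias(modelName string) string {
	switch modelName {
	case "rev19-uic3-1p":
		return "gemini-2.5-computer-use-preview-10-2025"
	case "gemini-3-pro-image":
		return "gemini-3-pro-image-preview"
	case "gemini-3-pro-high":
		return "gemini-3-pro-preview"
	case "claude-sonnet-4-5":
		return "gemini-claude-sonnet-4-5"
	case "claude-sonnet-4-5-thinking":
		return "gemini-claude-sonnet-4-5-thinking"
	case "chat_20706", "chat_23310", "gemini-2.5-flash-thinking", "gemini-3-pro-low", "gemini-2.5-pro":
		return ""
	default:
		return modelName
	}
}

// alias2ModelName is the inverse, applied when building the upstream request.
func alias2ModelName(modelName string) string {
	switch modelName {
	case "gemini-2.5-computer-use-preview-10-2025":
		return "rev19-uic3-1p"
	case "gemini-3-pro-image-preview":
		return "gemini-3-pro-image"
	case "gemini-3-pro-preview":
		return "gemini-3-pro-high"
	case "gemini-claude-sonnet-4-5":
		return "claude-sonnet-4-5"
	case "gemini-claude-sonnet-4-5-thinking":
		return "claude-sonnet-4-5-thinking"
	default:
		return modelName
	}
}

func main() {
	fmt.Println(alias2ModelName(modelName2Alias("gemini-3-pro-high")))
}
```

Aliases that survive `modelName2Alias` round-trip back to the original ID, which is what keeps the published model list and the upstream payload consistent.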
Luis Pater 4307f08bbc **feat(watcher): optimize auth file handling with hash-based change detection**
- Added `authFileUnchanged` to skip reloads for unchanged files based on SHA256 hash comparisons.
- Introduced `isKnownAuthFile` to verify known files before handling removal events.
- Improved event processing in `handleEvent` to reduce unnecessary reloads and enhance performance.
2025-11-23 01:22:16 +08:00
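The watcher diff itself is not shown on this page, but the hash-based skip described in the bullets can be sketched as follows (the `watcher` type and method body here are hypothetical; only the function name `authFileUnchanged` and the SHA-256 comparison come from the commit message):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// watcher remembers the last seen content hash per auth file path.
type watcher struct {
	hashes map[string][32]byte
}

// authFileUnchanged reports whether the file's contents match the last
// seen SHA-256 digest, letting the event handler skip a reload. It also
// records the new digest for the next event.
func (w *watcher) authFileUnchanged(path string, contents []byte) bool {
	sum := sha256.Sum256(contents)
	prev, known := w.hashes[path]
	w.hashes[path] = sum
	return known && prev == sum
}

func main() {
	w := &watcher{hashes: make(map[string][32]byte)}
	fmt.Println(w.authFileUnchanged("a.json", []byte(`{"token":"x"}`))) // first sight: reload
	fmt.Println(w.authFileUnchanged("a.json", []byte(`{"token":"x"}`))) // unchanged: skip
	fmt.Println(w.authFileUnchanged("a.json", []byte(`{"token":"y"}`))) // changed: reload
}
```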
Luis Pater 9d50a68768 **feat(translator): improve content processing and Antigravity request conversion**
- Refactored response translation logic to support mixed content types (`input_text`, `output_text`, `input_image`) with better role assignments and part handling.
- Added image processing logic for embedding inline data with MIME type and base64 encoded content.
- Updated Antigravity request conversion to replace Gemini CLI references for consistency.
2025-11-22 21:34:34 +08:00
Luis Pater 7c3c24addc Merge pull request #306 from router-for-me/usage
fix some bugs
2025-11-22 17:45:49 +08:00
hkfires 166fa9e2e6 fix(gemini): parse stream usage from JSON, skip thoughtSignature 2025-11-22 16:07:12 +08:00
hkfires 88e566281e fix(gemini): filter SSE usage metadata in streams 2025-11-22 15:53:36 +08:00
hkfires d32bb9db6b fix(runtime): treat non-empty finishReason as terminal 2025-11-22 15:39:46 +08:00
hkfires 8356b35320 fix(executor): expire stop chunks without usage metadata 2025-11-22 15:27:47 +08:00
hkfires 19a048879c feat(runtime): track antigravity usage and token counts 2025-11-22 14:04:28 +08:00
hkfires 1061354b2f fix: handle empty and non-JSON SSE chunks safely 2025-11-22 13:49:23 +08:00
hkfires 46b4110ff3 fix: preserve SSE usage metadata-only trailing chunks 2025-11-22 13:25:25 +08:00
hkfires c29931e093 fix(translator): ignore empty JSON chunks in OpenAI responses 2025-11-22 13:09:16 +08:00
hkfires b05cfd9f84 fix(translator): include empty text chunks in responses 2025-11-22 13:03:50 +08:00
hkfires 8ce22b8403 fix(sse): preserve usage metadata for stop chunks 2025-11-22 12:50:23 +08:00
Luis Pater d1cdedc4d1 Merge pull request #303 from router-for-me/image
feat(translator): support image size and googleSearch tools
2025-11-22 11:20:58 +08:00
Luis Pater d291eb9489 Fixed: #302
**feat(executor): enhance WebSocket error handling and metadata logging**

- Added handling for stream closure before start with appropriate error recording.
- Improved metadata logging for non-OK HTTP status codes in WebSocket responses.
- Consolidated event processing logic with `processEvent` for better error handling and payload management.
- Refactored stream initialization to include the first event handling for smoother execution flow.
2025-11-22 11:18:13 +08:00
hkfires dc8d3201e1 feat(translator): support image size and googleSearch tools 2025-11-22 10:36:52 +08:00
14 changed files with 841 additions and 328 deletions
+60 -6
@@ -129,18 +129,60 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
 		recordAPIResponseError(ctx, e.cfg, err)
 		return nil, err
 	}
+	firstEvent, ok := <-wsStream
+	if !ok {
+		err = fmt.Errorf("wsrelay: stream closed before start")
+		recordAPIResponseError(ctx, e.cfg, err)
+		return nil, err
+	}
+	if firstEvent.Status > 0 && firstEvent.Status != http.StatusOK {
+		metadataLogged := false
+		if firstEvent.Status > 0 {
+			recordAPIResponseMetadata(ctx, e.cfg, firstEvent.Status, firstEvent.Headers.Clone())
+			metadataLogged = true
+		}
+		var body bytes.Buffer
+		if len(firstEvent.Payload) > 0 {
+			appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(firstEvent.Payload))
+			body.Write(firstEvent.Payload)
+		}
+		if firstEvent.Type == wsrelay.MessageTypeStreamEnd {
+			return nil, statusErr{code: firstEvent.Status, msg: body.String()}
+		}
+		for event := range wsStream {
+			if event.Err != nil {
+				recordAPIResponseError(ctx, e.cfg, event.Err)
+				if body.Len() == 0 {
+					body.WriteString(event.Err.Error())
+				}
+				break
+			}
+			if !metadataLogged && event.Status > 0 {
+				recordAPIResponseMetadata(ctx, e.cfg, event.Status, event.Headers.Clone())
+				metadataLogged = true
+			}
+			if len(event.Payload) > 0 {
+				appendAPIResponseChunk(ctx, e.cfg, bytes.Clone(event.Payload))
+				body.Write(event.Payload)
+			}
+			if event.Type == wsrelay.MessageTypeStreamEnd {
+				break
+			}
+		}
+		return nil, statusErr{code: firstEvent.Status, msg: body.String()}
+	}
 	out := make(chan cliproxyexecutor.StreamChunk)
 	stream = out
-	go func() {
+	go func(first wsrelay.StreamEvent) {
 		defer close(out)
 		var param any
 		metadataLogged := false
-		for event := range wsStream {
+		processEvent := func(event wsrelay.StreamEvent) bool {
 			if event.Err != nil {
 				recordAPIResponseError(ctx, e.cfg, event.Err)
 				reporter.publishFailure(ctx)
 				out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("wsrelay: %v", event.Err)}
-				return
+				return false
 			}
 			switch event.Type {
 			case wsrelay.MessageTypeStreamStart:
@@ -162,7 +204,7 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
 					break
 				}
 			case wsrelay.MessageTypeStreamEnd:
-				return
+				return false
 			case wsrelay.MessageTypeHTTPResp:
 				if !metadataLogged && event.Status > 0 {
 					recordAPIResponseMetadata(ctx, e.cfg, event.Status, event.Headers.Clone())
@@ -176,15 +218,24 @@ func (e *AIStudioExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth
 					out <- cliproxyexecutor.StreamChunk{Payload: ensureColonSpacedJSON([]byte(lines[i]))}
 				}
 				reporter.publish(ctx, parseGeminiUsage(event.Payload))
-				return
+				return false
 			case wsrelay.MessageTypeError:
 				recordAPIResponseError(ctx, e.cfg, event.Err)
 				reporter.publishFailure(ctx)
 				out <- cliproxyexecutor.StreamChunk{Err: fmt.Errorf("wsrelay: %v", event.Err)}
+				return false
+			}
+			return true
+		}
+		if !processEvent(first) {
+			return
+		}
+		for event := range wsStream {
+			if !processEvent(event) {
 				return
 			}
 		}
-	}()
+	}(firstEvent)
 	return stream, nil
 }
@@ -268,6 +319,9 @@ func (e *AIStudioExecutor) translateRequest(req cliproxyexecutor.Request, opts c
 	payload = util.StripThinkingConfigIfUnsupported(req.Model, payload)
 	payload = fixGeminiImageAspectRatio(req.Model, payload)
 	payload = applyPayloadConfig(e.cfg, req.Model, payload)
+	payload, _ = sjson.DeleteBytes(payload, "generationConfig.maxOutputTokens")
+	payload, _ = sjson.DeleteBytes(payload, "generationConfig.responseMimeType")
+	payload, _ = sjson.DeleteBytes(payload, "generationConfig.responseJsonSchema")
 	metadataAction := "generateContent"
 	if req.Metadata != nil {
 		if action, _ := req.Metadata["action"].(string); action == "countTokens" {
+342 -152
@@ -26,16 +26,18 @@ import (
 )
 const (
-	antigravityBaseURL      = "https://daily-cloudcode-pa.sandbox.googleapis.com"
-	antigravityStreamPath   = "/v1internal:streamGenerateContent"
-	antigravityGeneratePath = "/v1internal:generateContent"
-	antigravityModelsPath   = "/v1internal:fetchAvailableModels"
-	antigravityClientID     = "1071006060591-tmhssin2h21lcre235vtolojh4g403ep.apps.googleusercontent.com"
-	antigravityClientSecret = "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"
-	defaultAntigravityAgent = "antigravity/1.11.3 windows/amd64"
-	antigravityAuthType     = "antigravity"
-	refreshSkew             = 5 * time.Minute
-	streamScannerBuffer int = 20_971_520
+	antigravityBaseURLDaily    = "https://daily-cloudcode-pa.sandbox.googleapis.com"
+	antigravityBaseURLAutopush = "https://autopush-cloudcode-pa.sandbox.googleapis.com"
+	antigravityBaseURLProd     = "https://cloudcode-pa.googleapis.com"
+	antigravityStreamPath      = "/v1internal:streamGenerateContent"
+	antigravityGeneratePath    = "/v1internal:generateContent"
+	antigravityModelsPath      = "/v1internal:fetchAvailableModels"
+	antigravityClientID        = "1071006060591-tmhssin2h21lcre235vtolojh4g403ep.apps.googleusercontent.com"
+	antigravityClientSecret    = "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"
+	defaultAntigravityAgent    = "antigravity/1.11.5 windows/amd64"
+	antigravityAuthType        = "antigravity"
+	refreshSkew                = 3000 * time.Second
+	streamScannerBuffer int    = 20_971_520
 )
 var randSource = rand.New(rand.NewSource(time.Now().UnixNano()))
@@ -73,42 +75,76 @@ func (e *AntigravityExecutor) Execute(ctx context.Context, auth *cliproxyauth.Au
 	to := sdktranslator.FromString("antigravity")
 	translated := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), false)
-	httpReq, errReq := e.buildRequest(ctx, auth, token, req.Model, translated, false, opts.Alt)
-	if errReq != nil {
-		return resp, errReq
-	}
+	baseURLs := antigravityBaseURLFallbackOrder(auth)
 	httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
-	httpResp, errDo := httpClient.Do(httpReq)
-	if errDo != nil {
-		recordAPIResponseError(ctx, e.cfg, errDo)
-		return resp, errDo
-	}
-	defer func() {
-		if errClose := httpResp.Body.Close(); errClose != nil {
-			log.Errorf("antigravity executor: close response body error: %v", errClose)
-		}
-	}()
-	recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone())
-	bodyBytes, errRead := io.ReadAll(httpResp.Body)
-	if errRead != nil {
-		recordAPIResponseError(ctx, e.cfg, errRead)
-		return resp, errRead
-	}
-	appendAPIResponseChunk(ctx, e.cfg, bodyBytes)
-	if httpResp.StatusCode < http.StatusOK || httpResp.StatusCode >= http.StatusMultipleChoices {
-		log.Debugf("antigravity executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), bodyBytes))
-		err = statusErr{code: httpResp.StatusCode, msg: string(bodyBytes)}
-		return resp, err
-	}
-	var param any
-	converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, bodyBytes, &param)
-	resp = cliproxyexecutor.Response{Payload: []byte(converted)}
-	reporter.ensurePublished(ctx)
-	return resp, nil
+	var lastStatus int
+	var lastBody []byte
+	var lastErr error
+	for idx, baseURL := range baseURLs {
+		httpReq, errReq := e.buildRequest(ctx, auth, token, req.Model, translated, false, opts.Alt, baseURL)
+		if errReq != nil {
+			err = errReq
+			return resp, err
+		}
+		httpResp, errDo := httpClient.Do(httpReq)
+		if errDo != nil {
+			recordAPIResponseError(ctx, e.cfg, errDo)
+			lastStatus = 0
+			lastBody = nil
+			lastErr = errDo
+			if idx+1 < len(baseURLs) {
+				log.Debugf("antigravity executor: request error on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+				continue
+			}
+			err = errDo
+			return resp, err
+		}
+		recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone())
+		bodyBytes, errRead := io.ReadAll(httpResp.Body)
+		if errClose := httpResp.Body.Close(); errClose != nil {
+			log.Errorf("antigravity executor: close response body error: %v", errClose)
+		}
+		if errRead != nil {
+			recordAPIResponseError(ctx, e.cfg, errRead)
+			err = errRead
+			return resp, err
+		}
+		appendAPIResponseChunk(ctx, e.cfg, bodyBytes)
+		if httpResp.StatusCode < http.StatusOK || httpResp.StatusCode >= http.StatusMultipleChoices {
+			log.Debugf("antigravity executor: upstream error status: %d, body: %s", httpResp.StatusCode, summarizeErrorBody(httpResp.Header.Get("Content-Type"), bodyBytes))
+			lastStatus = httpResp.StatusCode
+			lastBody = append([]byte(nil), bodyBytes...)
+			lastErr = nil
+			if httpResp.StatusCode == http.StatusTooManyRequests && idx+1 < len(baseURLs) {
+				log.Debugf("antigravity executor: rate limited on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+				continue
+			}
+			err = statusErr{code: httpResp.StatusCode, msg: string(bodyBytes)}
+			return resp, err
+		}
+		reporter.publish(ctx, parseAntigravityUsage(bodyBytes))
+		var param any
+		converted := sdktranslator.TranslateNonStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, bodyBytes, &param)
+		resp = cliproxyexecutor.Response{Payload: []byte(converted)}
+		reporter.ensurePublished(ctx)
+		return resp, nil
+	}
+	switch {
+	case lastStatus != 0:
+		err = statusErr{code: lastStatus, msg: string(lastBody)}
+	case lastErr != nil:
+		err = lastErr
+	default:
+		err = statusErr{code: http.StatusServiceUnavailable, msg: "antigravity executor: no base url available"}
+	}
+	return resp, err
 }
 // ExecuteStream handles streaming requests via the antigravity upstream.
@@ -130,66 +166,121 @@ func (e *AntigravityExecutor) ExecuteStream(ctx context.Context, auth *cliproxya
 	to := sdktranslator.FromString("antigravity")
 	translated := sdktranslator.TranslateRequest(from, to, req.Model, bytes.Clone(req.Payload), true)
-	httpReq, errReq := e.buildRequest(ctx, auth, token, req.Model, translated, true, opts.Alt)
-	if errReq != nil {
-		return nil, errReq
-	}
+	baseURLs := antigravityBaseURLFallbackOrder(auth)
 	httpClient := newProxyAwareHTTPClient(ctx, e.cfg, auth, 0)
-	httpResp, errDo := httpClient.Do(httpReq)
-	if errDo != nil {
-		recordAPIResponseError(ctx, e.cfg, errDo)
-		return nil, errDo
-	}
-	recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone())
-	if httpResp.StatusCode < http.StatusOK || httpResp.StatusCode >= http.StatusMultipleChoices {
-		bodyBytes, _ := io.ReadAll(httpResp.Body)
-		appendAPIResponseChunk(ctx, e.cfg, bodyBytes)
-		if errClose := httpResp.Body.Close(); errClose != nil {
-			log.Errorf("antigravity executor: close response body error: %v", errClose)
-		}
-		err = statusErr{code: httpResp.StatusCode, msg: string(bodyBytes)}
-		return nil, err
-	}
-	out := make(chan cliproxyexecutor.StreamChunk)
-	stream = out
-	go func() {
-		defer close(out)
-		defer func() {
-			if errClose := httpResp.Body.Close(); errClose != nil {
-				log.Errorf("antigravity executor: close response body error: %v", errClose)
-			}
-		}()
-		scanner := bufio.NewScanner(httpResp.Body)
-		scanner.Buffer(nil, streamScannerBuffer)
-		var param any
-		for scanner.Scan() {
-			line := scanner.Bytes()
-			appendAPIResponseChunk(ctx, e.cfg, line)
-			// Filter usage metadata for all models
-			// Only retain usage statistics in the terminal chunk
-			line = FilterSSEUsageMetadata(line)
-			chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, bytes.Clone(line), &param)
-			for i := range chunks {
-				out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
-			}
-		}
-		tail := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, []byte("[DONE]"), &param)
-		for i := range tail {
-			out <- cliproxyexecutor.StreamChunk{Payload: []byte(tail[i])}
-		}
-		if errScan := scanner.Err(); errScan != nil {
-			recordAPIResponseError(ctx, e.cfg, errScan)
-			reporter.publishFailure(ctx)
-			out <- cliproxyexecutor.StreamChunk{Err: errScan}
-		} else {
-			reporter.ensurePublished(ctx)
-		}
-	}()
-	return stream, nil
+	var lastStatus int
+	var lastBody []byte
+	var lastErr error
+	for idx, baseURL := range baseURLs {
+		httpReq, errReq := e.buildRequest(ctx, auth, token, req.Model, translated, true, opts.Alt, baseURL)
+		if errReq != nil {
+			err = errReq
+			return nil, err
+		}
+		httpResp, errDo := httpClient.Do(httpReq)
+		if errDo != nil {
+			recordAPIResponseError(ctx, e.cfg, errDo)
+			lastStatus = 0
+			lastBody = nil
+			lastErr = errDo
+			if idx+1 < len(baseURLs) {
+				log.Debugf("antigravity executor: request error on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+				continue
+			}
+			err = errDo
+			return nil, err
+		}
+		recordAPIResponseMetadata(ctx, e.cfg, httpResp.StatusCode, httpResp.Header.Clone())
+		if httpResp.StatusCode < http.StatusOK || httpResp.StatusCode >= http.StatusMultipleChoices {
+			bodyBytes, errRead := io.ReadAll(httpResp.Body)
+			if errClose := httpResp.Body.Close(); errClose != nil {
+				log.Errorf("antigravity executor: close response body error: %v", errClose)
+			}
+			if errRead != nil {
+				recordAPIResponseError(ctx, e.cfg, errRead)
+				lastStatus = 0
+				lastBody = nil
+				lastErr = errRead
+				if idx+1 < len(baseURLs) {
+					log.Debugf("antigravity executor: read error on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+					continue
+				}
+				err = errRead
+				return nil, err
+			}
+			appendAPIResponseChunk(ctx, e.cfg, bodyBytes)
+			lastStatus = httpResp.StatusCode
+			lastBody = append([]byte(nil), bodyBytes...)
+			lastErr = nil
+			if httpResp.StatusCode == http.StatusTooManyRequests && idx+1 < len(baseURLs) {
+				log.Debugf("antigravity executor: rate limited on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+				continue
+			}
+			err = statusErr{code: httpResp.StatusCode, msg: string(bodyBytes)}
+			return nil, err
+		}
+		out := make(chan cliproxyexecutor.StreamChunk)
+		stream = out
+		go func(resp *http.Response) {
+			defer close(out)
+			defer func() {
+				if errClose := resp.Body.Close(); errClose != nil {
+					log.Errorf("antigravity executor: close response body error: %v", errClose)
+				}
+			}()
+			scanner := bufio.NewScanner(resp.Body)
+			scanner.Buffer(nil, streamScannerBuffer)
+			var param any
+			for scanner.Scan() {
+				line := scanner.Bytes()
+				appendAPIResponseChunk(ctx, e.cfg, line)
+				// Filter usage metadata for all models
+				// Only retain usage statistics in the terminal chunk
+				line = FilterSSEUsageMetadata(line)
+				payload := jsonPayload(line)
+				if payload == nil {
+					continue
+				}
+				if detail, ok := parseAntigravityStreamUsage(payload); ok {
+					reporter.publish(ctx, detail)
+				}
+				chunks := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, bytes.Clone(payload), &param)
+				for i := range chunks {
+					out <- cliproxyexecutor.StreamChunk{Payload: []byte(chunks[i])}
+				}
+			}
+			tail := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), translated, []byte("[DONE]"), &param)
+			for i := range tail {
+				out <- cliproxyexecutor.StreamChunk{Payload: []byte(tail[i])}
+			}
+			if errScan := scanner.Err(); errScan != nil {
+				recordAPIResponseError(ctx, e.cfg, errScan)
+				reporter.publishFailure(ctx)
+				out <- cliproxyexecutor.StreamChunk{Err: errScan}
+			} else {
+				reporter.ensurePublished(ctx)
+			}
+		}(httpResp)
+		return stream, nil
+	}
+	switch {
+	case lastStatus != 0:
+		err = statusErr{code: lastStatus, msg: string(lastBody)}
+	case lastErr != nil:
+		err = lastErr
+	default:
+		err = statusErr{code: http.StatusServiceUnavailable, msg: "antigravity executor: no base url available"}
+	}
+	return nil, err
 }
 // Refresh refreshes the OAuth token using the refresh token.
@@ -220,54 +311,72 @@ func FetchAntigravityModels(ctx context.Context, auth *cliproxyauth.Auth, cfg *c
 		auth = updatedAuth
 	}
-	modelsURL := buildBaseURL(auth) + antigravityModelsPath
-	httpReq, errReq := http.NewRequestWithContext(ctx, http.MethodPost, modelsURL, bytes.NewReader([]byte(`{}`)))
-	if errReq != nil {
-		return nil
-	}
-	httpReq.Header.Set("Content-Type", "application/json")
-	httpReq.Header.Set("Authorization", "Bearer "+token)
-	httpReq.Header.Set("User-Agent", resolveUserAgent(auth))
-	if host := resolveHost(auth); host != "" {
-		httpReq.Host = host
-	}
+	baseURLs := antigravityBaseURLFallbackOrder(auth)
 	httpClient := newProxyAwareHTTPClient(ctx, cfg, auth, 0)
-	httpResp, errDo := httpClient.Do(httpReq)
-	if errDo != nil {
-		return nil
-	}
-	defer func() {
-		if errClose := httpResp.Body.Close(); errClose != nil {
-			log.Errorf("antigravity executor: close response body error: %v", errClose)
-		}
-	}()
-	bodyBytes, errRead := io.ReadAll(httpResp.Body)
-	if errRead != nil {
-		return nil
-	}
-	if httpResp.StatusCode < http.StatusOK || httpResp.StatusCode >= http.StatusMultipleChoices {
-		return nil
-	}
-	result := gjson.GetBytes(bodyBytes, "models")
-	if !result.Exists() {
-		return nil
-	}
-	now := time.Now().Unix()
-	models := make([]*registry.ModelInfo, 0, len(result.Map()))
-	for id := range result.Map() {
-		models = append(models, &registry.ModelInfo{
-			ID:      id,
-			Object:  "model",
-			Created: now,
-			OwnedBy: antigravityAuthType,
-			Type:    antigravityAuthType,
-		})
-	}
-	return models
+	for idx, baseURL := range baseURLs {
+		modelsURL := baseURL + antigravityModelsPath
+		httpReq, errReq := http.NewRequestWithContext(ctx, http.MethodPost, modelsURL, bytes.NewReader([]byte(`{}`)))
+		if errReq != nil {
+			return nil
+		}
+		httpReq.Header.Set("Content-Type", "application/json")
+		httpReq.Header.Set("Authorization", "Bearer "+token)
+		httpReq.Header.Set("User-Agent", resolveUserAgent(auth))
+		if host := resolveHost(baseURL); host != "" {
+			httpReq.Host = host
+		}
+		httpResp, errDo := httpClient.Do(httpReq)
+		if errDo != nil {
+			if idx+1 < len(baseURLs) {
+				log.Debugf("antigravity executor: models request error on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+				continue
+			}
+			return nil
+		}
+		bodyBytes, errRead := io.ReadAll(httpResp.Body)
+		if errClose := httpResp.Body.Close(); errClose != nil {
+			log.Errorf("antigravity executor: close response body error: %v", errClose)
+		}
+		if errRead != nil {
+			if idx+1 < len(baseURLs) {
+				log.Debugf("antigravity executor: models read error on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+				continue
+			}
+			return nil
+		}
+		if httpResp.StatusCode < http.StatusOK || httpResp.StatusCode >= http.StatusMultipleChoices {
+			if httpResp.StatusCode == http.StatusTooManyRequests && idx+1 < len(baseURLs) {
+				log.Debugf("antigravity executor: models request rate limited on base url %s, retrying with fallback base url: %s", baseURL, baseURLs[idx+1])
+				continue
+			}
+			return nil
+		}
+		result := gjson.GetBytes(bodyBytes, "models")
+		if !result.Exists() {
+			return nil
+		}
+		now := time.Now().Unix()
+		models := make([]*registry.ModelInfo, 0, len(result.Map()))
+		for id := range result.Map() {
+			id = modelName2Alias(id)
+			if id != "" {
+				models = append(models, &registry.ModelInfo{
+					ID:      id,
+					Object:  "model",
+					Created: now,
+					OwnedBy: antigravityAuthType,
+					Type:    antigravityAuthType,
+				})
+			}
+		}
+		return models
+	}
+	return nil
 }
 func (e *AntigravityExecutor) ensureAccessToken(ctx context.Context, auth *cliproxyauth.Auth) (string, *cliproxyauth.Auth, error) {
@@ -353,12 +462,15 @@ func (e *AntigravityExecutor) refreshToken(ctx context.Context, auth *cliproxyau
 	return auth, nil
 }
-func (e *AntigravityExecutor) buildRequest(ctx context.Context, auth *cliproxyauth.Auth, token, modelName string, payload []byte, stream bool, alt string) (*http.Request, error) {
+func (e *AntigravityExecutor) buildRequest(ctx context.Context, auth *cliproxyauth.Auth, token, modelName string, payload []byte, stream bool, alt, baseURL string) (*http.Request, error) {
 	if token == "" {
 		return nil, statusErr{code: http.StatusUnauthorized, msg: "missing access token"}
 	}
-	base := buildBaseURL(auth)
+	base := strings.TrimSuffix(baseURL, "/")
+	if base == "" {
+		base = buildBaseURL(auth)
+	}
 	path := antigravityGeneratePath
 	if stream {
 		path = antigravityStreamPath
@@ -379,6 +491,7 @@ func (e *AntigravityExecutor) buildRequest(ctx context.Context, auth *cliproxyau
 	}
 	payload = geminiToAntigravity(modelName, payload)
+	payload, _ = sjson.SetBytes(payload, "model", alias2ModelName(modelName))
 	httpReq, errReq := http.NewRequestWithContext(ctx, http.MethodPost, requestURL.String(), bytes.NewReader(payload))
 	if errReq != nil {
 		return nil, errReq
@@ -391,7 +504,7 @@ func (e *AntigravityExecutor) buildRequest(ctx context.Context, auth *cliproxyau
 	} else {
 		httpReq.Header.Set("Accept", "application/json")
 	}
-	if host := resolveHost(auth); host != "" {
+	if host := resolveHost(base); host != "" {
 		httpReq.Host = host
 	}
@@ -475,26 +588,13 @@ func int64Value(value any) (int64, bool) {
 }
 func buildBaseURL(auth *cliproxyauth.Auth) string {
-	if auth != nil {
-		if auth.Attributes != nil {
-			if v := strings.TrimSpace(auth.Attributes["base_url"]); v != "" {
-				return strings.TrimSuffix(v, "/")
-			}
-		}
-		if auth.Metadata != nil {
-			if v, ok := auth.Metadata["base_url"].(string); ok {
-				v = strings.TrimSpace(v)
-				if v != "" {
-					return strings.TrimSuffix(v, "/")
-				}
-			}
-		}
+	if baseURLs := antigravityBaseURLFallbackOrder(auth); len(baseURLs) > 0 {
+		return baseURLs[0]
 	}
-	return antigravityBaseURL
+	return antigravityBaseURLAutopush
 }
-func resolveHost(auth *cliproxyauth.Auth) string {
-	base := buildBaseURL(auth)
+func resolveHost(base string) string {
 	parsed, errParse := url.Parse(base)
 	if errParse != nil {
 		return ""
@@ -521,6 +621,37 @@ func resolveUserAgent(auth *cliproxyauth.Auth) string {
 	return defaultAntigravityAgent
 }
+func antigravityBaseURLFallbackOrder(auth *cliproxyauth.Auth) []string {
+	if base := resolveCustomAntigravityBaseURL(auth); base != "" {
+		return []string{base}
+	}
+	return []string{
+		antigravityBaseURLDaily,
+		antigravityBaseURLAutopush,
+		// antigravityBaseURLProd,
+	}
+}
+func resolveCustomAntigravityBaseURL(auth *cliproxyauth.Auth) string {
+	if auth == nil {
+		return ""
+	}
+	if auth.Attributes != nil {
+		if v := strings.TrimSpace(auth.Attributes["base_url"]); v != "" {
+			return strings.TrimSuffix(v, "/")
+		}
+	}
+	if auth.Metadata != nil {
+		if v, ok := auth.Metadata["base_url"].(string); ok {
+			v = strings.TrimSpace(v)
+			if v != "" {
+				return strings.TrimSuffix(v, "/")
+			}
+		}
+	}
+	return ""
+}
 func geminiToAntigravity(modelName string, payload []byte) []byte {
 	template, _ := sjson.Set(string(payload), "model", modelName)
 	template, _ = sjson.Set(template, "userAgent", "antigravity")
@@ -530,12 +661,21 @@ func geminiToAntigravity(modelName string, payload []byte) []byte {
 	template, _ = sjson.Delete(template, "request.safetySettings")
 	template, _ = sjson.Set(template, "request.toolConfig.functionCallingConfig.mode", "VALIDATED")
+	template, _ = sjson.Delete(template, "request.generationConfig.maxOutputTokens")
+	if !strings.HasPrefix(modelName, "gemini-3-") {
+		if thinkingLevel := gjson.Get(template, "request.generationConfig.thinkingConfig.thinkingLevel"); thinkingLevel.Exists() {
+			template, _ = sjson.Delete(template, "request.generationConfig.thinkingConfig.thinkingLevel")
+			template, _ = sjson.Set(template, "request.generationConfig.thinkingConfig.thinkingBudget", -1)
+		}
+	}
 	gjson.Get(template, "request.contents").ForEach(func(key, content gjson.Result) bool {
 		if content.Get("role").String() == "model" {
 			content.Get("parts").ForEach(func(partKey, part gjson.Result) bool {
 				if part.Get("functionCall").Exists() {
 					template, _ = sjson.Set(template, fmt.Sprintf("request.contents.%d.parts.%d.thoughtSignature", key.Int(), partKey.Int()), "skip_thought_signature_validator")
+				} else if part.Get("thoughtSignature").Exists() {
+					template, _ = sjson.Set(template, fmt.Sprintf("request.contents.%d.parts.%d.thoughtSignature", key.Int(), partKey.Int()), "skip_thought_signature_validator")
 				}
 				return true
 			})
@@ -543,6 +683,20 @@ func geminiToAntigravity(modelName string, payload []byte) []byte {
return true return true
}) })
if strings.HasPrefix(modelName, "claude-sonnet-") {
gjson.Get(template, "request.tools").ForEach(func(key, tool gjson.Result) bool {
tool.Get("functionDeclarations").ForEach(func(funKey, funcDecl gjson.Result) bool {
if funcDecl.Get("parametersJsonSchema").Exists() {
template, _ = sjson.SetRaw(template, fmt.Sprintf("request.tools.%d.functionDeclarations.%d.parameters", key.Int(), funKey.Int()), funcDecl.Get("parametersJsonSchema").Raw)
template, _ = sjson.Delete(template, fmt.Sprintf("request.tools.%d.functionDeclarations.%d.parameters.$schema", key.Int(), funKey.Int()))
template, _ = sjson.Delete(template, fmt.Sprintf("request.tools.%d.functionDeclarations.%d.parametersJsonSchema", key.Int(), funKey.Int()))
}
return true
})
return true
})
}
return []byte(template)
}
@@ -563,3 +717,39 @@ func generateProjectID() string {
randomPart := strings.ToLower(uuid.NewString())[:5]
return adj + "-" + noun + "-" + randomPart
}
func modelName2Alias(modelName string) string {
switch modelName {
case "rev19-uic3-1p":
return "gemini-2.5-computer-use-preview-10-2025"
case "gemini-3-pro-image":
return "gemini-3-pro-image-preview"
case "gemini-3-pro-high":
return "gemini-3-pro-preview"
case "claude-sonnet-4-5":
return "gemini-claude-sonnet-4-5"
case "claude-sonnet-4-5-thinking":
return "gemini-claude-sonnet-4-5-thinking"
case "chat_20706", "chat_23310", "gemini-2.5-flash-thinking", "gemini-3-pro-low", "gemini-2.5-pro":
return ""
default:
return modelName
}
}
func alias2ModelName(modelName string) string {
switch modelName {
case "gemini-2.5-computer-use-preview-10-2025":
return "rev19-uic3-1p"
case "gemini-3-pro-image-preview":
return "gemini-3-pro-image"
case "gemini-3-pro-preview":
return "gemini-3-pro-high"
case "gemini-claude-sonnet-4-5":
return "claude-sonnet-4-5"
case "gemini-claude-sonnet-4-5-thinking":
return "claude-sonnet-4-5-thinking"
default:
return modelName
}
}
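The two functions above are meant to be exact inverses for the round-trippable names. A minimal stdlib sketch of the same idea as a data table (the `aliasTable` map and `reverse` helper are illustrative, not from the source):

```go
package main

import "fmt"

// aliasTable mirrors the switch-based modelName2Alias mapping as data.
var aliasTable = map[string]string{
	"rev19-uic3-1p":              "gemini-2.5-computer-use-preview-10-2025",
	"gemini-3-pro-image":         "gemini-3-pro-image-preview",
	"gemini-3-pro-high":          "gemini-3-pro-preview",
	"claude-sonnet-4-5":          "gemini-claude-sonnet-4-5",
	"claude-sonnet-4-5-thinking": "gemini-claude-sonnet-4-5-thinking",
}

// reverse builds the alias -> model map from the forward table, so the
// two directions can never drift apart.
func reverse(m map[string]string) map[string]string {
	out := make(map[string]string, len(m))
	for k, v := range m {
		out[v] = k
	}
	return out
}

func main() {
	rev := reverse(aliasTable)
	for model, alias := range aliasTable {
		fmt.Println(rev[alias] == model) // each pair round-trips
	}
}
```

Keeping one source of truth avoids the class of bug where a name is added to `modelName2Alias` but not `alias2ModelName`.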
@@ -256,10 +256,15 @@ func (e *GeminiExecutor) ExecuteStream(ctx context.Context, auth *cliproxyauth.A
for scanner.Scan() {
line := scanner.Bytes()
appendAPIResponseChunk(ctx, e.cfg, line)
filtered := FilterSSEUsageMetadata(line)
payload := jsonPayload(filtered)
if len(payload) == 0 {
continue
}
if detail, ok := parseGeminiStreamUsage(payload); ok {
reporter.publish(ctx, detail)
}
lines := sdktranslator.TranslateStream(ctx, to, from, req.Model, bytes.Clone(opts.OriginalRequest), body, bytes.Clone(payload), &param)
for i := range lines {
out <- cliproxyexecutor.StreamChunk{Payload: []byte(lines[i])}
}
@@ -365,6 +365,204 @@ func parseGeminiCLIStreamUsage(line []byte) (usage.Detail, bool) {
return detail, true
}
func parseAntigravityUsage(data []byte) usage.Detail {
usageNode := gjson.ParseBytes(data)
node := usageNode.Get("response.usageMetadata")
if !node.Exists() {
node = usageNode.Get("usageMetadata")
}
if !node.Exists() {
node = usageNode.Get("usage_metadata")
}
if !node.Exists() {
return usage.Detail{}
}
detail := usage.Detail{
InputTokens: node.Get("promptTokenCount").Int(),
OutputTokens: node.Get("candidatesTokenCount").Int(),
ReasoningTokens: node.Get("thoughtsTokenCount").Int(),
TotalTokens: node.Get("totalTokenCount").Int(),
}
if detail.TotalTokens == 0 {
detail.TotalTokens = detail.InputTokens + detail.OutputTokens + detail.ReasoningTokens
}
return detail
}
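`parseAntigravityUsage` trusts `totalTokenCount` when the backend reports it and otherwise sums the individual counts. A stdlib sketch of that fallback (the struct and `totalTokens` helper are illustrative; field names follow the Gemini `usageMetadata` shape read above):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// usageMetadata mirrors the Gemini usageMetadata fields read by the parser.
type usageMetadata struct {
	PromptTokenCount     int64 `json:"promptTokenCount"`
	CandidatesTokenCount int64 `json:"candidatesTokenCount"`
	ThoughtsTokenCount   int64 `json:"thoughtsTokenCount"`
	TotalTokenCount      int64 `json:"totalTokenCount"`
}

// totalTokens applies the same fallback: trust totalTokenCount when set,
// otherwise sum prompt + candidates + thoughts.
func totalTokens(m usageMetadata) int64 {
	if m.TotalTokenCount != 0 {
		return m.TotalTokenCount
	}
	return m.PromptTokenCount + m.CandidatesTokenCount + m.ThoughtsTokenCount
}

func main() {
	var m usageMetadata
	_ = json.Unmarshal([]byte(`{"promptTokenCount":12,"candidatesTokenCount":30,"thoughtsTokenCount":8}`), &m)
	fmt.Println(totalTokens(m)) // 50
}
```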
func parseAntigravityStreamUsage(line []byte) (usage.Detail, bool) {
payload := jsonPayload(line)
if len(payload) == 0 || !gjson.ValidBytes(payload) {
return usage.Detail{}, false
}
node := gjson.GetBytes(payload, "response.usageMetadata")
if !node.Exists() {
node = gjson.GetBytes(payload, "usageMetadata")
}
if !node.Exists() {
node = gjson.GetBytes(payload, "usage_metadata")
}
if !node.Exists() {
return usage.Detail{}, false
}
detail := usage.Detail{
InputTokens: node.Get("promptTokenCount").Int(),
OutputTokens: node.Get("candidatesTokenCount").Int(),
ReasoningTokens: node.Get("thoughtsTokenCount").Int(),
TotalTokens: node.Get("totalTokenCount").Int(),
}
if detail.TotalTokens == 0 {
detail.TotalTokens = detail.InputTokens + detail.OutputTokens + detail.ReasoningTokens
}
return detail, true
}
var stopChunkWithoutUsage sync.Map
func rememberStopWithoutUsage(traceID string) {
stopChunkWithoutUsage.Store(traceID, struct{}{})
time.AfterFunc(10*time.Minute, func() { stopChunkWithoutUsage.Delete(traceID) })
}
// FilterSSEUsageMetadata removes usageMetadata from SSE events that are not
// terminal (i.e. that carry no non-empty finishReason); terminal chunks are
// left untouched. This function is shared between the aistudio and antigravity executors.
func FilterSSEUsageMetadata(payload []byte) []byte {
if len(payload) == 0 {
return payload
}
lines := bytes.Split(payload, []byte("\n"))
modified := false
foundData := false
for idx, line := range lines {
trimmed := bytes.TrimSpace(line)
if len(trimmed) == 0 || !bytes.HasPrefix(trimmed, []byte("data:")) {
continue
}
foundData = true
dataIdx := bytes.Index(line, []byte("data:"))
if dataIdx < 0 {
continue
}
rawJSON := bytes.TrimSpace(line[dataIdx+5:])
traceID := gjson.GetBytes(rawJSON, "traceId").String()
if isStopChunkWithoutUsage(rawJSON) && traceID != "" {
rememberStopWithoutUsage(traceID)
continue
}
if traceID != "" {
if _, ok := stopChunkWithoutUsage.Load(traceID); ok && hasUsageMetadata(rawJSON) {
stopChunkWithoutUsage.Delete(traceID)
continue
}
}
cleaned, changed := StripUsageMetadataFromJSON(rawJSON)
if !changed {
continue
}
var rebuilt []byte
rebuilt = append(rebuilt, line[:dataIdx]...)
rebuilt = append(rebuilt, []byte("data:")...)
if len(cleaned) > 0 {
rebuilt = append(rebuilt, ' ')
rebuilt = append(rebuilt, cleaned...)
}
lines[idx] = rebuilt
modified = true
}
if !modified {
if !foundData {
// Handle payloads that are raw JSON without SSE data: prefix.
trimmed := bytes.TrimSpace(payload)
cleaned, changed := StripUsageMetadataFromJSON(trimmed)
if !changed {
return payload
}
return cleaned
}
return payload
}
return bytes.Join(lines, []byte("\n"))
}
// StripUsageMetadataFromJSON drops usageMetadata unless finishReason is present (terminal).
// It handles both formats:
// - Aistudio: candidates.0.finishReason
// - Antigravity: response.candidates.0.finishReason
func StripUsageMetadataFromJSON(rawJSON []byte) ([]byte, bool) {
jsonBytes := bytes.TrimSpace(rawJSON)
if len(jsonBytes) == 0 || !gjson.ValidBytes(jsonBytes) {
return rawJSON, false
}
// Check for finishReason in both aistudio and antigravity formats
finishReason := gjson.GetBytes(jsonBytes, "candidates.0.finishReason")
if !finishReason.Exists() {
finishReason = gjson.GetBytes(jsonBytes, "response.candidates.0.finishReason")
}
terminalReason := finishReason.Exists() && strings.TrimSpace(finishReason.String()) != ""
usageMetadata := gjson.GetBytes(jsonBytes, "usageMetadata")
if !usageMetadata.Exists() {
usageMetadata = gjson.GetBytes(jsonBytes, "response.usageMetadata")
}
// Terminal chunk: keep as-is.
if terminalReason {
return rawJSON, false
}
// Nothing to strip
if !usageMetadata.Exists() {
return rawJSON, false
}
// Remove usageMetadata from both possible locations
cleaned := jsonBytes
var changed bool
if gjson.GetBytes(cleaned, "usageMetadata").Exists() {
cleaned, _ = sjson.DeleteBytes(cleaned, "usageMetadata")
changed = true
}
if gjson.GetBytes(cleaned, "response.usageMetadata").Exists() {
cleaned, _ = sjson.DeleteBytes(cleaned, "response.usageMetadata")
changed = true
}
return cleaned, changed
}
func hasUsageMetadata(jsonBytes []byte) bool {
if len(jsonBytes) == 0 || !gjson.ValidBytes(jsonBytes) {
return false
}
if gjson.GetBytes(jsonBytes, "usageMetadata").Exists() {
return true
}
if gjson.GetBytes(jsonBytes, "response.usageMetadata").Exists() {
return true
}
return false
}
func isStopChunkWithoutUsage(jsonBytes []byte) bool {
if len(jsonBytes) == 0 || !gjson.ValidBytes(jsonBytes) {
return false
}
finishReason := gjson.GetBytes(jsonBytes, "candidates.0.finishReason")
if !finishReason.Exists() {
finishReason = gjson.GetBytes(jsonBytes, "response.candidates.0.finishReason")
}
trimmed := strings.TrimSpace(finishReason.String())
if !finishReason.Exists() || trimmed == "" {
return false
}
return !hasUsageMetadata(jsonBytes)
}
func jsonPayload(line []byte) []byte {
trimmed := bytes.TrimSpace(line)
if len(trimmed) == 0 {
@@ -384,109 +582,3 @@ func jsonPayload(line []byte) []byte {
}
return trimmed
}
@@ -131,6 +131,9 @@ func ConvertOpenAIRequestToAntigravity(modelName string, inputRawJSON []byte, _
if ar := imgCfg.Get("aspect_ratio"); ar.Exists() && ar.Type == gjson.String {
out, _ = sjson.SetBytes(out, "request.generationConfig.imageConfig.aspectRatio", ar.Str)
}
if size := imgCfg.Get("image_size"); size.Exists() && size.Type == gjson.String {
out, _ = sjson.SetBytes(out, "request.generationConfig.imageConfig.imageSize", size.Str)
}
}
// messages -> systemInstruction + contents
@@ -281,11 +284,12 @@ func ConvertOpenAIRequestToAntigravity(modelName string, inputRawJSON []byte, _
}
}
// tools -> request.tools[0].functionDeclarations + request.tools[0].googleSearch passthrough
tools := gjson.GetBytes(rawJSON, "tools")
if tools.IsArray() && len(tools.Array()) > 0 {
toolNode := []byte(`{}`)
hasTool := false
hasFunction := false
for _, t := range tools.Array() {
if t.Get("type").String() == "function" {
fn := t.Get("function")
@@ -323,14 +327,32 @@ func ConvertOpenAIRequestToAntigravity(modelName string, inputRawJSON []byte, _
}
}
fnRaw, _ = sjson.Delete(fnRaw, "strict")
if !hasFunction {
toolNode, _ = sjson.SetRawBytes(toolNode, "functionDeclarations", []byte("[]"))
}
tmp, errSet := sjson.SetRawBytes(toolNode, "functionDeclarations.-1", []byte(fnRaw))
if errSet != nil {
log.Warnf("Failed to append tool declaration for '%s': %v", fn.Get("name").String(), errSet)
continue
}
toolNode = tmp
hasFunction = true
hasTool = true
}
}
if gs := t.Get("google_search"); gs.Exists() {
var errSet error
toolNode, errSet = sjson.SetRawBytes(toolNode, "googleSearch", []byte(gs.Raw))
if errSet != nil {
log.Warnf("Failed to set googleSearch tool: %v", errSet)
continue
}
hasTool = true
}
}
if hasTool {
out, _ = sjson.SetRawBytes(out, "request.tools", []byte("[]"))
out, _ = sjson.SetRawBytes(out, "request.tools.0", toolNode)
}
}
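The rewritten tool conversion accumulates everything into a single tool object, so `functionDeclarations` and `googleSearch` land side by side instead of `googleSearch` being dropped. A stdlib sketch of that accumulation (map-based and illustrative; the real code builds raw JSON with sjson):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildToolNode merges OpenAI-style tools into one Gemini tool object,
// mirroring the toolNode/hasTool accumulation above.
func buildToolNode(tools []map[string]any) (map[string]any, bool) {
	node := map[string]any{}
	hasTool := false
	for _, t := range tools {
		if t["type"] == "function" {
			decls, _ := node["functionDeclarations"].([]any)
			node["functionDeclarations"] = append(decls, t["function"])
			hasTool = true
		}
		if gs, ok := t["google_search"]; ok {
			node["googleSearch"] = gs
			hasTool = true
		}
	}
	return node, hasTool
}

func main() {
	node, ok := buildToolNode([]map[string]any{
		{"type": "function", "function": map[string]any{"name": "get_time"}},
		{"google_search": map[string]any{}},
	})
	raw, _ := json.Marshal(node)
	fmt.Println(ok, string(raw))
}
```

Writing `request.tools` only when `hasTool` is true preserves the old behavior of omitting the key for empty tool lists.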
@@ -98,7 +98,6 @@ func ConvertAntigravityResponseToOpenAI(_ context.Context, _ string, originalReq
// Process the main content part of the response.
partsResult := gjson.GetBytes(rawJSON, "response.candidates.0.content.parts")
hasFunctionCall := false
if partsResult.IsArray() {
partResults := partsResult.Array()
for i := 0; i < len(partResults); i++ {
@@ -119,10 +118,6 @@ func ConvertAntigravityResponseToOpenAI(_ context.Context, _ string, originalReq
if partTextResult.Exists() {
textContent := partTextResult.String()
// Handle text content, distinguishing between regular content and reasoning/thoughts.
if partResult.Get("thought").Bool() {
@@ -131,7 +126,6 @@ func ConvertAntigravityResponseToOpenAI(_ context.Context, _ string, originalReq
template, _ = sjson.Set(template, "choices.0.delta.content", textContent)
}
template, _ = sjson.Set(template, "choices.0.delta.role", "assistant")
} else if functionCallResult.Exists() {
// Handle function call content.
hasFunctionCall = true
@@ -191,12 +185,6 @@ func ConvertAntigravityResponseToOpenAI(_ context.Context, _ string, originalReq
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
}
return []string{template}
}
@@ -3,12 +3,12 @@ package responses
import (
"bytes"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/antigravity/gemini"
. "github.com/router-for-me/CLIProxyAPI/v6/internal/translator/gemini/openai/responses"
)
func ConvertOpenAIResponsesRequestToAntigravity(modelName string, inputRawJSON []byte, stream bool) []byte {
rawJSON := bytes.Clone(inputRawJSON)
rawJSON = ConvertOpenAIResponsesRequestToGemini(modelName, rawJSON, stream)
return ConvertGeminiRequestToAntigravity(modelName, rawJSON, stream)
}
@@ -131,6 +131,9 @@ func ConvertOpenAIRequestToGeminiCLI(modelName string, inputRawJSON []byte, _ bo
if ar := imgCfg.Get("aspect_ratio"); ar.Exists() && ar.Type == gjson.String {
out, _ = sjson.SetBytes(out, "request.generationConfig.imageConfig.aspectRatio", ar.Str)
}
if size := imgCfg.Get("image_size"); size.Exists() && size.Type == gjson.String {
out, _ = sjson.SetBytes(out, "request.generationConfig.imageConfig.imageSize", size.Str)
}
}
// messages -> systemInstruction + contents
@@ -281,11 +284,12 @@ func ConvertOpenAIRequestToGeminiCLI(modelName string, inputRawJSON []byte, _ bo
}
}
// tools -> request.tools[0].functionDeclarations + request.tools[0].googleSearch passthrough
tools := gjson.GetBytes(rawJSON, "tools")
if tools.IsArray() && len(tools.Array()) > 0 {
toolNode := []byte(`{}`)
hasTool := false
hasFunction := false
for _, t := range tools.Array() {
if t.Get("type").String() == "function" {
fn := t.Get("function")
@@ -323,14 +327,32 @@ func ConvertOpenAIRequestToGeminiCLI(modelName string, inputRawJSON []byte, _ bo
}
}
fnRaw, _ = sjson.Delete(fnRaw, "strict")
if !hasFunction {
toolNode, _ = sjson.SetRawBytes(toolNode, "functionDeclarations", []byte("[]"))
}
tmp, errSet := sjson.SetRawBytes(toolNode, "functionDeclarations.-1", []byte(fnRaw))
if errSet != nil {
log.Warnf("Failed to append tool declaration for '%s': %v", fn.Get("name").String(), errSet)
continue
}
toolNode = tmp
hasFunction = true
hasTool = true
}
}
if gs := t.Get("google_search"); gs.Exists() {
var errSet error
toolNode, errSet = sjson.SetRawBytes(toolNode, "googleSearch", []byte(gs.Raw))
if errSet != nil {
log.Warnf("Failed to set googleSearch tool: %v", errSet)
continue
}
hasTool = true
}
}
if hasTool {
out, _ = sjson.SetRawBytes(out, "request.tools", []byte("[]"))
out, _ = sjson.SetRawBytes(out, "request.tools.0", toolNode)
}
}
@@ -98,7 +98,6 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
// Process the main content part of the response.
partsResult := gjson.GetBytes(rawJSON, "response.candidates.0.content.parts")
hasFunctionCall := false
if partsResult.IsArray() {
partResults := partsResult.Array()
for i := 0; i < len(partResults); i++ {
@@ -119,10 +118,6 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
if partTextResult.Exists() {
textContent := partTextResult.String()
// Handle text content, distinguishing between regular content and reasoning/thoughts.
if partResult.Get("thought").Bool() {
@@ -131,7 +126,6 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
template, _ = sjson.Set(template, "choices.0.delta.content", textContent)
}
template, _ = sjson.Set(template, "choices.0.delta.role", "assistant")
} else if functionCallResult.Exists() {
// Handle function call content.
hasFunctionCall = true
@@ -191,12 +185,6 @@ func ConvertCliResponseToOpenAI(_ context.Context, _ string, originalRequestRawJ
template, _ = sjson.Set(template, "choices.0.native_finish_reason", "tool_calls")
}
return []string{template}
}
@@ -122,6 +122,9 @@ func ConvertOpenAIRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
if ar := imgCfg.Get("aspect_ratio"); ar.Exists() && ar.Type == gjson.String {
out, _ = sjson.SetBytes(out, "generationConfig.imageConfig.aspectRatio", ar.Str)
}
if size := imgCfg.Get("image_size"); size.Exists() && size.Type == gjson.String {
out, _ = sjson.SetBytes(out, "generationConfig.imageConfig.imageSize", size.Str)
}
}
// messages -> systemInstruction + contents
@@ -297,11 +300,12 @@ func ConvertOpenAIRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
}
}
// tools -> tools[0].functionDeclarations + tools[0].googleSearch passthrough
tools := gjson.GetBytes(rawJSON, "tools")
if tools.IsArray() && len(tools.Array()) > 0 {
toolNode := []byte(`{}`)
hasTool := false
hasFunction := false
for _, t := range tools.Array() {
if t.Get("type").String() == "function" {
fn := t.Get("function")
@@ -311,6 +315,17 @@ func ConvertOpenAIRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
renamed, errRename := util.RenameKey(fnRaw, "parameters", "parametersJsonSchema")
if errRename != nil {
log.Warnf("Failed to rename parameters for tool '%s': %v", fn.Get("name").String(), errRename)
var errSet error
fnRaw, errSet = sjson.Set(fnRaw, "parametersJsonSchema.type", "object")
if errSet != nil {
log.Warnf("Failed to set default schema type for tool '%s': %v", fn.Get("name").String(), errSet)
continue
}
fnRaw, errSet = sjson.Set(fnRaw, "parametersJsonSchema.properties", map[string]interface{}{})
if errSet != nil {
log.Warnf("Failed to set default schema properties for tool '%s': %v", fn.Get("name").String(), errSet)
continue
}
} else {
fnRaw = renamed
}
@@ -328,14 +343,32 @@ func ConvertOpenAIRequestToGemini(modelName string, inputRawJSON []byte, _ bool)
}
}
fnRaw, _ = sjson.Delete(fnRaw, "strict")
if !hasFunction {
toolNode, _ = sjson.SetRawBytes(toolNode, "functionDeclarations", []byte("[]"))
}
tmp, errSet := sjson.SetRawBytes(toolNode, "functionDeclarations.-1", []byte(fnRaw))
if errSet != nil {
log.Warnf("Failed to append tool declaration for '%s': %v", fn.Get("name").String(), errSet)
continue
}
toolNode = tmp
hasFunction = true
hasTool = true
}
}
if gs := t.Get("google_search"); gs.Exists() {
var errSet error
toolNode, errSet = sjson.SetRawBytes(toolNode, "googleSearch", []byte(gs.Raw))
if errSet != nil {
log.Warnf("Failed to set googleSearch tool: %v", errSet)
continue
}
hasTool = true
}
}
if hasTool {
out, _ = sjson.SetRawBytes(out, "tools", []byte("[]"))
out, _ = sjson.SetRawBytes(out, "tools.0", toolNode)
}
}
@@ -111,13 +111,23 @@ func ConvertGeminiResponseToOpenAI(_ context.Context, _ string, originalRequestR
if !inlineDataResult.Exists() {
inlineDataResult = partResult.Get("inline_data")
}
thoughtSignatureResult := partResult.Get("thoughtSignature")
if !thoughtSignatureResult.Exists() {
thoughtSignatureResult = partResult.Get("thought_signature")
}
// Skip thoughtSignature parts (encrypted reasoning not exposed downstream).
if thoughtSignatureResult.Exists() && thoughtSignatureResult.String() != "" {
continue
}
if partTextResult.Exists() {
text := partTextResult.String()
// Handle text content, distinguishing between regular content and reasoning/thoughts.
if partResult.Get("thought").Bool() {
template, _ = sjson.Set(template, "choices.0.delta.reasoning_content", text)
} else {
template, _ = sjson.Set(template, "choices.0.delta.content", text)
}
template, _ = sjson.Set(template, "choices.0.delta.role", "assistant")
} else if functionCallResult.Exists() {
@@ -67,39 +67,101 @@ func ConvertOpenAIResponsesRequestToGemini(modelName string, inputRawJSON []byte
// even when the message.role is "user". We split such items into distinct Gemini messages
// with roles derived from the content type to match docs/convert-2.md.
if contentArray := item.Get("content"); contentArray.Exists() && contentArray.IsArray() {
currentRole := ""
var currentParts []string
flush := func() {
if currentRole == "" || len(currentParts) == 0 {
currentParts = nil
return
}
one := `{"role":"","parts":[]}`
one, _ = sjson.Set(one, "role", currentRole)
for _, part := range currentParts {
one, _ = sjson.SetRaw(one, "parts.-1", part)
}
out, _ = sjson.SetRaw(out, "contents.-1", one)
currentParts = nil
}
contentArray.ForEach(func(_, contentItem gjson.Result) bool {
contentType := contentItem.Get("type").String()
if contentType == "" {
contentType = "input_text"
}
effRole := "user"
if itemRole != "" {
switch strings.ToLower(itemRole) {
case "assistant", "model":
effRole = "model"
default:
effRole = strings.ToLower(itemRole)
}
}
if contentType == "output_text" {
effRole = "model"
}
if effRole == "assistant" {
effRole = "model"
}
if currentRole != "" && effRole != currentRole {
flush()
currentRole = ""
}
if currentRole == "" {
currentRole = effRole
}
var partJSON string
switch contentType {
case "input_text", "output_text":
if text := contentItem.Get("text"); text.Exists() {
partJSON = `{"text":""}`
partJSON, _ = sjson.Set(partJSON, "text", text.String())
}
case "input_image":
imageURL := contentItem.Get("image_url").String()
if imageURL == "" {
imageURL = contentItem.Get("url").String()
}
if imageURL != "" {
mimeType := "application/octet-stream"
data := ""
if strings.HasPrefix(imageURL, "data:") {
trimmed := strings.TrimPrefix(imageURL, "data:")
mediaAndData := strings.SplitN(trimmed, ";base64,", 2)
if len(mediaAndData) == 2 {
if mediaAndData[0] != "" {
mimeType = mediaAndData[0]
}
data = mediaAndData[1]
} else {
mediaAndData = strings.SplitN(trimmed, ",", 2)
if len(mediaAndData) == 2 {
if mediaAndData[0] != "" {
mimeType = mediaAndData[0]
}
data = mediaAndData[1]
}
}
}
if data != "" {
partJSON = `{"inline_data":{"mime_type":"","data":""}}`
partJSON, _ = sjson.Set(partJSON, "inline_data.mime_type", mimeType)
partJSON, _ = sjson.Set(partJSON, "inline_data.data", data)
}
}
}
if partJSON != "" {
currentParts = append(currentParts, partJSON)
}
return true
})
flush()
}
case "function_call": case "function_call":
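The input_image branch splits a data URL into its media type and base64 payload, preferring the `;base64,` separator and falling back to a bare comma, with `application/octet-stream` as the default mime type. A stdlib sketch of that split (the `parseDataURL` helper is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// parseDataURL extracts the mime type and payload from a data: URL,
// defaulting the mime type when the header part is empty.
func parseDataURL(u string) (mime, data string) {
	mime = "application/octet-stream"
	body, ok := strings.CutPrefix(u, "data:")
	if !ok {
		return mime, ""
	}
	head, payload, found := strings.Cut(body, ";base64,")
	if !found {
		head, payload, found = strings.Cut(body, ",")
	}
	if !found {
		return mime, ""
	}
	if head != "" {
		mime = head
	}
	return mime, payload
}

func main() {
	m, d := parseDataURL("data:image/png;base64,iVBORw0KGgo=")
	fmt.Println(m, d) // image/png iVBORw0KGgo=
}
```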
@@ -433,12 +433,18 @@ func ConvertGeminiResponseToOpenAIResponses(_ context.Context, modelName string,
// output tokens
if v := um.Get("candidatesTokenCount"); v.Exists() {
completed, _ = sjson.Set(completed, "response.usage.output_tokens", v.Int())
} else {
completed, _ = sjson.Set(completed, "response.usage.output_tokens", 0)
} }
if v := um.Get("thoughtsTokenCount"); v.Exists() { if v := um.Get("thoughtsTokenCount"); v.Exists() {
completed, _ = sjson.Set(completed, "response.usage.output_tokens_details.reasoning_tokens", v.Int()) completed, _ = sjson.Set(completed, "response.usage.output_tokens_details.reasoning_tokens", v.Int())
} else {
completed, _ = sjson.Set(completed, "response.usage.output_tokens_details.reasoning_tokens", 0)
} }
if v := um.Get("totalTokenCount"); v.Exists() { if v := um.Get("totalTokenCount"); v.Exists() {
completed, _ = sjson.Set(completed, "response.usage.total_tokens", v.Int()) completed, _ = sjson.Set(completed, "response.usage.total_tokens", v.Int())
} else {
completed, _ = sjson.Set(completed, "response.usage.total_tokens", 0)
} }
} }
+42 -1
@@ -474,6 +474,33 @@ func (w *Watcher) processEvents(ctx context.Context) {
 	}
 }

+func (w *Watcher) authFileUnchanged(path string) (bool, error) {
+	data, errRead := os.ReadFile(path)
+	if errRead != nil {
+		return false, errRead
+	}
+	if len(data) == 0 {
+		return false, nil
+	}
+	sum := sha256.Sum256(data)
+	curHash := hex.EncodeToString(sum[:])
+	w.clientsMutex.RLock()
+	prevHash, ok := w.lastAuthHashes[path]
+	w.clientsMutex.RUnlock()
+	if ok && prevHash == curHash {
+		return true, nil
+	}
+	return false, nil
+}
+
+func (w *Watcher) isKnownAuthFile(path string) bool {
+	w.clientsMutex.RLock()
+	defer w.clientsMutex.RUnlock()
+	_, ok := w.lastAuthHashes[path]
+	return ok
+}
+
 // handleEvent processes individual file system events
 func (w *Watcher) handleEvent(event fsnotify.Event) {
 	// Filter only relevant events: config file or auth-dir JSON files.
@@ -497,19 +524,33 @@ func (w *Watcher) handleEvent(event fsnotify.Event) {
 	}

 	// Handle auth directory changes incrementally (.json only)
-	fmt.Printf("auth file changed (%s): %s, processing incrementally\n", event.Op.String(), filepath.Base(event.Name))
 	if event.Op&(fsnotify.Remove|fsnotify.Rename) != 0 {
 		// Atomic replace on some platforms may surface as Rename (or Remove) before the new file is ready.
 		// Wait briefly; if the path exists again, treat as an update instead of removal.
 		time.Sleep(replaceCheckDelay)
 		if _, statErr := os.Stat(event.Name); statErr == nil {
+			if unchanged, errSame := w.authFileUnchanged(event.Name); errSame == nil && unchanged {
+				log.Debugf("auth file unchanged (hash match), skipping reload: %s", filepath.Base(event.Name))
+				return
+			}
+			fmt.Printf("auth file changed (%s): %s, processing incrementally\n", event.Op.String(), filepath.Base(event.Name))
 			w.addOrUpdateClient(event.Name)
 			return
 		}
+		if !w.isKnownAuthFile(event.Name) {
+			log.Debugf("ignoring remove for unknown auth file: %s", filepath.Base(event.Name))
+			return
+		}
+		fmt.Printf("auth file changed (%s): %s, processing incrementally\n", event.Op.String(), filepath.Base(event.Name))
 		w.removeClient(event.Name)
 		return
 	}
 	if event.Op&(fsnotify.Create|fsnotify.Write) != 0 {
+		if unchanged, errSame := w.authFileUnchanged(event.Name); errSame == nil && unchanged {
+			log.Debugf("auth file unchanged (hash match), skipping reload: %s", filepath.Base(event.Name))
+			return
+		}
+		fmt.Printf("auth file changed (%s): %s, processing incrementally\n", event.Op.String(), filepath.Base(event.Name))
 		w.addOrUpdateClient(event.Name)
 	}
 }