Commit Graph

102 Commits

Author SHA1 Message Date
decolua
b72a443bd3 feat: add CommandCode provider support 2026-05-07 23:01:33 +07:00
Muhammad Mugni Hadi
7f93df3a92 feat: add audio input support for Gemini translation (#913)
Add input_audio and audio_url content type handlers to
convertOpenAIContentToParts() in geminiHelper.js, converting
OpenAI audio format to Gemini inlineData format.

Also add audio types to VALID_OPENAI_CONTENT_TYPES in
openaiHelper.js so they are not stripped by filterToOpenAIFormat().

Fixes #912
2026-05-07 15:51:30 +07:00
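The conversion this commit describes — OpenAI audio content parts into Gemini `inlineData` — can be sketched roughly as below. This is a hypothetical reconstruction, not the actual code from `geminiHelper.js`; the helper name `audioPartToInlineData` and the MIME table are invented for illustration.

```javascript
// Hypothetical sketch of the input_audio / audio_url handling described in the
// commit; the real logic lives in convertOpenAIContentToParts() in geminiHelper.js.
const AUDIO_MIME = { wav: "audio/wav", mp3: "audio/mpeg", ogg: "audio/ogg" };

function audioPartToInlineData(part) {
  if (part.type === "input_audio" && part.input_audio) {
    return {
      inlineData: {
        mimeType: AUDIO_MIME[part.input_audio.format] || "audio/wav",
        data: part.input_audio.data, // already base64 in the OpenAI format
      },
    };
  }
  if (part.type === "audio_url" && part.audio_url) {
    // data: URLs carry "data:<mime>;base64,<payload>"
    const m = /^data:([^;]+);base64,(.+)$/.exec(part.audio_url.url || "");
    if (m) return { inlineData: { mimeType: m[1], data: m[2] } };
  }
  return null; // unsupported part; caller decides how to handle it
}
```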
decolua
5c62e73cc6 - Cowork: ComboFormModal
- BaseUrlSelect: add cloud endpoint option, custom URL local state, always
  default to first option; new cliEndpointMatch helper; CLI tool cards refactor
- API: new /v1/audio/voices and /v1/models/info; /v1/models filters disabled
  models, drop unused timestamp
- initializeApp: guard tunnel/tailscale auto-resume to once-per-process
- geminiHelper: ensureObjectType for schemas with properties but no type
- skills: minor SKILL.md tweaks (chat/embeddings/image/stt/tts/web-*)
2026-05-07 15:45:09 +07:00
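The `ensureObjectType` change mentioned above (schemas with `properties` but no `type`) could look something like this sketch; the recursion into nested properties and `items` is an assumption, since the commit only names the top-level fix.

```javascript
// Hedged sketch of ensureObjectType: Gemini rejects JSON schemas that declare
// "properties" without an explicit "type", so default those to "object".
function ensureObjectType(schema) {
  if (!schema || typeof schema !== "object") return schema;
  if (schema.properties && !schema.type) schema.type = "object";
  for (const key of Object.keys(schema.properties || {})) {
    ensureObjectType(schema.properties[key]); // recurse into nested schemas (assumed)
  }
  if (schema.items) ensureObjectType(schema.items);
  return schema;
}
```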
Rezky Hamid
30b114ab75 fix: strip output_config for MiniMax (#820) 2026-05-01 16:16:01 +07:00
decolua
936d65ae1c Enhance chat handling and introduce Caveman feature
- Refactored handleChatCore to include Caveman functionality, allowing for terse-style system prompts to reduce output token usage.
- Updated APIPageClient to manage Caveman settings, including enabling/disabling and selecting compression levels.
- Adjusted AntigravityExecutor to consolidate function declarations for compatibility with Gemini.
- Removed unnecessary console logs during translator initialization across multiple routes.
2026-04-30 18:00:38 +07:00
decolua
111e78940a Refactor cloudflared process management to improve port-specific termination and enhance tunnel management. Update Antigravity cloaking comments for clarity. 2026-04-28 10:20:31 +07:00
decolua
a3032f7a3e Merge branch 'pr-779-review' 2026-04-28 10:16:23 +07:00
Manuel B.
58a821d687 fix: granular reasoning_effort handling for Claude models (#791)
- github.js: split thinking vs reasoning_effort stripping
  - thinking (Claude-native format) still stripped for all Claude on Copilot
  - reasoning_effort now passed through for Opus 4.6 and Sonnet 4.6
  - still stripped for Haiku 4.5 and Opus 4.7 (rejected upstream)
  - reasoning_effort "none" stripped for all models (not all support it)
- openai-to-claude.js: map reasoning_effort → thinking.budget_tokens
  for direct Anthropic backend (none→skip, low→4096, medium→8192,
  high→16384, xhigh→32768)

Previously reasoning_effort was stripped for ALL Claude models,
meaning Opus 4.6 via Copilot never received thinking configuration.

AI-generated commit by Claude Opus 4.6 (Anthropic)
2026-04-28 09:49:27 +07:00
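The `reasoning_effort` → `thinking.budget_tokens` mapping above is explicit in the commit message (none→skip, low→4096, medium→8192, high→16384, xhigh→32768) and can be sketched as follows. The function name `mapEffortToThinking` is hypothetical; the real code is in `openai-to-claude.js`.

```javascript
// Sketch of the effort → budget mapping; budgets are taken verbatim from the
// commit message, everything else is an illustrative assumption.
const EFFORT_BUDGETS = { low: 4096, medium: 8192, high: 16384, xhigh: 32768 };

function mapEffortToThinking(reasoningEffort) {
  if (!reasoningEffort || reasoningEffort === "none") return null; // skip: no thinking block
  const budget = EFFORT_BUDGETS[reasoningEffort];
  if (budget === undefined) return null; // unknown effort level: leave thinking off
  return { type: "enabled", budget_tokens: budget };
}
```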
lukmanfauzie
c43f8c54d4 fix: Antigravity INVALID_ARGUMENT errors and Copilot agent mode parity 2026-04-26 19:53:08 +08:00
lukmanfauzie
222e22fa53 Fix GitHub Copilot agent mode with Antigravity
Co-authored-by: Copilot <copilot@github.com>
2026-04-26 17:47:13 +08:00
decolua
cca615eaff - Cap maximum cooldown for rate limit handling in account unavailability and single-model chat flows
- Dynamic custom model fetching for model selection
2026-04-24 16:14:18 +07:00
decolua
8de9aae90c Feature: RTK compress 2026-04-22 15:36:51 +07:00
Anurag Saxena
37f7e97348 fix: force Agent mode in Cursor protobuf when User-Agent contains Claude Code (closes #643) (#692) 2026-04-22 10:24:58 +07:00
anuragg-saxenaa
d0ace2a3cf fix(codex): await image URL fetches before sending to upstream (closes #575)
Remote HTTP(S) image URLs are fetched and inlined as base64 data URIs
in a new prefetchImages() step run before super.execute(), so the body
sent to Codex contains resolved image bytes instead of URLs the backend
cannot access.

Scope is limited to the Codex executor — base executor and other
providers are untouched.

Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-04-17 12:15:10 +07:00
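The `prefetchImages()` step described above — resolving remote image URLs to base64 data URIs before the body goes upstream — might be sketched like this. The injectable `fetchFn` parameter is an addition for testability; the real executor presumably uses global `fetch` directly.

```javascript
// Hedged sketch of the prefetchImages() idea: walk message content, fetch
// remote HTTP(S) image URLs, and replace them in place with data: URIs.
async function prefetchImages(messages, fetchFn = fetch) {
  for (const msg of messages) {
    if (!Array.isArray(msg.content)) continue;
    for (const part of msg.content) {
      const url = part?.type === "image_url" ? part.image_url?.url : null;
      if (!url || !/^https?:\/\//.test(url)) continue; // leave data: URIs alone
      const res = await fetchFn(url);
      const mime = res.headers.get("content-type") || "image/png";
      const b64 = Buffer.from(await res.arrayBuffer()).toString("base64");
      part.image_url.url = `data:${mime};base64,${b64}`;
    }
  }
  return messages;
}
```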
anuragg-saxenaa
6e8aaab299 fix: strip enumDescriptions from tool schema in antigravity-to-openai (closes #566)
Co-authored-by: anuragg-saxenaa <anuragg.saxenaa@gmail.com>
Made-with: Cursor
2026-04-17 12:15:10 +07:00
decolua
04cdb75839 Add proactive token refresh lead times for providers and implement Codex proxy management 2026-04-14 11:41:06 +07:00
decolua
875a1282ea Fix bug 2026-04-11 11:36:33 +07:00
Anurag Saxena
23abe1a7bb fix: merge consecutive userInputMessages in openai-to-kiro translator (closes #510) (#524) 2026-04-08 15:38:28 +07:00
decolua
401772cb9a Fix image-stripping bug 2026-04-07 10:18:59 +07:00
decolua
67e0db77da Fix: updated Anthropic-Beta header. 2026-04-05 07:46:26 +07:00
decolua
1973fe5a83 fix(translator): correct thought signatures for AG, Gemini CLI, Vertex; fix missing Vertex response translator
- Add DEFAULT_THINKING_AG_SIGNATURE, DEFAULT_THINKING_GEMINI_CLI_SIGNATURE, DEFAULT_THINKING_VERTEX_SIGNATURE
- Rename DEFAULT_THINKING_GEMINI_SIGNATURE → DEFAULT_THINKING_AG_SIGNATURE for clarity
- Pass provider-specific signature into openaiToGeminiBase (AG vs Gemini CLI)
- Replace synthetic thoughtSignatures with Vertex-native signature in postProcessForVertex
- Register Vertex → OpenAI response translator (fixes empty Vertex streaming responses)

Made-with: Cursor
2026-04-05 00:38:36 +07:00
decolua
333e704b2a MODEL_CAPS 2026-04-04 23:24:24 +07:00
Anurag Saxena
5fe2c81cf9 fix: skip function_call items with empty/missing name to prevent Codex 400 error (closes #444) (#487) 2026-04-04 08:51:00 +07:00
decolua
93b8668e9e Fix AG 2026-04-01 11:48:38 +07:00
Kwan96
ffa172c92d fix(claude-to-openai): emit closing </think> tag instead of empty reasoning_content (#454)
Replace empty reasoning_content with explicit </think> closing tag when exiting thinking block to properly signal end of reasoning section in streaming responses.
2026-03-31 09:21:11 +07:00
decolua
01787a3d5b Fix bug 2026-03-30 17:27:15 +07:00
kwanLeeFrmVi
054facb08b fix(gemini): preserve thoughtSignature via tool_call ID smuggling + fix ELOCKED mutex
- Encode thoughtSignature into tool_call.id using _TSIG_ delimiter and base64url
- Decode _TSIG_ on request to restore thoughtSignature for Gemini multi-turn thinking
- Track pendingThoughtSignature across parts for deferred signature attachment
- Add LocalMutex (2-layer locking) to prevent ELOCKED on concurrent DB access
- Increase lockfile retries from 5 to 15 for multi-process robustness
- Restore db.json seed on first run to prevent ENOENT on lockfile.lock
- Use process.env.BASE_URL fallback in models test route
- Remove gemini-3-flash-lite-preview from provider models

Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #450

Made-with: Cursor
2026-03-30 16:57:28 +07:00
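The `_TSIG_` smuggling scheme above (signature base64url-encoded into the `tool_call.id`, recovered on the next request) can be sketched as a pair of helpers. The delimiter comes from the commit; the function names are hypothetical.

```javascript
// Sketch of thoughtSignature smuggling via tool_call IDs: append the
// base64url-encoded signature after a _TSIG_ delimiter, split it back out later.
const TSIG = "_TSIG_";

function encodeSignatureIntoId(toolCallId, thoughtSignature) {
  if (!thoughtSignature) return toolCallId;
  const encoded = Buffer.from(thoughtSignature, "utf8").toString("base64url");
  return `${toolCallId}${TSIG}${encoded}`;
}

function decodeSignatureFromId(id) {
  const at = id.indexOf(TSIG);
  if (at === -1) return { id, thoughtSignature: null };
  return {
    id: id.slice(0, at),
    thoughtSignature: Buffer.from(id.slice(at + TSIG.length), "base64url").toString("utf8"),
  };
}
```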
kwanLeeFrmVi
1c160cc8d9 feat(claude-code): spoof TLS fingerprint and stabilize headers for Anthropic
- Add claudeHeaderCache.js to intercept and cache live Claude Code client headers
- Forward cached headers dynamically to api.anthropic.com via default.js
- Strip first-party identity headers (x-app, claude-code-* beta) for non-Anthropic upstreams
- Validate and sanitize tool call IDs to match Anthropic pattern (^[a-zA-Z0-9_-]+$)
- Skip thinking blocks when applying cache_control; fix max_tokens buffer (+1024)
- Strip cache_control from thinking blocks in openai-to-claude translator
- Comment out thoughtSignature in Gemini translator (kept for reference)
- Expand .gitignore to match all deploy*.sh variants

Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #433

Made-with: Cursor
2026-03-30 16:27:28 +07:00
decolua
e6299eef56 Fix Bug 2026-03-30 12:21:24 +07:00
decolua
abbf8ec86f feat: add GitLab Duo and CodeBuddy support, update observability settings 2026-03-30 11:28:07 +07:00
decolua
11e6004fcb fix: correct finish_reason for tool calls in OpenAI Responses translator
Apply fix from PR #354 by @tannk4w to properly signal tool_calls finish_reason
when model emits tool calls, allowing OpenAI-compatible clients to continue with
tool result processing instead of stopping prematurely.

Refactored finish_reason logic into computeFinishReason() helper to eliminate
duplication and improve maintainability across flush and completion paths.

Co-authored-by: tannk4w <tannk@tmi-soft.vn>

Thanks to @tannk4w, @trungtq2799, @quanhavn, and @East-rayyy for the thorough
review and improvement suggestions on the original PR.

Made-with: Cursor
2026-03-28 14:45:07 +07:00
Anurag Saxena
5abf7102c0 fix: inject placeholder message when Responses API input[] is empty (closes #389) (#419) 2026-03-27 10:51:02 +07:00
Anurag Saxena
4e631c4f37 fix: map OpenAI image_url data URLs to Ollama images[] (closes #427) (#432) 2026-03-27 10:48:18 +07:00
Anurag Saxena
e3a7733a08 fix: strip functionCall/functionResponse id and synthetic thoughtSignature for Vertex AI (closes #388) (#414) 2026-03-27 10:46:47 +07:00
Anurag Saxena
ade3f57d4c fix: sanitize Gemini function names to meet API requirements (closes #369) (#403) 2026-03-27 10:40:29 +07:00
Ryan
3b4184b09e fix: detect Claude format for /v1/messages + sanitize tool descriptions (#397) 2026-03-27 10:38:37 +07:00
Anurag Saxena
868eabffc0 fix: clamp Responses API call_id to 64 chars (closes #393) (#396) 2026-03-27 10:37:31 +07:00
Ryan
99cb9ed11f fix: support HTTP/HTTPS image URLs in Claude and Gemini translators (#344)
Previously only base64 data: URLs were handled in the OpenAI-to-Claude
and OpenAI-to-Gemini request translators. HTTP/HTTPS image URLs were
silently dropped, causing vision-capable models to respond with
"I don't see any image."
2026-03-23 14:56:40 +07:00
decolua
8df8b94180 Enhance image support in Kiro for Claude models. Update the message conversion logic to conditionally handle image types based on model capabilities. Additionally, hide the Basic Chat option in the sidebar for a cleaner UI. 2026-03-23 12:29:48 +07:00
decolua
4496bf96c8 Feat: Kiro Provider can now read images. 2026-03-23 12:17:51 +07:00
decolua
0c9ad12055 Fix: error 400 2026-03-23 12:05:22 +07:00
Liam
01e4a28f0a fix: normalize finish_reason to 'tool_calls' when tool calls are present (#379)
Some upstream providers (e.g. Antigravity) return non-standard finish_reason
values like 'other' instead of the OpenAI-standard 'tool_calls' when the
model invokes tools. This causes downstream consumers (e.g. OpenClaw) to
fail to execute tool calls, breaking agentic sub-agent workflows.

Changes:
- nonStreamingHandler: post-translation guard that normalizes finish_reason
  to 'tool_calls' when message.tool_calls is present
- sseToJsonHandler: accumulate tool_calls from streaming deltas in
  parseSSEToOpenAIResponse; extract function_call items from Responses API
  output in handleForcedSSEToJson
- openai-responses translator: use toolCallIndex to choose between
  'tool_calls' and 'stop' in flush and response.completed events

Tested: 7 scenarios (non-stream text, single/multiple tool calls, stream
text/tool calls, multi-turn tool conversation, tools present but unused)
2026-03-23 09:35:25 +07:00
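The post-translation guard in `nonStreamingHandler` described above can be sketched minimally; `normalizeFinishReason` is an invented name for illustration.

```javascript
// Minimal sketch of the guard: if the translated message carries tool_calls,
// force the OpenAI-standard finish_reason regardless of what upstream sent.
function normalizeFinishReason(choice) {
  const hasToolCalls =
    Array.isArray(choice.message?.tool_calls) && choice.message.tool_calls.length > 0;
  if (hasToolCalls && choice.finish_reason !== "tool_calls") {
    choice.finish_reason = "tool_calls"; // e.g. Antigravity returns "other"
  }
  return choice;
}
```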
Anurag Saxena
4d7ddbfffe fix: add missing type:string to enum properties in Gemini tool schema translation (#380)
Gemini API requires enum properties to have an explicit type:"string"
declaration. Without it, tool calls with enum parameters return 400
Bad Request. Fixes #359.
2026-03-23 09:20:55 +07:00
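The schema fix above could be sketched as a small recursive pass; the helper name `addEnumStringType` and the recursion into `items` are assumptions beyond what the commit states.

```javascript
// Sketch: Gemini requires enum properties to declare type "string", so add it
// wherever an enum appears without an explicit type.
function addEnumStringType(schema) {
  if (!schema || typeof schema !== "object") return schema;
  if (Array.isArray(schema.enum) && !schema.type) schema.type = "string";
  for (const value of Object.values(schema.properties || {})) addEnumStringType(value);
  if (schema.items) addEnumStringType(schema.items);
  return schema;
}
```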
Kwan96
1154244f1d refactor: clean JSON schemas for Gemini function declarations (#371)
Apply cleanJSONSchemaForAntigravity to both Anthropic and OpenAI format tool schemas before converting to Gemini function declarations
2026-03-23 09:18:22 +07:00
decolua
549223c8cf Fix codex image 2026-03-13 14:47:40 +07:00
Nick Roth
75270ea755 feat: Add OpenAI API response_format support for structured JSON output
Translates OpenAI response_format parameter into Claude-compatible system
prompt instructions, enabling structured JSON output for json_schema and
json_object types.

Co-authored-by: Nick Roth <nlr06886@gmail.com>
Made-with: Cursor
2026-03-12 16:30:47 +07:00
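The translation above — `response_format` into a system-prompt instruction for Claude — might be sketched as below. The exact instruction wording is invented; only the `json_schema`/`json_object` handling follows the commit.

```javascript
// Hedged sketch of mapping OpenAI response_format to a Claude system-prompt
// instruction; the instruction strings here are illustrative, not the real ones.
function responseFormatToInstruction(responseFormat) {
  if (!responseFormat) return null;
  if (responseFormat.type === "json_object") {
    return "Respond with a single valid JSON object and no other text.";
  }
  if (responseFormat.type === "json_schema") {
    const schema = JSON.stringify(responseFormat.json_schema?.schema || {});
    return `Respond only with JSON matching this schema, no other text: ${schema}`;
  }
  return null; // "text" and unknown types need no injected instruction
}
```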
decolua
b0c6b61398 Refactor config 2026-03-12 16:20:46 +07:00
decolua
83d94daa82 feat(ollama): Enhance Ollama support by adding new models, updating API format handling, and integrating translation functionality. 2026-03-12 15:24:10 +07:00
decolua
d9dad5bcf3 Fix: add custom option to model selector 2026-03-11 11:59:07 +07:00
Xmllist
6437a1c55f refactor(claude-to-openai): simplify usage token calculation and final chunk assembly
Made-with: Cursor
2026-03-09 17:18:49 +07:00