Commit Graph

102 Commits

Author SHA1 Message Date
Thiên Toán
806bd4ae14 feat: add API endpoint dimension to usage statistics dashboard (#152)
- Tracks endpoints like /v1/chat/completions, /v1/messages, /v1/responses
- New sortable/groupable table in usage dashboard with expandable groups
- Enhanced usage database aggregation by endpoint + model + provider
- Added endpoint tracking to all saveRequestUsage/saveRequestDetail calls
- Maintains backward compatibility with existing data structure
2026-02-20 15:03:18 +07:00
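The endpoint + model + provider aggregation this commit describes can be sketched as a small grouping helper. This is a hedged illustration only — the function name and row shape are assumptions, not the project's actual `usageDb` code:

```javascript
// Sketch: aggregate usage rows by endpoint + model + provider.
// Row shape ({ endpoint, model, provider, tokens }) is illustrative.
function aggregateUsage(rows) {
  const byKey = new Map();
  for (const row of rows) {
    const key = `${row.endpoint}|${row.model}|${row.provider}`;
    const agg = byKey.get(key) ?? {
      endpoint: row.endpoint, model: row.model, provider: row.provider,
      requests: 0, tokens: 0,
    };
    agg.requests += 1;          // count each request in the group
    agg.tokens += row.tokens ?? 0; // sum token usage per group
    byKey.set(key, agg);
  }
  return [...byKey.values()];
}
```

Grouped rows like these are what a sortable/groupable dashboard table can render, with each group expandable into its member models.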
Hồ Xuân Dũng
a57a8ce206 feat: add Gemini embeddings support + Letta compatibility fixes
Cherry-picked from decolua/9router#148 (author: xuandung38 / Hồ Xuân Dũng <me@hxd.vn>)

- Add Google AI (Gemini) embeddings support for /v1/embeddings endpoint
- Add Gemini embedding models: gemini-embedding-001, text-embedding-005, text-embedding-004
- Inject missing object/created fields for Letta and strict OpenAI clients
- Strip Azure-specific fields (prompt_filter_results, content_filter_results) from responses
- Fix Dockerfile: copy open-sse directory into Docker runner stage

Skipped: whitelist message field stripping (commit 3/7/8) — too aggressive for all providers
Skipped: default stream=false change (commit 9) — behavior change needs further review
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-20 15:01:10 +07:00
decolua
e1e5a81613 feat: add GLM 5 and MiniMax M2.5 models to providerModels.js; add Claude Sonnet 4.6 to CLI tools 2026-02-20 14:44:53 +07:00
zx
a229d79158 feat(antigravity): initial steps for Antigravity anti-ban alignment
Cherry-picked from decolua/9router#141 (author: LinearSakana / zx <me@char.moe>)

- Implement client identity spoofing with numeric enums (ideType: 9, pluginType: 2)
- Add runtime platform detection for User-Agent and metadata
- Implement per-connection session ID caching (binary-compatible format)
- Add ANTIGRAVITY_HEADERS (X-Client-Name, X-Client-Version, x-goog-api-client)
- Add X-Machine-Session-Id header injection
- Align metadata/mode parameters across all Antigravity API calls
- Implement double injection for system prompt (raw + [ignore] wrapped)
- Rename internal anti-loop header to x-request-source for anonymity

Skipped: commit 6 (signature side-channel caching) — kept DEFAULT_THINKING_GEMINI_SIGNATURE
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-20 14:44:29 +07:00
Thiên Toán
9fbd6e619d fix: correct token extraction for Claude non-streaming responses (#131)
- Add response logging for non-streaming requests (5_res_provider.json, 7_res_client.json)
- Fix extractUsageFromResponse() to check Claude format before OpenAI format
- Prevents format misidentification that caused tokens to show as 0
- Claude uses input_tokens/output_tokens vs OpenAI's prompt_tokens/completion_tokens

Fixes dashboard Details tab showing 0 tokens for Claude requests
2026-02-20 14:24:21 +07:00
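The ordering bug this commit fixes can be sketched as follows: probing OpenAI's `prompt_tokens`/`completion_tokens` first on a Claude-format response finds nothing and reports 0, so the Claude keys must be checked first. The helper name and return shape here are illustrative, not the project's actual `extractUsageFromResponse()`:

```javascript
// Sketch: check Claude's usage keys before OpenAI's.
// Checking OpenAI first on a Claude response would yield 0/0 tokens.
function extractUsage(usage = {}) {
  // Claude format: { input_tokens, output_tokens }
  if (usage.input_tokens !== undefined || usage.output_tokens !== undefined) {
    return { input: usage.input_tokens ?? 0, output: usage.output_tokens ?? 0 };
  }
  // OpenAI format: { prompt_tokens, completion_tokens }
  return { input: usage.prompt_tokens ?? 0, output: usage.completion_tokens ?? 0 };
}
```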
zx07
f933dd9c61 feat: add Qwen3.5 Coder Model configuration (#156)
Co-authored-by: zx <me@char.moe>
2026-02-19 21:55:11 +07:00
EdamAmex
c4aa4247bd feat: Add GPT 5.3 Codex to GitHub Copilot (#150)
* Add GPT-5.3 Codex model to providerModels.js

* Add pricing constants for gpt-5.3-codex
2026-02-19 12:10:34 +07:00
すずねーう
4e2a3f888c feat: Add Claude Sonnet 4.6 to GitHub Copilot (#149)
* Add Claude Sonnet 4.6 to GitHub Copilot

Claude Sonnet 4.6 is now generally available in GitHub Copilot.
https://github.blog/changelog/2026-02-17-claude-sonnet-4-6-is-now-generally-available-in-github-copilot/

* Add pricing constants for Claude Sonnet 4.6 for GitHub Copilot
2026-02-19 07:59:44 +07:00
decolua
b057c43c27 feat(open-sse): add Claude Sonnet 4.6 2026-02-18 13:31:32 +07:00
misterdas
b9a697925e fix(open-sse): emit [DONE] in passthrough SSE mode (#142) 2026-02-18 13:26:23 +07:00
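The `[DONE]` fix above can be sketched as a guard that appends the OpenAI-style terminator chunk when a passthrough stream ends without one. The function name and chunk handling are illustrative assumptions, not the open-sse implementation:

```javascript
// Sketch: ensure a passthrough SSE stream ends with the OpenAI-style
// "data: [DONE]" terminator that strict clients wait for.
function ensureDoneTerminator(chunks) {
  const last = chunks[chunks.length - 1] ?? '';
  if (!last.includes('data: [DONE]')) {
    chunks.push('data: [DONE]\n\n');
  }
  return chunks;
}
```

Without the terminator, clients that follow the OpenAI streaming convention may hang waiting for the end-of-stream marker.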
HXD.VN
e1b836168a feat: add /v1/embeddings endpoint (OpenAI-compatible) (#146)
* feat: implement /v1/embeddings endpoint (#117)

Add OpenAI-compatible POST /v1/embeddings endpoint that routes through
the existing provider credential + fallback infrastructure.

Changes:
- open-sse/handlers/embeddingsCore.js: core handler (handleEmbeddingsCore)
  * Validates input (string or array), encoding_format
  * Builds provider-specific URL and headers for openai, openrouter,
    and openai-compatible providers
  * Handles 401/403 token refresh via executor.refreshCredentials
  * Returns normalized OpenAI-format response { object: 'list', data, model, usage }
- cloud/src/handlers/embeddings.js: cloud Worker handler (handleEmbeddings)
  * Auth + machineId resolution identical to handleChat
  * Provider credential fallback loop with rate-limit tracking
- cloud/src/index.js: wire new routes
  * POST /v1/embeddings  (new format — machineId from API key)
  * POST /{machineId}/v1/embeddings  (old format — machineId from URL)

* test: add unit tests for /v1/embeddings endpoint

- Setup vitest as test framework (tests/ directory)
- embeddingsCore.test.js (36 tests):
  - buildEmbeddingsBody: single string, array, encoding_format, default float
  - buildEmbeddingsUrl: openai, openrouter, openai-compatible-*, unsupported
  - buildEmbeddingsHeaders: per-provider headers, accessToken fallback
  - handleEmbeddingsCore: input validation, success path, provider errors,
    network errors, invalid JSON, token refresh 401 handling
- embeddings.cloud.test.js (23 tests):
  - CORS OPTIONS preflight
  - Auth: missing/invalid/old-format/wrong key → 401/400
  - Body validation: bad JSON, missing model, missing input, bad model → 400
  - Happy path: single string, array, delegation, CORS header, machineId override
  - Rate limiting: all-rate-limited → 429 + Retry-After, no credentials → 400
  - Error propagation: non-fallback errors, 429 exhausts accounts

Total: 59/59 tests passing
Framework: vitest v4.0.18, Node v22.22.0

* feat: add Next.js API route for /v1/embeddings endpoint

Wire the embeddings handler into Next.js App Router.

- src/app/api/v1/embeddings/route.js: Next.js API route (POST + OPTIONS)
- src/sse/handlers/embeddings.js: SSE-layer handler mirroring chat.js pattern

Uses handleEmbeddingsCore from open-sse/handlers/embeddingsCore.js with
the same auth, credential fallback, and token refresh logic as the chat
handler. Supports REQUIRE_API_KEY env var, provider fallback loop, and
consistent logging.
2026-02-18 13:24:02 +07:00
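The `buildEmbeddingsBody` behavior listed in the test summary above (single string, array, `encoding_format`, default `float`) can be sketched like this. This is a guess at the shape from the test names, not the actual `embeddingsCore.js` code:

```javascript
// Sketch of buildEmbeddingsBody per the commit's test list:
// accepts a single string or an array; defaults encoding_format to 'float'.
function buildEmbeddingsBody({ model, input, encoding_format }) {
  return {
    model,
    input,                                       // string or array; validated upstream
    encoding_format: encoding_format ?? 'float', // "default float" per the tests
  };
}
```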
zx07
c7d44101b5 feat: add GPT 5.3 Codex Spark model to pricing and provider models (#133)
Co-authored-by: zx <me@char.moe>
2026-02-16 12:31:12 +07:00
decolua
3e4ca1889f - Add new model "minimax-m2.5" to providerModels. 2026-02-15 13:03:32 +07:00
decolua
e2db638982 feat: enhance request handling and error management in chatCore and streamToJsonConverter
- Added detailed request logging and latency tracking in handleChatCore.
- Improved error handling for SSE to JSON conversion and JSON parsing in streamToJsonConverter.
- Introduced a safe JSON parsing utility to handle potential parsing errors gracefully in requestDetailsDb.

Co-authored-by: zx <me@char.moe>
2026-02-15 12:02:53 +07:00
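A safe JSON parsing utility like the one this commit describes typically wraps `JSON.parse` and returns a fallback instead of throwing. The name and signature here are illustrative:

```javascript
// Sketch: parse JSON without throwing; return a fallback on malformed input.
function safeJsonParse(text, fallback = null) {
  try {
    return JSON.parse(text);
  } catch {
    return fallback;
  }
}
```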
zx07
3d29b86d44 feat: enhance disconnect handling and request tracking in chatCore.js (#126)
Co-authored-by: zx <me@char.moe>
2026-02-15 11:51:37 +07:00
apeltekci
ac7cedd27e feat(responses): respect client streaming preference + string input support (#121)
- Remove forced stream=true from responsesHandler
- Add stream-to-JSON converter for non-streaming clients (Codex)
- Accept string input in Responses API (normalize to array)
- Codex SSE header fallback for missing Content-Type
- Refactor: extract shared normalizeResponsesInput()

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-15 11:47:55 +07:00
Bexultan
69131295db fix(github): Implement dynamic fallback for Codex models requiring /responses endpoint (#127)
* fix(github): add dynamic fallback to /responses for Codex models

* Refactor GithubExecutor: use config for URL detection
2026-02-15 07:40:55 +07:00
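The config-driven endpoint fallback above can be sketched as a model-to-endpoint resolver: Codex-family models route to `/responses`, everything else to `/chat/completions`. The pattern list is a hypothetical example, not the project's actual config:

```javascript
// Sketch: route Codex-family models to /responses, others to /chat/completions.
// The regex is illustrative; the real detection is config-driven per the commit.
const RESPONSES_MODELS = [/^gpt-5\.\d+-codex/];

function resolveEndpoint(model) {
  return RESPONSES_MODELS.some((re) => re.test(model))
    ? '/responses'
    : '/chat/completions';
}
```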
zx07
03ab554d1c feat: add support for GLM 5 (if) (#123)
(cherry picked from commit e26d65aa55726e330f6806aa1abfe05ac6801619)

Co-authored-by: zx <me@char.moe>
2026-02-13 19:37:13 +07:00
apple-techie
d7d5dc90bc fix: update Codex executor for gpt-5.3-codex support
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-12 18:12:38 +07:00
Blade
9c9af25acd Fix/minimax cn cant use in combo (#107)
* fix(provider): correct minimax-cn to alias mapping

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
(cherry picked from commit 315c6fc91b06584b101a4078affef3bb3b0f7001)

* fix(provider): add minimax-cn model list to PROVIDER_MODELS

(cherry picked from commit 15bc2d070306f48da4887a6286ec2a6007300705)

---------

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
2026-02-11 19:58:33 +07:00
Blade096
1ae4e311b7 feat: add GLM Coding (China) provider and Usage by API Keys statistics
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 15:44:08 +07:00
Blade096
bd23ab41ee feat(iflow): add IFlowExecutor with HMAC-SHA256 signature and enable models
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 15:32:22 +07:00
decolua
e3dbd448af feat: add GPT-3.5 Turbo to GitHub Copilot provider 2026-02-11 15:29:21 +07:00
decolua
6ade8ef39a feat: add GPT-4 to GitHub Copilot provider 2026-02-11 15:27:26 +07:00
decolua
053e490bb5 feat: add GPT-4o mini to GitHub Copilot provider 2026-02-11 15:22:25 +07:00
Bexultan
c090bb01b2 feat: add GPT 4o to GitHub Copilot provider (#98) 2026-02-11 07:19:22 +07:00
Bexultan
553346b522 Fix incorrect model ID for Raptor Mini in GitHub provider (#96) 2026-02-11 07:18:13 +07:00
Bexultan
3d605977b3 feat: add Claude Opus 4.6 to GitHub Copilot provider (#97) 2026-02-11 07:17:21 +07:00
Bexultan
4ea9a9da1c Fix: incorrect Gemini 3 Flash ID for GitHub provider (#94) 2026-02-10 19:45:49 +07:00
decolua
b179dc2647 Refactor AntigravityExecutor to improve part filtering logic 2026-02-10 19:31:49 +07:00
Bexultan
d36bd63e28 Fix: incorrect Gemini 3 Pro ID for GitHub provider (#95) 2026-02-10 19:19:08 +07:00
decolua
d3c3a4ae0a Remove Docker publish workflow and update error handling in various modules
- Added handling for HTTP_STATUS.NOT_ACCEPTABLE in error types and messages.
- Enhanced the `prepareClaudeRequest` function to filter built-in tools for non-Anthropic providers and clean up empty tool arrays.
- Updated the `openaiToClaudeRequest` function to handle built-in tools more effectively and ensure proper tool conversion.
- Improved the `claudeToOpenAIResponse` function to skip processing for built-in server tool blocks.
- Refined error message handling in the `parseUpstreamError` function to ensure meaningful output.
- Adjusted command checks for tool installations across various settings routes to use `command -v` for better compatibility.
2026-02-10 19:18:40 +07:00
Bexultan
1d8251cb30 Fix: Restore Claude Opus 4.5 entry in provider models (#92) 2026-02-10 18:38:53 +07:00
decolua
4ad344e462 Update iflow provider models 2026-02-09 23:08:18 +07:00
Blade
c68b875a36 Add GLM Coding (China) provider with OpenAI-compatible API (#83) 2026-02-09 10:31:38 +07:00
Blade
85b7a0b136 Feature/ai observability dashboard (#79)
* feat: add AI request details feature with latency tracking

Add comprehensive request history and debugging capability to the Usage dashboard:

**Storage Layer** (usageDb.js):
- Add saveRequestDetail() for storing full request/response details
- Implement FIFO queue with 1000-record limit in request-details.json
- Auto-sanitize sensitive headers (authorization, api-key, cookie, token)
- Add getRequestDetails() with pagination and filtering support
- Add getRequestDetailById() for single record lookup

**Pipeline Integration** (chatCore.js):
- Track request start time and calculate total latency
- Record TTFT (Time To First Token) and total latency for all requests
- Capture full request details (messages, model, parameters)
- Save response content for non-streaming, mark streaming responses
- Handle error cases with detailed error information
- Async non-blocking saves to avoid impacting request performance

**API Layer** (/api/usage/request-details):
- GET endpoint with pagination (page, pageSize: 1-100)
- Filter by provider, model, connectionId, status, date range
- Returns { details: [...], pagination: {...} } format

**UI Components**:
- Drawer.js: Right slide-out panel with backdrop blur and ESC close
- Pagination.js: Full pagination with page size selector (10/20/50)
- RequestDetailsTab.js: Complete table view with filters and detail drawer

**Dashboard Integration**:
- Add "Details" tab to Usage page (4th tab after Overview/Logger/Limits)
- Table columns: Timestamp, Model, Provider, Input Tokens, Output Tokens, Latency (TTFT/Total), Action
- Provider filter dropdown (9 providers supported)
- Date range filters (start/end datetime)
- Click "Detail" button to view full request/response JSON in slide-out drawer

**Features**:
- Real-time latency monitoring (TTFT & Total)
- Complete request/response inspection for debugging
- Filterable and searchable request history
- Responsive design with mobile-friendly filters
- Data security with automatic header sanitization
- Performance: async saves don't block request pipeline

**Files Created/Modified**:
- src/lib/usageDb.js (modified)
- open-sse/handlers/chatCore.js (modified)
- src/app/api/usage/request-details/route.js (new)
- src/shared/components/Drawer.js (new)
- src/shared/components/Pagination.js (new)
- src/app/(dashboard)/dashboard/usage/components/RequestDetailsTab.js (new)
- src/app/(dashboard)/dashboard/usage/page.js (modified)

Closes: AI Observability Dashboard feature

* feat: enhance request details with full config and streaming content capture

Improve Request Details feature to capture comprehensive request parameters
and actual streaming response content:

**Request Configuration Enhancement** (chatCore.js):
- Add extractRequestConfig() helper function to capture all request parameters
- Include temperature controls: temperature, top_p, top_k
- Include token limits: max_tokens, max_completion_tokens
- Include thinking/reasoning modes: thinking, reasoning, enable_thinking
- Include OpenAI parameters: presence_penalty, frequency_penalty, seed, stop,
  tools, tool_choice, response_format, n, logprobs, top_logprobs, logit_bias,
  user, parallel_tool_calls, prediction, store, metadata
- Apply to all request types: non-streaming, streaming, and error cases

**Streaming Content Capture** (chatCore.js & stream.js):
- Add onStreamComplete callback mechanism to stream processors
- Accumulate content from all formats: OpenAI, Claude, Gemini
- Track content from delta.content, delta.reasoning_content, delta.text,
  delta.thinking, and Gemini content.parts
- Save initial record with "[Streaming in progress...]" marker
- Update record with actual content when stream completes
- Include usage tokens when available from stream

**Files Modified**:
- open-sse/handlers/chatCore.js - extractRequestConfig() + streaming capture
- open-sse/utils/stream.js - onStreamComplete callback + content accumulation

**Benefits**:
- View complete request configuration in Request Details (thinking mode, etc.)
- See actual streaming response content instead of placeholder
- Better debugging and observability for AI requests

Refs: #request-details-enhancement

* feat: separate thinking/reasoning content from response content

Improve Request Details to display thinking process separately from final response:

**Backend Changes**:
- stream.js: Capture content and thinking separately in streaming mode
  - Add accumulatedThinking variable alongside accumulatedContent
  - Route delta.content to content, delta.reasoning_content to thinking
  - Support OpenAI (reasoning_content), Claude (thinking), Gemini (part.thought)
  - Update onStreamComplete callback to return { content, thinking } object

- chatCore.js: Update response structure to include thinking field
  - Non-streaming: Extract thinking from reasoning_content field
  - Streaming: Receive { content, thinking } from stream callback
  - Error responses: Include thinking: null
  - Initial streaming save: Include thinking: null

**Frontend Changes**:
- RequestDetailsTab.js: Display thinking and content in separate sections
  - Add amber/yellow themed "Thinking Process" section with psychology icon
  - Show "Final Response" label when thinking is present
  - Use distinct visual styling for thinking (amber bg) vs content (gray bg)
  - Only show thinking section when thinking content exists

**Benefits**:
- Users can clearly see model's reasoning process vs final answer
- Better debugging for models with thinking capabilities (Claude, o1, etc.)
- Visual distinction makes it easy to identify thinking vs response

Refs: #thinking-content-separation

* fix: map Claude thinking to reasoning_content field

Fix Claude thinking content to be properly captured as reasoning_content
instead of regular content, enabling separate display in Request Details:

**Changes**:
- claude-to-openai.js: Use reasoning_content field for thinking blocks
  - thinking start: send { reasoning_content: "" } instead of { content: "```\n```" }
  - thinking delta: map to reasoning_content instead of content
  - thinking stop: send { reasoning_content: "" } instead of { content: "```\n```" }

**Why This Matters**:
- Previously Claude thinking was sent as `content` field, mixed with actual response
- Now thinking uses `reasoning_content` field, matching OpenAI's o1 format
- stream.js can now properly route thinking to accumulatedThinking variable
- Request Details UI will show Claude thinking in separate "Thinking Process" section

**Supported Thinking Formats**:
- OpenAI: delta.reasoning_content → thinking
- Claude: delta.thinking → reasoning_content (now fixed)
- Gemini: part.thought === true → thinking

Refs: #claude-thinking-fix

* feat(observability): capture and display full 4-layer request chain

Capture complete request/response chain in AI Request Details:
- Add providerRequest field (translated request sent to provider)
- Add providerResponse field (raw provider response, streaming indicator)
- Update chatCore.js at all 5 saveRequestDetail() call sites
- Reorganize UI into 4 collapsible sections with Material icons
- Preserve backward compatibility for old records
- Add distinct styling for streaming indicator

* fix(observability): resolve React duplicate key warning in request details table

- Use composite key (detail.id + index) to ensure unique keys
- Prevents React warnings when database contains duplicate IDs from old ID generation

* fix(observability): display actual content in streaming request details

Change providerResponse field for streaming requests from placeholder
"[Streaming - raw response not captured]" to actual final content.

This improves debugging experience by showing the real AI response
in the "Provider Response (Raw)" section instead of a confusing
placeholder message.

Files changed:
- open-sse/handlers/chatCore.js: Save contentObj.content to providerResponse
- src/app/.../RequestDetailsTab.js: Remove special handling for placeholder

* refactor(observability): migrate request details to SQLite for improved concurrency

- Replace LowDB JSON storage with better-sqlite3
- Enable WAL mode for true concurrent read/write support
- Add 5 indexes to accelerate queries (timestamp, provider, model, connection_id, status)
- Perform pagination at the database level to reduce memory footprint
- Maintain 1000 record limit with automatic cleanup of old data
- Ensure API compatibility via re-exports, requiring no caller changes

Performance improvements:
- Concurrent Writes: Lock-free WAL mode prevents data contention
- Query Efficiency: Index-based searches replace full dataset loading
- Data Integrity: Atomic operations prevent file corruption

* fix(observability): resolve pagination statistics display issues

- Fix issue where totalItems=0 showed 'Showing 1 to 0 of 0 results'
- Hide pagination controls when totalItems=0 or totalPages<=1
- Standardize API response fields: pagination.total -> pagination.totalItems

Before: Incorrect stats shown for empty data, and pager visible even for single-page results
After: Stats hidden for empty data, pager hidden when navigation is unnecessary

* feat(observability): display friendly provider names in request details

- Add /api/usage/providers endpoint to dynamically fetch provider list with names
- Replace hardcoded provider options with dynamic loading from database
- Display friendly provider names instead of IDs in both table and detail drawer
- Support custom provider nodes (e.g., OpenAI-compatible) with user-defined names
- Add provider name caching to optimize performance

* fix(observability): use INSERT OR REPLACE for request details to handle streaming updates

* fix(observability): resolve zero-token display issue by ensuring streaming usage capture and fixing key mismatch

* fix(observability): separate TTFT and total latency calculation for streaming requests

* feat(observability): implement SQLite write queue and JSON size limits

- Added in-memory buffer and batch writing for SQLite to prevent lock contention
- Implemented JSON size limits with a configurable 1MB default to prevent DB bloat
- Added dashboard UI for observability performance and data management settings
- Integrated graceful shutdown handlers to prevent data loss

* fix(observability): resolve ReferenceError by declaring dbInstance
2026-02-09 10:30:42 +07:00
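The header sanitization described in the storage-layer notes above (authorization, api-key, cookie, token) can be sketched as a redaction pass over the header map. The list and redaction marker match the commit's bullets; the function name is illustrative:

```javascript
// Sketch: redact sensitive headers before persisting request details.
const SENSITIVE = ['authorization', 'api-key', 'cookie', 'token'];

function sanitizeHeaders(headers) {
  const out = {};
  for (const [key, value] of Object.entries(headers)) {
    const lower = key.toLowerCase();
    out[key] = SENSITIVE.some((s) => lower.includes(s)) ? '[REDACTED]' : value;
  }
  return out;
}
```

Substring matching (rather than exact keys) also catches variants like `x-api-key` or `proxy-authorization`.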
decolua
388389c972 Revert "feat(request-details): implement observability settings and enhance request detail tracking"
This reverts commit cbabf5547c.
2026-02-09 10:29:38 +07:00
decolua
cbabf5547c feat(request-details): implement observability settings and enhance request detail tracking
- Added new observability settings in the dashboard for max records, batch size, flush interval, and max JSON size.
- Introduced `extractRequestConfig` function to capture full request configurations.
- Enhanced error handling by saving detailed request information on failures.
- Updated usage tracking to include new token metrics.
- Modified streaming functions to support detailed content and reasoning tracking.
2026-02-09 10:20:24 +07:00
decolua
635d327dbc chore: update package version to 0.2.71 and enhance MITM tools 2026-02-09 09:58:24 +07:00
decolua
e7dfdc9274 Merge pull request #77 from Blade096/fix/openai_to_claude_missingthinking
feat(translator): add thinking parameter support in OpenAI → Claude
2026-02-08 16:47:24 +07:00
Diego Souza
3d439839d9 feat(cloud): harden sync/auth flow, SSE fallback, and update changelog
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-08 16:45:31 +07:00
decolua
2e854bd4c9 feat(antigravity): integrate Antigravity tool with MITM support and update CLI tools 2026-02-08 16:28:13 +07:00
Blade096
54e01d617d feat(translator): add thinking parameter support in OpenAI → Claude
Preserve thinking configuration when converting OpenAI requests to Claude format.

- Handle thinking.type with 'enabled' as default
- Preserve thinking.budget_tokens when present
- Preserve thinking.max_tokens when present

This enables proper thinking mode support for o1-series models
when routed through 9Router to Claude endpoints.

Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)

Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
(cherry picked from commit 65d80e9269cc6789cb1522b276e8b8399fddbcab)
2026-02-07 21:45:50 +08:00
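The three preservation rules in the bullets above can be sketched as a small mapper applied during OpenAI → Claude translation. The exact object shapes are assumptions based on the commit text:

```javascript
// Sketch: preserve the thinking config when translating OpenAI → Claude.
// Defaults and field names follow the commit's bullets; shapes are assumed.
function mapThinking(openaiBody) {
  if (!openaiBody.thinking) return undefined;
  const t = openaiBody.thinking;
  const out = { type: t.type ?? 'enabled' };      // 'enabled' as default
  if (t.budget_tokens !== undefined) out.budget_tokens = t.budget_tokens;
  if (t.max_tokens !== undefined) out.max_tokens = t.max_tokens;
  return out;
}
```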
decolua
18712b24cf Delete debug log 2026-02-07 15:20:16 +07:00
decolua
bdbe8162e7 feat(provider): add free providers and enhance error handling 2026-02-07 11:17:06 +07:00
Blade096
9e357a7ee6 feat(iflow): add kimi-k2.5 model support 2026-02-06 21:26:31 +08:00
Blade096
7c609d7a3e feat(providers): add Minimax Coding (China) provider
- Add minimax-cn provider with China endpoint (api.minimaxi.com)
- Add provider icon and configuration
- Add validation and test support
- Add API configuration in open-sse

Co-authored-by: Blade096 <46746496+Blade096@users.noreply.github.com>
Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 15:10:57 +07:00
Hellodebasishsahu
8c6e3b8b62 fix(codex): use user-agent detection for Droid CLI compatibility
The previous merge used sourceFormat check which broke Cursor when it
sends openai-responses format requests. Now uses user-agent detection:
- Droid CLI (user-agent contains 'droid' or 'codex-cli') → passthrough
- Other clients (Cursor, etc.) → translate to Chat Completions format

This fixes the API translation for both clients.

Co-authored-by: Hellodebasishsahu <itsyourboydevil@gmail.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 15:02:56 +07:00
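The user-agent routing rule above reduces to a small predicate; the substring checks (`'droid'`, `'codex-cli'`) come from the commit message, while the function name is illustrative:

```javascript
// Sketch: passthrough for Droid CLI / codex-cli user agents,
// translate to Chat Completions format for everything else.
function shouldPassthrough(userAgent = '') {
  const ua = userAgent.toLowerCase();
  return ua.includes('droid') || ua.includes('codex-cli');
}
```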
decolua
e8aa3e21fe - Added new model "Claude Opus 4.6" to the provider models. 2026-02-06 11:23:08 +07:00
decolua
39c555ca7e docs: clarify Droid CLI compatibility comment in Responses API translator
Co-authored-by: Emanuel Covelli <emanuel.covelli@netserv.it>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 09:56:57 +07:00