- Integrated Google TTS languages from a separate module for better maintainability.
- Updated local device voice fetching to support both macOS and Windows, improving cross-platform compatibility.
- Enhanced dashboard route protection by adding dynamic settings for login requirements and tunnel access.
- Introduced UI elements for managing security settings related to API key requirements and dashboard access via tunnel.
- Added a default TTS response example to the media provider page for better user guidance.
- Updated constants to reflect changes in TTS provider configurations.
This commit improves the overall user experience and security of the TTS features.
The fetchCompatibleModelIds() function had no timeout on its fetch() call,
causing /v1/models to hang indefinitely when an openai-compatible provider
was unreachable or slow to respond.
Additionally, upstream/cross-instance connections (provider IDs containing
a UUID suffix like openai-compatible-chat-XXXXXXXX) would trigger recursive
/models fetches between instances, creating infinite loops.
Fixes:
- Add AbortController with 5-second timeout to the fetch() call
- Skip dynamic model fetching for upstream/cross-instance connections
(detected by UUID suffix pattern in provider ID)
- Existing try/catch already handles abort errors gracefully
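The two fixes above can be sketched as follows; the function shape and variable names are illustrative, not the exact code in the repository:

```javascript
// UUID-style suffix, e.g. openai-compatible-chat-XXXXXXXX (8 hex chars),
// marks an upstream/cross-instance connection.
const UPSTREAM_ID_PATTERN = /-[0-9a-f]{8}$/i;

async function fetchCompatibleModelIds(provider) {
  // Skip dynamic model fetching for upstream/cross-instance connections
  // to avoid recursive /models calls between instances.
  if (UPSTREAM_ID_PATTERN.test(provider.id)) return [];

  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5000); // 5-second cap
  try {
    const res = await fetch(`${provider.baseUrl}/v1/models`, {
      headers: { Authorization: `Bearer ${provider.apiKey}` },
      signal: controller.signal,
    });
    const body = await res.json();
    return (body.data || []).map((m) => m.id);
  } catch {
    // Abort and network errors are swallowed, matching the existing try/catch.
    return [];
  } finally {
    clearTimeout(timer);
  }
}
```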
Co-authored-by: Agent Zero <agent@agent-zero.local>
sseToJsonHandler.js unconditionally deleted reasoning_content from all
non-streaming responses (a behavior originally added for Firecrawl SDK
compatibility). This breaks thinking models (Qwen3.5, Claude extended
thinking, etc.), where the model may spend all of its tokens on
reasoning, leaving content empty.
When reasoning_content is stripped in that case, the response appears
completely empty to the client.
Fix: only strip reasoning_content when the response also has non-empty
content, so that reasoning output is preserved when it is the only
useful output.
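A minimal sketch of the conditional strip; the real logic lives in sseToJsonHandler.js, and the `message` shape here assumes the OpenAI chat-completion format:

```javascript
function stripReasoningContent(message) {
  const hasContent =
    typeof message.content === 'string' && message.content.trim().length > 0;
  // Only delete reasoning_content when there is also non-empty content;
  // otherwise a reasoning-only response would come back completely empty.
  if (hasContent && 'reasoning_content' in message) {
    delete message.reasoning_content;
  }
  return message;
}
```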
Co-authored-by: Agent Zero <agent@agent-zero.local>
- Allow selecting multiple models for 9router provider
- Merge models instead of overwriting (backwards-compatible)
- Support setting active model via click or Apply
- Click active model to clear default (model = '')
- Remove individual models via X button (persists to config)
- Add PATCH endpoint for clearing active model
- Update GET to return normalized model list
- Preserve subagent model configuration for explorer subagent
Made-with: Cursor
- Replaced `bun run build:bun` with `npm run build` in Dockerfile for consistency.
- Enhanced `DOCKER.md` to include `DATA_DIR` environment variable usage for database persistence.
- Clarified paths for container and host data storage.
- Added `DOCKER.md` documentation to guide users on using Docker with the project.
- Migrated Dockerfile to use `oven/bun:1-alpine` for performance improvements.
- Refined build process and permissions in the container for better compatibility.
- Excluded `.idea/` files in `.gitignore`.
- Enhanced `.npmignore` to clean redundant blank lines.
The shutdown API calls `process.exit(0)` on POST without any authentication or authorization checks. Any party that can reach this endpoint can terminate the server process, causing immediate service disruption.
Affected files: route.js
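One way to guard the route is sketched below; the header name and environment variable are assumptions for illustration, not the project's actual configuration:

```javascript
// Illustrative guard for the shutdown route: reject unless the caller
// presents a configured admin key. If no key is configured at all,
// the endpoint stays locked rather than open.
function handleShutdown(req) {
  const expected = process.env.ADMIN_API_KEY; // assumed env var
  const provided = req.headers['x-admin-key']; // assumed header name
  if (!expected || provided !== expected) {
    return { status: 401, body: { error: 'unauthorized' } };
  }
  // Only an authenticated caller may terminate the process.
  setImmediate(() => process.exit(0));
  return { status: 200, body: { ok: true } };
}
```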
Signed-off-by: tuanaiseo <221258316+tuanaiseo@users.noreply.github.com>
Cursor's API now rejects requests from outdated client versions,
returning `[400]: Update Required for Composer 2`. Bump
x-cursor-client-version from 2.3.41 to 3.1.0 in all three
locations where it is set.
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fix /v1/models only knowing how to pull models from the static list or from providerSpecificData.enabledModels. For API Key Compatible Providers, the test endpoint /api/providers/<id>/models could still fetch models dynamically from upstream, but /v1/models did not fall back to that dynamic list. In addition, the aliases it returned used the internal providerId instead of the prefix from the configuration.
Fixed so that OpenAI/Anthropic Compatible providers use the configured prefix as the public alias, and when enabledModels is not set, /v1/models dynamically fetches from the upstream /models endpoint.
checkAndRefreshToken() updated providerSpecificData.copilotToken but
not the top-level creds.copilotToken. GithubExecutor.buildHeaders()
reads the top-level key, so every request after a proactive refresh
still sent the expired token, causing 401 "IDE token expired".
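The fix amounts to writing the refreshed token to both locations; a sketch with field names taken from the commit description (the helper name is illustrative):

```javascript
// After a proactive refresh, update both providerSpecificData.copilotToken
// and the top-level creds.copilotToken, since GithubExecutor.buildHeaders()
// reads the top-level key.
function applyRefreshedToken(creds, newToken) {
  creds.providerSpecificData = creds.providerSpecificData || {};
  creds.providerSpecificData.copilotToken = newToken;
  creds.copilotToken = newToken; // previously missed, causing stale-token 401s
  return creds;
}
```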
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Add clientDetector utility to identify CLI tools (Claude Code, Gemini CLI,
Antigravity, Codex) from request headers. When the CLI tool and provider
are a native pair, skip all translation — only swap model and Bearer token.
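A hypothetical sketch of header-based detection; the actual clientDetector may key off different headers and return different identifiers:

```javascript
// Identify the calling CLI tool from request headers.
function detectClient(headers) {
  const ua = (headers['user-agent'] || '').toLowerCase();
  if (ua.includes('claude-code')) return 'claude-code';
  if (ua.includes('gemini-cli') || ua.includes('geminicli')) return 'gemini-cli';
  if (ua.includes('antigravity')) return 'antigravity';
  if (ua.includes('codex')) return 'codex';
  return 'unknown';
}

// When client and provider are a native pair, translation can be skipped
// entirely and only the model name and Bearer token are swapped.
function isNativePair(client, provider) {
  const pairs = { 'claude-code': 'anthropic', 'gemini-cli': 'gemini', codex: 'openai' };
  return pairs[client] === provider;
}
```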
Made-with: Cursor
- Add /api/providers/kilo/free-models endpoint with 1hr cache
- Fetch and merge Kilo free models with hardcoded models for kilocode provider
- Display 'Free' badge on models fetched from Kilo API
- Fix Windows build: add cross-env, remove --webpack flag, add turbopack config
- Add outputFileTracingExcludes for Windows system directories
Replace an empty reasoning_content chunk with an explicit </think> closing tag when exiting a thinking block, so the end of the reasoning section is properly signaled in streaming responses.
- Encode thoughtSignature into tool_call.id using _TSIG_ delimiter and base64url
- Decode _TSIG_ on request to restore thoughtSignature for Gemini multi-turn thinking
- Track pendingThoughtSignature across parts for deferred signature attachment
- Add LocalMutex (2-layer locking) to prevent ELOCKED on concurrent DB access
- Increase lockfile retries from 5 to 15 for multi-process robustness
- Restore db.json seed on first run to prevent ENOENT on lockfile.lock
- Use process.env.BASE_URL fallback in models test route
- Remove gemini-3-flash-lite-preview from provider models
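The `_TSIG_` encoding above can be sketched as a round-trip; the delimiter comes from the commit, while the helper names are illustrative:

```javascript
const TSIG_DELIM = '_TSIG_';

// Encode a Gemini thoughtSignature into the tool_call.id so it survives
// the OpenAI-format round trip.
function encodeToolCallId(id, thoughtSignature) {
  if (!thoughtSignature) return id;
  const sig = Buffer.from(thoughtSignature, 'utf8').toString('base64url');
  return `${id}${TSIG_DELIM}${sig}`;
}

// Decode on the way back in, restoring the signature for multi-turn thinking.
function decodeToolCallId(encodedId) {
  const idx = encodedId.indexOf(TSIG_DELIM);
  if (idx === -1) return { id: encodedId, thoughtSignature: null };
  return {
    id: encodedId.slice(0, idx),
    thoughtSignature: Buffer.from(
      encodedId.slice(idx + TSIG_DELIM.length),
      'base64url'
    ).toString('utf8'),
  };
}
```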
Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #450
Made-with: Cursor
- Add claudeHeaderCache.js to intercept and cache live Claude Code client headers
- Forward cached headers dynamically to api.anthropic.com via default.js
- Strip first-party identity headers (x-app, claude-code-* beta) for non-Anthropic upstreams
- Validate and sanitize tool call IDs to match Anthropic pattern (^[a-zA-Z0-9_-]+$)
- Skip thinking blocks when applying cache_control; fix max_tokens buffer (+1024)
- Strip cache_control from thinking blocks in openai-to-claude translator
- Comment out thoughtSignature in Gemini translator (kept for reference)
- Expand .gitignore to match all deploy*.sh variants
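The tool-call ID sanitization can be sketched against the Anthropic pattern quoted above; the fallback naming is an illustrative assumption:

```javascript
// Anthropic requires tool call IDs matching ^[a-zA-Z0-9_-]+$.
const TOOL_ID_PATTERN = /^[a-zA-Z0-9_-]+$/;

function sanitizeToolCallId(id, index = 0) {
  if (typeof id === 'string' && TOOL_ID_PATTERN.test(id)) return id;
  // Replace disallowed characters; fall back to a generated id if nothing is left.
  const cleaned = String(id || '').replace(/[^a-zA-Z0-9_-]/g, '_');
  return cleaned || `toolu_${index}`;
}
```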
Co-authored-by: kwanLeeFrmVi <quanle96@outlook.com>
Closes #433
Made-with: Cursor
The BaseExecutor's buildUrl() and buildHeaders() methods only handled
openai-compatible-* providers but not anthropic-compatible-* providers.
This caused Anthropic-compatible synthetic providers to fail API testing
by hitting the wrong endpoint (returning documentation instead of valid
API responses) and using incorrect auth headers.
Changes:
- Added buildUrl() handling for anthropic-compatible-* providers
to append /messages path
- Added buildHeaders() handling for x-api-key header and
anthropic-version for anthropic-compatible providers
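A simplified sketch of the added branches; the provider fields and method shapes are assumptions based on this description, and the anthropic-version value is an example:

```javascript
function buildUrl(provider) {
  const base = provider.baseUrl.replace(/\/$/, '');
  if (provider.id.startsWith('anthropic-compatible-')) {
    return `${base}/messages`; // Anthropic Messages endpoint
  }
  if (provider.id.startsWith('openai-compatible-')) {
    return `${base}/chat/completions`;
  }
  return base;
}

function buildHeaders(provider) {
  if (provider.id.startsWith('anthropic-compatible-')) {
    return {
      'x-api-key': provider.apiKey, // Anthropic-style auth, not Bearer
      'anthropic-version': '2023-06-01',
      'content-type': 'application/json',
    };
  }
  return {
    Authorization: `Bearer ${provider.apiKey}`,
    'content-type': 'application/json',
  };
}
```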
Fixes #XXX
Co-authored-by: Bitgineer <bitgineer@bitgineer.shop>
The github provider in open-sse/config/providers.js was missing clientId,
causing refreshGitHubToken() to send client_id=undefined on 401 retry.
Also guard against undefined clientSecret in both refresh implementations.
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>