541 Commits

Author SHA1 Message Date
Blade096
bd23ab41ee feat(iflow): add IFlowExecutor with HMAC-SHA256 signature and enable models
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 15:32:22 +07:00
decolua
e3dbd448af feat: add GPT-3.5 Turbo to GitHub Copilot provider 2026-02-11 15:29:21 +07:00
decolua
6ade8ef39a feat: add GPT-4 to GitHub Copilot provider 2026-02-11 15:27:26 +07:00
decolua
053e490bb5 feat: add GPT-4o mini to GitHub Copilot provider 2026-02-11 15:22:25 +07:00
Bexultan
c090bb01b2 feat: add GPT 4o to GitHub Copilot provider (#98) 2026-02-11 07:19:22 +07:00
Bexultan
553346b522 Fix incorrect model ID for Raptor Mini in GitHub provider (#96) 2026-02-11 07:18:13 +07:00
Bexultan
3d605977b3 feat: add Claude Opus 4.6 to GitHub Copilot provider (#97) 2026-02-11 07:17:21 +07:00
Bexultan
4ea9a9da1c Fix: incorrect Gemini 3 Flash ID for GitHub provider (#94) 2026-02-10 19:45:49 +07:00
decolua
c3baf52988 Update version to 0.2.76 and enhance MITM manager for cross-platform compatibility 2026-02-10 19:40:06 +07:00
decolua
b179dc2647 Refactor AntigravityExecutor to improve part filtering logic 2026-02-10 19:31:49 +07:00
Bexultan
d36bd63e28 Fix: incorrect Gemini 3 Pro ID for GitHub provider (#95) 2026-02-10 19:19:08 +07:00
decolua
d3c3a4ae0a Remove Docker publish workflow and update error handling in various modules
- Added handling for HTTP_STATUS.NOT_ACCEPTABLE in error types and messages.
- Enhanced the `prepareClaudeRequest` function to filter built-in tools for non-Anthropic providers and clean up empty tool arrays.
- Updated the `openaiToClaudeRequest` function to handle built-in tools more effectively and ensure proper tool conversion.
- Improved the `claudeToOpenAIResponse` function to skip processing for built-in server tool blocks.
- Refined error message handling in the `parseUpstreamError` function to ensure meaningful output.
- Adjusted command checks for tool installations across various settings routes to use `command -v` for better compatibility.
2026-02-10 19:18:40 +07:00
Bexultan
1d8251cb30 Fix: Restore Claude Opus 4.5 entry in provider models (#92) 2026-02-10 18:38:53 +07:00
Thiên Toán
3df0a4d4b0 Add GitHub Actions workflow for Docker build and push (#91) 2026-02-10 18:38:14 +07:00
decolua
4ad344e462 Update iflow provider models 2026-02-09 23:08:18 +07:00
decolua
dd043f6ff4 Fix OpenClaw configuration options 2026-02-09 22:36:57 +07:00
decolua
102c193112 feat: set up Cloudflare Worker for cloud endpoint 2026-02-09 11:27:41 +07:00
Blade
c68b875a36 Add GLM Coding (China) provider with OpenAI-compatible API (#83) 2026-02-09 10:31:38 +07:00
Blade
85b7a0b136 Feature/ai observability dashboard (#79)
* feat: add AI request details feature with latency tracking

Add comprehensive request history and debugging capability to the Usage dashboard:

**Storage Layer** (usageDb.js):
- Add saveRequestDetail() for storing full request/response details
- Implement FIFO queue with 1000-record limit in request-details.json
- Auto-sanitize sensitive headers (authorization, api-key, cookie, token)
- Add getRequestDetails() with pagination and filtering support
- Add getRequestDetailById() for single record lookup
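The storage-layer behaviour above (header sanitization plus a 1000-record FIFO cap) can be sketched as follows; function and field names are illustrative, not the actual `usageDb.js` API:

```javascript
// Header names and record cap come from the commit; everything else is a sketch.
const SENSITIVE = ["authorization", "api-key", "cookie", "token"];
const MAX_RECORDS = 1000;

// Redact sensitive headers before the record is persisted.
function sanitizeHeaders(headers) {
  const out = {};
  for (const [key, value] of Object.entries(headers)) {
    out[key] = SENSITIVE.includes(key.toLowerCase()) ? "[REDACTED]" : value;
  }
  return out;
}

// Append a record, dropping the oldest ones once the cap is exceeded (FIFO).
function saveRequestDetail(records, detail) {
  records.push({ ...detail, headers: sanitizeHeaders(detail.headers || {}) });
  while (records.length > MAX_RECORDS) records.shift();
  return records;
}
```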

**Pipeline Integration** (chatCore.js):
- Track request start time and calculate total latency
- Record TTFT (Time To First Token) and total latency for all requests
- Capture full request details (messages, model, parameters)
- Save response content for non-streaming, mark streaming responses
- Handle error cases with detailed error information
- Async non-blocking saves to avoid impacting request performance

**API Layer** (/api/usage/request-details):
- GET endpoint with pagination (page, pageSize: 1-100)
- Filter by provider, model, connectionId, status, date range
- Returns { details: [...], pagination: {...} } format
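The pagination contract above (pageSize clamped to 1–100, `{ details, pagination }` response shape) might look like this in-memory sketch; the route's actual implementation may differ:

```javascript
// Clamp inputs, slice the matching page, and wrap results with metadata.
function paginate(details, page = 1, pageSize = 20) {
  const size = Math.min(100, Math.max(1, pageSize | 0)); // 1-100 per the API
  const current = Math.max(1, page | 0);
  const start = (current - 1) * size;
  return {
    details: details.slice(start, start + size),
    pagination: {
      page: current,
      pageSize: size,
      totalItems: details.length,
      totalPages: Math.ceil(details.length / size),
    },
  };
}
```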

**UI Components**:
- Drawer.js: Right slide-out panel with backdrop blur and ESC close
- Pagination.js: Full pagination with page size selector (10/20/50)
- RequestDetailsTab.js: Complete table view with filters and detail drawer

**Dashboard Integration**:
- Add "Details" tab to Usage page (4th tab after Overview/Logger/Limits)
- Table columns: Timestamp, Model, Provider, Input Tokens, Output Tokens, Latency (TTFT/Total), Action
- Provider filter dropdown (9 providers supported)
- Date range filters (start/end datetime)
- Click "Detail" button to view full request/response JSON in slide-out drawer

**Features**:
- Real-time latency monitoring (TTFT & Total)
- Complete request/response inspection for debugging
- Filterable and searchable request history
- Responsive design with mobile-friendly filters
- Data security with automatic header sanitization
- Performance: async saves don't block request pipeline

**Files Created/Modified**:
- src/lib/usageDb.js (modified)
- open-sse/handlers/chatCore.js (modified)
- src/app/api/usage/request-details/route.js (new)
- src/shared/components/Drawer.js (new)
- src/shared/components/Pagination.js (new)
- src/app/(dashboard)/dashboard/usage/components/RequestDetailsTab.js (new)
- src/app/(dashboard)/dashboard/usage/page.js (modified)

Closes: AI Observability Dashboard feature

* feat: enhance request details with full config and streaming content capture

Improve Request Details feature to capture comprehensive request parameters
and actual streaming response content:

**Request Configuration Enhancement** (chatCore.js):
- Add extractRequestConfig() helper function to capture all request parameters
- Include temperature controls: temperature, top_p, top_k
- Include token limits: max_tokens, max_completion_tokens
- Include thinking/reasoning modes: thinking, reasoning, enable_thinking
- Include OpenAI parameters: presence_penalty, frequency_penalty, seed, stop,
  tools, tool_choice, response_format, n, logprobs, top_logprobs, logit_bias,
  user, parallel_tool_calls, prediction, store, metadata
- Apply to all request types: non-streaming, streaming, and error cases
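A minimal sketch of an `extractRequestConfig()` helper as described above: copy only whitelisted tuning parameters that are actually present on the incoming body. The key list comes from the commit; the exact implementation in `chatCore.js` may differ:

```javascript
const CONFIG_KEYS = [
  "temperature", "top_p", "top_k",
  "max_tokens", "max_completion_tokens",
  "thinking", "reasoning", "enable_thinking",
  "presence_penalty", "frequency_penalty", "seed", "stop",
  "tools", "tool_choice", "response_format", "n",
  "logprobs", "top_logprobs", "logit_bias", "user",
  "parallel_tool_calls", "prediction", "store", "metadata",
];

// Pick only the parameters the client actually set, skipping undefined keys
// so the stored config stays compact.
function extractRequestConfig(body) {
  const config = {};
  for (const key of CONFIG_KEYS) {
    if (body[key] !== undefined) config[key] = body[key];
  }
  return config;
}
```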

**Streaming Content Capture** (chatCore.js & stream.js):
- Add onStreamComplete callback mechanism to stream processors
- Accumulate content from all formats: OpenAI, Claude, Gemini
- Track content from delta.content, delta.reasoning_content, delta.text,
  delta.thinking, and Gemini content.parts
- Save initial record with "[Streaming in progress...]" marker
- Update record with actual content when stream completes
- Include usage tokens when available from stream

**Files Modified**:
- open-sse/handlers/chatCore.js - extractRequestConfig() + streaming capture
- open-sse/utils/stream.js - onStreamComplete callback + content accumulation

**Benefits**:
- View complete request configuration in Request Details (thinking mode, etc.)
- See actual streaming response content instead of placeholder
- Better debugging and observability for AI requests

Refs: #request-details-enhancement

* feat: separate thinking/reasoning content from response content

Improve Request Details to display thinking process separately from final response:

**Backend Changes**:
- stream.js: Capture content and thinking separately in streaming mode
  - Add accumulatedThinking variable alongside accumulatedContent
  - Route delta.content to content, delta.reasoning_content to thinking
  - Support OpenAI (reasoning_content), Claude (thinking), Gemini (part.thought)
  - Update onStreamComplete callback to return { content, thinking } object

- chatCore.js: Update response structure to include thinking field
  - Non-streaming: Extract thinking from reasoning_content field
  - Streaming: Receive { content, thinking } from stream callback
  - Error responses: Include thinking: null
  - Initial streaming save: Include thinking: null
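The delta routing described above can be sketched as a small accumulator: regular tokens go to `accumulatedContent`, reasoning tokens to `accumulatedThinking`, and the completion callback returns both. This is a simplified illustration, not the actual `stream.js` code:

```javascript
// Returns handlers a stream processor could call per chunk and on completion.
function makeStreamAccumulator(onStreamComplete) {
  let accumulatedContent = "";
  let accumulatedThinking = "";
  return {
    onDelta(delta) {
      // OpenAI-style fields; Claude thinking and Gemini thought parts would
      // be normalized into these fields upstream per the commits above.
      if (delta.content) accumulatedContent += delta.content;
      if (delta.reasoning_content) accumulatedThinking += delta.reasoning_content;
    },
    onDone() {
      onStreamComplete({ content: accumulatedContent, thinking: accumulatedThinking });
    },
  };
}
```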

**Frontend Changes**:
- RequestDetailsTab.js: Display thinking and content in separate sections
  - Add amber/yellow themed "Thinking Process" section with psychology icon
  - Show "Final Response" label when thinking is present
  - Use distinct visual styling for thinking (amber bg) vs content (gray bg)
  - Only show thinking section when thinking content exists

**Benefits**:
- Users can clearly see model's reasoning process vs final answer
- Better debugging for models with thinking capabilities (Claude, o1, etc.)
- Visual distinction makes it easy to identify thinking vs response

Refs: #thinking-content-separation

* fix: map Claude thinking to reasoning_content field

Fix Claude thinking content to be properly captured as reasoning_content
instead of regular content, enabling separate display in Request Details:

**Changes**:
- claude-to-openai.js: Use reasoning_content field for thinking blocks
  - thinking start: send { reasoning_content: "" } instead of { content: "```\n```" }
  - thinking delta: map to reasoning_content instead of content
  - thinking stop: send { reasoning_content: "" } instead of { content: "```\n```" }
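An illustrative mapping for Claude streaming events after this fix: thinking deltas are emitted under `reasoning_content` (matching OpenAI's o1-style field) rather than under `content`. Event shapes are simplified from Anthropic's `content_block_delta` format:

```javascript
// Map a Claude streaming event to an OpenAI-style delta object.
function mapClaudeDelta(event) {
  if (event.type === "content_block_delta" && event.delta?.type === "thinking_delta") {
    return { reasoning_content: event.delta.thinking };
  }
  if (event.type === "content_block_delta" && event.delta?.type === "text_delta") {
    return { content: event.delta.text };
  }
  return {};
}
```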

**Why This Matters**:
- Previously Claude thinking was sent as `content` field, mixed with actual response
- Now thinking uses `reasoning_content` field, matching OpenAI's o1 format
- stream.js can now properly route thinking to accumulatedThinking variable
- Request Details UI will show Claude thinking in separate "Thinking Process" section

**Supported Thinking Formats**:
- OpenAI: delta.reasoning_content → thinking
- Claude: delta.thinking → reasoning_content (now fixed)
- Gemini: part.thought === true → thinking

Refs: #claude-thinking-fix

* feat(observability): capture and display full 4-layer request chain

Capture complete request/response chain in AI Request Details:
- Add providerRequest field (translated request sent to provider)
- Add providerResponse field (raw provider response, streaming indicator)
- Update chatCore.js at all 5 saveRequestDetail() call sites
- Reorganize UI into 4 collapsible sections with Material icons
- Preserve backward compatibility for old records
- Add distinct styling for streaming indicator

* fix(observability): resolve React duplicate key warning in request details table

- Use composite key (detail.id + index) to ensure unique keys
- Prevents React warnings when database contains duplicate IDs from old ID generation
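The composite-key idea is simply to combine the record id with the array index, so keys stay unique even if the database holds duplicate ids; the helper name below is illustrative:

```javascript
// A key of the form "<id>-<index>" is unique per rendered row even when
// two records share the same id.
function rowKey(detail, index) {
  return `${detail.id}-${index}`;
}
```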

* fix(observability): display actual content in streaming request details

Change providerResponse field for streaming requests from placeholder
"[Streaming - raw response not captured]" to actual final content.

This improves debugging experience by showing the real AI response
in the "Provider Response (Raw)" section instead of a confusing
placeholder message.

Files changed:
- open-sse/handlers/chatCore.js: Save contentObj.content to providerResponse
- src/app/.../RequestDetailsTab.js: Remove special handling for placeholder

* refactor(observability): migrate request details to SQLite for improved concurrency

- Replace LowDB JSON storage with better-sqlite3
- Enable WAL mode for true concurrent read/write support
- Add 5 indexes to accelerate queries (timestamp, provider, model, connection_id, status)
- Perform pagination at the database level to reduce memory footprint
- Maintain 1000 record limit with automatic cleanup of old data
- Ensure API compatibility via re-exports, requiring no caller changes

Performance improvements:
- Concurrent Writes: Lock-free WAL mode prevents data contention
- Query Efficiency: Index-based searches replace full dataset loading
- Data Integrity: Atomic operations prevent file corruption

* fix(observability): resolve pagination statistics display issues

- Fix issue where totalItems=0 showed 'Showing 1 to 0 of 0 results'
- Hide pagination controls when totalItems=0 or totalPages<=1
- Standardize API response fields: pagination.total -> pagination.totalItems

Before: Incorrect stats shown for empty data, and pager visible even for single-page results
After: Stats hidden for empty data, pager hidden when navigation is unnecessary
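The corrected summary logic can be sketched as follows: with zero items the stats line is suppressed instead of rendering "Showing 1 to 0 of 0", and the pager only renders when there is more than one page (helper names are illustrative):

```javascript
// Returns the stats string, or null when there is nothing to show.
function paginationSummary(page, pageSize, totalItems) {
  if (totalItems === 0) return null;
  const start = (page - 1) * pageSize + 1;
  const end = Math.min(page * pageSize, totalItems);
  return `Showing ${start} to ${end} of ${totalItems} results`;
}

// Pager is only useful when navigation is actually possible.
function shouldShowPager(totalItems, totalPages) {
  return totalItems > 0 && totalPages > 1;
}
```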

* feat(observability): display friendly provider names in request details

- Add /api/usage/providers endpoint to dynamically fetch provider list with names
- Replace hardcoded provider options with dynamic loading from database
- Display friendly provider names instead of IDs in both table and detail drawer
- Support custom provider nodes (e.g., OpenAI-compatible) with user-defined names
- Add provider name caching to optimize performance

* fix(observability): use INSERT OR REPLACE for request details to handle streaming updates

* fix(observability): resolve zero-token display issue by ensuring streaming usage capture and fixing key mismatch

* fix(observability): separate TTFT and total latency calculation for streaming requests

* feat(observability): implement SQLite write queue and JSON size limits

- Added in-memory buffer and batch writing for SQLite to prevent lock contention
- Implemented JSON size limits with a configurable 1MB cap to prevent DB bloat
- Added dashboard UI for observability performance and data management settings
- Integrated graceful shutdown handlers to prevent data loss
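The write-buffer idea above can be sketched with a plain array standing in for the SQLite layer: records queue in memory, flush in batches, and oversized JSON payloads are rejected before reaching the database. The 1 MB limit is from the commit; the batch size and names here are illustrative:

```javascript
const MAX_JSON_BYTES = 1024 * 1024; // configurable 1 MB limit per the commit

function makeWriteQueue(persistBatch, batchSize = 50) {
  const buffer = [];
  return {
    enqueue(record) {
      // Drop records whose serialized form exceeds the size limit.
      if (Buffer.byteLength(JSON.stringify(record)) > MAX_JSON_BYTES) return false;
      buffer.push(record);
      if (buffer.length >= batchSize) this.flush();
      return true;
    },
    flush() {
      // Hand the whole buffer to the persistence layer in one batch write.
      if (buffer.length) persistBatch(buffer.splice(0, buffer.length));
    },
  };
}
```

A graceful-shutdown handler would call `flush()` one final time so buffered records are not lost.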

* fix(observability): resolve ReferenceError by declaring dbInstance
2026-02-09 10:30:42 +07:00
decolua
388389c972 Revert "feat(request-details): implement observability settings and enhance request detail tracking"
This reverts commit cbabf5547c.
2026-02-09 10:29:38 +07:00
decolua
cbabf5547c feat(request-details): implement observability settings and enhance request detail tracking
- Added new observability settings in the dashboard for max records, batch size, flush interval, and max JSON size.
- Introduced `extractRequestConfig` function to capture full request configurations.
- Enhanced error handling by saving detailed request information on failures.
- Updated usage tracking to include new token metrics.
- Modified streaming functions to support detailed content and reasoning tracking.
2026-02-09 10:20:24 +07:00
decolua
635d327dbc chore: update package version to 0.2.71 and enhance MITM tools 2026-02-09 09:58:24 +07:00
decolua
bd0cebcfff Merge pull request #80 from Blade096/fix/free-provider-not-found
fix(dashboard): resolve 'Provider not found' for free providers
2026-02-08 16:51:54 +07:00
decolua
e7dfdc9274 Merge pull request #77 from Blade096/fix/openai_to_claude_missingthinking
feat(translator): add thinking parameter support in OpenAI → Claude
2026-02-08 16:47:24 +07:00
Diego Souza
3d439839d9 feat(cloud): harden sync/auth flow, SSE fallback, and update changelog
Co-authored-by: Cursor <cursoragent@cursor.com>
v0.2.71
2026-02-08 16:45:31 +07:00
decolua
2e854bd4c9 feat(antigravity): integrate Antigravity tool with MITM support and update CLI tools 2026-02-08 16:28:13 +07:00
Blade096
45a4d3ba68 fix(dashboard): resolve 'Provider not found' for free providers 2026-02-08 01:19:50 +08:00
Blade096
54e01d617d feat(translator): add thinking parameter support in OpenAI → Claude
Preserve thinking configuration when converting OpenAI requests to Claude format.
- Handle thinking.type with 'enabled' as default
- Preserve thinking.budget_tokens when present
- Preserve thinking.max_tokens when present
This enables proper thinking mode support for o1-series models when routed through 9Router to Claude endpoints.
Ultraworked with [Sisyphus](https://github.com/code-yeongyu/oh-my-opencode)
Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
(cherry picked from commit 65d80e9269cc6789cb1522b276e8b8399fddbcab)
2026-02-07 21:45:50 +08:00
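The preservation rules in the translator commit above can be sketched as a small merge step; the function name and surrounding translator structure are assumptions:

```javascript
// Carry the OpenAI request's thinking config into the Claude request body:
// default type to 'enabled', and keep budget_tokens / max_tokens when set.
function preserveThinking(openaiBody, claudeBody) {
  if (!openaiBody.thinking) return claudeBody;
  const thinking = { type: openaiBody.thinking.type || "enabled" };
  if (openaiBody.thinking.budget_tokens !== undefined) {
    thinking.budget_tokens = openaiBody.thinking.budget_tokens;
  }
  if (openaiBody.thinking.max_tokens !== undefined) {
    thinking.max_tokens = openaiBody.thinking.max_tokens;
  }
  return { ...claudeBody, thinking };
}
```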
decolua
18712b24cf Delete debug log 2026-02-07 15:20:16 +07:00
decolua
20a86d6561 Merge pull request #67 from diegosouzapw/feature/docker-runtime-readme-updates
feat: add Docker runtime setup and README docs updates
v0.2.70
2026-02-07 11:24:10 +07:00
decolua
bdbe8162e7 feat(provider): add free providers and enhance error handling 2026-02-07 11:17:06 +07:00
decolua
53a5f43993 Merge pull request #68 from diegosouzapw/fix/logout-not-working
fix(auth): prevent auto-login after logout
2026-02-07 09:35:53 +07:00
Diego Souza
01c9410530 fix(login): avoid infinite loading on settings fetch failure 2026-02-06 23:41:29 +00:00
Diego Souza
49df3dce90 fix(auth): prevent auto-login after logout 2026-02-06 23:14:10 +00:00
Diego Souza
5e4a15bb0c feat(docker): add Docker setup, environment examples, and architecture docs 2026-02-06 22:45:03 +00:00
Diego Souza
6c41573203 feat(cli-tools): update default local endpoint port to 20128 2026-02-06 22:44:58 +00:00
decolua
ff2ba87161 Merge pull request #65 from ramhaidar/fix/provider-connection-ux
fix(api-key): auto-validate on save to improve UX
2026-02-06 21:21:03 +07:00
decolua
2e3eccf687 Update Readme v0.2.67 2026-02-06 21:05:52 +07:00
ram/haidar
e6ef8528fc fix(db): improve error handling and null checks
- Added null checks for undefined/null values in database operations
- Improved error handling for corrupt JSON recovery
- Added schema migration support for missing keys
- Target: database stability and data integrity
2026-02-06 20:54:42 +07:00
ram/haidar
b275dfdc9c feat(providers): auto-validate API keys on save
- AddApiKeyModal and EditConnectionModal now automatically validate API keys during save
- Sets testStatus to 'active' when validation succeeds, removing need for manual Check button
- Added saving state to prevent duplicate submissions during validation
- Target: provider connection management UX
2026-02-06 20:54:19 +07:00
decolua
a2122e3e48 feat(cli-tools): update CLI tools and add new models
- Add Droid and OpenClaw tool cards to CLI tools
- Enhance ClaudeToolCard and CodexToolCard to display current base URLs
2026-02-06 20:53:20 +07:00
decolua
f68ef4c933 Merge pull request #64 from Blade096/feature/iFlowAI-Kimi2.5-Support
feat(iflow): add kimi-k2.5 model support
2026-02-06 20:52:07 +07:00
Blade096
9e357a7ee6 feat(iflow): add kimi-k2.5 model support 2026-02-06 21:26:31 +08:00
decolua
01343c6325 Update ReadMe 2026-02-06 15:18:20 +07:00
Blade096
7c609d7a3e feat(providers): add Minimax Coding (China) provider
- Add minimax-cn provider with China endpoint (api.minimaxi.com)
- Add provider icon and configuration
- Add validation and test support
- Add API configuration in open-sse

Co-authored-by: Blade096 <46746496+Blade096@users.noreply.github.com>
Co-authored-by: Sisyphus <clio-agent@sisyphuslabs.ai>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 15:10:57 +07:00
Hellodebasishsahu
8c6e3b8b62 fix(codex): use user-agent detection for Droid CLI compatibility
The previous merge used sourceFormat check which broke Cursor when it
sends openai-responses format requests. Now uses user-agent detection:
- Droid CLI (user-agent contains 'droid' or 'codex-cli') → passthrough
- Other clients (Cursor, etc.) → translate to Chat Completions format

This fixes the API translation for both clients.
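The user-agent routing described above reduces to a substring check; Droid CLI traffic passes through unchanged, everything else gets translated (helper name is illustrative):

```javascript
// Passthrough when the client identifies as Droid CLI or codex-cli;
// other clients (Cursor, etc.) are translated to Chat Completions format.
function shouldPassthrough(userAgent = "") {
  const ua = userAgent.toLowerCase();
  return ua.includes("droid") || ua.includes("codex-cli");
}
```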

Co-authored-by: Hellodebasishsahu <itsyourboydevil@gmail.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 15:02:56 +07:00
decolua
fafa77316b Fix version 2026-02-06 11:38:52 +07:00
decolua
e8aa3e21fe - Added new model "Claude Opus 4.6" to the provider models. 2026-02-06 11:23:08 +07:00
decolua
39c555ca7e docs: clarify Droid CLI compatibility comment in Responses API translator
Co-authored-by: Emanuel Covelli <emanuel.covelli@netserv.it>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 09:56:57 +07:00
Hellodebasishsahu
127475df84 feat(codex): add GPT 5.3, fix API translation, add thinking levels
- Add GPT 5.3 Codex model with thinking level variants (none/low/medium/high/xhigh)
- Extract thinking level from model name suffix (e.g., gpt-5.3-codex-high)
- Fix Codex translation: preserve openai-responses format for Droid CLI
- Add effort level logging in request logs
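Extracting the thinking level from a model-name suffix, e.g. `gpt-5.3-codex-high`, can be sketched as below. The level list comes from the commit; the parsing details are assumptions:

```javascript
const THINKING_LEVELS = ["none", "low", "medium", "high", "xhigh"];

// Split "gpt-5.3-codex-high" into base model and level; models without a
// recognized suffix get level null. Matching "-xhigh" before stripping
// "-high" is safe because endsWith compares the hyphen too.
function parseThinkingLevel(model) {
  for (const level of THINKING_LEVELS) {
    if (model.endsWith(`-${level}`)) {
      return { baseModel: model.slice(0, -(level.length + 1)), level };
    }
  }
  return { baseModel: model, level: null };
}
```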

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 09:46:11 +07:00