Commit Graph

33 Commits

Author SHA1 Message Date
Thiên Toán
806bd4ae14 feat: add API endpoint dimension to usage statistics dashboard (#152)
- Tracks endpoints like /v1/chat/completions, /v1/messages, /v1/responses
- New sortable/groupable table in usage dashboard with expandable groups
- Enhanced usage database aggregation by endpoint + model + provider
- Added endpoint tracking to all saveRequestUsage/saveRequestDetail calls
- Maintains backward compatibility with existing data structure
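The aggregation described above could be sketched roughly as follows (the function name `aggregateUsage` and the record shape are illustrative, not the project's actual API):

```javascript
// Minimal sketch: group usage records by endpoint + model + provider
// and sum token counts. Record shape and names are hypothetical.
function aggregateUsage(records) {
  const groups = new Map();
  for (const r of records) {
    const key = `${r.endpoint}|${r.model}|${r.provider}`;
    const g = groups.get(key) || {
      endpoint: r.endpoint, model: r.model, provider: r.provider,
      requests: 0, inputTokens: 0, outputTokens: 0,
    };
    g.requests += 1;
    g.inputTokens += r.inputTokens || 0;
    g.outputTokens += r.outputTokens || 0;
    groups.set(key, g);
  }
  return [...groups.values()];
}
```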
2026-02-20 15:03:18 +07:00
Hồ Xuân Dũng
a57a8ce206 feat: add Gemini embeddings support + Letta compatibility fixes
Cherry-picked from decolua/9router#148 (author: xuandung38 / Hồ Xuân Dũng <me@hxd.vn>)

- Add Google AI (Gemini) embeddings support for /v1/embeddings endpoint
- Add Gemini embedding models: gemini-embedding-001, text-embedding-005, text-embedding-004
- Inject missing object/created fields for Letta and strict OpenAI clients
- Strip Azure-specific fields (prompt_filter_results, content_filter_results) from responses
- Fix Dockerfile: copy open-sse directory into Docker runner stage
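The compatibility shim described in the bullets above might look like this minimal sketch (`normalizeResponse` is a hypothetical name; the real code lives in the response translators):

```javascript
// Sketch: inject the object/created fields strict OpenAI clients (e.g.
// Letta) expect, and strip Azure-specific filter fields from responses.
function normalizeResponse(res) {
  const out = { ...res };
  if (!out.object) out.object = "chat.completion";
  if (!out.created) out.created = Math.floor(Date.now() / 1000);
  delete out.prompt_filter_results;
  if (Array.isArray(out.choices)) {
    out.choices = out.choices.map((c) => {
      const { content_filter_results, ...rest } = c;
      return rest;
    });
  }
  return out;
}
```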

Skipped: whitelist message field stripping (commit 3/7/8) — too aggressive for all providers
Skipped: default stream=false change (commit 9) — behavior change needs further review
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-20 15:01:10 +07:00
Thiên Toán
9fbd6e619d fix: correct token extraction for Claude non-streaming responses (#131)
- Add response logging for non-streaming requests (5_res_provider.json, 7_res_client.json)
- Fix extractUsageFromResponse() to check Claude format before OpenAI format
- Prevents format misidentification that caused tokens to show as 0
- Claude uses input_tokens/output_tokens vs OpenAI's prompt_tokens/completion_tokens

Fixes dashboard Details tab showing 0 tokens for Claude requests
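The ordering fix can be sketched as below; the real extractUsageFromResponse() in the codebase handles more shapes, so treat this as a simplified illustration:

```javascript
// Sketch: Claude reports input_tokens/output_tokens, OpenAI reports
// prompt_tokens/completion_tokens. Checking the Claude shape first
// prevents Claude usage from being misread as 0.
function extractUsageFromResponse(body) {
  const u = body.usage || {};
  if (u.input_tokens !== undefined || u.output_tokens !== undefined) {
    return { input: u.input_tokens || 0, output: u.output_tokens || 0 };
  }
  return { input: u.prompt_tokens || 0, output: u.completion_tokens || 0 };
}
```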
2026-02-20 14:24:21 +07:00
decolua
e2db638982 feat: enhance request handling and error management in chatCore and streamToJsonConverter
- Added detailed request logging and latency tracking in handleChatCore.
- Improved error handling for SSE to JSON conversion and JSON parsing in streamToJsonConverter.
- Introduced a safe JSON parsing utility to handle potential parsing errors gracefully in requestDetailsDb.
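A safe-parse utility like the one described is typically a small wrapper along these lines (name and fallback behavior are assumptions):

```javascript
// Sketch: return a fallback instead of throwing on malformed JSON.
function safeJsonParse(text, fallback = null) {
  try {
    return JSON.parse(text);
  } catch {
    return fallback;
  }
}
```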

Co-authored-by: zx <me@char.moe>
2026-02-15 12:02:53 +07:00
zx07
3d29b86d44 feat: enhance disconnect handling and request tracking in chatCore.js (#126)
Co-authored-by: zx <me@char.moe>
2026-02-15 11:51:37 +07:00
apeltekci
ac7cedd27e feat(responses): respect client streaming preference + string input support (#121)
- Remove forced stream=true from responsesHandler
- Add stream-to-JSON converter for non-streaming clients (Codex)
- Accept string input in Responses API (normalize to array)
- Codex SSE header fallback for missing Content-Type
- Refactor: extract shared normalizeResponsesInput()
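The string-input normalization could look like this sketch (the exact wrapped message shape is an assumption based on the Responses API's array form):

```javascript
// Sketch: the Responses API accepts either a plain string or an array
// of input items; a string is wrapped into the array form.
function normalizeResponsesInput(input) {
  if (typeof input === "string") {
    return [{ role: "user", content: [{ type: "input_text", text: input }] }];
  }
  return Array.isArray(input) ? input : [];
}
```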

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-15 11:47:55 +07:00
Blade096
1ae4e311b7 feat: add GLM Coding (China) provider and Usage by API Keys statistics
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-11 15:44:08 +07:00
Blade
85b7a0b136 Feature/ai observability dashboard (#79)
* feat: add AI request details feature with latency tracking

Add comprehensive request history and debugging capability to the Usage dashboard:

**Storage Layer** (usageDb.js):
- Add saveRequestDetail() for storing full request/response details
- Implement FIFO queue with 1000-record limit in request-details.json
- Auto-sanitize sensitive headers (authorization, api-key, cookie, token)
- Add getRequestDetails() with pagination and filtering support
- Add getRequestDetailById() for single record lookup
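The header sanitization described above might be sketched as follows (the redaction marker is an assumption):

```javascript
// Sketch: redact sensitive headers before persisting request details.
const SENSITIVE = ["authorization", "api-key", "cookie", "token"];
function sanitizeHeaders(headers) {
  const out = {};
  for (const [k, v] of Object.entries(headers)) {
    out[k] = SENSITIVE.some((s) => k.toLowerCase().includes(s))
      ? "[REDACTED]"
      : v;
  }
  return out;
}
```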

**Pipeline Integration** (chatCore.js):
- Track request start time and calculate total latency
- Record TTFT (Time To First Token) and total latency for all requests
- Capture full request details (messages, model, parameters)
- Save response content for non-streaming, mark streaming responses
- Handle error cases with detailed error information
- Async non-blocking saves to avoid impacting request performance

**API Layer** (/api/usage/request-details):
- GET endpoint with pagination (page, pageSize: 1-100)
- Filter by provider, model, connectionId, status, date range
- Returns { details: [...], pagination: {...} } format

**UI Components**:
- Drawer.js: Right slide-out panel with backdrop blur and ESC close
- Pagination.js: Full pagination with page size selector (10/20/50)
- RequestDetailsTab.js: Complete table view with filters and detail drawer

**Dashboard Integration**:
- Add "Details" tab to Usage page (4th tab after Overview/Logger/Limits)
- Table columns: Timestamp, Model, Provider, Input Tokens, Output Tokens, Latency (TTFT/Total), Action
- Provider filter dropdown (9 providers supported)
- Date range filters (start/end datetime)
- Click "Detail" button to view full request/response JSON in slide-out drawer

**Features**:
- Real-time latency monitoring (TTFT & Total)
- Complete request/response inspection for debugging
- Filterable and searchable request history
- Responsive design with mobile-friendly filters
- Data security with automatic header sanitization
- Performance: async saves don't block request pipeline

**Files Created/Modified**:
- src/lib/usageDb.js (modified)
- open-sse/handlers/chatCore.js (modified)
- src/app/api/usage/request-details/route.js (new)
- src/shared/components/Drawer.js (new)
- src/shared/components/Pagination.js (new)
- src/app/(dashboard)/dashboard/usage/components/RequestDetailsTab.js (new)
- src/app/(dashboard)/dashboard/usage/page.js (modified)

Closes: AI Observability Dashboard feature

* feat: enhance request details with full config and streaming content capture

Improve Request Details feature to capture comprehensive request parameters
and actual streaming response content:

**Request Configuration Enhancement** (chatCore.js):
- Add extractRequestConfig() helper function to capture all request parameters
- Include temperature controls: temperature, top_p, top_k
- Include token limits: max_tokens, max_completion_tokens
- Include thinking/reasoning modes: thinking, reasoning, enable_thinking
- Include OpenAI parameters: presence_penalty, frequency_penalty, seed, stop,
  tools, tool_choice, response_format, n, logprobs, top_logprobs, logit_bias,
  user, parallel_tool_calls, prediction, store, metadata
- Apply to all request types: non-streaming, streaming, and error cases
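A helper like extractRequestConfig() plausibly copies only a whitelist of parameters that are actually present; the key list below mirrors the commit message, while the implementation itself is a sketch:

```javascript
// Sketch: capture whitelisted request parameters, skipping absent ones.
const CONFIG_KEYS = [
  "temperature", "top_p", "top_k", "max_tokens", "max_completion_tokens",
  "thinking", "reasoning", "enable_thinking", "presence_penalty",
  "frequency_penalty", "seed", "stop", "tools", "tool_choice",
  "response_format", "n", "logprobs", "top_logprobs", "logit_bias",
  "user", "parallel_tool_calls", "prediction", "store", "metadata",
];
function extractRequestConfig(body) {
  const config = {};
  for (const key of CONFIG_KEYS) {
    if (body[key] !== undefined) config[key] = body[key];
  }
  return config;
}
```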

**Streaming Content Capture** (chatCore.js & stream.js):
- Add onStreamComplete callback mechanism to stream processors
- Accumulate content from all formats: OpenAI, Claude, Gemini
- Track content from delta.content, delta.reasoning_content, delta.text,
  delta.thinking, and Gemini content.parts
- Save initial record with "[Streaming in progress...]" marker
- Update record with actual content when stream completes
- Include usage tokens when available from stream

**Files Modified**:
- open-sse/handlers/chatCore.js - extractRequestConfig() + streaming capture
- open-sse/utils/stream.js - onStreamComplete callback + content accumulation

**Benefits**:
- View complete request configuration in Request Details (thinking mode, etc.)
- See actual streaming response content instead of placeholder
- Better debugging and observability for AI requests

Refs: #request-details-enhancement

* feat: separate thinking/reasoning content from response content

Improve Request Details to display thinking process separately from final response:

**Backend Changes**:
- stream.js: Capture content and thinking separately in streaming mode
  - Add accumulatedThinking variable alongside accumulatedContent
  - Route delta.content to content, delta.reasoning_content to thinking
  - Support OpenAI (reasoning_content), Claude (thinking), Gemini (part.thought)
  - Update onStreamComplete callback to return { content, thinking } object
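The routing of stream deltas into the two accumulators can be sketched as below (chunk shapes are simplified from the OpenAI/Claude-translated/Gemini streaming formats):

```javascript
// Sketch: regular deltas accumulate into content; reasoning deltas
// (OpenAI reasoning_content, Claude thinking, Gemini part.thought)
// accumulate into thinking. Returns the { content, thinking } object
// passed to onStreamComplete.
function accumulateDeltas(chunks) {
  let content = "";
  let thinking = "";
  for (const chunk of chunks) {
    const delta = chunk.choices?.[0]?.delta || {};
    if (delta.content) content += delta.content;
    if (delta.reasoning_content) thinking += delta.reasoning_content;
    if (delta.thinking) thinking += delta.thinking;
    for (const part of chunk.candidates?.[0]?.content?.parts || []) {
      if (part.thought) thinking += part.text || "";
      else if (part.text) content += part.text;
    }
  }
  return { content, thinking };
}
```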

- chatCore.js: Update response structure to include thinking field
  - Non-streaming: Extract thinking from reasoning_content field
  - Streaming: Receive { content, thinking } from stream callback
  - Error responses: Include thinking: null
  - Initial streaming save: Include thinking: null

**Frontend Changes**:
- RequestDetailsTab.js: Display thinking and content in separate sections
  - Add amber/yellow themed "Thinking Process" section with psychology icon
  - Show "Final Response" label when thinking is present
  - Use distinct visual styling for thinking (amber bg) vs content (gray bg)
  - Only show thinking section when thinking content exists

**Benefits**:
- Users can clearly see model's reasoning process vs final answer
- Better debugging for models with thinking capabilities (Claude, o1, etc.)
- Visual distinction makes it easy to identify thinking vs response

Refs: #thinking-content-separation

* fix: map Claude thinking to reasoning_content field

Fix Claude thinking content to be properly captured as reasoning_content
instead of regular content, enabling separate display in Request Details:

**Changes**:
- claude-to-openai.js: Use reasoning_content field for thinking blocks
  - thinking start: send { reasoning_content: "" } instead of { content: "```\n```" }
  - thinking delta: map to reasoning_content instead of content
  - thinking stop: send { reasoning_content: "" } instead of { content: "```\n```" }

**Why This Matters**:
- Previously Claude thinking was sent as `content` field, mixed with actual response
- Now thinking uses `reasoning_content` field, matching OpenAI's o1 format
- stream.js can now properly route thinking to accumulatedThinking variable
- Request Details UI will show Claude thinking in separate "Thinking Process" section

**Supported Thinking Formats**:
- OpenAI: delta.reasoning_content → thinking
- Claude: delta.thinking → reasoning_content (now fixed)
- Gemini: part.thought === true → thinking
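The corrected mapping can be sketched like this (event shapes are simplified from Claude's streaming format; the real translator handles start/stop events too):

```javascript
// Sketch: a Claude thinking_delta event becomes an OpenAI-style delta
// carrying reasoning_content rather than content.
function mapClaudeDelta(event) {
  if (event.type !== "content_block_delta") return {};
  if (event.delta?.type === "thinking_delta") {
    return { reasoning_content: event.delta.thinking };
  }
  if (event.delta?.type === "text_delta") {
    return { content: event.delta.text };
  }
  return {};
}
```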

Refs: #claude-thinking-fix

* feat(observability): capture and display full 4-layer request chain

Capture complete request/response chain in AI Request Details:
- Add providerRequest field (translated request sent to provider)
- Add providerResponse field (raw provider response, streaming indicator)
- Update chatCore.js at all 5 saveRequestDetail() call sites
- Reorganize UI into 4 collapsible sections with Material icons
- Preserve backward compatibility for old records
- Add distinct styling for streaming indicator

* fix(observability): resolve React duplicate key warning in request details table

- Use composite key (detail.id + index) to ensure unique keys
- Prevents React warnings when database contains duplicate IDs from old ID generation

* fix(observability): display actual content in streaming request details

Change providerResponse field for streaming requests from placeholder
"[Streaming - raw response not captured]" to actual final content.

This improves debugging experience by showing the real AI response
in the "Provider Response (Raw)" section instead of a confusing
placeholder message.

Files changed:
- open-sse/handlers/chatCore.js: Save contentObj.content to providerResponse
- src/app/.../RequestDetailsTab.js: Remove special handling for placeholder

* refactor(observability): migrate request details to SQLite for improved concurrency

- Replace LowDB JSON storage with better-sqlite3
- Enable WAL mode for true concurrent read/write support
- Add 5 indexes to accelerate queries (timestamp, provider, model, connection_id, status)
- Perform pagination at the database level to reduce memory footprint
- Maintain 1000 record limit with automatic cleanup of old data
- Ensure API compatibility via re-exports, requiring no caller changes

Performance improvements:
- Concurrent Writes: Lock-free WAL mode prevents data contention
- Query Efficiency: Index-based searches replace full dataset loading
- Data Integrity: Atomic operations prevent file corruption

* fix(observability): resolve pagination statistics display issues

- Fix issue where totalItems=0 showed 'Showing 1 to 0 of 0 results'
- Hide pagination controls when totalItems=0 or totalPages<=1
- Standardize API response fields: pagination.total -> pagination.totalItems

Before: Incorrect stats shown for empty data, and pager visible even for single-page results
After: Stats hidden for empty data, pager hidden when navigation is unnecessary

* feat(observability): display friendly provider names in request details

- Add /api/usage/providers endpoint to dynamically fetch provider list with names
- Replace hardcoded provider options with dynamic loading from database
- Display friendly provider names instead of IDs in both table and detail drawer
- Support custom provider nodes (e.g., OpenAI-compatible) with user-defined names
- Add provider name caching to optimize performance

* fix(observability): use INSERT OR REPLACE for request details to handle streaming updates

* fix(observability): resolve zero-token display issue by ensuring streaming usage capture and fixing key mismatch

* fix(observability): separate TTFT and total latency calculation for streaming requests

* feat(observability): implement SQLite write queue and JSON size limits

- Added in-memory buffer and batch writing for SQLite to prevent lock contention
- Implemented JSON size limits with a configurable 1MB cap to prevent DB bloat
- Added dashboard UI for observability performance and data management settings
- Integrated graceful shutdown handlers to prevent data loss
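The buffer-and-batch pattern described above could be sketched as follows; `flush` stands in for the real SQLite batch insert, and all names here are illustrative:

```javascript
// Sketch: records are queued in memory and flushed in batches, so
// individual requests never wait on a database write.
class WriteQueue {
  constructor(flush, { batchSize = 50, intervalMs = 1000 } = {}) {
    this.flush = flush;
    this.batchSize = batchSize;
    this.buffer = [];
    this.timer = setInterval(() => this.drain(), intervalMs);
  }
  push(record) {
    this.buffer.push(record);
    if (this.buffer.length >= this.batchSize) this.drain();
  }
  drain() {
    if (this.buffer.length === 0) return;
    const batch = this.buffer.splice(0, this.buffer.length);
    this.flush(batch);
  }
  close() {
    // graceful shutdown: stop the timer and flush remaining records
    clearInterval(this.timer);
    this.drain();
  }
}
```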

* fix(observability): resolve ReferenceError by declaring dbInstance
2026-02-09 10:30:42 +07:00
decolua
388389c972 Revert "feat(request-details): implement observability settings and enhance request detail tracking"
This reverts commit cbabf5547c.
2026-02-09 10:29:38 +07:00
decolua
cbabf5547c feat(request-details): implement observability settings and enhance request detail tracking
- Added new observability settings in the dashboard for max records, batch size, flush interval, and max JSON size.
- Introduced `extractRequestConfig` function to capture full request configurations.
- Enhanced error handling by saving detailed request information on failures.
- Updated usage tracking to include new token metrics.
- Modified streaming functions to support detailed content and reasoning tracking.
2026-02-09 10:20:24 +07:00
Diego Souza
3d439839d9 feat(cloud): harden sync/auth flow, SSE fallback, and update changelog
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-08 16:45:31 +07:00
decolua
bdbe8162e7 feat(provider): add free providers and enhance error handling 2026-02-07 11:17:06 +07:00
Hellodebasishsahu
8c6e3b8b62 fix(codex): use user-agent detection for Droid CLI compatibility
The previous merge used sourceFormat check which broke Cursor when it
sends openai-responses format requests. Now uses user-agent detection:
- Droid CLI (user-agent contains 'droid' or 'codex-cli') → passthrough
- Other clients (Cursor, etc.) → translate to Chat Completions format

This fixes the API translation for both clients.
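The user-agent check can be sketched like this (return values are illustrative labels, not the router's actual control flow):

```javascript
// Sketch: Droid CLI traffic passes through unchanged; everything else
// is translated to the Chat Completions format.
function routeByUserAgent(userAgent = "") {
  const ua = userAgent.toLowerCase();
  return ua.includes("droid") || ua.includes("codex-cli")
    ? "passthrough"
    : "translate";
}
```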

Co-authored-by: Hellodebasishsahu <itsyourboydevil@gmail.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 15:02:56 +07:00
Hellodebasishsahu
127475df84 feat(codex): add GPT 5.3, fix API translation, add thinking levels
- Add GPT 5.3 Codex model with thinking level variants (none/low/medium/high/xhigh)
- Extract thinking level from model name suffix (e.g., gpt-5.3-codex-high)
- Fix Codex translation: preserve openai-responses format for Droid CLI
- Add effort level logging in request logs
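Extracting the thinking level from a model-name suffix might look like this sketch (function name and return shape are assumptions):

```javascript
// Sketch: strip a trailing thinking-level variant
// (none/low/medium/high/xhigh) off the model name.
function parseThinkingLevel(model) {
  const m = model.match(/^(.*)-(none|low|medium|high|xhigh)$/);
  return m ? { model: m[1], level: m[2] } : { model, level: null };
}
```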

Co-authored-by: Cursor <cursoragent@cursor.com>
2026-02-06 09:46:11 +07:00
decolua
3c65e0c5f2 feat: Improve Antigravity quota monitoring and fix Droid CLI compatibility 2026-02-05 22:15:26 +07:00
decolua
e6e44ac364 fix: usage conversion 2026-02-04 09:54:11 +07:00
decolua
7881db81ec feat: Implement buffer addition to usage tracking for improved context handling 2026-02-03 10:39:20 +07:00
decolua
df0e1d6485 feat: Update response handling and logging for improved usage tracking 2026-02-03 10:22:43 +07:00
decolua
fa06226972 fix: Correct indentation for clarity in chatCore and claude-to-openai response handlers 2026-02-02 19:47:09 +07:00
decolua
0a28f9f924 feat: Add OpenAI-compatible provider nodes
- Support multiple OpenAI-compatible providers with custom prefix/baseUrl
- Add provider nodes CRUD (create/read/update/delete)
- URL building: baseUrl + /chat/completions or /responses
- Model import from /models endpoint
- API key validation via /models
- Usage type safety across all translators
- OAuth token auto-refresh for expired tokens
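The URL building step above can be sketched as (function name is illustrative):

```javascript
// Sketch: join a provider node's baseUrl with the endpoint path,
// tolerating a trailing slash on baseUrl.
function buildProviderUrl(baseUrl, endpoint = "/chat/completions") {
  return baseUrl.replace(/\/+$/, "") + endpoint;
}
```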
2026-02-02 19:45:12 +07:00
decolua
7b864a9dcb feat(codex): Cursor compatibility + Next.js 16 proxy migration
- Force streaming for Codex/OpenAI models to fix non-streaming bug
- Strip unsupported params (user, metadata, stream_options, prompt_cache_retention)
- Force response translation from openai-responses to openai format
- Migrate middleware.js to proxy.js for Next.js 16
- Use webpack explicitly in dev/build scripts
- Updated Codex User-Agent
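The parameter stripping above can be sketched as follows (the field list comes from the commit message; the helper name is an assumption):

```javascript
// Sketch: remove fields the Codex backend rejects before forwarding.
const UNSUPPORTED = ["user", "metadata", "stream_options", "prompt_cache_retention"];
function stripUnsupportedParams(body) {
  const out = { ...body };
  for (const key of UNSUPPORTED) delete out[key];
  return out;
}
```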
2026-02-02 09:17:15 +07:00
decolua
63f2da87b0 Implement non-streaming response translation for multiple formats in chatCore.js 2026-01-29 23:38:30 +07:00
decolua
79342c0c3e ok 2026-01-27 10:49:16 +07:00
decolua
26b61e5fbb Feat Kiro OAuth, Fix Codex 2026-01-15 18:29:47 +07:00
decolua
c208f244ee Enhance chat handling. 2026-01-14 15:42:38 +07:00
decolua
f9ef718fc6 Fix Antigravity 2026-01-13 17:19:41 +07:00
decolua
f46ff42cb3 refactor: streamline provider interactions and enhance error handling 2026-01-11 21:45:01 +07:00
decolua
e4769070b3 feat: add request logging functionality and usage metrics display 2026-01-09 17:40:59 +07:00
Catalin Stanciu
e4f92cd104 feat: implement request tracking and enhance usage stats display 2026-01-09 17:37:10 +07:00
Catalin Stanciu
9c3d6f4ad8 feat: implement usage tracking for AI requests
Adds local token usage tracking for all AI providers. Usage data is
captured during stream processing and stored in a local database.
Includes a new Usage tab in the Providers dashboard to visualize
historical token consumption.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-09 17:23:28 +07:00
decolua
18533505ef refactor: restructure translator from from-openai/to-openai to request/response folders 2026-01-09 17:14:51 +07:00
decolua
f0698eee9d Refactor error handling in chatCore.js and update formatProviderError function to include status code. This improves error message clarity by incorporating HTTP status codes in the formatted output. 2026-01-05 10:57:45 +07:00
decolua
e35421beb1 Update jsconfig.json and package.json to correct open-sse path references from relative to local directory. 2026-01-05 10:37:09 +07:00