mirror of https://github.com/decolua/9router.git
synced 2026-05-08 12:01:28 +00:00
feat: OpenAI compatibility improvements & build fixes
- Fix hydration mismatches and initialization errors
- Add /v1/models endpoint for OpenAI clients
- Add Codex response translator (Responses → OpenAI)
- Fix circular dependencies and PropTypes
- Add Material Symbols font and CSS fixes
- Update README with deployment guide

Co-merged from PR #18 (14/15 commits, skipped debug)
This commit is contained in:
7 .vscode/settings.json (vendored)
```diff
@@ -1,3 +1,8 @@
 {
-  "css.lint.unknownAtRules": "ignore"
+  "css.lint.unknownAtRules": "ignore",
+  "sonarlint.rules": {
+    "css:S4662": {
+      "level": "off"
+    }
+  }
 }
```
310 README.md
````diff
@@ -1,8 +1,8 @@
-# 🚀 9ROUTER - AI Proxy
+# 9ROUTER - AI Proxy

 > Universal AI Proxy for Claude Code, Codex, Cursor | OpenAI, Claude, Gemini, Copilot

-🌐 **Website: [9router.com](https://9router.com)**
+**Website: [9router.com](https://9router.com)**

 [](https://www.npmjs.com/package/9router)
````
````diff
@@ -12,44 +12,64 @@ A JavaScript port of CLIProxyAPI with web dashboard.

-## 📖 Introduction
+## Introduction

 **9Router** is a powerful AI API proxy server that provides unified access to multiple AI providers through a single endpoint. It features automatic format translation, intelligent fallback routing, OAuth authentication, and a modern web dashboard for easy management.

 **Key Highlights:**
 - **JavaScript Port**: Converted from CLIProxyAPI (Go) to JavaScript/Node.js.
-- **Universal CLI Support**: Works seamlessly with Claude Code, OpenAI Codex, Cline, RooCode, AmpCode, and other CLI tools
+- **Universal CLI Support**: Works seamlessly with Claude Code, OpenAI Codex, Cline, RooCode, AmpCode, Kilo, and other CLI tools
 - **Cross-Platform**: Runs on Windows, Linux, and macOS
-- **Easy Deployment**: Simple installation via npx, or deploy to VPS
+- **Easy Deployment**: Simple installation via npx, or deploy to VPS/Dokploy

-## ✨ Features
+## Recent Updates
+
+### v0.2.27
+- **OpenAI Responses API Support**: Full support for Codex CLI streaming via the Responses API format
+- **`/v1/models` Endpoint**: OpenAI-compatible models endpoint for client discovery
+- **Combo Support in Models**: Model combos now appear in the `/v1/models` endpoint
+- **Improved Usage Tracking**: Better handling of request status for streaming responses
+- **Kiro (AWS CodeWhisperer) Support**: New provider integration
+
+### Provider Support
+| Provider | Alias | Auth Type | Format |
+|----------|-------|-----------|--------|
+| Claude (Anthropic) | `cc` | OAuth | Claude |
+| Codex (OpenAI) | `cx` | OAuth | Responses API |
+| Gemini CLI | `gc` | OAuth | Gemini CLI |
+| Antigravity (Google) | `ag` | OAuth | Antigravity |
+| GitHub Copilot | `gh` | OAuth | OpenAI |
+| Qwen | `qw` | OAuth | OpenAI |
+| iFlow | `if` | OAuth | OpenAI |
+| Kiro (AWS) | `kr` | OAuth | Kiro |
+| OpenAI | `openai` | API Key | OpenAI |
+| Anthropic | `anthropic` | API Key | Claude |
+| Gemini | `gemini` | API Key | Gemini |
+| OpenRouter | `openrouter` | API Key | OpenAI |
````
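The `Alias` column above doubles as the routing prefix in model ids (for example `cx/gpt-5` routes to Codex). A minimal sketch of how such an id could be split into its parts — the helper name is illustrative, not taken from the codebase:

```javascript
// Split a 9router-style model id into provider alias and model name.
// Ids without a slash are combos or bare model names.
function splitModelId(id) {
  const slash = id.indexOf("/");
  if (slash === -1) return { alias: null, model: id };
  return { alias: id.slice(0, slash), model: id.slice(slash + 1) };
}

const parsed = splitModelId("cx/gpt-5");
// parsed.alias is "cx", parsed.model is "gpt-5"
```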
````diff
+## Features

 ### Core Features
-- **🔄 Multi-Provider Support**: Unified endpoint for 15+ AI providers (Claude, OpenAI, Gemini, GitHub Copilot, Qwen, iFlow, DeepSeek, Kimi, MiniMax, GLM, etc.)
-- **🔐 OAuth & API Key Authentication**: Supports both OAuth2 flow and API key authentication
-- **🎯 Format Translation**: Automatic request/response translation between OpenAI, Claude, Gemini, Codex, and Ollama formats
-- **🌐 Web Dashboard**: Beautiful React-based dashboard for managing providers, combos, API keys, and settings
-- **📊 Usage Tracking**: Real-time monitoring and analytics for all API requests
+- **Multi-Provider Support**: Unified endpoint for 15+ AI providers
+- **OAuth & API Key Authentication**: Supports both OAuth2 flow and API key authentication
+- **Format Translation**: Automatic request/response translation between OpenAI, Claude, Gemini, Codex, and Kiro formats
+- **Web Dashboard**: React-based dashboard for managing providers, combos, API keys, and settings
+- **Usage Tracking**: Real-time monitoring and analytics for all API requests

 ### Advanced Features
-- **🎲 Combo System**: Create model combos with automatic fallback support
-- **♻️ Intelligent Fallback**: Automatic account rotation when rate limits or errors occur
-- **⚡ Response Caching**: Optimized caching for Claude Code (1-hour cache vs default 5 minutes)
-- **🔧 Model Aliases**: Create custom model aliases for easier management
-- **☁️ Cloud Deployment**: Deploy to Cloud for Cursor IDE integration with global edge performance
+- **Combo System**: Create model combos with automatic fallback support
+- **Intelligent Fallback**: Automatic account rotation when rate limits or errors occur
+- **Response Caching**: Optimized caching for Claude Code
+- **Model Aliases**: Create custom model aliases for easier management

 ### Format Support
 - **OpenAI Format**: Standard OpenAI Chat Completions API
-- **OpenAI Responses API**: Codex CLI format with streaming
 - **Claude Format**: Anthropic Messages API
 - **Gemini Format**: Google Generative AI API
+- **OpenAI Responses API**: Codex CLI format
 - **Ollama Format**: Compatible with Ollama-based tools
+- **Kiro Format**: AWS CodeWhisperer format

 ### CLI Integration
 - Works with: Cursor, Claude Code, OpenAI Codex, Cline, RooCode, AmpCode, and more
 - Seamless integration with popular AI coding assistants

-## 📦 Install
+## Install

 ```bash
 # Install globally
````
````diff
@@ -60,7 +80,7 @@ npm install -g 9router
 npx 9router
 ```

-## 🚀 Quick Start
+## Quick Start

 ```bash
 9router  # Start server with default settings
````
````diff
@@ -72,13 +92,204 @@ npx 9router

 **Dashboard**: `http://localhost:20128/dashboard`

-## 💾 Data Location
+## Remote Deployment
+
+### Environment Variables
+
+Configure these environment variables for remote deployment:
+
+| Variable | Required | Default | Description |
+|----------|----------|---------|-------------|
+| `DATA_DIR` | No | `~/.9router` | Custom data directory path for database storage |
+| `JWT_SECRET` | **Yes** | `9router-default-secret-change-me` | Secret key for JWT authentication. **Change this in production!** |
+| `INITIAL_PASSWORD` | No | `123456` | Initial dashboard login password |
+| `API_KEY_SECRET` | No | Auto-generated | Secret for API key generation/validation |
+| `MACHINE_ID_SALT` | No | Auto-generated | Salt for machine ID hashing |
+| `NEXT_PUBLIC_BASE_URL` | No | `http://localhost:3000` | Public base URL of your deployment |
+| `NEXT_PUBLIC_CLOUD_URL` | No | `https://9router.com` | Cloud sync URL (for cloud features) |
+| `ENABLE_REQUEST_LOGS` | No | `false` | Enable detailed request/response logging to files |
+| `NODE_ENV` | No | `development` | Set to `production` for production deployments |
````
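Since `JWT_SECRET` ships with a known default, a small startup check along these lines can catch unchanged secrets before they reach production. This is an illustrative sketch, not part of 9router itself:

```javascript
// Hypothetical pre-flight check for the environment variables in the
// table above; the default values are the ones the README documents.
const DEFAULT_JWT_SECRET = "9router-default-secret-change-me";

function checkEnv(env) {
  const problems = [];
  if (!env.JWT_SECRET || env.JWT_SECRET === DEFAULT_JWT_SECRET) {
    problems.push("JWT_SECRET is unset or still the default value");
  }
  if (env.NODE_ENV === "production" && env.INITIAL_PASSWORD === "123456") {
    problems.push("INITIAL_PASSWORD is still the default value");
  }
  return problems;
}

const problems = checkEnv({
  JWT_SECRET: DEFAULT_JWT_SECRET,
  NODE_ENV: "production",
  INITIAL_PASSWORD: "123456",
});
// problems lists both default-secret findings
```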
````diff
+### Deploying to Dokploy
+
+1. **Create a new application** in Dokploy
+2. **Connect your Git repository** or use Docker
+3. **Set environment variables** in Dokploy's settings:
+
+```env
+JWT_SECRET=your-secure-random-secret-here
+INITIAL_PASSWORD=your-secure-password
+DATA_DIR=/app/data
+NODE_ENV=production
+```
+
+4. **Build command**: `npm run build`
+5. **Start command**: `npm run start`
+6. **Port**: `3000` (or configure via `PORT` env var)
+
+### Deploying to VPS (Manual)
+
+```bash
+# Clone the repository
+git clone https://github.com/yourusername/9router.git
+cd 9router
+
+# Install dependencies
+npm install
+
+# Set environment variables
+export JWT_SECRET="your-secure-random-secret"
+export INITIAL_PASSWORD="your-secure-password"
+export DATA_DIR="/var/lib/9router"
+export NODE_ENV="production"
+
+# Build the application
+npm run build
+
+# Start the server
+npm run start
+```
+
+### Using Docker
+
+Create a `Dockerfile`:
+
+```dockerfile
+FROM node:20-alpine
+
+WORKDIR /app
+
+COPY package*.json ./
+RUN npm ci --only=production
+
+COPY . .
+RUN npm run build
+
+ENV NODE_ENV=production
+ENV DATA_DIR=/app/data
+
+EXPOSE 3000
+
+CMD ["npm", "run", "start"]
+```
+
+Build and run:
+
+```bash
+docker build -t 9router .
+docker run -d \
+  -p 3000:3000 \
+  -e JWT_SECRET="your-secure-secret" \
+  -e INITIAL_PASSWORD="your-password" \
+  -v 9router-data:/app/data \
+  9router
+```
````
````diff
+### Using with Reverse Proxy (Nginx)
+
+```nginx
+server {
+    listen 80;
+    server_name your-domain.com;
+
+    location / {
+        proxy_pass http://localhost:3000;
+        proxy_http_version 1.1;
+        proxy_set_header Upgrade $http_upgrade;
+        proxy_set_header Connection 'upgrade';
+        proxy_set_header Host $host;
+        proxy_set_header X-Real-IP $remote_addr;
+        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
+        proxy_set_header X-Forwarded-Proto $scheme;
+        proxy_cache_bypass $http_upgrade;
+
+        # SSE support
+        proxy_buffering off;
+        proxy_read_timeout 86400;
+    }
+}
+```
````
````diff
+## API Endpoints
+
+### Chat Completions
+```
+POST /v1/chat/completions
+```
+OpenAI-compatible chat completions endpoint. Supports all providers with automatic format translation.
+
+### Models List
+```
+GET /v1/models
+```
+Returns available models in OpenAI-compatible format, including combos.
+
+### Responses API (Codex)
+```
+POST /v1/responses
+POST /codex/responses
+```
+OpenAI Responses API endpoint for Codex CLI compatibility.
````
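A request against the chat completions endpoint above is a standard OpenAI-style payload; the `model` field carries the `alias/model` (or combo) name and the dashboard-issued API key goes in the `Authorization` header. A sketch of the payload shape, with hypothetical key and model values:

```javascript
// Build the request pieces for POST /v1/chat/completions.
// "sk-example" and "cc/claude-sonnet" are placeholders, not real values.
function buildChatRequest(model, prompt, apiKey) {
  return {
    url: "/v1/chat/completions",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
      stream: true, // 9router streams via SSE
    },
  };
}

const req = buildChatRequest("cc/claude-sonnet", "hello", "sk-example");
// req.body.model carries the alias-prefixed model id
```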
````diff
+## CLI Integration Examples
+
+### Claude Code
+```bash
+# Set your 9router endpoint
+export ANTHROPIC_BASE_URL="http://your-server:3000/v1"
+
+# Use with Claude Code
+claude
+```
+
+### Codex CLI
+```bash
+# Configure Codex to use 9router
+export OPENAI_BASE_URL="http://your-server:3000"
+
+# Use Codex
+codex
+```
+
+### Cursor IDE
+Configure in Cursor settings:
+- API Base URL: `http://your-server:3000/v1`
+- Use your generated API key from the dashboard
````
````diff
+## Debugging
+
+### Enable Request Logging
+
+Set the environment variable to capture full request/response data:
+
+```bash
+ENABLE_REQUEST_LOGS=true npm run start
+```
+
+Logs are saved to the `logs/` directory with the format:
+```
+logs/
+└── {sourceFormat}_{targetFormat}_{model}_{timestamp}/
+    ├── 1_client_raw_request.json
+    ├── 2_raw_request.json
+    ├── 3_converted_request.json
+    ├── 4_provider_response.txt
+    ├── 5_converted_response.txt
+    └── 6_error.json (if error occurred)
+```
+
+### Console Debug Logging
+
+The application includes debug logging for troubleshooting provider issues. Check your container/server logs for `[DEBUG]`-prefixed messages.
````
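The log directory scheme above can be sketched as a small helper. Note this is an assumption about how the pieces combine, not the project's actual implementation; in particular, slashes in alias-prefixed model ids would need escaping to stay path-safe:

```javascript
// Illustrative builder for the {sourceFormat}_{targetFormat}_{model}_{timestamp}
// directory name described above. The slash replacement is an assumption.
function logDirName(sourceFormat, targetFormat, model, timestamp) {
  const safeModel = model.replaceAll("/", "-"); // "cx/gpt-5" -> "cx-gpt-5"
  return `logs/${sourceFormat}_${targetFormat}_${safeModel}_${timestamp}`;
}

const dir = logDirName("openai", "openai-responses", "cx/gpt-5", 1700000000);
// dir is "logs/openai_openai-responses_cx-gpt-5_1700000000"
```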
````diff
+## Data Location
+
 User data stored at:
-- macOS/Linux: `~/.9router/db.json`
-- Windows: `%APPDATA%/9router/db.json`
+- **macOS/Linux**: `~/.9router/db.json`
+- **Windows**: `%APPDATA%/9router/db.json`
+- **Custom**: Set `DATA_DIR` environment variable

-## 🛠️ Development
+## Development

 ### Setup
 ```bash
````
````diff
@@ -102,14 +313,18 @@ npm run dev
 │   ├── shared/       # Shared components & utilities
 │   └── sse/          # SSE streaming handlers
 ├── open-sse/         # Core proxy engine (translator, handlers)
-│   ├── translator/   # Format translators
+│   ├── translator/   # Format translators (request/response)
+│   │   ├── request/  # Request translators
+│   │   └── response/ # Response translators
 │   ├── handlers/     # Request handlers
-│   ├── services/     # Core services
+│   ├── executors/    # Provider-specific executors
+│   ├── services/     # Core services (fallback, token refresh)
 │   └── config/       # Provider configurations
 ├── tester/           # Testing utilities
 └── public/           # Static assets
 ```

-## 🧰 Tech Stack
+## Tech Stack

 | Layer | Technology |
 |-------|------------|
````
````diff
@@ -120,21 +335,44 @@ npm run dev
 | **CLI** | Node.js CLI with auto-update |
 | **Streaming** | Server-Sent Events (SSE) |
 | **Auth** | OAuth 2.0 (PKCE) + API Keys |
-| **Deployment** | Standalone / VPS |
+| **Deployment** | Standalone / VPS / Docker |
+| **State Management** | Zustand |

 ### Core Libraries
 - **lowdb**: Lightweight JSON database
 - **undici**: High-performance HTTP client
 - **uuid**: Unique identifier generation
 - **open**: Cross-platform browser launcher
 - **jose**: JWT handling
 - **bcryptjs**: Password hashing

-## 🙏 Acknowledgments
+## Troubleshooting
+
+### "The language model did not provide any assistant messages"
+
+This error typically means the upstream provider returned an empty or malformed response. Check:
+1. Your provider credentials are valid and not rate-limited
+2. The model name is correct (e.g., `ag/gemini-3-pro-high`)
+3. Enable debug logging to see the actual provider response
+
+### OAuth Token Expired
+
+OAuth tokens are automatically refreshed. If you see authentication errors:
+1. Re-authenticate via the dashboard
+2. Check if the provider's OAuth credentials are still valid
+
+### Rate Limiting
+
+9Router implements automatic fallback when rate limits are hit:
+1. Add multiple accounts for the same provider
+2. Configure account priorities in the dashboard
+3. Use combos to fall back between different providers
+
+## Acknowledgments

 Special thanks to:

-- **CLIProxyAPI**: The original Go implementation that inspired this project. 9Router is a JavaScript port with some features and web dashboard.
+- **CLIProxyAPI**: The original Go implementation that inspired this project. 9Router is a JavaScript port with additional features and a web dashboard.

-## 📄 License
+## License

 MIT License - see [LICENSE](LICENSE) for details.
````
```diff
@@ -23,6 +23,16 @@ export class CodexExecutor extends BaseExecutor {
     // Ensure store is false (Codex requirement)
     body.store = false;

+    // Remove unsupported parameters for Codex API
+    delete body.temperature;
+    delete body.top_p;
+    delete body.frequency_penalty;
+    delete body.presence_penalty;
+    delete body.logprobs;
+    delete body.top_logprobs;
+    delete body.n;
+    delete body.seed;
+
     return body;
   }
 }
```
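The change above drops sampling parameters the Codex API rejects and forces `store: false`. A standalone sketch of the same idea, as a pure function rather than the executor method (the helper name is illustrative):

```javascript
// Parameters the diff above deletes before forwarding a request to Codex.
const UNSUPPORTED_CODEX_PARAMS = [
  "temperature", "top_p", "frequency_penalty", "presence_penalty",
  "logprobs", "top_logprobs", "n", "seed",
];

function sanitizeCodexBody(body) {
  const out = { ...body, store: false }; // Codex requires store: false
  for (const key of UNSUPPORTED_CODEX_PARAMS) delete out[key];
  return out;
}

const cleaned = sanitizeCodexBody({ model: "gpt-5", temperature: 0.7, store: true });
// cleaned keeps `model`, forces `store: false`, and drops `temperature`
```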
```diff
@@ -355,6 +355,153 @@ function flushEvents(state) {
   return events;
 }

-// Register
-register(FORMATS.OPENAI, FORMATS.OPENAI_RESPONSES, null, openaiToOpenAIResponsesResponse);
+/**
+ * Translate OpenAI Responses API chunk to OpenAI Chat Completions format
+ * This is for when Codex returns data and we need to send it to an OpenAI-compatible client
+ */
+function openaiResponsesToOpenAIResponse(chunk, state) {
+  if (!chunk) {
+    // Flush: send final chunk with finish_reason
+    if (!state.finishReasonSent && state.started) {
+      state.finishReasonSent = true;
+      return {
+        id: state.chatId || `chatcmpl-${Date.now()}`,
+        object: "chat.completion.chunk",
+        created: state.created || Math.floor(Date.now() / 1000),
+        model: state.model || "gpt-4",
+        choices: [{
+          index: 0,
+          delta: {},
+          finish_reason: "stop"
+        }]
+      };
+    }
+    return null;
+  }
+
+  // Handle different event types from Responses API
+  const eventType = chunk.type || chunk.event;
+  const data = chunk.data || chunk;
+
+  // Initialize state
+  if (!state.started) {
+    state.started = true;
+    state.chatId = `chatcmpl-${Date.now()}`;
+    state.created = Math.floor(Date.now() / 1000);
+    state.toolCallIndex = 0;
+    state.currentToolCallId = null;
+  }
+
+  // Text content delta
+  if (eventType === "response.output_text.delta") {
+    const delta = data.delta || "";
+    if (!delta) return null;
+
+    return {
+      id: state.chatId,
+      object: "chat.completion.chunk",
+      created: state.created,
+      model: state.model || "gpt-4",
+      choices: [{
+        index: 0,
+        delta: { content: delta },
+        finish_reason: null
+      }]
+    };
+  }
+
+  // Text content done (ignore, we handle via delta)
+  if (eventType === "response.output_text.done") {
+    return null;
+  }
+
+  // Function call started
+  if (eventType === "response.output_item.added" && data.item?.type === "function_call") {
+    const item = data.item;
+    state.currentToolCallId = item.call_id || `call_${Date.now()}`;
+
+    return {
+      id: state.chatId,
+      object: "chat.completion.chunk",
+      created: state.created,
+      model: state.model || "gpt-4",
+      choices: [{
+        index: 0,
+        delta: {
+          tool_calls: [{
+            index: state.toolCallIndex,
+            id: state.currentToolCallId,
+            type: "function",
+            function: {
+              name: item.name || "",
+              arguments: ""
+            }
+          }]
+        },
+        finish_reason: null
+      }]
+    };
+  }
+
+  // Function call arguments delta
+  if (eventType === "response.function_call_arguments.delta") {
+    const argsDelta = data.delta || "";
+    if (!argsDelta) return null;
+
+    return {
+      id: state.chatId,
+      object: "chat.completion.chunk",
+      created: state.created,
+      model: state.model || "gpt-4",
+      choices: [{
+        index: 0,
+        delta: {
+          tool_calls: [{
+            index: state.toolCallIndex,
+            function: { arguments: argsDelta }
+          }]
+        },
+        finish_reason: null
+      }]
+    };
+  }
+
+  // Function call done
+  if (eventType === "response.output_item.done" && data.item?.type === "function_call") {
+    state.toolCallIndex++;
+    return null;
+  }
+
+  // Response completed
+  if (eventType === "response.completed") {
+    if (!state.finishReasonSent) {
+      state.finishReasonSent = true;
+      return {
+        id: state.chatId,
+        object: "chat.completion.chunk",
+        created: state.created,
+        model: state.model || "gpt-4",
+        choices: [{
+          index: 0,
+          delta: {},
+          finish_reason: "stop"
+        }]
+      };
+    }
+    return null;
+  }
+
+  // Reasoning events (convert to content or skip)
+  if (eventType === "response.reasoning_summary_text.delta") {
+    // Optionally include reasoning as content, or skip
+    return null;
+  }
+
+  // Ignore other events
+  return null;
+}
+
+// Register both directions
+register(FORMATS.OPENAI, FORMATS.OPENAI_RESPONSES, null, openaiToOpenAIResponsesResponse);
+register(FORMATS.OPENAI_RESPONSES, FORMATS.OPENAI, null, openaiResponsesToOpenAIResponse);
```
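The core of the translator above is the per-event mapping: a Responses API `response.output_text.delta` event becomes an OpenAI `chat.completion.chunk` carrying a content delta. A minimal self-contained sketch of just that branch (not the full module, which also tracks tool calls and flush state):

```javascript
// Map one Responses API text-delta event to a Chat Completions chunk.
function textDeltaToChatChunk(event, state) {
  if (event.type !== "response.output_text.delta" || !event.delta) return null;
  return {
    id: state.chatId,
    object: "chat.completion.chunk",
    created: state.created,
    model: state.model || "gpt-4",
    choices: [{ index: 0, delta: { content: event.delta }, finish_reason: null }],
  };
}

const chunk = textDeltaToChatChunk(
  { type: "response.output_text.delta", delta: "Hello" },
  { chatId: "chatcmpl-1", created: 1700000000, model: "cx/gpt-5" }
);
// chunk.choices[0].delta.content is "Hello"
```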
```diff
@@ -7,7 +7,7 @@ function getTimeString() {
   return new Date().toLocaleTimeString("en-US", { hour12: false, hour: "2-digit", minute: "2-digit", second: "2-digit" });
 }

-// Extract usage from any format (Claude, OpenAI, Gemini)
+// Extract usage from any format (Claude, OpenAI, Gemini, Responses API)
 function extractUsage(chunk) {
   // Claude format (message_delta event)
   if (chunk.type === "message_delta" && chunk.usage) {
@@ -18,6 +18,16 @@ function extractUsage(chunk) {
       cache_creation_input_tokens: chunk.usage.cache_creation_input_tokens
     };
   }
+  // OpenAI Responses API format (response.completed or response.done)
+  if ((chunk.type === "response.completed" || chunk.type === "response.done") && chunk.response?.usage) {
+    const usage = chunk.response.usage;
+    return {
+      prompt_tokens: usage.input_tokens || usage.prompt_tokens || 0,
+      completion_tokens: usage.output_tokens || usage.completion_tokens || 0,
+      cached_tokens: usage.input_tokens_details?.cached_tokens,
+      reasoning_tokens: usage.output_tokens_details?.reasoning_tokens
+    };
+  }
   // OpenAI format
   if (chunk.usage?.prompt_tokens !== undefined) {
     return {
@@ -253,7 +263,12 @@ export function createSSEStream(options = {}) {
       reqLogger?.appendConvertedChunk?.(output);
       controller.enqueue(encoder.encode(output));
     }
-    if (usage) logUsage(provider, usage, model, connectionId);
+    if (usage) {
+      logUsage(provider, usage, model, connectionId);
+    } else {
+      // No usage data available - still mark request as completed
+      appendRequestLog({ model, provider, connectionId, tokens: null, status: "200 OK" }).catch(() => {});
+    }
     return;
   }
@@ -287,7 +302,12 @@ export function createSSEStream(options = {}) {
     reqLogger?.appendConvertedChunk?.(doneOutput);
     controller.enqueue(encoder.encode(doneOutput));

-    if (state?.usage) logUsage(state.provider || targetFormat, state.usage, model, connectionId);
+    if (state?.usage) {
+      logUsage(state.provider || targetFormat, state.usage, model, connectionId);
+    } else {
+      // No usage data available - still mark request as completed
+      appendRequestLog({ model, provider, connectionId, tokens: null, status: "200 OK" }).catch(() => {});
+    }
   } catch (error) {
     console.log("Error in flush:", error);
   }
```
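The new `extractUsage` branch above normalizes the Responses API's `input_tokens`/`output_tokens` names to OpenAI's `prompt_tokens`/`completion_tokens`. An isolated, runnable version of just that branch:

```javascript
// Normalize Responses API usage to OpenAI-style field names, mirroring
// the branch added to extractUsage() in the diff above.
function normalizeResponsesUsage(chunk) {
  const done = chunk.type === "response.completed" || chunk.type === "response.done";
  if (!done || !chunk.response?.usage) return null;
  const u = chunk.response.usage;
  return {
    prompt_tokens: u.input_tokens || u.prompt_tokens || 0,
    completion_tokens: u.output_tokens || u.completion_tokens || 0,
    cached_tokens: u.input_tokens_details?.cached_tokens,
    reasoning_tokens: u.output_tokens_details?.reasoning_tokens,
  };
}

const usage = normalizeResponsesUsage({
  type: "response.completed",
  response: { usage: { input_tokens: 12, output_tokens: 34 } },
});
// usage.prompt_tokens is 12, usage.completion_tokens is 34
```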
```diff
@@ -1,6 +1,7 @@
 "use client";

 import { useState, useEffect } from "react";
+import PropTypes from "prop-types";
 import { Card, Button, Input, Modal, CardSkeleton } from "@/shared/components";
 import { useCopyToClipboard } from "@/shared/hooks/useCopyToClipboard";
@@ -198,10 +199,16 @@ export default function APIPageClient({ machineId }) {
     }
   };

-  const baseUrl = typeof window !== "undefined" ? `${window.location.origin}/v1` : "/v1";
+  // New format: /v1 (machineId in key), Old format: /{machineId}/v1
+  const [baseUrl, setBaseUrl] = useState("/v1");
   const cloudEndpointNew = `${CLOUD_URL}/v1`;

+  // Hydration fix: Only access window on client side
+  useEffect(() => {
+    if (typeof window !== "undefined") {
+      setBaseUrl(`${window.location.origin}/v1`);
+    }
+  }, []);
+
   if (loading) {
     return (
       <div className="flex flex-col gap-8">
@@ -601,5 +608,5 @@ export default function APIPageClient({ machineId }) {
 }

 APIPageClient.propTypes = {
-  machineId: import("prop-types").string.isRequired,
+  machineId: PropTypes.string.isRequired,
 };
```
```diff
@@ -159,7 +159,7 @@ export default function ProfilePage() {
   )}

   <div className="pt-2">
-    <Button type="submit" variant="primary" isLoading={passLoading}>
+    <Button type="submit" variant="primary" loading={passLoading}>
       Update Password
     </Button>
   </div>
```
```diff
@@ -1,6 +1,6 @@
 "use client";

-import { useState, useEffect, useMemo } from "react";
+import { useState, useEffect, useMemo, useCallback } from "react";
 import { useParams } from "next/navigation";
 import Link from "next/link";
 import Image from "next/image";
@@ -27,11 +27,7 @@ export default function ProviderDetailPage() {
   const models = getModelsByProviderId(providerId);
   const providerAlias = getProviderAlias(providerId);

-  useEffect(() => {
-    fetchConnections();
-    fetchAliases();
-  }, [fetchConnections, fetchAliases]);
-
+  // Define callbacks BEFORE the useEffect that uses them
   const fetchAliases = useCallback(async () => {
     try {
       const res = await fetch("/api/models/alias");
@@ -44,6 +40,26 @@ export default function ProviderDetailPage() {
     }
   }, []);

+  const fetchConnections = useCallback(async () => {
+    try {
+      const res = await fetch("/api/providers");
+      const data = await res.json();
+      if (res.ok) {
+        const filtered = (data.connections || []).filter(c => c.provider === providerId);
+        setConnections(filtered);
+      }
+    } catch (error) {
+      console.log("Error fetching connections:", error);
+    } finally {
+      setLoading(false);
+    }
+  }, [providerId]);
+
+  useEffect(() => {
+    fetchConnections();
+    fetchAliases();
+  }, [fetchConnections, fetchAliases]);
+
   const handleSetAlias = async (modelId, alias) => {
     const fullModel = `${providerAlias}/${modelId}`;
     try {
@@ -76,21 +92,6 @@ export default function ProviderDetailPage() {
     }
   };

-  const fetchConnections = useCallback(async () => {
-    try {
-      const res = await fetch("/api/providers");
-      const data = await res.json();
-      if (res.ok) {
-        const filtered = (data.connections || []).filter(c => c.provider === providerId);
-        setConnections(filtered);
-      }
-    } catch (error) {
-      console.log("Error fetching connections:", error);
-    } finally {
-      setLoading(false);
-    }
-  }, [providerId]);
-
   const handleDelete = async (id) => {
     if (!confirm("Delete this connection?")) return;
     try {
```
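The reordering in the diff above matters because `const` bindings created by `useCallback` are in the temporal dead zone until their declaration runs: a dependency array that reads them first throws at render time. A plain-JS reduction of that failure mode:

```javascript
// Reading a const before its declaration line executes throws a
// ReferenceError ("temporal dead zone") - the same failure as listing
// fetchConnections in a dependency array above its definition.
function brokenOrder() {
  const deps = [fetchConnections]; // evaluated before the const below
  const fetchConnections = () => {};
  return deps;
}

let failed = false;
try {
  brokenOrder();
} catch (e) {
  failed = e instanceof ReferenceError;
}
// failed is true: the access threw before fetchConnections was initialized
```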
101 src/app/api/v1/models/route.js (new file)
```diff
@@ -0,0 +1,101 @@
+import { PROVIDER_MODELS, PROVIDER_ID_TO_ALIAS } from "@/shared/constants/models";
+import { getProviderConnections, getCombos } from "@/lib/localDb";
+
+/**
+ * Handle CORS preflight
+ */
+export async function OPTIONS() {
+  return new Response(null, {
+    headers: {
+      "Access-Control-Allow-Origin": "*",
+      "Access-Control-Allow-Methods": "GET, OPTIONS",
+      "Access-Control-Allow-Headers": "*",
+    },
+  });
+}
+
+/**
+ * GET /v1/models - OpenAI compatible models list
+ * Returns models from all active providers and combos in OpenAI format
+ */
+export async function GET() {
+  try {
+    // Get active provider connections
+    let connections = [];
+    try {
+      connections = await getProviderConnections();
+      // Filter to only active connections
+      connections = connections.filter(c => c.isActive !== false);
+    } catch (e) {
+      // If database not available, return all models
+      console.log("Could not fetch providers, returning all models");
+    }
+
+    // Get combos
+    let combos = [];
+    try {
+      combos = await getCombos();
+    } catch (e) {
+      console.log("Could not fetch combos");
+    }
+
+    // Build set of active provider aliases
+    const activeAliases = new Set();
+    for (const conn of connections) {
+      const alias = PROVIDER_ID_TO_ALIAS[conn.provider] || conn.provider;
+      activeAliases.add(alias);
+    }
+
+    // Collect models from active providers (or all if none active)
+    const models = [];
+    const timestamp = Math.floor(Date.now() / 1000);
+
+    // Add combos first (they appear at the top)
+    for (const combo of combos) {
+      models.push({
+        id: combo.name,
+        object: "model",
+        created: timestamp,
+        owned_by: "combo",
+        permission: [],
+        root: combo.name,
+        parent: null,
+      });
+    }
+
+    // Add provider models
+    for (const [alias, providerModels] of Object.entries(PROVIDER_MODELS)) {
+      // If we have active providers, only include those; otherwise include all
+      if (connections.length > 0 && !activeAliases.has(alias)) {
+        continue;
+      }
+
+      for (const model of providerModels) {
+        models.push({
+          id: `${alias}/${model.id}`,
+          object: "model",
+          created: timestamp,
+          owned_by: alias,
+          permission: [],
+          root: model.id,
+          parent: null,
+        });
+      }
+    }
+
+    return Response.json({
+      object: "list",
+      data: models,
+    }, {
+      headers: {
+        "Access-Control-Allow-Origin": "*",
+      },
+    });
+  } catch (error) {
+    console.log("Error fetching models:", error);
+    return Response.json(
+      { error: { message: error.message, type: "server_error" } },
+      { status: 500 }
+    );
+  }
+}
```
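The route above exposes each provider model as `"<alias>/<model id>"` so clients can route by prefix. A self-contained sketch of that list-building step, using a made-up provider map rather than the real `PROVIDER_MODELS` constant:

```javascript
// Build OpenAI-style model entries from an alias -> models map, as the
// /v1/models handler above does for active providers.
function buildModelList(providerModels, timestamp) {
  const models = [];
  for (const [alias, list] of Object.entries(providerModels)) {
    for (const model of list) {
      models.push({
        id: `${alias}/${model.id}`,
        object: "model",
        created: timestamp,
        owned_by: alias,
      });
    }
  }
  return models;
}

const list = buildModelList(
  { cx: [{ id: "gpt-5" }], cc: [{ id: "claude-sonnet" }] }, // hypothetical map
  1700000000
);
// list[0].id is "cx/gpt-5", list[1].owned_by is "cc"
```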
```diff
@@ -120,6 +120,20 @@ body {

+/* Material Symbols */
+.material-symbols-outlined {
+  font-family: 'Material Symbols Outlined', sans-serif;
+  font-weight: normal;
+  font-style: normal;
+  font-size: 24px;
+  line-height: 1;
+  letter-spacing: normal;
+  text-transform: none;
+  display: inline-block;
+  white-space: nowrap;
+  word-wrap: normal;
+  direction: ltr;
+  font-feature-settings: 'liga';
+  -webkit-font-feature-settings: 'liga';
+  -webkit-font-smoothing: antialiased;
+  font-variation-settings: 'FILL' 0, 'wght' 400, 'GRAD' 0, 'opsz' 24;
+}
```
```diff
@@ -11,14 +11,22 @@ const inter = Inter({
 export const metadata = {
   title: "9Router - AI Infrastructure Management",
   description: "One endpoint for all your AI providers. Manage keys, monitor usage, and scale effortlessly.",
+  icons: {
+    icon: "/favicon.svg",
+  },
 };

 export default function RootLayout({ children }) {
   return (
     <html lang="en" suppressHydrationWarning>
       <head>
         <link rel="icon" href="/favicon.svg" type="image/svg+xml" />
         <link rel="preconnect" href="https://fonts.googleapis.com" />
         <link rel="preconnect" href="https://fonts.gstatic.com" crossOrigin="anonymous" />
+        {/* eslint-disable-next-line @next/next/no-page-custom-font */}
+        <link
+          href="https://fonts.googleapis.com/css2?family=Material+Symbols+Outlined:opsz,wght,FILL,GRAD@20..48,100..700,0..1,-50..200&display=swap"
+          rel="stylesheet"
+        />
       </head>
       <body className={`${inter.variable} font-sans antialiased`}>
         <ThemeProvider>
```
```diff
@@ -106,7 +106,7 @@ export default function LoginPage() {
   type="submit"
   variant="primary"
   className="w-full"
-  isLoading={loading}
+  loading={loading}
 >
   Login
 </Button>
```
@@ -20,101 +20,22 @@ export default function OAuthModal({ isOpen, provider, providerInfo, onSuccess,
  const popupRef = useRef(null);
  const { copied, copy } = useCopyToClipboard();

  // Detect if running on localhost
  const isLocalhost = typeof window !== "undefined" &&
    (window.location.hostname === "localhost" || window.location.hostname === "127.0.0.1");

  // Reset state and start OAuth when modal opens
  useEffect(() => {
    if (isOpen && provider) {
      setAuthData(null);
      setCallbackUrl("");
      setError(null);
      setIsDeviceCode(false);
      setDeviceData(null);
      setPolling(false);
      // Auto start OAuth
      startOAuthFlow();
    }
  }, [isOpen, provider, startOAuthFlow]);

  // Listen for OAuth callback via multiple methods
  // State for client-only values to avoid hydration mismatch
  const [isLocalhost, setIsLocalhost] = useState(false);
  const [placeholderUrl, setPlaceholderUrl] = useState("/callback?code=...");
  const callbackProcessedRef = useRef(false);

  // Detect if running on localhost (client-side only)
  useEffect(() => {
    if (!authData) return;
    callbackProcessedRef.current = false; // Reset when authData changes

    // Handler for callback data - only process once
    const handleCallback = async (data) => {
      if (callbackProcessedRef.current) return; // Already processed

      const { code, state, error: callbackError, errorDescription } = data;

      if (callbackError) {
        callbackProcessedRef.current = true;
        setError(errorDescription || callbackError);
        setStep("error");
        return;
    if (typeof window !== "undefined") {
      setIsLocalhost(
        window.location.hostname === "localhost" || window.location.hostname === "127.0.0.1"
      );
      setPlaceholderUrl(`${window.location.origin}/callback?code=...`);
    }
  }, []);

      if (code) {
        callbackProcessedRef.current = true;
        await exchangeTokens(code, state);
      }
    };

    // Method 1: postMessage from popup
    const handleMessage = (event) => {
      if (event.origin !== window.location.origin) return;
      if (event.data?.type === "oauth_callback") {
        handleCallback(event.data.data);
      }
    };
    window.addEventListener("message", handleMessage);

    // Method 2: BroadcastChannel
    let channel;
    try {
      channel = new BroadcastChannel("oauth_callback");
      channel.onmessage = (event) => handleCallback(event.data);
    } catch (e) {
      console.log("BroadcastChannel not supported");
    }

    // Method 3: localStorage event
    const handleStorage = (event) => {
      if (event.key === "oauth_callback" && event.newValue) {
        try {
          const data = JSON.parse(event.newValue);
          handleCallback(data);
          localStorage.removeItem("oauth_callback");
        } catch (e) {
          console.log("Failed to parse localStorage data");
        }
      }
    };
    window.addEventListener("storage", handleStorage);

    // Also check localStorage on mount (in case callback already happened)
    try {
      const stored = localStorage.getItem("oauth_callback");
      if (stored) {
        const data = JSON.parse(stored);
        // Only use if recent (within 30 seconds)
        if (data.timestamp && Date.now() - data.timestamp < 30000) {
          handleCallback(data);
          localStorage.removeItem("oauth_callback");
        }
      }
    } catch (e) {}

    return () => {
      window.removeEventListener("message", handleMessage);
      window.removeEventListener("storage", handleStorage);
      if (channel) channel.close();
    };
  }, [authData, exchangeTokens]);
  // Define all useCallback hooks BEFORE the useEffects that reference them

  // Exchange tokens
  const exchangeTokens = useCallback(async (code, state) => {
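The `callbackProcessedRef` guard introduced in this hunk is a once-only latch: whichever of the three callback channels fires first wins, and later duplicates are ignored. The same idea as a plain closure (`makeOnceHandler` is an illustrative name, not a helper from the codebase):

```javascript
// Once-only latch: the first delivery wins, duplicates are dropped.
// makeOnceHandler is a hypothetical helper for illustration.
function makeOnceHandler(handler) {
  let processed = false;
  return (data) => {
    if (processed) return;   // a channel already delivered this callback
    processed = true;        // latch before any async work starts
    return handler(data);
  };
}

const seen = [];
const handleCallback = makeOnceHandler((data) => seen.push(data));
handleCallback({ code: "abc" });  // processed
handleCallback({ code: "abc" });  // duplicate from another channel, ignored
```

Latching *before* the async work starts matters: two channels can deliver within the same tick, and a flag set only after `await` would let both through.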
@@ -254,6 +175,96 @@ export default function OAuthModal({ isOpen, provider, providerInfo, onSuccess,
    }
  }, [provider, isLocalhost, startPolling]);

  // Reset state and start OAuth when modal opens
  useEffect(() => {
    if (isOpen && provider) {
      setAuthData(null);
      setCallbackUrl("");
      setError(null);
      setIsDeviceCode(false);
      setDeviceData(null);
      setPolling(false);
      // Auto start OAuth
      startOAuthFlow();
    }
  }, [isOpen, provider, startOAuthFlow]);

  // Listen for OAuth callback via multiple methods
  useEffect(() => {
    if (!authData) return;
    callbackProcessedRef.current = false; // Reset when authData changes

    // Handler for callback data - only process once
    const handleCallback = async (data) => {
      if (callbackProcessedRef.current) return; // Already processed

      const { code, state, error: callbackError, errorDescription } = data;

      if (callbackError) {
        callbackProcessedRef.current = true;
        setError(errorDescription || callbackError);
        setStep("error");
        return;
      }

      if (code) {
        callbackProcessedRef.current = true;
        await exchangeTokens(code, state);
      }
    };

    // Method 1: postMessage from popup
    const handleMessage = (event) => {
      if (event.origin !== window.location.origin) return;
      if (event.data?.type === "oauth_callback") {
        handleCallback(event.data.data);
      }
    };
    window.addEventListener("message", handleMessage);

    // Method 2: BroadcastChannel
    let channel;
    try {
      channel = new BroadcastChannel("oauth_callback");
      channel.onmessage = (event) => handleCallback(event.data);
    } catch (e) {
      console.log("BroadcastChannel not supported");
    }

    // Method 3: localStorage event
    const handleStorage = (event) => {
      if (event.key === "oauth_callback" && event.newValue) {
        try {
          const data = JSON.parse(event.newValue);
          handleCallback(data);
          localStorage.removeItem("oauth_callback");
        } catch (e) {
          console.log("Failed to parse localStorage data");
        }
      }
    };
    window.addEventListener("storage", handleStorage);

    // Also check localStorage on mount (in case callback already happened)
    try {
      const stored = localStorage.getItem("oauth_callback");
      if (stored) {
        const data = JSON.parse(stored);
        // Only use if recent (within 30 seconds)
        if (data.timestamp && Date.now() - data.timestamp < 30000) {
          handleCallback(data);
          localStorage.removeItem("oauth_callback");
        }
      }
    } catch (e) {}

    return () => {
      window.removeEventListener("message", handleMessage);
      window.removeEventListener("storage", handleStorage);
      if (channel) channel.close();
    };
  }, [authData, exchangeTokens]);

  // Handle manual URL input
  const handleManualSubmit = async () => {
    try {
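The stored-callback path above only trusts a `localStorage` entry written within the last 30 seconds. That freshness check, factored into a pure helper (hypothetical name; the diff inlines the logic), can be sketched as:

```javascript
// Accept a stored OAuth callback only if it carries a timestamp
// from within the last 30 seconds (matching the inline check above).
const CALLBACK_TTL_MS = 30000;

function isRecentCallback(data, now = Date.now()) {
  return Boolean(data && data.timestamp && now - data.timestamp < CALLBACK_TTL_MS);
}

isRecentCallback({ timestamp: 1000 }, 20000);  // true: 19s old
isRecentCallback({ timestamp: 1000 }, 40000);  // false: 39s old
isRecentCallback({}, 40000);                   // false: no timestamp
```

The window exists because `localStorage` is checked on mount as well as via events: without a TTL, a stale entry from an abandoned login could be replayed the next time the modal opens.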
@@ -364,7 +375,7 @@ export default function OAuthModal({ isOpen, provider, providerInfo, onSuccess,
             <Input
               value={callbackUrl}
               onChange={(e) => setCallbackUrl(e.target.value)}
-              placeholder={`${window.location.origin}/callback?code=...`}
+              placeholder={placeholderUrl}
               className="font-mono text-xs"
             />
           </div>
@@ -1,6 +1,6 @@
 "use client";
 
-import { ThemeToggle } from "@/shared/components";
+import ThemeToggle from "../ThemeToggle";
 
 export default function AuthLayout({ children }) {
   return (
@@ -1,31 +1,60 @@
"use client";

import { useEffect } from "react";
import { useEffect, useState, useSyncExternalStore } from "react";
import useThemeStore from "@/store/themeStore";

// Subscribe to system theme changes
function subscribeToSystemTheme(callback) {
  if (typeof window === "undefined") return () => {};
  const mediaQuery = window.matchMedia("(prefers-color-scheme: dark)");
  mediaQuery.addEventListener("change", callback);
  return () => mediaQuery.removeEventListener("change", callback);
}

// Get current system theme preference
function getSystemThemeSnapshot() {
  if (typeof window === "undefined") return false;
  return window.matchMedia("(prefers-color-scheme: dark)").matches;
}

// Server snapshot always returns false
function getServerSnapshot() {
  return false;
}

export function useTheme() {
  const { theme, setTheme, toggleTheme, initTheme } = useThemeStore();

  // Use useSyncExternalStore to safely subscribe to system theme
  const systemPrefersDark = useSyncExternalStore(
    subscribeToSystemTheme,
    getSystemThemeSnapshot,
    getServerSnapshot
  );

  useEffect(() => {
    initTheme();
  }, [initTheme]);

  // Listen for system theme changes when theme is "system"
  useEffect(() => {
    if (theme !== "system") return;

    // Listen for system theme changes
    const mediaQuery = window.matchMedia("(prefers-color-scheme: dark)");
    const handleChange = () => {
      if (theme === "system") {
        initTheme();
      }
    };
    const handleChange = () => initTheme();

    mediaQuery.addEventListener("change", handleChange);
    return () => mediaQuery.removeEventListener("change", handleChange);
  }, [theme, initTheme]);

  // Compute isDark from current state (no effect needed)
  const isDark = theme === "dark" || (theme === "system" && systemPrefersDark);

  return {
    theme,
    setTheme,
    toggleTheme,
    isDark: theme === "dark" || (theme === "system" && typeof window !== "undefined" && window.matchMedia("(prefers-color-scheme: dark)").matches),
    isDark,
  };
}
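The `isDark` value in the hunk above is a pure function of the stored theme and the media-query snapshot, which is what makes it safe to compute during render instead of in an effect. As a standalone sketch (illustrative function name):

```javascript
// Dark mode is on when the theme is "dark", or when it is "system"
// and the OS reports prefers-color-scheme: dark.
function resolveIsDark(theme, systemPrefersDark) {
  return theme === "dark" || (theme === "system" && systemPrefersDark);
}

resolveIsDark("dark", false);    // true
resolveIsDark("light", true);    // false
resolveIsDark("system", true);   // true
resolveIsDark("system", false);  // false
```

Feeding `systemPrefersDark` from `useSyncExternalStore` with a server snapshot of `false` means the server and the first client render agree, which is the hydration-mismatch fix this commit describes.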