mirror of
https://github.com/QuantumNous/new-api.git
synced 2026-04-08 11:37:26 +00:00
Compare commits
129 Commits
| Author | SHA1 | Date | |
|---|---|---|---|
|
|
90d85a6f0a | ||
|
|
d40429ad93 | ||
|
|
30806ef270 | ||
|
|
3458476115 | ||
|
|
61c685ad79 | ||
|
|
0121795a84 | ||
|
|
ae254f5368 | ||
|
|
562448b441 | ||
|
|
04f7d89399 | ||
|
|
0d456df588 | ||
|
|
dc3b453b05 | ||
|
|
b19e1b8207 | ||
|
|
97b5ca8099 | ||
|
|
4ecf5dde14 | ||
|
|
65ccfd0848 | ||
|
|
2621b77f9a | ||
|
|
65a15dbc17 | ||
|
|
c0095d4521 | ||
|
|
5043075135 | ||
|
|
10ef61eedb | ||
|
|
dc9e3b4139 | ||
|
|
27e3aa828c | ||
|
|
d859e3fa64 | ||
|
|
459c277c94 | ||
|
|
5639f1c2d8 | ||
|
|
0cf4c59d22 | ||
|
|
1c67dd3c31 | ||
|
|
b7fd1e4a20 | ||
|
|
18b3300ff1 | ||
|
|
bae57c05c1 | ||
|
|
3def2bbd30 | ||
|
|
419a056fbf | ||
|
|
48af027903 | ||
|
|
9bf90c3baf | ||
|
|
fe3232bf23 | ||
|
|
1236fa8fe4 | ||
|
|
e097d5a538 | ||
|
|
425feb88d8 | ||
|
|
fd6838e690 | ||
|
|
b64480b750 | ||
|
|
da6423de33 | ||
|
|
efc9d200b1 | ||
|
|
fe37718259 | ||
|
|
c412fd9cde | ||
|
|
54f5b1a951 | ||
|
|
a9b9d23586 | ||
|
|
168226ba10 | ||
|
|
1a8fd61a98 | ||
|
|
2bd2d73d33 | ||
|
|
62da481dc6 | ||
|
|
4217358de7 | ||
|
|
bb9f5a4a6d | ||
|
|
935acccca4 | ||
|
|
453a42fad9 | ||
|
|
58101328c5 | ||
|
|
a03c615fa4 | ||
|
|
487ef35c58 | ||
|
|
f9f32a0158 | ||
|
|
ea10806cf9 | ||
|
|
1a9ebb54b2 | ||
|
|
6de3857150 | ||
|
|
32cd890b6e | ||
|
|
f968d77365 | ||
|
|
dc22f7d32f | ||
|
|
c2b33e3b23 | ||
|
|
db3326deae | ||
|
|
25ae077ac9 | ||
|
|
aaa41a8074 | ||
|
|
26f5b954c5 | ||
|
|
79c6dd08c9 | ||
|
|
17e8a3432a | ||
|
|
790af65b2c | ||
|
|
6522147183 | ||
|
|
0755ac9991 | ||
|
|
4c4dc6e8b4 | ||
|
|
1eebdc4773 | ||
|
|
9b6c898675 | ||
|
|
ee4f27d01b | ||
|
|
995c19a997 | ||
|
|
e385e347ea | ||
|
|
71d0d759da | ||
|
|
eb75ff232f | ||
|
|
272662089d | ||
|
|
214ca4db56 | ||
|
|
473e8e0eaf | ||
|
|
99efc1fbb6 | ||
|
|
d283f6b35f | ||
|
|
2f3acd9d22 | ||
|
|
eee6dee599 | ||
|
|
dcf7878772 | ||
|
|
97bc2b4474 | ||
|
|
ef8ae4db80 | ||
|
|
90576d0261 | ||
|
|
4b3e30e669 | ||
|
|
75570af967 | ||
|
|
cca9c0479f | ||
|
|
8a2332074f | ||
|
|
2ec4565601 | ||
|
|
a4fb33957f | ||
|
|
909c5eb276 | ||
|
|
8723e3f239 | ||
|
|
9328b907f2 | ||
|
|
8efa12b941 | ||
|
|
7b997b3a2c | ||
|
|
700c05b826 | ||
|
|
c5103237b0 | ||
|
|
f500eb17a8 | ||
|
|
86f6bb7abe | ||
|
|
c4c1099ae5 | ||
|
|
c869455456 | ||
|
|
f89d8a0fe5 | ||
|
|
3d6d19903b | ||
|
|
c5f1a0c712 | ||
|
|
524d4a65bf | ||
|
|
082218173a | ||
|
|
67cbbc2266 | ||
|
|
79b35e385f | ||
|
|
03e8ab4126 | ||
|
|
30f32c6a6d | ||
|
|
5813ca780f | ||
|
|
aa34c3035a | ||
|
|
fb9f595044 | ||
|
|
f24de65626 | ||
|
|
e34dccbc65 | ||
|
|
f6e8887482 | ||
|
|
09adc6f201 | ||
|
|
6b79b89dc0 | ||
|
|
3223c7e181 | ||
|
|
ccfac06645 |
3
.gitignore
vendored
3
.gitignore
vendored
@@ -9,4 +9,5 @@ logs
|
||||
web/dist
|
||||
.env
|
||||
one-api
|
||||
.DS_Store
|
||||
.DS_Store
|
||||
tiktoken_cache
|
||||
237
README.en.md
237
README.en.md
@@ -1,10 +1,13 @@
|
||||
<p align="right">
|
||||
<a href="./README.md">中文</a> | <strong>English</strong>
|
||||
</p>
|
||||
<div align="center">
|
||||
|
||||

|
||||
|
||||
# New API
|
||||
|
||||
🍥 Next Generation LLM Gateway and AI Asset Management System
|
||||
🍥 Next-Generation Large Model Gateway and AI Asset Management System
|
||||
|
||||
<a href="https://trendshift.io/repositories/8227" target="_blank"><img src="https://trendshift.io/api/badge/repositories/8227" alt="Calcium-Ion%2Fnew-api | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
|
||||
|
||||
@@ -33,171 +36,155 @@
|
||||
> This is an open-source project developed based on [One API](https://github.com/songquanpeng/one-api)
|
||||
|
||||
> [!IMPORTANT]
|
||||
> - Users must comply with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and relevant laws and regulations. Not to be used for illegal purposes.
|
||||
> - This project is for personal learning only. Stability is not guaranteed, and no technical support is provided.
|
||||
> - This project is for personal learning purposes only, with no guarantee of stability or technical support.
|
||||
> - Users must comply with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and **applicable laws and regulations**, and must not use it for illegal purposes.
|
||||
> - According to the [《Interim Measures for the Management of Generative Artificial Intelligence Services》](http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm), please do not provide any unregistered generative AI services to the public in China.
|
||||
|
||||
## 📚 Documentation
|
||||
|
||||
For detailed documentation, please visit our official Wiki: [https://docs.newapi.pro/](https://docs.newapi.pro/)
|
||||
|
||||
## ✨ Key Features
|
||||
|
||||
1. 🎨 New UI interface (some interfaces pending update)
|
||||
2. 🌍 Multi-language support (work in progress)
|
||||
3. 🎨 Added [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) interface support, [Integration Guide](Midjourney.md)
|
||||
4. 💰 Online recharge support, configurable in system settings:
|
||||
- [x] EasyPay
|
||||
5. 🔍 Query usage quota by key:
|
||||
- Works with [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool)
|
||||
6. 📑 Configurable items per page in pagination
|
||||
7. 🔄 Compatible with original One API database (one-api.db)
|
||||
8. 💵 Support per-request model pricing, configurable in System Settings - Operation Settings
|
||||
9. ⚖️ Support channel **weighted random** selection
|
||||
10. 📈 Data dashboard (console)
|
||||
11. 🔒 Configurable model access per token
|
||||
12. 🤖 Telegram authorization login support:
|
||||
1. System Settings - Configure Login Registration - Allow Telegram Login
|
||||
2. Send /setdomain command to [@Botfather](https://t.me/botfather)
|
||||
3. Select your bot, then enter http(s)://your-website/login
|
||||
4. Telegram Bot name is the bot username without @
|
||||
13. 🎵 Added [Suno API](https://github.com/Suno-API/Suno-API) interface support, [Integration Guide](Suno.md)
|
||||
14. 🔄 Support for Rerank models, compatible with Cohere and Jina, can integrate with Dify, [Integration Guide](Rerank.md)
|
||||
15. ⚡ **[OpenAI Realtime API](https://platform.openai.com/docs/guides/realtime/integration)** - Support for OpenAI's Realtime API, including Azure channels
|
||||
16. 🧠 Support for setting reasoning effort through model name suffix:
|
||||
- Add suffix `-high` to set high reasoning effort (e.g., `o3-mini-high`)
|
||||
- Add suffix `-medium` to set medium reasoning effort
|
||||
- Add suffix `-low` to set low reasoning effort
|
||||
17. 🔄 Thinking to content option `thinking_to_content` in `Channel->Edit->Channel Extra Settings`, default is `false`, when `true`, the `reasoning_content` of the thinking content will be converted to `<think>` tags and concatenated to the content returned.
|
||||
18. 🔄 Model rate limit, support setting total request limit and successful request limit in `System Settings->Rate Limit Settings`
|
||||
19. 💰 Cache billing support, when enabled can charge a configurable ratio for cache hits:
|
||||
1. Set `Prompt Cache Ratio` in `System Settings -> Operation Settings`
|
||||
2. Set `Prompt Cache Ratio` in channel settings, range 0-1 (e.g., 0.5 means 50% charge on cache hits)
|
||||
New API offers a wide range of features, please refer to [Features Introduction](https://docs.newapi.pro/wiki/features-introduction) for details:
|
||||
|
||||
1. 🎨 Brand new UI interface
|
||||
2. 🌍 Multi-language support
|
||||
3. 💰 Online recharge functionality (YiPay)
|
||||
4. 🔍 Support for querying usage quotas with keys (works with [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool))
|
||||
5. 🔄 Compatible with the original One API database
|
||||
6. 💵 Support for pay-per-use model pricing
|
||||
7. ⚖️ Support for weighted random channel selection
|
||||
8. 📈 Data dashboard (console)
|
||||
9. 🔒 Token grouping and model restrictions
|
||||
10. 🤖 Support for more authorization login methods (LinuxDO, Telegram, OIDC)
|
||||
11. 🔄 Support for Rerank models (Cohere and Jina), [API Documentation](https://docs.newapi.pro/api/jinaai-rerank)
|
||||
12. ⚡ Support for OpenAI Realtime API (including Azure channels), [API Documentation](https://docs.newapi.pro/api/openai-realtime)
|
||||
13. ⚡ Support for Claude Messages format, [API Documentation](https://docs.newapi.pro/api/anthropic-chat)
|
||||
14. Support for entering chat interface via /chat2link route
|
||||
15. 🧠 Support for setting reasoning effort through model name suffixes:
|
||||
1. OpenAI o-series models
|
||||
- Add `-high` suffix for high reasoning effort (e.g.: `o3-mini-high`)
|
||||
- Add `-medium` suffix for medium reasoning effort (e.g.: `o3-mini-medium`)
|
||||
- Add `-low` suffix for low reasoning effort (e.g.: `o3-mini-low`)
|
||||
2. Claude thinking models
|
||||
- Add `-thinking` suffix to enable thinking mode (e.g.: `claude-3-7-sonnet-20250219-thinking`)
|
||||
16. 🔄 Thinking-to-content functionality
|
||||
17. 🔄 Model rate limiting for users
|
||||
18. 💰 Cache billing support, which allows billing at a set ratio when cache is hit:
|
||||
1. Set the `Prompt Cache Ratio` option in `System Settings-Operation Settings`
|
||||
2. Set `Prompt Cache Ratio` in the channel, range 0-1, e.g., setting to 0.5 means billing at 50% when cache is hit
|
||||
3. Supported channels:
|
||||
- [x] OpenAI
|
||||
- [x] Azure
|
||||
- [x] Azure
|
||||
- [x] DeepSeek
|
||||
- [ ] Claude
|
||||
- [x] Claude
|
||||
|
||||
## Model Support
|
||||
This version additionally supports:
|
||||
1. Third-party model **gpts** (gpt-4-gizmo-*)
|
||||
2. [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) interface, [Integration Guide](Midjourney.md)
|
||||
3. Custom channels with full API URL support
|
||||
4. [Suno API](https://github.com/Suno-API/Suno-API) interface, [Integration Guide](Suno.md)
|
||||
5. Rerank models, supporting [Cohere](https://cohere.ai/) and [Jina](https://jina.ai/), [Integration Guide](Rerank.md)
|
||||
6. Dify
|
||||
|
||||
You can add custom models gpt-4-gizmo-* in channels. These are third-party models and cannot be called with official OpenAI keys.
|
||||
This version supports multiple models, please refer to [API Documentation-Relay Interface](https://docs.newapi.pro/api) for details:
|
||||
|
||||
## Additional Configurations Beyond One API
|
||||
- `GENERATE_DEFAULT_TOKEN`: Generate initial token for new users, default `false`
|
||||
- `STREAMING_TIMEOUT`: Set streaming response timeout, default 60 seconds
|
||||
- `DIFY_DEBUG`: Output workflow and node info to client for Dify channel, default `true`
|
||||
- `FORCE_STREAM_OPTION`: Override client stream_options parameter, default `true`
|
||||
- `GET_MEDIA_TOKEN`: Calculate image tokens, default `true`
|
||||
- `GET_MEDIA_TOKEN_NOT_STREAM`: Calculate image tokens in non-stream mode, default `true`
|
||||
- `UPDATE_TASK`: Update async tasks (Midjourney, Suno), default `true`
|
||||
- `GEMINI_MODEL_MAP`: Specify Gemini model versions (v1/v1beta), format: "model:version", comma-separated
|
||||
- `COHERE_SAFETY_SETTING`: Cohere model [safety settings](https://docs.cohere.com/docs/safety-modes#overview), options: `NONE`, `CONTEXTUAL`, `STRICT`, default `NONE`
|
||||
- `GEMINI_VISION_MAX_IMAGE_NUM`: Gemini model maximum image number, default `16`, set to `-1` to disable
|
||||
- `MAX_FILE_DOWNLOAD_MB`: Maximum file download size in MB, default `20`
|
||||
- `CRYPTO_SECRET`: Encryption key for encrypting database content
|
||||
- `AZURE_DEFAULT_API_VERSION`: Azure channel default API version, if not specified in channel settings, use this version, default `2024-12-01-preview`
|
||||
- `NOTIFICATION_LIMIT_DURATION_MINUTE`: Duration of notification limit in minutes, default `10`
|
||||
- `NOTIFY_LIMIT_COUNT`: Maximum number of user notifications in the specified duration, default `2`
|
||||
1. Third-party models **gpts** (gpt-4-gizmo-*)
|
||||
2. Third-party channel [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) interface, [API Documentation](https://docs.newapi.pro/api/midjourney-proxy-image)
|
||||
3. Third-party channel [Suno API](https://github.com/Suno-API/Suno-API) interface, [API Documentation](https://docs.newapi.pro/api/suno-music)
|
||||
4. Custom channels, supporting full call address input
|
||||
5. Rerank models ([Cohere](https://cohere.ai/) and [Jina](https://jina.ai/)), [API Documentation](https://docs.newapi.pro/api/jinaai-rerank)
|
||||
6. Claude Messages format, [API Documentation](https://docs.newapi.pro/api/anthropic-chat)
|
||||
7. Dify, currently only supports chatflow
|
||||
|
||||
## Environment Variable Configuration
|
||||
|
||||
For detailed configuration instructions, please refer to [Installation Guide-Environment Variables Configuration](https://docs.newapi.pro/installation/environment-variables):
|
||||
|
||||
- `GENERATE_DEFAULT_TOKEN`: Whether to generate initial tokens for newly registered users, default is `false`
|
||||
- `STREAMING_TIMEOUT`: Streaming response timeout, default is 60 seconds
|
||||
- `DIFY_DEBUG`: Whether to output workflow and node information for Dify channels, default is `true`
|
||||
- `FORCE_STREAM_OPTION`: Whether to override client stream_options parameter, default is `true`
|
||||
- `GET_MEDIA_TOKEN`: Whether to count image tokens, default is `true`
|
||||
- `GET_MEDIA_TOKEN_NOT_STREAM`: Whether to count image tokens in non-streaming cases, default is `true`
|
||||
- `UPDATE_TASK`: Whether to update asynchronous tasks (Midjourney, Suno), default is `true`
|
||||
- `COHERE_SAFETY_SETTING`: Cohere model safety settings, options are `NONE`, `CONTEXTUAL`, `STRICT`, default is `NONE`
|
||||
- `GEMINI_VISION_MAX_IMAGE_NUM`: Maximum number of images for Gemini models, default is `16`
|
||||
- `MAX_FILE_DOWNLOAD_MB`: Maximum file download size in MB, default is `20`
|
||||
- `CRYPTO_SECRET`: Encryption key used for encrypting database content
|
||||
- `AZURE_DEFAULT_API_VERSION`: Azure channel default API version, default is `2024-12-01-preview`
|
||||
- `NOTIFICATION_LIMIT_DURATION_MINUTE`: Notification limit duration, default is `10` minutes
|
||||
- `NOTIFY_LIMIT_COUNT`: Maximum number of user notifications within the specified duration, default is `2`
|
||||
|
||||
## Deployment
|
||||
|
||||
For detailed deployment guides, please refer to [Installation Guide-Deployment Methods](https://docs.newapi.pro/installation):
|
||||
|
||||
> [!TIP]
|
||||
> Latest Docker image: `calciumion/new-api:latest`
|
||||
> Default account: root, password: 123456
|
||||
> Latest Docker image: `calciumion/new-api:latest`
|
||||
|
||||
### Multi-Server Deployment
|
||||
- Must set `SESSION_SECRET` environment variable, otherwise login state will not be consistent across multiple servers.
|
||||
- If using a public Redis, must set `CRYPTO_SECRET` environment variable, otherwise Redis content will not be able to be obtained in multi-server deployment.
|
||||
### Multi-machine Deployment Considerations
|
||||
- Environment variable `SESSION_SECRET` must be set, otherwise login status will be inconsistent across multiple machines
|
||||
- If sharing Redis, `CRYPTO_SECRET` must be set, otherwise Redis content cannot be accessed across multiple machines
|
||||
|
||||
### Requirements
|
||||
- Local database (default): SQLite (Docker deployment must mount `/data` directory)
|
||||
- Remote database: MySQL >= 5.7.8, PgSQL >= 9.6
|
||||
### Deployment Requirements
|
||||
- Local database (default): SQLite (Docker deployment must mount the `/data` directory)
|
||||
- Remote database: MySQL version >= 5.7.8, PgSQL version >= 9.6
|
||||
|
||||
### Deployment with BT Panel
|
||||
Install BT Panel (**version 9.2.0** or above) from [BT Panel Official Website](https://www.bt.cn/new/download.html), choose the stable version script to download and install.
|
||||
After installation, log in to BT Panel and click Docker in the menu bar. First-time access will prompt to install Docker service. Click Install Now and follow the prompts to complete installation.
|
||||
After installation, find **New-API** in the app store, click install, configure basic options to complete installation.
|
||||
[Pictorial Guide](BT.md)
|
||||
### Deployment Methods
|
||||
|
||||
### Docker Deployment
|
||||
#### Using BaoTa Panel Docker Feature
|
||||
Install BaoTa Panel (version **9.2.0** or above), find **New-API** in the application store and install it.
|
||||
[Tutorial with images](./docs/BT.md)
|
||||
|
||||
### Using Docker Compose (Recommended)
|
||||
#### Using Docker Compose (Recommended)
|
||||
```shell
|
||||
# Clone project
|
||||
# Download the project
|
||||
git clone https://github.com/Calcium-Ion/new-api.git
|
||||
cd new-api
|
||||
# Edit docker-compose.yml as needed
|
||||
# nano docker-compose.yml
|
||||
# vim docker-compose.yml
|
||||
# Start
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
#### Update Version
|
||||
#### Using Docker Image Directly
|
||||
```shell
|
||||
docker-compose pull
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
### Direct Docker Image Usage
|
||||
```shell
|
||||
# SQLite deployment:
|
||||
# Using SQLite
|
||||
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
|
||||
|
||||
# MySQL deployment (add -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi"), modify database connection parameters as needed
|
||||
# Example:
|
||||
# Using MySQL
|
||||
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
|
||||
```
|
||||
|
||||
#### Update Version
|
||||
```shell
|
||||
# Pull the latest image
|
||||
docker pull calciumion/new-api:latest
|
||||
# Stop and remove the old container
|
||||
docker stop new-api
|
||||
docker rm new-api
|
||||
# Run the new container with the same parameters as before
|
||||
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
|
||||
```
|
||||
## Channel Retry and Cache
|
||||
Channel retry functionality has been implemented, you can set the number of retries in `Settings->Operation Settings->General Settings`. It is **recommended to enable caching**.
|
||||
|
||||
Alternatively, you can use Watchtower for automatic updates (not recommended, may cause database incompatibility):
|
||||
```shell
|
||||
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR
|
||||
```
|
||||
### Cache Configuration Method
|
||||
1. `REDIS_CONN_STRING`: Set Redis as cache
|
||||
2. `MEMORY_CACHE_ENABLED`: Enable memory cache (no need to set manually if Redis is set)
|
||||
|
||||
## Channel Retry
|
||||
Channel retry is implemented, configurable in `Settings->Operation Settings->General Settings`. **Cache recommended**.
|
||||
If retry is enabled, the system will automatically use the next priority channel for the same request after a failed request.
|
||||
## API Documentation
|
||||
|
||||
### Cache Configuration
|
||||
1. `REDIS_CONN_STRING`: Use Redis as cache
|
||||
+ Example: `REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
|
||||
2. `MEMORY_CACHE_ENABLED`: Enable memory cache, default `false`
|
||||
+ Example: `MEMORY_CACHE_ENABLED=true`
|
||||
For detailed API documentation, please refer to [API Documentation](https://docs.newapi.pro/api):
|
||||
|
||||
### Why Some Errors Don't Retry
|
||||
Error codes 400, 504, 524 won't retry
|
||||
### To Enable Retry for 400
|
||||
In `Channel->Edit`, set `Status Code Override` to:
|
||||
```json
|
||||
{
|
||||
"400": "500"
|
||||
}
|
||||
```
|
||||
|
||||
## Integration Guides
|
||||
- [Midjourney Integration](Midjourney.md)
|
||||
- [Suno Integration](Suno.md)
|
||||
- [Chat API](https://docs.newapi.pro/api/openai-chat)
|
||||
- [Image API](https://docs.newapi.pro/api/openai-image)
|
||||
- [Rerank API](https://docs.newapi.pro/api/jinaai-rerank)
|
||||
- [Realtime API](https://docs.newapi.pro/api/openai-realtime)
|
||||
- [Claude Chat API (messages)](https://docs.newapi.pro/api/anthropic-chat)
|
||||
|
||||
## Related Projects
|
||||
- [One API](https://github.com/songquanpeng/one-api): Original project
|
||||
- [Midjourney-Proxy](https://github.com/novicezk/midjourney-proxy): Midjourney interface support
|
||||
- [chatnio](https://github.com/Deeptrain-Community/chatnio): Next-gen AI B/C solution
|
||||
- [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool): Query usage quota by key
|
||||
- [chatnio](https://github.com/Deeptrain-Community/chatnio): Next-generation AI one-stop B/C-end solution
|
||||
- [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool): Query usage quota with key
|
||||
|
||||
Other projects based on New API:
|
||||
- [new-api-horizon](https://github.com/Calcium-Ion/new-api-horizon): High-performance optimized version of New API
|
||||
- [VoAPI](https://github.com/VoAPI/VoAPI): Frontend beautified version based on New API
|
||||
|
||||
## Help and Support
|
||||
|
||||
If you have any questions, please refer to [Help and Support](https://docs.newapi.pro/support):
|
||||
- [Community Interaction](https://docs.newapi.pro/support/community-interaction)
|
||||
- [Issue Feedback](https://docs.newapi.pro/support/feedback-issues)
|
||||
- [FAQ](https://docs.newapi.pro/support/faq)
|
||||
|
||||
## 🌟 Star History
|
||||
|
||||
[](https://star-history.com/#Calcium-Ion/new-api&Date)
|
||||
[](https://star-history.com/#Calcium-Ion/new-api&Date)
|
||||
|
||||
@@ -130,7 +130,7 @@ New API提供了丰富的功能,详细特性请参考[特性说明](https://do
|
||||
|
||||
#### 使用宝塔面板Docker功能部署
|
||||
安装宝塔面板(**9.2.0版本**及以上),在应用商店中找到**New-API**安装即可。
|
||||
[图文教程](BT.md)
|
||||
[图文教程](./docs/BT.md)
|
||||
|
||||
#### 使用Docker Compose部署(推荐)
|
||||
```shell
|
||||
|
||||
@@ -1,8 +1,8 @@
|
||||
package common
|
||||
|
||||
import (
|
||||
"os"
|
||||
"strconv"
|
||||
//"os"
|
||||
//"strconv"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
@@ -62,9 +62,13 @@ var EmailDomainWhitelist = []string{
|
||||
"yahoo.com",
|
||||
"foxmail.com",
|
||||
}
|
||||
var EmailLoginAuthServerList = []string{
|
||||
"smtp.sendcloud.net",
|
||||
"smtp.azurecomm.net",
|
||||
}
|
||||
|
||||
var DebugEnabled = os.Getenv("DEBUG") == "true"
|
||||
var MemoryCacheEnabled = os.Getenv("MEMORY_CACHE_ENABLED") == "true"
|
||||
var DebugEnabled bool
|
||||
var MemoryCacheEnabled bool
|
||||
|
||||
var LogConsumeEnabled = true
|
||||
|
||||
@@ -103,22 +107,22 @@ var RetryTimes = 0
|
||||
|
||||
//var RootUserEmail = ""
|
||||
|
||||
var IsMasterNode = os.Getenv("NODE_TYPE") != "slave"
|
||||
var IsMasterNode bool
|
||||
|
||||
var requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL"))
|
||||
var RequestInterval = time.Duration(requestInterval) * time.Second
|
||||
var requestInterval int
|
||||
var RequestInterval time.Duration
|
||||
|
||||
var SyncFrequency = GetEnvOrDefault("SYNC_FREQUENCY", 60) // unit is second
|
||||
var SyncFrequency int // unit is second
|
||||
|
||||
var BatchUpdateEnabled = false
|
||||
var BatchUpdateInterval = GetEnvOrDefault("BATCH_UPDATE_INTERVAL", 5)
|
||||
var BatchUpdateInterval int
|
||||
|
||||
var RelayTimeout = GetEnvOrDefault("RELAY_TIMEOUT", 0) // unit is second
|
||||
var RelayTimeout int // unit is second
|
||||
|
||||
var GeminiSafetySetting = GetEnvOrDefaultString("GEMINI_SAFETY_SETTING", "BLOCK_NONE")
|
||||
var GeminiSafetySetting string
|
||||
|
||||
// https://docs.cohere.com/docs/safety-modes Type; NONE/CONTEXTUAL/STRICT
|
||||
var CohereSafetySetting = GetEnvOrDefaultString("COHERE_SAFETY_SETTING", "NONE")
|
||||
var CohereSafetySetting string
|
||||
|
||||
const (
|
||||
RequestIdKey = "X-Oneapi-Request-Id"
|
||||
@@ -145,13 +149,13 @@ var (
|
||||
// All duration's unit is seconds
|
||||
// Shouldn't larger then RateLimitKeyExpirationDuration
|
||||
var (
|
||||
GlobalApiRateLimitEnable = GetEnvOrDefaultBool("GLOBAL_API_RATE_LIMIT_ENABLE", true)
|
||||
GlobalApiRateLimitNum = GetEnvOrDefault("GLOBAL_API_RATE_LIMIT", 180)
|
||||
GlobalApiRateLimitDuration = int64(GetEnvOrDefault("GLOBAL_API_RATE_LIMIT_DURATION", 180))
|
||||
GlobalApiRateLimitEnable bool
|
||||
GlobalApiRateLimitNum int
|
||||
GlobalApiRateLimitDuration int64
|
||||
|
||||
GlobalWebRateLimitEnable = GetEnvOrDefaultBool("GLOBAL_WEB_RATE_LIMIT_ENABLE", true)
|
||||
GlobalWebRateLimitNum = GetEnvOrDefault("GLOBAL_WEB_RATE_LIMIT", 60)
|
||||
GlobalWebRateLimitDuration = int64(GetEnvOrDefault("GLOBAL_WEB_RATE_LIMIT_DURATION", 180))
|
||||
GlobalWebRateLimitEnable bool
|
||||
GlobalWebRateLimitNum int
|
||||
GlobalWebRateLimitDuration int64
|
||||
|
||||
UploadRateLimitNum = 10
|
||||
UploadRateLimitDuration int64 = 60
|
||||
@@ -235,6 +239,7 @@ const (
|
||||
ChannelTypeVolcEngine = 45
|
||||
ChannelTypeBaiduV2 = 46
|
||||
ChannelTypeXinference = 47
|
||||
ChannelTypeXai = 48
|
||||
ChannelTypeDummy // this one is only for count, do not add any channel after this
|
||||
|
||||
)
|
||||
@@ -288,4 +293,5 @@ var ChannelBaseURLs = []string{
|
||||
"https://ark.cn-beijing.volces.com", //45
|
||||
"https://qianfan.baidubce.com", //46
|
||||
"", //47
|
||||
"https://api.x.ai", //48
|
||||
}
|
||||
|
||||
@@ -5,6 +5,7 @@ import (
|
||||
"encoding/base64"
|
||||
"fmt"
|
||||
"net/smtp"
|
||||
"slices"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
@@ -79,7 +80,7 @@ func SendEmail(subject string, receiver string, content string) error {
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
} else if isOutlookServer(SMTPAccount) || SMTPServer == "smtp.azurecomm.net" {
|
||||
} else if isOutlookServer(SMTPAccount) || slices.Contains(EmailLoginAuthServerList, SMTPServer) {
|
||||
auth = LoginAuth(SMTPAccount, SMTPToken)
|
||||
err = smtp.SendMail(addr, auth, SMTPFrom, to, mail)
|
||||
} else {
|
||||
|
||||
@@ -6,6 +6,8 @@ import (
|
||||
"log"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strconv"
|
||||
"time"
|
||||
)
|
||||
|
||||
var (
|
||||
@@ -66,4 +68,31 @@ func LoadEnv() {
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Initialize variables from constants.go that were using environment variables
|
||||
DebugEnabled = os.Getenv("DEBUG") == "true"
|
||||
MemoryCacheEnabled = os.Getenv("MEMORY_CACHE_ENABLED") == "true"
|
||||
IsMasterNode = os.Getenv("NODE_TYPE") != "slave"
|
||||
|
||||
// Parse requestInterval and set RequestInterval
|
||||
requestInterval, _ = strconv.Atoi(os.Getenv("POLLING_INTERVAL"))
|
||||
RequestInterval = time.Duration(requestInterval) * time.Second
|
||||
|
||||
// Initialize variables with GetEnvOrDefault
|
||||
SyncFrequency = GetEnvOrDefault("SYNC_FREQUENCY", 60)
|
||||
BatchUpdateInterval = GetEnvOrDefault("BATCH_UPDATE_INTERVAL", 5)
|
||||
RelayTimeout = GetEnvOrDefault("RELAY_TIMEOUT", 0)
|
||||
|
||||
// Initialize string variables with GetEnvOrDefaultString
|
||||
GeminiSafetySetting = GetEnvOrDefaultString("GEMINI_SAFETY_SETTING", "BLOCK_NONE")
|
||||
CohereSafetySetting = GetEnvOrDefaultString("COHERE_SAFETY_SETTING", "NONE")
|
||||
|
||||
// Initialize rate limit variables
|
||||
GlobalApiRateLimitEnable = GetEnvOrDefaultBool("GLOBAL_API_RATE_LIMIT_ENABLE", true)
|
||||
GlobalApiRateLimitNum = GetEnvOrDefault("GLOBAL_API_RATE_LIMIT", 180)
|
||||
GlobalApiRateLimitDuration = int64(GetEnvOrDefault("GLOBAL_API_RATE_LIMIT_DURATION", 180))
|
||||
|
||||
GlobalWebRateLimitEnable = GetEnvOrDefaultBool("GLOBAL_WEB_RATE_LIMIT_ENABLE", true)
|
||||
GlobalWebRateLimitNum = GetEnvOrDefault("GLOBAL_WEB_RATE_LIMIT", 60)
|
||||
GlobalWebRateLimitDuration = int64(GetEnvOrDefault("GLOBAL_WEB_RATE_LIMIT_DURATION", 180))
|
||||
}
|
||||
|
||||
@@ -12,3 +12,7 @@ func DecodeJson(data []byte, v any) error {
|
||||
func DecodeJsonStr(data string, v any) error {
|
||||
return DecodeJson(StringToByteSlice(data), v)
|
||||
}
|
||||
|
||||
func EncodeJson(v any) ([]byte, error) {
|
||||
return json.Marshal(v)
|
||||
}
|
||||
|
||||
89
common/limiter/limiter.go
Normal file
89
common/limiter/limiter.go
Normal file
@@ -0,0 +1,89 @@
|
||||
package limiter
|
||||
|
||||
import (
|
||||
"context"
|
||||
_ "embed"
|
||||
"fmt"
|
||||
"github.com/go-redis/redis/v8"
|
||||
"one-api/common"
|
||||
"sync"
|
||||
)
|
||||
|
||||
//go:embed lua/rate_limit.lua
|
||||
var rateLimitScript string
|
||||
|
||||
type RedisLimiter struct {
|
||||
client *redis.Client
|
||||
limitScriptSHA string
|
||||
}
|
||||
|
||||
var (
|
||||
instance *RedisLimiter
|
||||
once sync.Once
|
||||
)
|
||||
|
||||
func New(ctx context.Context, r *redis.Client) *RedisLimiter {
|
||||
once.Do(func() {
|
||||
// 预加载脚本
|
||||
limitSHA, err := r.ScriptLoad(ctx, rateLimitScript).Result()
|
||||
if err != nil {
|
||||
common.SysLog(fmt.Sprintf("Failed to load rate limit script: %v", err))
|
||||
}
|
||||
instance = &RedisLimiter{
|
||||
client: r,
|
||||
limitScriptSHA: limitSHA,
|
||||
}
|
||||
})
|
||||
|
||||
return instance
|
||||
}
|
||||
|
||||
func (rl *RedisLimiter) Allow(ctx context.Context, key string, opts ...Option) (bool, error) {
|
||||
// 默认配置
|
||||
config := &Config{
|
||||
Capacity: 10,
|
||||
Rate: 1,
|
||||
Requested: 1,
|
||||
}
|
||||
|
||||
// 应用选项模式
|
||||
for _, opt := range opts {
|
||||
opt(config)
|
||||
}
|
||||
|
||||
// 执行限流
|
||||
result, err := rl.client.EvalSha(
|
||||
ctx,
|
||||
rl.limitScriptSHA,
|
||||
[]string{key},
|
||||
config.Requested,
|
||||
config.Rate,
|
||||
config.Capacity,
|
||||
).Int()
|
||||
|
||||
if err != nil {
|
||||
return false, fmt.Errorf("rate limit failed: %w", err)
|
||||
}
|
||||
return result == 1, nil
|
||||
}
|
||||
|
||||
// Config 配置选项模式
|
||||
type Config struct {
|
||||
Capacity int64
|
||||
Rate int64
|
||||
Requested int64
|
||||
}
|
||||
|
||||
type Option func(*Config)
|
||||
|
||||
func WithCapacity(c int64) Option {
|
||||
return func(cfg *Config) { cfg.Capacity = c }
|
||||
}
|
||||
|
||||
func WithRate(r int64) Option {
|
||||
return func(cfg *Config) { cfg.Rate = r }
|
||||
}
|
||||
|
||||
func WithRequested(n int64) Option {
|
||||
return func(cfg *Config) { cfg.Requested = n }
|
||||
}
|
||||
44
common/limiter/lua/rate_limit.lua
Normal file
44
common/limiter/lua/rate_limit.lua
Normal file
@@ -0,0 +1,44 @@
|
||||
-- 令牌桶限流器
|
||||
-- KEYS[1]: 限流器唯一标识
|
||||
-- ARGV[1]: 请求令牌数 (通常为1)
|
||||
-- ARGV[2]: 令牌生成速率 (每秒)
|
||||
-- ARGV[3]: 桶容量
|
||||
|
||||
local key = KEYS[1]
|
||||
local requested = tonumber(ARGV[1])
|
||||
local rate = tonumber(ARGV[2])
|
||||
local capacity = tonumber(ARGV[3])
|
||||
|
||||
-- 获取当前时间(Redis服务器时间)
|
||||
local now = redis.call('TIME')
|
||||
local nowInSeconds = tonumber(now[1])
|
||||
|
||||
-- 获取桶状态
|
||||
local bucket = redis.call('HMGET', key, 'tokens', 'last_time')
|
||||
local tokens = tonumber(bucket[1])
|
||||
local last_time = tonumber(bucket[2])
|
||||
|
||||
-- 初始化桶(首次请求或过期)
|
||||
if not tokens or not last_time then
|
||||
tokens = capacity
|
||||
last_time = nowInSeconds
|
||||
else
|
||||
-- 计算新增令牌
|
||||
local elapsed = nowInSeconds - last_time
|
||||
local add_tokens = elapsed * rate
|
||||
tokens = math.min(capacity, tokens + add_tokens)
|
||||
last_time = nowInSeconds
|
||||
end
|
||||
|
||||
-- 判断是否允许请求
|
||||
local allowed = false
|
||||
if tokens >= requested then
|
||||
tokens = tokens - requested
|
||||
allowed = true
|
||||
end
|
||||
|
||||
---- 更新桶状态并设置过期时间
|
||||
redis.call('HMSET', key, 'tokens', tokens, 'last_time', last_time)
|
||||
--redis.call('EXPIRE', key, math.ceil(capacity / rate) + 60) -- 适当延长过期时间
|
||||
|
||||
return allowed and 1 or 0
|
||||
@@ -7,7 +7,6 @@ import (
|
||||
"encoding/base64"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"github.com/pkg/errors"
|
||||
"html/template"
|
||||
"io"
|
||||
"log"
|
||||
@@ -22,6 +21,7 @@ import (
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
"github.com/pkg/errors"
|
||||
)
|
||||
|
||||
func OpenBrowser(url string) {
|
||||
|
||||
5
constant/azure.go
Normal file
5
constant/azure.go
Normal file
@@ -0,0 +1,5 @@
|
||||
package constant
|
||||
|
||||
import "time"
|
||||
|
||||
var AzureNoRemoveDotTime = time.Date(2025, time.May, 10, 0, 0, 0, 0, time.UTC).Unix()
|
||||
@@ -4,32 +4,42 @@ import (
|
||||
"one-api/common"
|
||||
)
|
||||
|
||||
var StreamingTimeout = common.GetEnvOrDefault("STREAMING_TIMEOUT", 60)
|
||||
var DifyDebug = common.GetEnvOrDefaultBool("DIFY_DEBUG", true)
|
||||
|
||||
var MaxFileDownloadMB = common.GetEnvOrDefault("MAX_FILE_DOWNLOAD_MB", 20)
|
||||
|
||||
// ForceStreamOption 覆盖请求参数,强制返回usage信息
|
||||
var ForceStreamOption = common.GetEnvOrDefaultBool("FORCE_STREAM_OPTION", true)
|
||||
|
||||
var GetMediaToken = common.GetEnvOrDefaultBool("GET_MEDIA_TOKEN", true)
|
||||
|
||||
var GetMediaTokenNotStream = common.GetEnvOrDefaultBool("GET_MEDIA_TOKEN_NOT_STREAM", true)
|
||||
|
||||
var UpdateTask = common.GetEnvOrDefaultBool("UPDATE_TASK", true)
|
||||
|
||||
var AzureDefaultAPIVersion = common.GetEnvOrDefaultString("AZURE_DEFAULT_API_VERSION", "2024-12-01-preview")
|
||||
var StreamingTimeout int
|
||||
var DifyDebug bool
|
||||
var MaxFileDownloadMB int
|
||||
var ForceStreamOption bool
|
||||
var GetMediaToken bool
|
||||
var GetMediaTokenNotStream bool
|
||||
var UpdateTask bool
|
||||
var AzureDefaultAPIVersion string
|
||||
var GeminiVisionMaxImageNum int
|
||||
var NotifyLimitCount int
|
||||
var NotificationLimitDurationMinute int
|
||||
var GenerateDefaultToken bool
|
||||
var ErrorLogEnabled bool
|
||||
|
||||
//var GeminiModelMap = map[string]string{
|
||||
// "gemini-1.0-pro": "v1",
|
||||
//}
|
||||
|
||||
var GeminiVisionMaxImageNum = common.GetEnvOrDefault("GEMINI_VISION_MAX_IMAGE_NUM", 16)
|
||||
|
||||
var NotifyLimitCount = common.GetEnvOrDefault("NOTIFY_LIMIT_COUNT", 2)
|
||||
var NotificationLimitDurationMinute = common.GetEnvOrDefault("NOTIFICATION_LIMIT_DURATION_MINUTE", 10)
|
||||
|
||||
func InitEnv() {
|
||||
StreamingTimeout = common.GetEnvOrDefault("STREAMING_TIMEOUT", 60)
|
||||
DifyDebug = common.GetEnvOrDefaultBool("DIFY_DEBUG", true)
|
||||
MaxFileDownloadMB = common.GetEnvOrDefault("MAX_FILE_DOWNLOAD_MB", 20)
|
||||
// ForceStreamOption 覆盖请求参数,强制返回usage信息
|
||||
ForceStreamOption = common.GetEnvOrDefaultBool("FORCE_STREAM_OPTION", true)
|
||||
GetMediaToken = common.GetEnvOrDefaultBool("GET_MEDIA_TOKEN", true)
|
||||
GetMediaTokenNotStream = common.GetEnvOrDefaultBool("GET_MEDIA_TOKEN_NOT_STREAM", true)
|
||||
UpdateTask = common.GetEnvOrDefaultBool("UPDATE_TASK", true)
|
||||
AzureDefaultAPIVersion = common.GetEnvOrDefaultString("AZURE_DEFAULT_API_VERSION", "2024-12-01-preview")
|
||||
GeminiVisionMaxImageNum = common.GetEnvOrDefault("GEMINI_VISION_MAX_IMAGE_NUM", 16)
|
||||
NotifyLimitCount = common.GetEnvOrDefault("NOTIFY_LIMIT_COUNT", 2)
|
||||
NotificationLimitDurationMinute = common.GetEnvOrDefault("NOTIFICATION_LIMIT_DURATION_MINUTE", 10)
|
||||
// GenerateDefaultToken 是否生成初始令牌,默认关闭。
|
||||
GenerateDefaultToken = common.GetEnvOrDefaultBool("GENERATE_DEFAULT_TOKEN", false)
|
||||
// 是否启用错误日志
|
||||
ErrorLogEnabled = common.GetEnvOrDefaultBool("ERROR_LOG_ENABLED", false)
|
||||
|
||||
//modelVersionMapStr := strings.TrimSpace(os.Getenv("GEMINI_MODEL_MAP"))
|
||||
//if modelVersionMapStr == "" {
|
||||
// return
|
||||
@@ -43,6 +53,3 @@ func InitEnv() {
|
||||
// }
|
||||
//}
|
||||
}
|
||||
|
||||
// GenerateDefaultToken 是否生成初始令牌,默认关闭。
|
||||
var GenerateDefaultToken = common.GetEnvOrDefaultBool("GENERATE_DEFAULT_TOKEN", false)
|
||||
|
||||
@@ -103,7 +103,10 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
|
||||
}
|
||||
|
||||
request := buildTestRequest(testModel)
|
||||
common.SysLog(fmt.Sprintf("testing channel %d with model %s , info %v ", channel.Id, testModel, info))
|
||||
// 创建一个用于日志的 info 副本,移除 ApiKey
|
||||
logInfo := *info
|
||||
logInfo.ApiKey = ""
|
||||
common.SysLog(fmt.Sprintf("testing channel %d with model %s , info %+v ", channel.Id, testModel, logInfo))
|
||||
|
||||
priceData, err := helper.ModelPriceHelper(c, info, 0, int(request.MaxTokens))
|
||||
if err != nil {
|
||||
@@ -186,12 +189,14 @@ func buildTestRequest(model string) *dto.GeneralOpenAIRequest {
|
||||
return testRequest
|
||||
}
|
||||
// 并非Embedding 模型
|
||||
if strings.HasPrefix(model, "o1") || strings.HasPrefix(model, "o3") {
|
||||
if strings.HasPrefix(model, "o") {
|
||||
testRequest.MaxCompletionTokens = 10
|
||||
} else if strings.Contains(model, "thinking") {
|
||||
if !strings.Contains(model, "claude") {
|
||||
testRequest.MaxTokens = 50
|
||||
}
|
||||
} else if strings.Contains(model, "gemini") {
|
||||
testRequest.MaxTokens = 300
|
||||
} else {
|
||||
testRequest.MaxTokens = 10
|
||||
}
|
||||
|
||||
@@ -119,6 +119,9 @@ func FetchUpstreamModels(c *gin.Context) {
|
||||
baseURL = channel.GetBaseURL()
|
||||
}
|
||||
url := fmt.Sprintf("%s/v1/models", baseURL)
|
||||
if channel.Type == common.ChannelTypeGemini {
|
||||
url = fmt.Sprintf("%s/v1beta/openai/models", baseURL)
|
||||
}
|
||||
body, err := GetResponseBody("GET", url, channel, GetAuthHeader(channel.Key))
|
||||
if err != nil {
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
@@ -139,7 +142,11 @@ func FetchUpstreamModels(c *gin.Context) {
|
||||
|
||||
var ids []string
|
||||
for _, model := range result.Data {
|
||||
ids = append(ids, model.ID)
|
||||
id := model.ID
|
||||
if channel.Type == common.ChannelTypeGemini {
|
||||
id = strings.TrimPrefix(id, "models/")
|
||||
}
|
||||
ids = append(ids, id)
|
||||
}
|
||||
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
|
||||
@@ -196,7 +196,7 @@ func DeleteHistoryLogs(c *gin.Context) {
|
||||
})
|
||||
return
|
||||
}
|
||||
count, err := model.DeleteOldLog(targetTimestamp)
|
||||
count, err := model.DeleteOldLog(c.Request.Context(), targetTimestamp, 100)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"success": false,
|
||||
|
||||
@@ -4,12 +4,11 @@ import (
|
||||
"bytes"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/gorilla/websocket"
|
||||
"io"
|
||||
"log"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
constant2 "one-api/constant"
|
||||
"one-api/dto"
|
||||
"one-api/middleware"
|
||||
"one-api/model"
|
||||
@@ -19,12 +18,15 @@ import (
|
||||
"one-api/relay/helper"
|
||||
"one-api/service"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/gorilla/websocket"
|
||||
)
|
||||
|
||||
func relayHandler(c *gin.Context, relayMode int) *dto.OpenAIErrorWithStatusCode {
|
||||
var err *dto.OpenAIErrorWithStatusCode
|
||||
switch relayMode {
|
||||
case relayconstant.RelayModeImagesGenerations:
|
||||
case relayconstant.RelayModeImagesGenerations, relayconstant.RelayModeImagesEdits:
|
||||
err = relay.ImageHelper(c)
|
||||
case relayconstant.RelayModeAudioSpeech:
|
||||
fallthrough
|
||||
@@ -36,9 +38,31 @@ func relayHandler(c *gin.Context, relayMode int) *dto.OpenAIErrorWithStatusCode
|
||||
err = relay.RerankHelper(c, relayMode)
|
||||
case relayconstant.RelayModeEmbeddings:
|
||||
err = relay.EmbeddingHelper(c)
|
||||
case relayconstant.RelayModeResponses:
|
||||
err = relay.ResponsesHelper(c)
|
||||
default:
|
||||
err = relay.TextHelper(c)
|
||||
}
|
||||
|
||||
if constant2.ErrorLogEnabled && err != nil {
|
||||
// 保存错误日志到mysql中
|
||||
userId := c.GetInt("id")
|
||||
tokenName := c.GetString("token_name")
|
||||
modelName := c.GetString("original_model")
|
||||
tokenId := c.GetInt("token_id")
|
||||
userGroup := c.GetString("group")
|
||||
channelId := c.GetInt("channel_id")
|
||||
other := make(map[string]interface{})
|
||||
other["error_type"] = err.Error.Type
|
||||
other["error_code"] = err.Error.Code
|
||||
other["status_code"] = err.StatusCode
|
||||
other["channel_id"] = channelId
|
||||
other["channel_name"] = c.GetString("channel_name")
|
||||
other["channel_type"] = c.GetInt("channel_type")
|
||||
|
||||
model.RecordErrorLog(c, userId, channelId, modelName, tokenName, err.Error.Message, tokenId, 0, false, userGroup, other)
|
||||
}
|
||||
|
||||
return err
|
||||
}
|
||||
|
||||
|
||||
@@ -592,7 +592,14 @@ func UpdateSelf(c *gin.Context) {
|
||||
user.Password = "" // rollback to what it should be
|
||||
cleanUser.Password = ""
|
||||
}
|
||||
updatePassword := user.Password != ""
|
||||
updatePassword, err := checkUpdatePassword(user.OriginalPassword, user.Password, cleanUser.Id)
|
||||
if err != nil {
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"success": false,
|
||||
"message": err.Error(),
|
||||
})
|
||||
return
|
||||
}
|
||||
if err := cleanUser.Update(updatePassword); err != nil {
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"success": false,
|
||||
@@ -608,6 +615,23 @@ func UpdateSelf(c *gin.Context) {
|
||||
return
|
||||
}
|
||||
|
||||
func checkUpdatePassword(originalPassword string, newPassword string, userId int) (updatePassword bool, err error) {
|
||||
var currentUser *model.User
|
||||
currentUser, err = model.GetUserById(userId, true)
|
||||
if err != nil {
|
||||
return
|
||||
}
|
||||
if !common.ValidatePasswordAndHash(originalPassword, currentUser.Password) {
|
||||
err = fmt.Errorf("原密码错误")
|
||||
return
|
||||
}
|
||||
if newPassword == "" {
|
||||
return
|
||||
}
|
||||
updatePassword = true
|
||||
return
|
||||
}
|
||||
|
||||
func DeleteUser(c *gin.Context) {
|
||||
id, err := strconv.Atoi(c.Param("id"))
|
||||
if err != nil {
|
||||
|
||||
@@ -15,6 +15,8 @@ services:
|
||||
- SQL_DSN=root:123456@tcp(mysql:3306)/new-api # Point to the mysql service
|
||||
- REDIS_CONN_STRING=redis://redis
|
||||
- TZ=Asia/Shanghai
|
||||
- ERROR_LOG_ENABLED=true # 是否启用错误日志记录
|
||||
# - TIKTOKEN_CACHE_DIR=./tiktoken_cache # 如果需要使用tiktoken_cache,请取消注释
|
||||
# - SESSION_SECRET=random_string # 多机部署时设置,必须修改这个随机字符串!!!!!!!
|
||||
# - NODE_TYPE=slave # Uncomment for slave node in multi-node deployment
|
||||
# - SYNC_FREQUENCY=60 # Uncomment if regular database syncing is needed
|
||||
|
||||
@@ -1,3 +1,3 @@
|
||||
密钥为环境变量SESSION_SECRET
|
||||
|
||||

|
||||
密钥为环境变量SESSION_SECRET
|
||||
|
||||

|
||||
@@ -7,7 +7,7 @@ type ClaudeMetadata struct {
|
||||
}
|
||||
|
||||
type ClaudeMediaMessage struct {
|
||||
Type string `json:"type"`
|
||||
Type string `json:"type,omitempty"`
|
||||
Text *string `json:"text,omitempty"`
|
||||
Model string `json:"model,omitempty"`
|
||||
Source *ClaudeMessageSource `json:"source,omitempty"`
|
||||
@@ -50,6 +50,11 @@ func (c *ClaudeMediaMessage) GetStringContent() string {
|
||||
return ""
|
||||
}
|
||||
|
||||
func (c *ClaudeMediaMessage) GetJsonRowString() string {
|
||||
jsonContent, _ := json.Marshal(c)
|
||||
return string(jsonContent)
|
||||
}
|
||||
|
||||
func (c *ClaudeMediaMessage) SetContent(content any) {
|
||||
jsonContent, _ := json.Marshal(content)
|
||||
c.Content = jsonContent
|
||||
@@ -65,8 +70,9 @@ func (c *ClaudeMediaMessage) ParseMediaContent() []ClaudeMediaMessage {
|
||||
|
||||
type ClaudeMessageSource struct {
|
||||
Type string `json:"type"`
|
||||
MediaType string `json:"media_type"`
|
||||
Data any `json:"data"`
|
||||
MediaType string `json:"media_type,omitempty"`
|
||||
Data any `json:"data,omitempty"`
|
||||
Url string `json:"url,omitempty"`
|
||||
}
|
||||
|
||||
type ClaudeMessage struct {
|
||||
|
||||
19
dto/dalle.go
19
dto/dalle.go
@@ -1,14 +1,17 @@
|
||||
package dto
|
||||
|
||||
import "encoding/json"
|
||||
|
||||
type ImageRequest struct {
|
||||
Model string `json:"model"`
|
||||
Prompt string `json:"prompt" binding:"required"`
|
||||
N int `json:"n,omitempty"`
|
||||
Size string `json:"size,omitempty"`
|
||||
Quality string `json:"quality,omitempty"`
|
||||
ResponseFormat string `json:"response_format,omitempty"`
|
||||
Style string `json:"style,omitempty"`
|
||||
User string `json:"user,omitempty"`
|
||||
Model string `json:"model"`
|
||||
Prompt string `json:"prompt" binding:"required"`
|
||||
N int `json:"n,omitempty"`
|
||||
Size string `json:"size,omitempty"`
|
||||
Quality string `json:"quality,omitempty"`
|
||||
ResponseFormat string `json:"response_format,omitempty"`
|
||||
Style string `json:"style,omitempty"`
|
||||
User string `json:"user,omitempty"`
|
||||
ExtraFields json.RawMessage `json:"extra_fields,omitempty"`
|
||||
}
|
||||
|
||||
type ImageResponse struct {
|
||||
|
||||
@@ -18,39 +18,41 @@ type FormatJsonSchema struct {
|
||||
}
|
||||
|
||||
type GeneralOpenAIRequest struct {
|
||||
Model string `json:"model,omitempty"`
|
||||
Messages []Message `json:"messages,omitempty"`
|
||||
Prompt any `json:"prompt,omitempty"`
|
||||
Prefix any `json:"prefix,omitempty"`
|
||||
Suffix any `json:"suffix,omitempty"`
|
||||
Stream bool `json:"stream,omitempty"`
|
||||
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
|
||||
MaxTokens uint `json:"max_tokens,omitempty"`
|
||||
MaxCompletionTokens uint `json:"max_completion_tokens,omitempty"`
|
||||
ReasoningEffort string `json:"reasoning_effort,omitempty"`
|
||||
Temperature *float64 `json:"temperature,omitempty"`
|
||||
TopP float64 `json:"top_p,omitempty"`
|
||||
TopK int `json:"top_k,omitempty"`
|
||||
Stop any `json:"stop,omitempty"`
|
||||
N int `json:"n,omitempty"`
|
||||
Input any `json:"input,omitempty"`
|
||||
Instruction string `json:"instruction,omitempty"`
|
||||
Size string `json:"size,omitempty"`
|
||||
Functions any `json:"functions,omitempty"`
|
||||
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
|
||||
PresencePenalty float64 `json:"presence_penalty,omitempty"`
|
||||
ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
|
||||
EncodingFormat any `json:"encoding_format,omitempty"`
|
||||
Seed float64 `json:"seed,omitempty"`
|
||||
Tools []ToolCallRequest `json:"tools,omitempty"`
|
||||
ToolChoice any `json:"tool_choice,omitempty"`
|
||||
User string `json:"user,omitempty"`
|
||||
LogProbs bool `json:"logprobs,omitempty"`
|
||||
TopLogProbs int `json:"top_logprobs,omitempty"`
|
||||
Dimensions int `json:"dimensions,omitempty"`
|
||||
Modalities any `json:"modalities,omitempty"`
|
||||
Audio any `json:"audio,omitempty"`
|
||||
ExtraBody any `json:"extra_body,omitempty"`
|
||||
Model string `json:"model,omitempty"`
|
||||
Messages []Message `json:"messages,omitempty"`
|
||||
Prompt any `json:"prompt,omitempty"`
|
||||
Prefix any `json:"prefix,omitempty"`
|
||||
Suffix any `json:"suffix,omitempty"`
|
||||
Stream bool `json:"stream,omitempty"`
|
||||
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
|
||||
MaxTokens uint `json:"max_tokens,omitempty"`
|
||||
MaxCompletionTokens uint `json:"max_completion_tokens,omitempty"`
|
||||
ReasoningEffort string `json:"reasoning_effort,omitempty"`
|
||||
//Reasoning json.RawMessage `json:"reasoning,omitempty"`
|
||||
Temperature *float64 `json:"temperature,omitempty"`
|
||||
TopP float64 `json:"top_p,omitempty"`
|
||||
TopK int `json:"top_k,omitempty"`
|
||||
Stop any `json:"stop,omitempty"`
|
||||
N int `json:"n,omitempty"`
|
||||
Input any `json:"input,omitempty"`
|
||||
Instruction string `json:"instruction,omitempty"`
|
||||
Size string `json:"size,omitempty"`
|
||||
Functions any `json:"functions,omitempty"`
|
||||
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
|
||||
PresencePenalty float64 `json:"presence_penalty,omitempty"`
|
||||
ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
|
||||
EncodingFormat any `json:"encoding_format,omitempty"`
|
||||
Seed float64 `json:"seed,omitempty"`
|
||||
Tools []ToolCallRequest `json:"tools,omitempty"`
|
||||
ToolChoice any `json:"tool_choice,omitempty"`
|
||||
User string `json:"user,omitempty"`
|
||||
LogProbs bool `json:"logprobs,omitempty"`
|
||||
TopLogProbs int `json:"top_logprobs,omitempty"`
|
||||
Dimensions int `json:"dimensions,omitempty"`
|
||||
Modalities any `json:"modalities,omitempty"`
|
||||
Audio any `json:"audio,omitempty"`
|
||||
EnableThinking any `json:"enable_thinking,omitempty"` // ali
|
||||
ExtraBody any `json:"extra_body,omitempty"`
|
||||
}
|
||||
|
||||
type ToolCallRequest struct {
|
||||
@@ -111,6 +113,8 @@ type MediaContent struct {
|
||||
Text string `json:"text,omitempty"`
|
||||
ImageUrl any `json:"image_url,omitempty"`
|
||||
InputAudio any `json:"input_audio,omitempty"`
|
||||
File any `json:"file,omitempty"`
|
||||
VideoUrl any `json:"video_url,omitempty"`
|
||||
}
|
||||
|
||||
func (m *MediaContent) GetImageMedia() *MessageImageUrl {
|
||||
@@ -120,6 +124,20 @@ func (m *MediaContent) GetImageMedia() *MessageImageUrl {
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *MediaContent) GetInputAudio() *MessageInputAudio {
|
||||
if m.InputAudio != nil {
|
||||
return m.InputAudio.(*MessageInputAudio)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (m *MediaContent) GetFile() *MessageFile {
|
||||
if m.File != nil {
|
||||
return m.File.(*MessageFile)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
type MessageImageUrl struct {
|
||||
Url string `json:"url"`
|
||||
Detail string `json:"detail"`
|
||||
@@ -135,10 +153,22 @@ type MessageInputAudio struct {
|
||||
Format string `json:"format"`
|
||||
}
|
||||
|
||||
type MessageFile struct {
|
||||
FileName string `json:"filename,omitempty"`
|
||||
FileData string `json:"file_data,omitempty"`
|
||||
FileId string `json:"file_id,omitempty"`
|
||||
}
|
||||
|
||||
type MessageVideoUrl struct {
|
||||
Url string `json:"url"`
|
||||
}
|
||||
|
||||
const (
|
||||
ContentTypeText = "text"
|
||||
ContentTypeImageURL = "image_url"
|
||||
ContentTypeInputAudio = "input_audio"
|
||||
ContentTypeFile = "file"
|
||||
ContentTypeVideoUrl = "video_url" // 阿里百炼视频识别
|
||||
)
|
||||
|
||||
func (m *Message) GetPrefix() bool {
|
||||
@@ -192,6 +222,12 @@ func (m *Message) StringContent() string {
|
||||
return stringContent
|
||||
}
|
||||
|
||||
func (m *Message) SetNullContent() {
|
||||
m.Content = nil
|
||||
m.parsedStringContent = nil
|
||||
m.parsedContent = nil
|
||||
}
|
||||
|
||||
func (m *Message) SetStringContent(content string) {
|
||||
jsonContent, _ := json.Marshal(content)
|
||||
m.Content = jsonContent
|
||||
@@ -292,6 +328,39 @@ func (m *Message) ParseContent() []MediaContent {
|
||||
})
|
||||
}
|
||||
}
|
||||
case ContentTypeFile:
|
||||
if fileData, ok := contentItem["file"].(map[string]interface{}); ok {
|
||||
fileId, ok3 := fileData["file_id"].(string)
|
||||
if ok3 {
|
||||
contentList = append(contentList, MediaContent{
|
||||
Type: ContentTypeFile,
|
||||
File: &MessageFile{
|
||||
FileId: fileId,
|
||||
},
|
||||
})
|
||||
} else {
|
||||
fileName, ok1 := fileData["filename"].(string)
|
||||
fileDataStr, ok2 := fileData["file_data"].(string)
|
||||
if ok1 && ok2 {
|
||||
contentList = append(contentList, MediaContent{
|
||||
Type: ContentTypeFile,
|
||||
File: &MessageFile{
|
||||
FileName: fileName,
|
||||
FileData: fileDataStr,
|
||||
},
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
case ContentTypeVideoUrl:
|
||||
if videoUrl, ok := contentItem["video_url"].(string); ok {
|
||||
contentList = append(contentList, MediaContent{
|
||||
Type: ContentTypeVideoUrl,
|
||||
VideoUrl: &MessageVideoUrl{
|
||||
Url: videoUrl,
|
||||
},
|
||||
})
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -301,3 +370,49 @@ func (m *Message) ParseContent() []MediaContent {
|
||||
}
|
||||
return contentList
|
||||
}
|
||||
|
||||
type OpenAIResponsesRequest struct {
|
||||
Model string `json:"model"`
|
||||
Input json.RawMessage `json:"input,omitempty"`
|
||||
Include json.RawMessage `json:"include,omitempty"`
|
||||
Instructions json.RawMessage `json:"instructions,omitempty"`
|
||||
MaxOutputTokens uint `json:"max_output_tokens,omitempty"`
|
||||
Metadata json.RawMessage `json:"metadata,omitempty"`
|
||||
ParallelToolCalls bool `json:"parallel_tool_calls,omitempty"`
|
||||
PreviousResponseID string `json:"previous_response_id,omitempty"`
|
||||
Reasoning *Reasoning `json:"reasoning,omitempty"`
|
||||
ServiceTier string `json:"service_tier,omitempty"`
|
||||
Store bool `json:"store,omitempty"`
|
||||
Stream bool `json:"stream,omitempty"`
|
||||
Temperature float64 `json:"temperature,omitempty"`
|
||||
Text json.RawMessage `json:"text,omitempty"`
|
||||
ToolChoice json.RawMessage `json:"tool_choice,omitempty"`
|
||||
Tools []ResponsesToolsCall `json:"tools,omitempty"`
|
||||
TopP float64 `json:"top_p,omitempty"`
|
||||
Truncation string `json:"truncation,omitempty"`
|
||||
User string `json:"user,omitempty"`
|
||||
}
|
||||
|
||||
type Reasoning struct {
|
||||
Effort string `json:"effort,omitempty"`
|
||||
Summary string `json:"summary,omitempty"`
|
||||
}
|
||||
|
||||
type ResponsesToolsCall struct {
|
||||
Type string `json:"type"`
|
||||
// Web Search
|
||||
UserLocation json.RawMessage `json:"user_location,omitempty"`
|
||||
SearchContextSize string `json:"search_context_size,omitempty"`
|
||||
// File Search
|
||||
VectorStoreIds []string `json:"vector_store_ids,omitempty"`
|
||||
MaxNumResults uint `json:"max_num_results,omitempty"`
|
||||
Filters json.RawMessage `json:"filters,omitempty"`
|
||||
// Computer Use
|
||||
DisplayWidth uint `json:"display_width,omitempty"`
|
||||
DisplayHeight uint `json:"display_height,omitempty"`
|
||||
Environment string `json:"environment,omitempty"`
|
||||
// Function
|
||||
Name string `json:"name,omitempty"`
|
||||
Description string `json:"description,omitempty"`
|
||||
Parameters json.RawMessage `json:"parameters,omitempty"`
|
||||
}
|
||||
|
||||
@@ -1,5 +1,7 @@
|
||||
package dto
|
||||
|
 import "encoding/json"

 type SimpleResponse struct {
 	Usage `json:"usage"`
 	Error *OpenAIError `json:"error"`

@@ -166,10 +168,93 @@ type CompletionsStreamResponse struct {
 }

 type Usage struct {
-	PromptTokens         int `json:"prompt_tokens"`
-	CompletionTokens     int `json:"completion_tokens"`
-	TotalTokens          int `json:"total_tokens"`
-	PromptCacheHitTokens int `json:"prompt_cache_hit_tokens,omitempty"`
+	PromptTokens         int `json:"prompt_tokens"`
+	CompletionTokens     int `json:"completion_tokens"`
+	TotalTokens          int `json:"total_tokens"`
+	PromptCacheHitTokens int `json:"prompt_cache_hit_tokens,omitempty"`
+
+	PromptTokensDetails    InputTokenDetails  `json:"prompt_tokens_details"`
+	CompletionTokenDetails OutputTokenDetails `json:"completion_tokens_details"`
+	InputTokens            int                `json:"input_tokens"`
+	OutputTokens           int                `json:"output_tokens"`
+	InputTokensDetails     *InputTokenDetails `json:"input_tokens_details"`
 }

+type InputTokenDetails struct {
+	CachedTokens         int `json:"cached_tokens"`
+	CachedCreationTokens int `json:"-"`
+	TextTokens           int `json:"text_tokens"`
+	AudioTokens          int `json:"audio_tokens"`
+	ImageTokens          int `json:"image_tokens"`
+}
+
+type OutputTokenDetails struct {
+	TextTokens      int `json:"text_tokens"`
+	AudioTokens     int `json:"audio_tokens"`
+	ReasoningTokens int `json:"reasoning_tokens"`
+}
+
+type OpenAIResponsesResponse struct {
+	ID                 string               `json:"id"`
+	Object             string               `json:"object"`
+	CreatedAt          int                  `json:"created_at"`
+	Status             string               `json:"status"`
+	Error              *OpenAIError         `json:"error,omitempty"`
+	IncompleteDetails  *IncompleteDetails   `json:"incomplete_details,omitempty"`
+	Instructions       string               `json:"instructions"`
+	MaxOutputTokens    int                  `json:"max_output_tokens"`
+	Model              string               `json:"model"`
+	Output             []ResponsesOutput    `json:"output"`
+	ParallelToolCalls  bool                 `json:"parallel_tool_calls"`
+	PreviousResponseID string               `json:"previous_response_id"`
+	Reasoning          *Reasoning           `json:"reasoning"`
+	Store              bool                 `json:"store"`
+	Temperature        float64              `json:"temperature"`
+	ToolChoice         string               `json:"tool_choice"`
+	Tools              []ResponsesToolsCall `json:"tools"`
+	TopP               float64              `json:"top_p"`
+	Truncation         string               `json:"truncation"`
+	Usage              *Usage               `json:"usage"`
+	User               json.RawMessage      `json:"user"`
+	Metadata           json.RawMessage      `json:"metadata"`
+}
+
+type IncompleteDetails struct {
+	Reasoning string `json:"reasoning"`
+}
+
+type ResponsesOutput struct {
+	Type    string                   `json:"type"`
+	ID      string                   `json:"id"`
+	Status  string                   `json:"status"`
+	Role    string                   `json:"role"`
+	Content []ResponsesOutputContent `json:"content"`
+}
+
+type ResponsesOutputContent struct {
+	Type        string        `json:"type"`
+	Text        string        `json:"text"`
+	Annotations []interface{} `json:"annotations"`
+}
+
+const (
+	BuildInToolWebSearchPreview = "web_search_preview"
+	BuildInToolFileSearch       = "file_search"
+)
+
+const (
+	BuildInCallWebSearchCall = "web_search_call"
+)
+
+const (
+	ResponsesOutputTypeItemAdded = "response.output_item.added"
+	ResponsesOutputTypeItemDone  = "response.output_item.done"
+)
+
+// ResponsesStreamResponse handles /v1/responses streaming responses
+type ResponsesStreamResponse struct {
+	Type     string                   `json:"type"`
+	Response *OpenAIResponsesResponse `json:"response,omitempty"`
+	Delta    string                   `json:"delta,omitempty"`
+	Item     *ResponsesOutput         `json:"item,omitempty"`
+}
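To make the new streaming DTO concrete, here is an illustrative sketch (not repo code) of decoding one `/v1/responses` SSE event into a trimmed copy of `ResponsesStreamResponse`; the event payload shown is an assumption about a typical delta event:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirror of the ResponsesStreamResponse DTO above.
type ResponsesStreamResponse struct {
	Type  string `json:"type"`
	Delta string `json:"delta,omitempty"`
}

func main() {
	// A typical text-delta event body as carried in the SSE "data:" field.
	data := `{"type":"response.output_text.delta","delta":"Hello"}`
	var ev ResponsesStreamResponse
	if err := json.Unmarshal([]byte(data), &ev); err != nil {
		panic(err)
	}
	fmt.Println(ev.Type, ev.Delta) // response.output_text.delta Hello
}
```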
@@ -43,19 +43,6 @@ type RealtimeUsage struct {
 	OutputTokenDetails OutputTokenDetails `json:"output_token_details"`
 }

-type InputTokenDetails struct {
-	CachedTokens         int `json:"cached_tokens"`
-	CachedCreationTokens int
-	TextTokens           int `json:"text_tokens"`
-	AudioTokens          int `json:"audio_tokens"`
-	ImageTokens          int `json:"image_tokens"`
-}
-
-type OutputTokenDetails struct {
-	TextTokens  int `json:"text_tokens"`
-	AudioTokens int `json:"audio_tokens"`
-}
-
 type RealtimeSession struct {
 	Modalities   []string `json:"modalities"`
 	Instructions string   `json:"instructions"`
13 main.go

@@ -12,6 +12,7 @@ import (
 	"one-api/model"
 	"one-api/router"
 	"one-api/service"
+	"one-api/setting/operation_setting"
 	"os"
 	"strconv"

@@ -33,7 +34,7 @@ var indexPage []byte
 func main() {
 	err := godotenv.Load(".env")
 	if err != nil {
-		common.SysLog("Support for .env file is disabled")
+		common.SysLog("Support for .env file is disabled: " + err.Error())
 	}

 	common.LoadEnv()

@@ -51,6 +52,9 @@ func main() {
 	if err != nil {
 		common.FatalLog("failed to initialize database: " + err.Error())
 	}
+
+	model.CheckSetup()
+
 	// Initialize SQL Database
 	err = model.InitLogDB()
 	if err != nil {

@@ -69,10 +73,15 @@ func main() {
 		common.FatalLog("failed to initialize Redis: " + err.Error())
 	}

+	// Initialize model settings
+	operation_setting.InitRatioSettings()
 	// Initialize constants
 	constant.InitEnv()
 	// Initialize options
 	model.InitOptionMap()
+
+	service.InitTokenEncoders()
+
 	if common.RedisEnabled {
 		// for compatibility with old versions
 		common.MemoryCacheEnabled = true

@@ -126,8 +135,6 @@ func main() {
 		common.SysLog("pprof enabled")
 	}

-	service.InitTokenEncoders()
-
 	// Initialize HTTP server
 	server := gin.New()
 	server.Use(gin.CustomRecovery(func(c *gin.Context, err any) {
@@ -162,7 +162,7 @@ func getModelRequest(c *gin.Context) (*ModelRequest, bool, error) {
 		}
 		c.Set("platform", string(constant.TaskPlatformSuno))
 		c.Set("relay_mode", relayMode)
-	} else if !strings.HasPrefix(c.Request.URL.Path, "/v1/audio/transcriptions") {
+	} else if !strings.HasPrefix(c.Request.URL.Path, "/v1/audio/transcriptions") && !strings.HasPrefix(c.Request.URL.Path, "/v1/images/edits") {
 		err = common.UnmarshalBodyReusable(c, &modelRequest)
 	}
 	if err != nil {

@@ -184,6 +184,8 @@ func getModelRequest(c *gin.Context) (*ModelRequest, bool, error) {
 	}
 	if strings.HasPrefix(c.Request.URL.Path, "/v1/images/generations") {
 		modelRequest.Model = common.GetStringIfEmpty(modelRequest.Model, "dall-e")
+	} else if strings.HasPrefix(c.Request.URL.Path, "/v1/images/edits") {
+		modelRequest.Model = common.GetStringIfEmpty(modelRequest.Model, "gpt-image-1")
 	}
 	if strings.HasPrefix(c.Request.URL.Path, "/v1/audio") {
 		relayMode := relayconstant.RelayModeAudioSpeech

@@ -211,6 +213,7 @@ func SetupContextForSelectedChannel(c *gin.Context, channel *model.Channel, mode
 	c.Set("channel_id", channel.Id)
 	c.Set("channel_name", channel.Name)
 	c.Set("channel_type", channel.Type)
+	c.Set("channel_create_time", channel.CreatedTime)
 	c.Set("channel_setting", channel.GetSetting())
 	c.Set("param_override", channel.GetParamOverride())
 	if nil != channel.OpenAIOrganization && "" != *channel.OpenAIOrganization {
@@ -5,6 +5,7 @@ import (
 	"fmt"
 	"net/http"
 	"one-api/common"
+	"one-api/common/limiter"
 	"one-api/setting"
 	"strconv"
 	"time"

@@ -78,21 +79,9 @@ func redisRateLimitHandler(duration int64, totalMaxCount, successMaxCount int) g
 	ctx := context.Background()
 	rdb := common.RDB

-	// 1. Check the total request limit (skipped automatically when totalMaxCount is 0)
-	totalKey := fmt.Sprintf("rateLimit:%s:%s", ModelRequestRateLimitCountMark, userId)
-	allowed, err := checkRedisRateLimit(ctx, rdb, totalKey, totalMaxCount, duration)
-	if err != nil {
-		fmt.Println("failed to check total request limit:", err.Error())
-		abortWithOpenAiMessage(c, http.StatusInternalServerError, "rate_limit_check_failed")
-		return
-	}
-	if !allowed {
-		abortWithOpenAiMessage(c, http.StatusTooManyRequests, fmt.Sprintf("You have reached the total request limit: within %d minutes at most %d requests are allowed (failed requests included); please check that your requests are correct", setting.ModelRequestRateLimitDurationMinutes, totalMaxCount))
-	}
-
-	// 2. Check the successful request limit
+	// 1. Check the successful request limit
 	successKey := fmt.Sprintf("rateLimit:%s:%s", ModelRequestRateLimitSuccessCountMark, userId)
-	allowed, err = checkRedisRateLimit(ctx, rdb, successKey, successMaxCount, duration)
+	allowed, err := checkRedisRateLimit(ctx, rdb, successKey, successMaxCount, duration)
 	if err != nil {
 		fmt.Println("failed to check successful request limit:", err.Error())
 		abortWithOpenAiMessage(c, http.StatusInternalServerError, "rate_limit_check_failed")

@@ -103,8 +92,29 @@ func redisRateLimitHandler(duration int64, totalMaxCount, successMaxCount int) g
 		return
 	}

-	// 3. Record the total request (skipped automatically when totalMaxCount is 0)
-	recordRedisRequest(ctx, rdb, totalKey, totalMaxCount)
+	// 2. Check and record the total request count (skipped automatically when totalMaxCount is 0; uses a token-bucket limiter)
+	if totalMaxCount > 0 {
+		totalKey := fmt.Sprintf("rateLimit:%s", userId)
+		// initialize
+		tb := limiter.New(ctx, rdb)
+		allowed, err = tb.Allow(
+			ctx,
+			totalKey,
+			limiter.WithCapacity(int64(totalMaxCount)*duration),
+			limiter.WithRate(int64(totalMaxCount)),
+			limiter.WithRequested(duration),
+		)
+
+		if err != nil {
+			fmt.Println("failed to check total request limit:", err.Error())
+			abortWithOpenAiMessage(c, http.StatusInternalServerError, "rate_limit_check_failed")
+			return
+		}
+
+		if !allowed {
+			abortWithOpenAiMessage(c, http.StatusTooManyRequests, fmt.Sprintf("You have reached the total request limit: within %d minutes at most %d requests are allowed (failed requests included); please check that your requests are correct", setting.ModelRequestRateLimitDurationMinutes, totalMaxCount))
+		}
+	}

 	// 4. Process the request
 	c.Next()
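As a hedged illustration of how the token-bucket options above scale (the option names are taken from the diff; the concrete numbers are assumed), a limit of 60 requests per 180-second window yields a bucket of 60×180 = 10800 tokens refilled at 60 tokens/s, with each request consuming 180 tokens:

```go
package main

import "fmt"

func main() {
	var totalMaxCount, duration int64 = 60, 180 // assumed: 60 requests per 3 minutes

	capacity := totalMaxCount * duration // 10800 tokens the bucket can hold
	rate := totalMaxCount                // 60 tokens refilled per second
	requested := duration                // 180 tokens consumed per request

	// Burst: a full bucket admits capacity/requested requests at once.
	fmt.Println("burst:", capacity/requested) // 60
	// Steady state: rate*duration/requested requests per window.
	fmt.Println("per window:", rate*duration/requested) // 60
}
```

So the bucket allows a burst of up to the full per-window quota, then sustains exactly the configured average rate.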
@@ -84,9 +84,11 @@ func CacheGetRandomSatisfiedChannel(group string, model string, retry int) (*Cha
 	if !common.MemoryCacheEnabled {
 		return GetRandomSatisfiedChannel(group, model, retry)
 	}
+
 	channelSyncLock.RLock()
-	defer channelSyncLock.RUnlock()
 	channels := group2model2channels[group][model]
+	channelSyncLock.RUnlock()
+
 	if len(channels) == 0 {
 		return nil, errors.New("channel not found")
 	}
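The change above releases the read lock right after the map lookup instead of holding it with `defer` until the function returns. A minimal sketch of the pattern (hypothetical cache, not repo code):

```go
package main

import "sync"

var (
	mu    sync.RWMutex
	cache = map[string][]string{}
)

func lookup(key string) []string {
	mu.RLock()
	values := cache[key] // read under the lock
	mu.RUnlock()         // release before doing any further work
	// Safe as long as writers replace cached slices rather than mutate them in place.
	return values
}

func main() { _ = lookup("example") }
```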
@@ -119,10 +119,15 @@ func SearchChannels(keyword string, group string, model string, idSort bool) ([]

 	// For PostgreSQL, use double quotes
 	if common.UsingPostgreSQL {
 		keyCol = `"key"`
 		modelsCol = `"models"`
 	}

+	baseURLCol := "`base_url`"
+	// For PostgreSQL, use double quotes
+	if common.UsingPostgreSQL {
+		baseURLCol = `"base_url"`
+	}
+
 	order := "priority desc"
 	if idSort {
 		order = "id desc"

@@ -142,11 +147,11 @@ func SearchChannels(keyword string, group string, model string, idSort bool) ([]
 		// sqlite, PostgreSQL
 		groupCondition = `(',' || ` + groupCol + ` || ',') LIKE ?`
 	}
-	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
-	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%", "%,"+group+",%")
+	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
+	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%", "%,"+group+",%")
 } else {
-	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + " LIKE ?"
-	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%")
+	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + " LIKE ?"
+	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%")
 }

 // Execute the query

@@ -450,6 +455,12 @@ func SearchTags(keyword string, group string, model string, idSort bool) ([]*str
 		modelsCol = `"models"`
 	}

+	baseURLCol := "`base_url`"
+	// For PostgreSQL, use double quotes
+	if common.UsingPostgreSQL {
+		baseURLCol = `"base_url"`
+	}
+
 	order := "priority desc"
 	if idSort {
 		order = "id desc"

@@ -469,11 +480,11 @@ func SearchTags(keyword string, group string, model string, idSort bool) ([]*str
 		// sqlite, PostgreSQL
 		groupCondition = `(',' || ` + groupCol + ` || ',') LIKE ?`
 	}
-	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
-	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%", "%,"+group+",%")
+	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
+	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%", "%,"+group+",%")
 } else {
-	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + " LIKE ?"
-	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%")
+	whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + " LIKE ?"
+	args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%")
 }

 subQuery := baseQuery.Where(whereClause, args...).
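For illustration (assumed MySQL column quoting, not repo code), this is the clause the search now builds when the keyword is part of an upstream URL, so searching by base URL finds the channel:

```go
package main

import "fmt"

func main() {
	keyCol, modelsCol, baseURLCol := "`key`", "`models`", "`base_url`"
	whereClause := "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + " LIKE ?"
	fmt.Println(whereClause)
	// (id = ? OR name LIKE ? OR `key` = ? OR `base_url` LIKE ?) AND `models` LIKE ?
	// args for keyword "api.example.com", empty model filter:
	// 0, "%api.example.com%", "api.example.com", "%api.example.com%", "%%"
}
```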
55 model/log.go

@@ -1,6 +1,7 @@
 package model

 import (
+	"context"
 	"fmt"
 	"one-api/common"
 	"os"

@@ -40,6 +41,7 @@ const (
 	LogTypeConsume
 	LogTypeManage
 	LogTypeSystem
+	LogTypeError
 )

 func formatUserLogs(logs []*Log) {

@@ -88,6 +90,35 @@ func RecordLog(userId int, logType int, content string) {
 	}
 }

+func RecordErrorLog(c *gin.Context, userId int, channelId int, modelName string, tokenName string, content string, tokenId int, useTimeSeconds int,
+	isStream bool, group string, other map[string]interface{}) {
+	common.LogInfo(c, fmt.Sprintf("record error log: userId=%d, channelId=%d, modelName=%s, tokenName=%s, content=%s", userId, channelId, modelName, tokenName, content))
+	username := c.GetString("username")
+	otherStr := common.MapToJsonStr(other)
+	log := &Log{
+		UserId:           userId,
+		Username:         username,
+		CreatedAt:        common.GetTimestamp(),
+		Type:             LogTypeError,
+		Content:          content,
+		PromptTokens:     0,
+		CompletionTokens: 0,
+		TokenName:        tokenName,
+		ModelName:        modelName,
+		Quota:            0,
+		ChannelId:        channelId,
+		TokenId:          tokenId,
+		UseTime:          useTimeSeconds,
+		IsStream:         isStream,
+		Group:            group,
+		Other:            otherStr,
+	}
+	err := LOG_DB.Create(log).Error
+	if err != nil {
+		common.LogError(c, "failed to record log: "+err.Error())
+	}
+}
+
 func RecordConsumeLog(c *gin.Context, userId int, channelId int, promptTokens int, completionTokens int,
 	modelName string, tokenName string, quota int, content string, tokenId int, userQuota int, useTimeSeconds int,
 	isStream bool, group string, other map[string]interface{}) {

@@ -310,7 +341,25 @@ func SumUsedToken(logType int, startTimestamp int64, endTimestamp int64, modelNa
 	return token
 }

-func DeleteOldLog(targetTimestamp int64) (int64, error) {
-	result := LOG_DB.Where("created_at < ?", targetTimestamp).Delete(&Log{})
-	return result.RowsAffected, result.Error
+func DeleteOldLog(ctx context.Context, targetTimestamp int64, limit int) (int64, error) {
+	var total int64 = 0
+
+	for {
+		if nil != ctx.Err() {
+			return total, ctx.Err()
+		}
+
+		result := LOG_DB.Where("created_at < ?", targetTimestamp).Limit(limit).Delete(&Log{})
+		if nil != result.Error {
+			return total, result.Error
+		}
+
+		total += result.RowsAffected
+
+		if result.RowsAffected < int64(limit) {
+			break
+		}
+	}
+
+	return total, nil
 }
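A hypothetical caller (not from this diff) shows why `DeleteOldLog` now takes a context and a batch size: deletion proceeds in bounded chunks and stops cleanly when the deadline expires instead of issuing one huge `DELETE`:

```go
package main

import (
	"context"
	"log"
	"time"

	"one-api/model" // assumes this repo's module path
)

func main() {
	// The loop in DeleteOldLog checks ctx.Err() between batches and returns
	// early once this deadline passes.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()

	cutoff := time.Now().AddDate(0, 0, -30).Unix()        // drop logs older than 30 days
	deleted, err := model.DeleteOldLog(ctx, cutoff, 1000) // at most 1000 rows per batch
	if err != nil {
		log.Printf("pruned %d rows before stopping: %v", deleted, err)
		return
	}
	log.Printf("pruned %d rows", deleted)
}
```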
@@ -56,7 +56,7 @@ func createRootAccountIfNeed() error {
 	return nil
 }

-func checkSetup() {
+func CheckSetup() {
 	setup := GetSetup()
 	if setup == nil {
 		// No setup record exists, check if we have a root user

@@ -244,7 +244,6 @@ func migrateDB() error {
 	}
 	err = DB.AutoMigrate(&Setup{})
 	common.SysLog("database migrated")
-	checkSetup()
 	//err = createRootAccountIfNeed()
 	return err
 }
@@ -18,6 +18,7 @@ type User struct {
 	Id       int    `json:"id"`
 	Username string `json:"username" gorm:"unique;index" validate:"max=12"`
 	Password string `json:"password" gorm:"not null;" validate:"min=8,max=20"`
+	OriginalPassword string `json:"original_password" gorm:"-:all"` // this field is only for password change verification, don't save it to the database!
 	DisplayName string `json:"display_name" gorm:"index" validate:"max=20"`
 	Role        int    `json:"role" gorm:"type:int;default:1"`   // admin, common
 	Status      int    `json:"status" gorm:"type:int;default:1"` // enabled, disabled

@@ -108,7 +109,7 @@ func CheckUserExistOrDeleted(username string, email string) (bool, error) {

 func GetMaxUserId() int {
 	var user User
-	DB.Last(&user)
+	DB.Unscoped().Last(&user)
 	return user.Id
 }
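A hedged sketch of the `GetMaxUserId` change (assuming `User` uses GORM soft deletes): `Unscoped()` drops the implicit `deleted_at IS NULL` filter, so the maximum ID also reflects soft-deleted users and their IDs are not handed out again:

```go
func maxUserID(db *gorm.DB) int {
	var u User
	db.Unscoped().Last(&u) // ORDER BY id DESC LIMIT 1, soft-deleted rows included
	return u.Id
}
```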
@@ -1,11 +1,12 @@
 package channel

 import (
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"
 	relaycommon "one-api/relay/common"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor interface {

@@ -18,6 +19,7 @@ type Adaptor interface {
 	ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error)
 	ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error)
 	ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error)
+	ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error)
 	DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error)
 	DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode)
 	GetModelList() []string
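Because `ConvertOpenAIResponsesRequest` is now part of the interface, every adaptor must provide it. As the per-channel diffs below show, channels without Responses API support all add the same placeholder; a minimal conforming stub looks like this:

```go
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
	// TODO implement me
	return nil, errors.New("not implemented")
}
```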
@@ -3,7 +3,6 @@ package ali
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"

@@ -11,6 +10,8 @@ import (
 	"one-api/relay/channel/openai"
 	relaycommon "one-api/relay/common"
 	"one-api/relay/constant"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -79,6 +80,11 @@ func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInf
 	return nil, errors.New("not implemented")
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -1,7 +1,12 @@
 package ali

 var ModelList = []string{
-	"qwen-turbo", "qwen-plus", "qwen-max", "qwen-max-longcontext",
+	"qwen-turbo",
+	"qwen-plus",
+	"qwen-max",
+	"qwen-max-longcontext",
+	"qwq-32b",
+	"qwen3-235b-a22b",
 	"text-embedding-v1",
 }
@@ -2,13 +2,14 @@ package aws

 import (
 	"errors"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"
 	"one-api/relay/channel/claude"
 	relaycommon "one-api/relay/common"
 	"one-api/setting/model_setting"
+
+	"github.com/gin-gonic/gin"
 )

 const (

@@ -74,6 +75,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return nil, errors.New("not implemented")
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return nil, nil
 }
@@ -3,7 +3,6 @@ package baidu
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"

@@ -11,6 +10,8 @@ import (
 	relaycommon "one-api/relay/common"
 	"one-api/relay/constant"
 	"strings"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -130,6 +131,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return baiduEmbeddingRequest, nil
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -3,13 +3,14 @@ package baidu_v2
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"
 	"one-api/relay/channel"
 	"one-api/relay/channel/openai"
 	relaycommon "one-api/relay/common"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -60,6 +61,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return nil, errors.New("not implemented")
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -3,7 +3,6 @@ package claude
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"

@@ -11,6 +10,8 @@ import (
 	relaycommon "one-api/relay/common"
 	"one-api/setting/model_setting"
 	"strings"
+
+	"github.com/gin-gonic/gin"
 )

 const (

@@ -84,6 +85,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return nil, errors.New("not implemented")
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -24,6 +24,8 @@ func stopReasonClaude2OpenAI(reason string) string {
 		return "stop"
 	case "max_tokens":
 		return "max_tokens"
+	case "tool_use":
+		return "tool_calls"
 	default:
 		return reason
 	}

@@ -298,6 +300,13 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *dto.ClaudeResponse
 	response.Model = claudeResponse.Model
 	response.Choices = make([]dto.ChatCompletionsStreamResponseChoice, 0)
 	tools := make([]dto.ToolCallResponse, 0)
+	fcIdx := 0
+	if claudeResponse.Index != nil {
+		fcIdx = *claudeResponse.Index - 1
+		if fcIdx < 0 {
+			fcIdx = 0
+		}
+	}
 	var choice dto.ChatCompletionsStreamResponseChoice
 	if reqMode == RequestModeCompletion {
 		choice.Delta.SetContentString(claudeResponse.Completion)

@@ -317,8 +326,9 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *dto.ClaudeResponse
 		//choice.Delta.SetContentString(claudeResponse.ContentBlock.Text)
 		if claudeResponse.ContentBlock.Type == "tool_use" {
 			tools = append(tools, dto.ToolCallResponse{
-				ID:   claudeResponse.ContentBlock.Id,
-				Type: "function",
+				Index: common.GetPointer(fcIdx),
+				ID:    claudeResponse.ContentBlock.Id,
+				Type:  "function",
 				Function: dto.FunctionResponse{
 					Name:      claudeResponse.ContentBlock.Name,
 					Arguments: "",

@@ -330,11 +340,12 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *dto.ClaudeResponse
 		}
 	} else if claudeResponse.Type == "content_block_delta" {
 		if claudeResponse.Delta != nil {
 			choice.Index = *claudeResponse.Index
 			choice.Delta.Content = claudeResponse.Delta.Text
 			switch claudeResponse.Delta.Type {
 			case "input_json_delta":
 				tools = append(tools, dto.ToolCallResponse{
-					Type: "function",
+					Type:  "function",
+					Index: common.GetPointer(fcIdx),
 					Function: dto.FunctionResponse{
 						Arguments: *claudeResponse.Delta.PartialJson,
 					},
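An illustrative walkthrough of the clamped `fcIdx` computation above (the rationale is an assumption based on Claude's streaming format, not stated in the diff): Claude numbers content blocks from 0, where block 0 is typically the text block and `tool_use` blocks start at 1, while OpenAI `tool_calls` index from 0, hence `index - 1` clamped at zero:

```go
package main

import "fmt"

func toolCallIndex(claudeIndex *int) int {
	idx := 0
	if claudeIndex != nil {
		idx = *claudeIndex - 1 // shift past the leading text block
		if idx < 0 {
			idx = 0 // a tool_use arriving in block 0 still maps to tool_call 0
		}
	}
	return idx
}

func main() {
	one, two := 1, 2
	fmt.Println(toolCallIndex(&one), toolCallIndex(&two), toolCallIndex(nil)) // 0 1 0
}
```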
@@ -55,6 +55,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 	}
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -3,13 +3,14 @@ package cohere
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"
 	"one-api/relay/channel"
 	relaycommon "one-api/relay/common"
 	"one-api/relay/constant"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -52,6 +53,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 	return requestOpenAI2Cohere(*request), nil
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -3,7 +3,6 @@ package deepseek
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"

@@ -11,6 +10,9 @@ import (
 	"one-api/relay/channel/openai"
 	relaycommon "one-api/relay/common"
 	"one-api/relay/constant"
+	"strings"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -36,9 +38,13 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
 }

 func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
+	fimBaseUrl := info.BaseUrl
+	if !strings.HasSuffix(info.BaseUrl, "/beta") {
+		fimBaseUrl += "/beta"
+	}
 	switch info.RelayMode {
 	case constant.RelayModeCompletions:
-		return fmt.Sprintf("%s/beta/completions", info.BaseUrl), nil
+		return fmt.Sprintf("%s/completions", fimBaseUrl), nil
 	default:
 		return fmt.Sprintf("%s/v1/chat/completions", info.BaseUrl), nil
 	}

@@ -66,6 +72,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return nil, errors.New("not implemented")
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
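A short illustration of the FIM URL fix above (base URLs are assumed values): the `/beta` suffix is now appended only when the configured base URL doesn't already end in it, so the old `.../beta/beta/completions` double-suffix case can no longer occur:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	for _, base := range []string{"https://api.deepseek.com", "https://api.deepseek.com/beta"} {
		fim := base
		if !strings.HasSuffix(base, "/beta") {
			fim += "/beta"
		}
		fmt.Println(fim + "/completions")
	}
	// Both iterations print: https://api.deepseek.com/beta/completions
}
```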
@@ -3,12 +3,13 @@ package dify
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"
 	"one-api/relay/channel"
 	relaycommon "one-api/relay/common"
+
+	"github.com/gin-gonic/gin"
 )

 const (

@@ -86,6 +87,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return nil, errors.New("not implemented")
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -1,7 +1,6 @@
 package dify

 import (
-	"bufio"
 	"bytes"
 	"encoding/base64"
 	"encoding/json"

@@ -213,12 +212,8 @@ func streamResponseDify2OpenAI(difyResponse DifyChunkChatCompletionResponse) *dt
 func difyStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
 	var responseText string
 	usage := &dto.Usage{}
-	scanner := bufio.NewScanner(resp.Body)
-	scanner.Split(bufio.ScanLines)
 	var nodeToken int
-
 	helper.SetEventStreamHeaders(c)
-
 	helper.StreamScannerHandler(c, resp, info, func(data string) bool {
 		var difyResponse DifyChunkChatCompletionResponse
 		err := json.Unmarshal([]byte(data), &difyResponse)

@@ -247,13 +242,10 @@ func difyStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
 		}
 		return true
 	})
-	if err := scanner.Err(); err != nil {
-		common.SysError("error reading stream: " + err.Error())
-	}
-	helper.Done(c)
 	err := resp.Body.Close()
 	if err != nil {
-		//return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
+		// return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
 		common.SysError("close_response_body_failed: " + err.Error())
 	}
 	if usage.TotalTokens == 0 {
@@ -12,7 +12,6 @@ import (
 	relaycommon "one-api/relay/common"
 	"one-api/service"
 	"one-api/setting/model_setting"
-
 	"strings"

 	"github.com/gin-gonic/gin"

@@ -70,6 +69,16 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
 }

 func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
+
+	if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
+		// suffix -thinking and -nothinking
+		if strings.HasSuffix(info.OriginModelName, "-thinking") {
+			info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-thinking")
+		} else if strings.HasSuffix(info.OriginModelName, "-nothinking") {
+			info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-nothinking")
+		}
+	}
+
 	version := model_setting.GetGeminiVersionSetting(info.UpstreamModelName)

 	if strings.HasPrefix(info.UpstreamModelName, "imagen") {

@@ -99,11 +108,13 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 	if request == nil {
 		return nil, errors.New("request is nil")
 	}
-	ai, err := CovertGemini2OpenAI(*request)
+
+	geminiRequest, err := CovertGemini2OpenAI(*request, info)
 	if err != nil {
 		return nil, err
 	}
-	return ai, nil
+
+	return geminiRequest, nil
 }

 func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dto.RerankRequest) (any, error) {

@@ -144,6 +155,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return geminiRequest, nil
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }

@@ -165,6 +181,18 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
 	} else {
 		err, usage = GeminiChatHandler(c, resp, info)
 	}
+
+	//if usage.(*dto.Usage).CompletionTokenDetails.ReasoningTokens > 100 {
+	//	// If thinking tokens are produced without -thinking being requested, bill according to the thinking model
+	//	if !strings.HasSuffix(info.OriginModelName, "-thinking") &&
+	//		!strings.HasSuffix(info.OriginModelName, "-nothinking") {
+	//		thinkingModelName := info.OriginModelName + "-thinking"
+	//		if operation_setting.SelfUseModeEnabled || helper.ContainPriceOrRatio(thinkingModelName) {
+	//			info.OriginModelName = thinkingModelName
+	//		}
+	//	}
+	//}
+
 	return
 }
@@ -8,6 +8,15 @@ type GeminiChatRequest struct {
 	SystemInstructions *GeminiChatContent `json:"system_instruction,omitempty"`
 }

+type GeminiThinkingConfig struct {
+	IncludeThoughts bool `json:"includeThoughts,omitempty"`
+	ThinkingBudget  *int `json:"thinkingBudget,omitempty"`
+}
+
+func (c *GeminiThinkingConfig) SetThinkingBudget(budget int) {
+	c.ThinkingBudget = &budget
+}
+
 type GeminiInlineData struct {
 	MimeType string `json:"mimeType"`
 	Data     string `json:"data"`

@@ -71,15 +80,17 @@ type GeminiChatTool struct {
 }

 type GeminiChatGenerationConfig struct {
-	Temperature      *float64 `json:"temperature,omitempty"`
-	TopP             float64  `json:"topP,omitempty"`
-	TopK             float64  `json:"topK,omitempty"`
-	MaxOutputTokens  uint     `json:"maxOutputTokens,omitempty"`
-	CandidateCount   int      `json:"candidateCount,omitempty"`
-	StopSequences    []string `json:"stopSequences,omitempty"`
-	ResponseMimeType string   `json:"responseMimeType,omitempty"`
-	ResponseSchema   any      `json:"responseSchema,omitempty"`
-	Seed             int64    `json:"seed,omitempty"`
+	Temperature        *float64              `json:"temperature,omitempty"`
+	TopP               float64               `json:"topP,omitempty"`
+	TopK               float64               `json:"topK,omitempty"`
+	MaxOutputTokens    uint                  `json:"maxOutputTokens,omitempty"`
+	CandidateCount     int                   `json:"candidateCount,omitempty"`
+	StopSequences      []string              `json:"stopSequences,omitempty"`
+	ResponseMimeType   string                `json:"responseMimeType,omitempty"`
+	ResponseSchema     any                   `json:"responseSchema,omitempty"`
+	Seed               int64                 `json:"seed,omitempty"`
+	ResponseModalities []string              `json:"responseModalities,omitempty"`
+	ThinkingConfig     *GeminiThinkingConfig `json:"thinkingConfig,omitempty"`
 }

 type GeminiChatCandidate struct {

@@ -108,6 +119,7 @@ type GeminiUsageMetadata struct {
 	PromptTokenCount     int `json:"promptTokenCount"`
 	CandidatesTokenCount int `json:"candidatesTokenCount"`
 	TotalTokenCount      int `json:"totalTokenCount"`
+	ThoughtsTokenCount   int `json:"thoughtsTokenCount"`
 }

 // Imagen related structs
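For reference, marshaling the new `GenerationConfig` fields produces the camelCase payload Gemini expects. A self-contained sketch with trimmed copies of the DTOs above (field values are assumptions for illustration):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type ThinkingConfig struct {
	IncludeThoughts bool `json:"includeThoughts,omitempty"`
	ThinkingBudget  *int `json:"thinkingBudget,omitempty"`
}

type GenerationConfig struct {
	MaxOutputTokens    uint            `json:"maxOutputTokens,omitempty"`
	ResponseModalities []string        `json:"responseModalities,omitempty"`
	ThinkingConfig     *ThinkingConfig `json:"thinkingConfig,omitempty"`
}

func main() {
	budget := 1024
	cfg := GenerationConfig{
		MaxOutputTokens:    2048,
		ResponseModalities: []string{"TEXT", "IMAGE"},
		ThinkingConfig:     &ThinkingConfig{IncludeThoughts: true, ThinkingBudget: &budget},
	}
	b, _ := json.Marshal(cfg)
	fmt.Println(string(b))
	// {"maxOutputTokens":2048,"responseModalities":["TEXT","IMAGE"],"thinkingConfig":{"includeThoughts":true,"thinkingBudget":1024}}
}
```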
@@ -19,11 +19,10 @@ import (
 )

 // Setting safety to the lowest possible values since Gemini is already powerless enough
-func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatRequest, error) {
+func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest, info *relaycommon.RelayInfo) (*GeminiChatRequest, error) {
+
 	geminiRequest := GeminiChatRequest{
 		Contents: make([]GeminiChatContent, 0, len(textRequest.Messages)),
-		//SafetySettings: []GeminiChatSafetySettings{},
 		GenerationConfig: GeminiChatGenerationConfig{
 			Temperature: textRequest.Temperature,
 			TopP:        textRequest.TopP,

@@ -32,6 +31,30 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
 		},
 	}

+	if model_setting.IsGeminiModelSupportImagine(info.UpstreamModelName) {
+		geminiRequest.GenerationConfig.ResponseModalities = []string{
+			"TEXT",
+			"IMAGE",
+		}
+	}
+
+	if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
+		if strings.HasSuffix(info.OriginModelName, "-thinking") {
+			budgetTokens := model_setting.GetGeminiSettings().ThinkingAdapterBudgetTokensPercentage * float64(geminiRequest.GenerationConfig.MaxOutputTokens)
+			if budgetTokens == 0 || budgetTokens > 24576 {
+				budgetTokens = 24576
+			}
+			geminiRequest.GenerationConfig.ThinkingConfig = &GeminiThinkingConfig{
+				ThinkingBudget:  common.GetPointer(int(budgetTokens)),
+				IncludeThoughts: true,
+			}
+		} else if strings.HasSuffix(info.OriginModelName, "-nothinking") {
+			geminiRequest.GenerationConfig.ThinkingConfig = &GeminiThinkingConfig{
+				ThinkingBudget: common.GetPointer(0),
+			}
+		}
+	}
+
 	safetySettings := make([]GeminiChatSafetySettings, 0, len(SafetySettingList))
 	for _, category := range SafetySettingList {
 		safetySettings = append(safetySettings, GeminiChatSafetySettings{
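A worked example of the thinking-budget computation above (the 0.8 percentage and the MaxOutputTokens values are assumptions): the budget is a percentage of `MaxOutputTokens`, clamped to the 24576 ceiling, with the ceiling also used as the fallback when `MaxOutputTokens` is unset:

```go
package main

import "fmt"

func main() {
	percentage := 0.8 // assumed ThinkingAdapterBudgetTokensPercentage
	for _, maxOut := range []float64{0, 8192, 65536} {
		budget := percentage * maxOut
		if budget == 0 || budget > 24576 {
			budget = 24576 // fallback when unset, ceiling when too large
		}
		fmt.Println(int(budget))
	}
	// Output: 24576 (unset), 6553 (0.8*8192), 24576 (clamped from 52428)
}
```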
@@ -56,6 +79,7 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
 			continue
 		}
 		if tool.Function.Parameters != nil {
+
 			params, ok := tool.Function.Parameters.(map[string]interface{})
 			if ok {
 				if props, hasProps := params["properties"].(map[string]interface{}); hasProps {

@@ -65,6 +89,9 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
 				}
 			}
 		}
+		// Clean the parameters before appending
+		cleanedParams := cleanFunctionParameters(tool.Function.Parameters)
+		tool.Function.Parameters = cleanedParams
 		functions = append(functions, tool.Function)
 	}
 	if codeExecution {

@@ -86,11 +113,11 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
 		// json_data, _ := json.Marshal(geminiRequest.Tools)
 		// common.SysLog("tools_json: " + string(json_data))
 	} else if textRequest.Functions != nil {
-		geminiRequest.Tools = []GeminiChatTool{
-			{
-				FunctionDeclarations: textRequest.Functions,
-			},
-		}
+		//geminiRequest.Tools = []GeminiChatTool{
+		//	{
+		//		FunctionDeclarations: textRequest.Functions,
+		//	},
+		//}
 	}

 	if textRequest.ResponseFormat != nil && (textRequest.ResponseFormat.Type == "json_schema" || textRequest.ResponseFormat.Type == "json_object") {

@@ -204,6 +231,34 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
 					},
 				})
 			}
+		} else if part.Type == dto.ContentTypeFile {
+			if part.GetFile().FileId != "" {
+				return nil, fmt.Errorf("only base64 file is supported in gemini")
+			}
+			format, base64String, err := service.DecodeBase64FileData(part.GetFile().FileData)
+			if err != nil {
+				return nil, fmt.Errorf("decode base64 file data failed: %s", err.Error())
+			}
+			parts = append(parts, GeminiPart{
+				InlineData: &GeminiInlineData{
+					MimeType: format,
+					Data:     base64String,
+				},
+			})
+		} else if part.Type == dto.ContentTypeInputAudio {
+			if part.GetInputAudio().Data == "" {
+				return nil, fmt.Errorf("only base64 audio is supported in gemini")
+			}
+			format, base64String, err := service.DecodeBase64FileData(part.GetInputAudio().Data)
+			if err != nil {
+				return nil, fmt.Errorf("decode base64 audio data failed: %s", err.Error())
+			}
+			parts = append(parts, GeminiPart{
+				InlineData: &GeminiInlineData{
+					MimeType: format,
+					Data:     base64String,
+				},
+			})
+		}
 	}

@@ -229,6 +284,102 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
 	return &geminiRequest, nil
 }
+// cleanFunctionParameters recursively removes unsupported fields from Gemini function parameters.
+func cleanFunctionParameters(params interface{}) interface{} {
+	if params == nil {
+		return nil
+	}
+
+	paramMap, ok := params.(map[string]interface{})
+	if !ok {
+		// Not a map, return as is (e.g., could be an array or primitive)
+		return params
+	}
+
+	// Create a copy to avoid modifying the original
+	cleanedMap := make(map[string]interface{})
+	for k, v := range paramMap {
+		cleanedMap[k] = v
+	}
+
+	// Remove unsupported root-level fields
+	delete(cleanedMap, "default")
+	delete(cleanedMap, "exclusiveMaximum")
+	delete(cleanedMap, "exclusiveMinimum")
+	delete(cleanedMap, "$schema")
+	delete(cleanedMap, "additionalProperties")
+
+	// Clean properties
+	if props, ok := cleanedMap["properties"].(map[string]interface{}); ok && props != nil {
+		cleanedProps := make(map[string]interface{})
+		for propName, propValue := range props {
+			propMap, ok := propValue.(map[string]interface{})
+			if !ok {
+				cleanedProps[propName] = propValue // Keep non-map properties
+				continue
+			}
+
+			// Create a copy of the property map
+			cleanedPropMap := make(map[string]interface{})
+			for k, v := range propMap {
+				cleanedPropMap[k] = v
+			}
+
+			// Remove unsupported fields
+			delete(cleanedPropMap, "default")
+			delete(cleanedPropMap, "exclusiveMaximum")
+			delete(cleanedPropMap, "exclusiveMinimum")
+			delete(cleanedPropMap, "$schema")
+			delete(cleanedPropMap, "additionalProperties")
+
+			// Check and clean 'format' for string types
+			if propType, typeExists := cleanedPropMap["type"].(string); typeExists && propType == "string" {
+				if formatValue, formatExists := cleanedPropMap["format"].(string); formatExists {
+					if formatValue != "enum" && formatValue != "date-time" {
+						delete(cleanedPropMap, "format")
+					}
+				}
+			}
+
+			// Recursively clean nested properties within this property if it's an object/array
+			// Check the type before recursing
+			if propType, typeExists := cleanedPropMap["type"].(string); typeExists && (propType == "object" || propType == "array") {
+				cleanedProps[propName] = cleanFunctionParameters(cleanedPropMap)
+			} else {
+				cleanedProps[propName] = cleanedPropMap // Assign the cleaned map back if not recursing
+			}
+
+		}
+		cleanedMap["properties"] = cleanedProps
+	}
+
+	// Recursively clean items in arrays if needed (e.g., type: array, items: { ... })
+	if items, ok := cleanedMap["items"].(map[string]interface{}); ok && items != nil {
+		cleanedMap["items"] = cleanFunctionParameters(items)
+	}
+	// Also handle items if it's an array of schemas
+	if itemsArray, ok := cleanedMap["items"].([]interface{}); ok {
+		cleanedItemsArray := make([]interface{}, len(itemsArray))
+		for i, item := range itemsArray {
+			cleanedItemsArray[i] = cleanFunctionParameters(item)
+		}
+		cleanedMap["items"] = cleanedItemsArray
+	}
+
+	// Recursively clean other schema composition keywords if necessary
+	for _, field := range []string{"allOf", "anyOf", "oneOf"} {
+		if nested, ok := cleanedMap[field].([]interface{}); ok {
+			cleanedNested := make([]interface{}, len(nested))
+			for i, item := range nested {
+				cleanedNested[i] = cleanFunctionParameters(item)
+			}
+			cleanedMap[field] = cleanedNested
+		}
+	}
+
+	return cleanedMap
+}
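An illustrative input/output pair for `cleanFunctionParameters` (a sketch assuming the function above is in scope; the schema is made up): Gemini rejects several JSON-Schema keywords that OpenAI tool definitions commonly carry, so they are stripped before the request is forwarded:

```go
schema := map[string]interface{}{
	"type":                 "object",
	"$schema":              "http://json-schema.org/draft-07/schema#",
	"additionalProperties": false,
	"properties": map[string]interface{}{
		"city": map[string]interface{}{
			"type":    "string",
			"format":  "uuid", // neither "enum" nor "date-time", so it is dropped
			"default": "Paris",
		},
	},
}
cleaned := cleanFunctionParameters(schema)
// cleaned is now: {"type":"object","properties":{"city":{"type":"string"}}}
```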
 func removeAdditionalPropertiesWithDepth(schema interface{}, depth int) interface{} {
 	if depth >= 5 {
 		return schema

@@ -240,6 +391,7 @@ func removeAdditionalPropertiesWithDepth(schema interface{}, depth int) interfac
 	}
 	// Remove all "title" fields
 	delete(v, "title")
+	delete(v, "$schema")
 	// If type is neither object nor array, return directly
 	if typeVal, exists := v["type"]; !exists || (typeVal != "object" && typeVal != "array") {
 		return schema

@@ -427,9 +579,10 @@ func responseGeminiChat2OpenAI(response *GeminiChatResponse) *dto.OpenAITextResp
 	return &fullTextResponse
 }

-func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.ChatCompletionsStreamResponse, bool) {
+func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.ChatCompletionsStreamResponse, bool, bool) {
 	choices := make([]dto.ChatCompletionsStreamResponseChoice, 0, len(geminiResponse.Candidates))
 	isStop := false
+	hasImage := false
 	for _, candidate := range geminiResponse.Candidates {
 		if candidate.FinishReason != nil && *candidate.FinishReason == "STOP" {
 			isStop = true

@@ -455,7 +608,13 @@ func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.C
 		}
 	}
 	for _, part := range candidate.Content.Parts {
-		if part.FunctionCall != nil {
+		if part.InlineData != nil {
+			if strings.HasPrefix(part.InlineData.MimeType, "image") {
+				imgText := ""
+				texts = append(texts, imgText)
+				hasImage = true
+			}
+		} else if part.FunctionCall != nil {
 			isTools = true
 			if call := getResponseToolCall(&part); call != nil {
 				call.SetIndex(len(choice.Delta.ToolCalls))

@@ -483,7 +642,7 @@ func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.C
 	var response dto.ChatCompletionsStreamResponse
 	response.Object = "chat.completion.chunk"
 	response.Choices = choices
-	return &response, isStop
+	return &response, isStop, hasImage
 }

 func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {

@@ -491,23 +650,28 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
 	id := fmt.Sprintf("chatcmpl-%s", common.GetUUID())
 	createAt := common.GetTimestamp()
 	var usage = &dto.Usage{}
+	var imageCount int

 	helper.StreamScannerHandler(c, resp, info, func(data string) bool {
 		var geminiResponse GeminiChatResponse
-		err := json.Unmarshal([]byte(data), &geminiResponse)
+		err := common.DecodeJsonStr(data, &geminiResponse)
 		if err != nil {
 			common.LogError(c, "error unmarshalling stream response: "+err.Error())
 			return false
 		}

-		response, isStop := streamResponseGeminiChat2OpenAI(&geminiResponse)
+		response, isStop, hasImage := streamResponseGeminiChat2OpenAI(&geminiResponse)
+		if hasImage {
+			imageCount++
+		}
 		response.Id = id
 		response.Created = createAt
 		response.Model = info.UpstreamModelName
 		// responseText += response.Choices[0].Delta.GetContentString()
 		if geminiResponse.UsageMetadata.TotalTokenCount != 0 {
 			usage.PromptTokens = geminiResponse.UsageMetadata.PromptTokenCount
 			usage.CompletionTokens = geminiResponse.UsageMetadata.CandidatesTokenCount
+			usage.CompletionTokenDetails.ReasoningTokens = geminiResponse.UsageMetadata.ThoughtsTokenCount
 			usage.TotalTokens = geminiResponse.UsageMetadata.TotalTokenCount
 		}
 		err = helper.ObjectData(c, response)
 		if err != nil {

@@ -522,9 +686,14 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom

 	var response *dto.ChatCompletionsStreamResponse

-	usage.TotalTokens = usage.PromptTokens + usage.CompletionTokens
+	if imageCount != 0 {
+		if usage.CompletionTokens == 0 {
+			usage.CompletionTokens = imageCount * 258
+		}
+	}
+
 	usage.PromptTokensDetails.TextTokens = usage.PromptTokens
 	usage.CompletionTokenDetails.TextTokens = usage.CompletionTokens
+	usage.CompletionTokens = usage.TotalTokens - usage.PromptTokens

 	if info.ShouldIncludeUsage {
 		response = helper.GenerateFinalUsageResponse(id, createAt, info.UpstreamModelName, *usage)

@@ -570,6 +739,10 @@ func GeminiChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
 		CompletionTokens: geminiResponse.UsageMetadata.CandidatesTokenCount,
 		TotalTokens:      geminiResponse.UsageMetadata.TotalTokenCount,
 	}
+
+	usage.CompletionTokenDetails.ReasoningTokens = geminiResponse.UsageMetadata.ThoughtsTokenCount
+	usage.CompletionTokens = usage.TotalTokens - usage.PromptTokens
+
 	fullTextResponse.Usage = usage
 	jsonResponse, err := json.Marshal(fullTextResponse)
 	if err != nil {
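A worked example of the image-token fallback above (the counts are assumed): when Gemini reports no completion tokens but image chunks were seen in the stream, each image chunk is billed at a flat 258 tokens before totals are recomputed:

```go
package main

import "fmt"

func main() {
	promptTokens, completionTokens, imageCount := 1000, 0, 2 // assumed stream totals
	if imageCount != 0 && completionTokens == 0 {
		completionTokens = imageCount * 258 // flat 258 tokens per image chunk
	}
	totalTokens := promptTokens + completionTokens
	fmt.Println(completionTokens, totalTokens) // 516 1516
}
```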
@@ -3,7 +3,6 @@ package jina
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"

@@ -12,6 +11,8 @@ import (
 	relaycommon "one-api/relay/common"
 	"one-api/relay/common_handler"
 	"one-api/relay/constant"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -55,6 +56,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 	return request, nil
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -2,13 +2,14 @@ package mistral

 import (
 	"errors"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"
 	"one-api/relay/channel"
 	"one-api/relay/channel/openai"
 	relaycommon "one-api/relay/common"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -59,6 +60,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return nil, errors.New("not implemented")
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -3,7 +3,6 @@ package mokaai
 import (
 	"errors"
 	"fmt"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"

@@ -11,6 +10,8 @@ import (
 	relaycommon "one-api/relay/common"
 	"one-api/relay/constant"
 	"strings"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -74,6 +75,11 @@ func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dt
 	return nil, nil
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -2,7 +2,6 @@ package ollama

 import (
 	"errors"
-	"github.com/gin-gonic/gin"
 	"io"
 	"net/http"
 	"one-api/dto"

@@ -10,6 +9,8 @@ import (
 	"one-api/relay/channel/openai"
 	relaycommon "one-api/relay/common"
 	relayconstant "one-api/relay/constant"
+
+	"github.com/gin-gonic/gin"
 )

 type Adaptor struct {

@@ -64,6 +65,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
 	return requestOpenAI2Embeddings(request), nil
 }

+func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
+	// TODO implement me
+	return nil, errors.New("not implemented")
+}
+
 func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
 	return channel.DoApiRequest(a, c, info, requestBody)
 }
@@ -8,6 +8,7 @@ import (
 	"io"
 	"mime/multipart"
 	"net/http"
+	"net/textproto"
 	"one-api/common"
 	constant2 "one-api/constant"
 	"one-api/dto"

@@ -22,6 +23,7 @@ import (
 	"one-api/relay/common_handler"
 	"one-api/relay/constant"
 	"one-api/service"
+	"path/filepath"
 	"strings"

 	"github.com/gin-gonic/gin"

@@ -36,7 +38,7 @@ func (a *Adaptor) ConvertClaudeRequest(c *gin.Context, info *relaycommon.RelayIn
 	if !strings.Contains(request.Model, "claude") {
 		return nil, fmt.Errorf("you are using openai channel type with path /v1/messages, only claude model supported convert, but got %s", request.Model)
 	}
-	aiRequest, err := service.ClaudeToOpenAIRequest(*request)
+	aiRequest, err := service.ClaudeToOpenAIRequest(*request, info)
 	if err != nil {
 		return nil, err
 	}

@@ -65,6 +67,9 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
 	if info.RelayFormat == relaycommon.RelayFormatClaude {
 		return fmt.Sprintf("%s/v1/chat/completions", info.BaseUrl), nil
 	}
+	if info.RelayMode == constant.RelayModeResponses {
+		return fmt.Sprintf("%s/v1/responses", info.BaseUrl), nil
+	}
 	if info.RelayMode == constant.RelayModeRealtime {
 		if strings.HasPrefix(info.BaseUrl, "https://") {
 			baseUrl := strings.TrimPrefix(info.BaseUrl, "https://")

@@ -87,7 +92,10 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
 		requestURL = fmt.Sprintf("%s?api-version=%s", requestURL, apiVersion)
 		task := strings.TrimPrefix(requestURL, "/v1/")
 		model_ := info.UpstreamModelName
-		model_ = strings.Replace(model_, ".", "", -1)
+		// Channels created after May 10, 2025 do not have the dot removed.
+		if info.ChannelCreateTime < constant2.AzureNoRemoveDotTime {
+			model_ = strings.Replace(model_, ".", "", -1)
+		}
 		// https://github.com/songquanpeng/one-api/issues/67
 		requestURL = fmt.Sprintf("/openai/deployments/%s/%s", model_, task)
 		if info.RelayMode == constant.RelayModeRealtime {

@@ -147,14 +155,12 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 	if info.ChannelType != common.ChannelTypeOpenAI && info.ChannelType != common.ChannelTypeAzure {
 		request.StreamOptions = nil
 	}
-	if strings.HasPrefix(request.Model, "o1") || strings.HasPrefix(request.Model, "o3") {
+	if strings.HasPrefix(request.Model, "o") {
 		if request.MaxCompletionTokens == 0 && request.MaxTokens != 0 {
 			request.MaxCompletionTokens = request.MaxTokens
 			request.MaxTokens = 0
 		}
-		if strings.HasPrefix(request.Model, "o3") || strings.HasPrefix(request.Model, "o1") {
-			request.Temperature = nil
-		}
+		request.Temperature = nil
 		if strings.HasSuffix(request.Model, "-high") {
 			request.ReasoningEffort = "high"
 			request.Model = strings.TrimSuffix(request.Model, "-high")

@@ -167,11 +173,13 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 		}
 		info.ReasoningEffort = request.ReasoningEffort
 		info.UpstreamModelName = request.Model
-	}
-	if request.Model == "o1" || request.Model == "o1-2024-12-17" || strings.HasPrefix(request.Model, "o3") {
-		// Rewrite the first message, changing its role from system to developer
-		if len(request.Messages) > 0 && request.Messages[0].Role == "system" {
-			request.Messages[0].Role = "developer"
+
+		// developer-role adaptation for o-series models (except o1-mini/o1-preview)
+		if !strings.HasPrefix(request.Model, "o1-mini") && !strings.HasPrefix(request.Model, "o1-preview") {
+			// Rewrite the first message, changing its role from system to developer
+			if len(request.Messages) > 0 && request.Messages[0].Role == "system" {
+				request.Messages[0].Role = "developer"
+			}
 		}
 	}
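An illustrative walkthrough of the o-series normalization above (the request shape and field names are a simplified assumption, not the repo's DTO): a request for `o3-mini-high` with `max_tokens` set is rewritten before being forwarded upstream:

```go
package main

import (
	"fmt"
	"strings"
)

type chatRequest struct {
	Model               string
	MaxTokens           uint
	MaxCompletionTokens uint
	ReasoningEffort     string
}

func main() {
	req := chatRequest{Model: "o3-mini-high", MaxTokens: 4096}
	if strings.HasPrefix(req.Model, "o") {
		if req.MaxCompletionTokens == 0 && req.MaxTokens != 0 {
			req.MaxCompletionTokens = req.MaxTokens // o-series uses max_completion_tokens
			req.MaxTokens = 0
		}
		if strings.HasSuffix(req.Model, "-high") {
			req.ReasoningEffort = "high" // "-high" suffix selects reasoning effort
			req.Model = strings.TrimSuffix(req.Model, "-high")
		}
	}
	fmt.Printf("%+v\n", req)
	// {Model:o3-mini MaxTokens:0 MaxCompletionTokens:4096 ReasoningEffort:high}
}
```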
@@ -236,11 +244,167 @@ func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInf
 }

 func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
-	return request, nil
+	switch info.RelayMode {
+	case constant.RelayModeImagesEdits:
+
+		var requestBody bytes.Buffer
+		writer := multipart.NewWriter(&requestBody)
+
+		writer.WriteField("model", request.Model)
+		// Collect all form fields
+		formData := c.Request.PostForm
+		// Iterate over the form fields and copy them over
+		for key, values := range formData {
+			if key == "model" {
+				continue
+			}
+			for _, value := range values {
+				writer.WriteField(key, value)
+			}
+		}
+
+		// Parse the multipart form to handle both single image and multiple images
+		if err := c.Request.ParseMultipartForm(32 << 20); err != nil { // 32MB max memory
+			return nil, errors.New("failed to parse multipart form")
+		}
+
+		if c.Request.MultipartForm != nil && c.Request.MultipartForm.File != nil {
+			// Check if "image" field exists in any form, including array notation
+			var imageFiles []*multipart.FileHeader
+			var exists bool
+
+			// First check for standard "image" field
+			if imageFiles, exists = c.Request.MultipartForm.File["image"]; !exists || len(imageFiles) == 0 {
+				// If not found, check for "image[]" field
+				if imageFiles, exists = c.Request.MultipartForm.File["image[]"]; !exists || len(imageFiles) == 0 {
+					// If still not found, iterate through all fields to find any that start with "image["
+					foundArrayImages := false
+					for fieldName, files := range c.Request.MultipartForm.File {
+						if strings.HasPrefix(fieldName, "image[") && len(files) > 0 {
+							foundArrayImages = true
+							for _, file := range files {
+								imageFiles = append(imageFiles, file)
+							}
+						}
+					}
+
+					// If no image fields found at all
+					if !foundArrayImages && (len(imageFiles) == 0) {
+						return nil, errors.New("image is required")
+					}
+				}
+			}
+
+			// Process all image files
+			for i, fileHeader := range imageFiles {
+				file, err := fileHeader.Open()
+				if err != nil {
+					return nil, fmt.Errorf("failed to open image file %d: %w", i, err)
+				}
+				defer file.Close()
+
+				// If multiple images, use image[] as the field name
+				fieldName := "image"
+				if len(imageFiles) > 1 {
+					fieldName = "image[]"
+				}
+
+				// Determine MIME type based on file extension
+				mimeType := detectImageMimeType(fileHeader.Filename)
+
+				// Create a form file with the appropriate content type
+				h := make(textproto.MIMEHeader)
+				h.Set("Content-Disposition", fmt.Sprintf(`form-data; name="%s"; filename="%s"`, fieldName, fileHeader.Filename))
+				h.Set("Content-Type", mimeType)
+
+				part, err := writer.CreatePart(h)
+				if err != nil {
+					return nil, fmt.Errorf("create form part failed for image %d: %w", i, err)
+				}
+
+				if _, err := io.Copy(part, file); err != nil {
+					return nil, fmt.Errorf("copy file failed for image %d: %w", i, err)
+				}
+			}
+
+			// Handle mask file if present
+			if maskFiles, exists := c.Request.MultipartForm.File["mask"]; exists && len(maskFiles) > 0 {
+				maskFile, err := maskFiles[0].Open()
+				if err != nil {
+					return nil, errors.New("failed to open mask file")
+				}
+				defer maskFile.Close()
+
+				// Determine MIME type for mask file
+				mimeType := detectImageMimeType(maskFiles[0].Filename)
+
+				// Create a form file with the appropriate content type
+				h := make(textproto.MIMEHeader)
+				h.Set("Content-Disposition", fmt.Sprintf(`form-data; name="mask"; filename="%s"`, maskFiles[0].Filename))
+				h.Set("Content-Type", mimeType)
+
+				maskPart, err := writer.CreatePart(h)
+				if err != nil {
+					return nil, errors.New("create form file failed for mask")
+				}
+
+				if _, err := io.Copy(maskPart, maskFile); err != nil {
+					return nil, errors.New("copy mask file failed")
+				}
+			}
+		} else {
+			return nil, errors.New("no multipart form data found")
+		}
+
+		// Close the multipart writer to set the terminating boundary
+		writer.Close()
+		c.Request.Header.Set("Content-Type", writer.FormDataContentType())
+		return bytes.NewReader(requestBody.Bytes()), nil
+
+	default:
+		return request, nil
+	}
 }
// detectImageMimeType determines the MIME type based on the file extension
|
||||
func detectImageMimeType(filename string) string {
|
||||
ext := strings.ToLower(filepath.Ext(filename))
|
||||
switch ext {
|
||||
case ".jpg", ".jpeg":
|
||||
return "image/jpeg"
|
||||
case ".png":
|
||||
return "image/png"
|
||||
case ".webp":
|
||||
return "image/webp"
|
||||
default:
|
||||
// Try to detect from extension if possible
|
||||
if strings.HasPrefix(ext, ".jp") {
|
||||
return "image/jpeg"
|
||||
}
|
||||
// Default to png as a fallback
|
||||
return "image/png"
|
||||
}
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// 模型后缀转换 reasoning effort
|
||||
if strings.HasSuffix(request.Model, "-high") {
|
||||
request.Reasoning.Effort = "high"
|
||||
request.Model = strings.TrimSuffix(request.Model, "-high")
|
||||
} else if strings.HasSuffix(request.Model, "-low") {
|
||||
request.Reasoning.Effort = "low"
|
||||
request.Model = strings.TrimSuffix(request.Model, "-low")
|
||||
} else if strings.HasSuffix(request.Model, "-medium") {
|
||||
request.Reasoning.Effort = "medium"
|
||||
request.Model = strings.TrimSuffix(request.Model, "-medium")
|
||||
}
|
||||
return request, nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
if info.RelayMode == constant.RelayModeAudioTranscription || info.RelayMode == constant.RelayModeAudioTranslation {
|
||||
if info.RelayMode == constant.RelayModeAudioTranscription ||
|
||||
info.RelayMode == constant.RelayModeAudioTranslation ||
|
||||
info.RelayMode == constant.RelayModeImagesEdits {
|
||||
return channel.DoFormRequest(a, c, info, requestBody)
|
||||
} else if info.RelayMode == constant.RelayModeRealtime {
|
||||
return channel.DoWssRequest(a, c, info, requestBody)
|
||||
@@ -259,10 +423,16 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
|
||||
fallthrough
|
||||
case constant.RelayModeAudioTranscription:
|
||||
err, usage = OpenaiSTTHandler(c, resp, info, a.ResponseFormat)
|
||||
case constant.RelayModeImagesGenerations:
|
||||
err, usage = OpenaiTTSHandler(c, resp, info)
|
||||
case constant.RelayModeImagesGenerations, constant.RelayModeImagesEdits:
|
||||
err, usage = OpenaiHandlerWithUsage(c, resp, info)
|
||||
case constant.RelayModeRerank:
|
||||
err, usage = common_handler.RerankHandler(c, info, resp)
|
||||
case constant.RelayModeResponses:
|
||||
if info.IsStream {
|
||||
err, usage = OaiResponsesStreamHandler(c, resp, info)
|
||||
} else {
|
||||
err, usage = OaiResponsesHandler(c, resp, info)
|
||||
}
|
||||
default:
|
||||
if info.IsStream {
|
||||
err, usage = OaiStreamHandler(c, resp, info)
|
||||
|
||||
@@ -31,6 +31,9 @@ func handleClaudeFormat(c *gin.Context, data string, info *relaycommon.RelayInfo
|
||||
return err
|
||||
}
|
||||
|
||||
if streamResponse.Usage != nil {
|
||||
info.ClaudeConvertInfo.Usage = streamResponse.Usage
|
||||
}
|
||||
claudeResponses := service.StreamResponseOpenAI2Claude(&streamResponse, info)
|
||||
for _, resp := range claudeResponses {
|
||||
helper.ClaudeData(c, *resp)
|
||||
@@ -38,12 +41,7 @@ func handleClaudeFormat(c *gin.Context, data string, info *relaycommon.RelayInfo
|
||||
return nil
|
||||
}
|
||||
|
||||
func processStreamResponse(item string, responseTextBuilder *strings.Builder, toolCount *int) error {
|
||||
var streamResponse dto.ChatCompletionsStreamResponse
|
||||
if err := json.Unmarshal(common.StringToByteSlice(item), &streamResponse); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
func ProcessStreamResponse(streamResponse dto.ChatCompletionsStreamResponse, responseTextBuilder *strings.Builder, toolCount *int) error {
|
||||
for _, choice := range streamResponse.Choices {
|
||||
responseTextBuilder.WriteString(choice.Delta.GetContentString())
|
||||
responseTextBuilder.WriteString(choice.Delta.GetReasoningContent())
|
||||
@@ -78,7 +76,11 @@ func processChatCompletions(streamResp string, streamItems []string, responseTex
|
||||
// 一次性解析失败,逐个解析
|
||||
common.SysError("error unmarshalling stream response: " + err.Error())
|
||||
for _, item := range streamItems {
|
||||
if err := processStreamResponse(item, responseTextBuilder, toolCount); err != nil {
|
||||
var streamResponse dto.ChatCompletionsStreamResponse
|
||||
if err := json.Unmarshal(common.StringToByteSlice(item), &streamResponse); err != nil {
|
||||
return err
|
||||
}
|
||||
if err := ProcessStreamResponse(streamResponse, responseTextBuilder, toolCount); err != nil {
|
||||
common.SysError("error processing stream response: " + err.Error())
|
||||
}
|
||||
}
|
||||
@@ -170,15 +172,14 @@ func handleFinalResponse(c *gin.Context, info *relaycommon.RelayInfo, lastStream
|
||||
helper.Done(c)
|
||||
|
||||
case relaycommon.RelayFormatClaude:
|
||||
info.ClaudeConvertInfo.Done = true
|
||||
var streamResponse dto.ChatCompletionsStreamResponse
|
||||
if err := json.Unmarshal(common.StringToByteSlice(lastStreamData), &streamResponse); err != nil {
|
||||
common.SysError("error unmarshalling stream response: " + err.Error())
|
||||
return
|
||||
}
|
||||
|
||||
if !containStreamUsage {
|
||||
streamResponse.Usage = usage
|
||||
}
|
||||
info.ClaudeConvertInfo.Usage = usage
|
||||
|
||||
claudeResponses := service.StreamResponseOpenAI2Claude(&streamResponse, info)
|
||||
for _, resp := range claudeResponses {
|
||||
@@ -186,3 +187,10 @@ func handleFinalResponse(c *gin.Context, info *relaycommon.RelayInfo, lastStream
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func sendResponsesStreamData(c *gin.Context, streamResponse dto.ResponsesStreamResponse, data string) {
|
||||
if data == "" {
|
||||
return
|
||||
}
|
||||
helper.ResponseChunkData(c, streamResponse, data)
|
||||
}
|
||||
|
||||
@@ -117,6 +117,7 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
|
||||
model := info.UpstreamModelName
|
||||
|
||||
var responseTextBuilder strings.Builder
|
||||
var toolCount int
|
||||
var usage = &dto.Usage{}
|
||||
var streamItems []string // store stream items
|
||||
var forceFormat bool
|
||||
@@ -130,8 +131,6 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
|
||||
thinkToContent = think2Content
|
||||
}
|
||||
|
||||
toolCount := 0
|
||||
|
||||
var (
|
||||
lastStreamData string
|
||||
)
|
||||
@@ -142,7 +141,6 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
|
||||
if err != nil {
|
||||
common.SysError("error handling stream format: " + err.Error())
|
||||
}
|
||||
info.SetFirstResponseTime()
|
||||
}
|
||||
lastStreamData = data
|
||||
streamItems = append(streamItems, data)
|
||||
@@ -170,8 +168,10 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if shouldSendLastResp {
|
||||
sendStreamData(c, info, lastStreamData, forceFormat, thinkToContent)
|
||||
//err = handleStreamFormat(c, info, lastStreamData, forceFormat, thinkToContent)
|
||||
}
|
||||
|
||||
// 处理token计算
|
||||
@@ -595,3 +595,52 @@ func preConsumeUsage(ctx *gin.Context, info *relaycommon.RelayInfo, usage *dto.R
|
||||
err := service.PreWssConsumeQuota(ctx, info, usage)
|
||||
return err
|
||||
}
|
||||
|
||||
func OpenaiHandlerWithUsage(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
|
||||
responseBody, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = resp.Body.Close()
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
// Reset response body
|
||||
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
|
||||
// We shouldn't set the header before we parse the response body, because the parse part may fail.
|
||||
// And then we will have to send an error response, but in this case, the header has already been set.
|
||||
// So the httpClient will be confused by the response.
|
||||
// For example, Postman will report error, and we cannot check the response at all.
|
||||
for k, v := range resp.Header {
|
||||
c.Writer.Header().Set(k, v[0])
|
||||
}
|
||||
// reset content length
|
||||
c.Writer.Header().Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
|
||||
c.Writer.WriteHeader(resp.StatusCode)
|
||||
_, err = io.Copy(c.Writer, resp.Body)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = resp.Body.Close()
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
|
||||
var usageResp dto.SimpleResponse
|
||||
err = json.Unmarshal(responseBody, &usageResp)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "parse_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
// format
|
||||
if usageResp.InputTokens > 0 {
|
||||
usageResp.PromptTokens += usageResp.InputTokens
|
||||
}
|
||||
if usageResp.OutputTokens > 0 {
|
||||
usageResp.CompletionTokens += usageResp.OutputTokens
|
||||
}
|
||||
if usageResp.InputTokensDetails != nil {
|
||||
usageResp.PromptTokensDetails.ImageTokens += usageResp.InputTokensDetails.ImageTokens
|
||||
usageResp.PromptTokensDetails.TextTokens += usageResp.InputTokensDetails.TextTokens
|
||||
}
|
||||
return nil, &usageResp.Usage
|
||||
}
|
||||
|
||||
119
relay/channel/openai/relay_responses.go
Normal file
119
relay/channel/openai/relay_responses.go
Normal file
@@ -0,0 +1,119 @@
|
||||
package openai
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
"one-api/dto"
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/relay/helper"
|
||||
"one-api/service"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
func OaiResponsesHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
|
||||
// read response body
|
||||
var responsesResponse dto.OpenAIResponsesResponse
|
||||
responseBody, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = resp.Body.Close()
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = common.DecodeJson(responseBody, &responsesResponse)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
if responsesResponse.Error != nil {
|
||||
return &dto.OpenAIErrorWithStatusCode{
|
||||
Error: dto.OpenAIError{
|
||||
Message: responsesResponse.Error.Message,
|
||||
Type: "openai_error",
|
||||
Code: responsesResponse.Error.Code,
|
||||
},
|
||||
StatusCode: resp.StatusCode,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// reset response body
|
||||
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
|
||||
// We shouldn't set the header before we parse the response body, because the parse part may fail.
|
||||
// And then we will have to send an error response, but in this case, the header has already been set.
|
||||
// So the httpClient will be confused by the response.
|
||||
// For example, Postman will report error, and we cannot check the response at all.
|
||||
for k, v := range resp.Header {
|
||||
c.Writer.Header().Set(k, v[0])
|
||||
}
|
||||
c.Writer.WriteHeader(resp.StatusCode)
|
||||
// copy response body
|
||||
_, err = io.Copy(c.Writer, resp.Body)
|
||||
if err != nil {
|
||||
common.SysError("error copying response body: " + err.Error())
|
||||
}
|
||||
resp.Body.Close()
|
||||
// compute usage
|
||||
usage := dto.Usage{}
|
||||
usage.PromptTokens = responsesResponse.Usage.InputTokens
|
||||
usage.CompletionTokens = responsesResponse.Usage.OutputTokens
|
||||
usage.TotalTokens = responsesResponse.Usage.TotalTokens
|
||||
// 解析 Tools 用量
|
||||
for _, tool := range responsesResponse.Tools {
|
||||
info.ResponsesUsageInfo.BuiltInTools[tool.Type].CallCount++
|
||||
}
|
||||
return nil, &usage
|
||||
}
|
||||
|
||||
func OaiResponsesStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
|
||||
if resp == nil || resp.Body == nil {
|
||||
common.LogError(c, "invalid response or response body")
|
||||
return service.OpenAIErrorWrapper(fmt.Errorf("invalid response"), "invalid_response", http.StatusInternalServerError), nil
|
||||
}
|
||||
|
||||
var usage = &dto.Usage{}
|
||||
var responseTextBuilder strings.Builder
|
||||
|
||||
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
|
||||
|
||||
// 检查当前数据是否包含 completed 状态和 usage 信息
|
||||
var streamResponse dto.ResponsesStreamResponse
|
||||
if err := common.DecodeJsonStr(data, &streamResponse); err == nil {
|
||||
sendResponsesStreamData(c, streamResponse, data)
|
||||
switch streamResponse.Type {
|
||||
case "response.completed":
|
||||
usage.PromptTokens = streamResponse.Response.Usage.InputTokens
|
||||
usage.CompletionTokens = streamResponse.Response.Usage.OutputTokens
|
||||
usage.TotalTokens = streamResponse.Response.Usage.TotalTokens
|
||||
case "response.output_text.delta":
|
||||
// 处理输出文本
|
||||
responseTextBuilder.WriteString(streamResponse.Delta)
|
||||
case dto.ResponsesOutputTypeItemDone:
|
||||
// 函数调用处理
|
||||
if streamResponse.Item != nil {
|
||||
switch streamResponse.Item.Type {
|
||||
case dto.BuildInCallWebSearchCall:
|
||||
info.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolWebSearchPreview].CallCount++
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
return true
|
||||
})
|
||||
|
||||
if usage.CompletionTokens == 0 {
|
||||
// 计算输出文本的 token 数量
|
||||
tempStr := responseTextBuilder.String()
|
||||
if len(tempStr) > 0 {
|
||||
// 非正常结束,使用输出文本的 token 数量
|
||||
completionTokens, _ := service.CountTextToken(tempStr, info.UpstreamModelName)
|
||||
usage.CompletionTokens = completionTokens
|
||||
}
|
||||
}
|
||||
|
||||
return nil, usage
|
||||
}
|
||||
@@ -3,13 +3,14 @@ package palm
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
"one-api/relay/channel"
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/service"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -60,6 +61,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
|
||||
@@ -3,13 +3,14 @@ package perplexity
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
"one-api/relay/channel"
|
||||
"one-api/relay/channel/openai"
|
||||
relaycommon "one-api/relay/common"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -63,6 +64,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
|
||||
@@ -3,7 +3,6 @@ package siliconflow
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
@@ -11,6 +10,8 @@ import (
|
||||
"one-api/relay/channel/openai"
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/relay/constant"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -58,6 +59,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
|
||||
return request, nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
@@ -74,13 +80,9 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
|
||||
switch info.RelayMode {
|
||||
case constant.RelayModeRerank:
|
||||
err, usage = siliconflowRerankHandler(c, resp)
|
||||
case constant.RelayModeChatCompletions:
|
||||
if info.IsStream {
|
||||
err, usage = openai.OaiStreamHandler(c, resp, info)
|
||||
} else {
|
||||
err, usage = openai.OpenaiHandler(c, resp, info)
|
||||
}
|
||||
case constant.RelayModeCompletions:
|
||||
fallthrough
|
||||
case constant.RelayModeChatCompletions:
|
||||
if info.IsStream {
|
||||
err, usage = openai.OaiStreamHandler(c, resp, info)
|
||||
} else {
|
||||
|
||||
@@ -3,7 +3,6 @@ package tencent
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
@@ -13,6 +12,8 @@ import (
|
||||
"one-api/service"
|
||||
"strconv"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -84,6 +85,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
|
||||
@@ -4,7 +4,6 @@ import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
@@ -13,7 +12,10 @@ import (
|
||||
"one-api/relay/channel/gemini"
|
||||
"one-api/relay/channel/openai"
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/setting/model_setting"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
const (
|
||||
@@ -77,6 +79,15 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
|
||||
a.AccountCredentials = *adc
|
||||
suffix := ""
|
||||
if a.RequestMode == RequestModeGemini {
|
||||
if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
|
||||
// suffix -thinking and -nothinking
|
||||
if strings.HasSuffix(info.OriginModelName, "-thinking") {
|
||||
info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-thinking")
|
||||
} else if strings.HasSuffix(info.OriginModelName, "-nothinking") {
|
||||
info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-nothinking")
|
||||
}
|
||||
}
|
||||
|
||||
if info.IsStream {
|
||||
suffix = "streamGenerateContent?alt=sse"
|
||||
} else {
|
||||
@@ -143,7 +154,7 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
|
||||
info.UpstreamModelName = claudeReq.Model
|
||||
return vertexClaudeReq, nil
|
||||
} else if a.RequestMode == RequestModeGemini {
|
||||
geminiRequest, err := gemini.CovertGemini2OpenAI(*request)
|
||||
geminiRequest, err := gemini.CovertGemini2OpenAI(*request, info)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
@@ -164,6 +175,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
|
||||
@@ -3,7 +3,6 @@ package volcengine
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
@@ -12,6 +11,8 @@ import (
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/relay/constant"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -71,6 +72,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
|
||||
return request, nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
|
||||
118
relay/channel/xai/adaptor.go
Normal file
118
relay/channel/xai/adaptor.go
Normal file
@@ -0,0 +1,118 @@
|
||||
package xai
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
"one-api/relay/channel"
|
||||
"one-api/relay/channel/openai"
|
||||
relaycommon "one-api/relay/common"
|
||||
"strings"
|
||||
|
||||
"one-api/relay/constant"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertClaudeRequest(*gin.Context, *relaycommon.RelayInfo, *dto.ClaudeRequest) (any, error) {
|
||||
//TODO implement me
|
||||
//panic("implement me")
|
||||
return nil, errors.New("not available")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error) {
|
||||
//not available
|
||||
return nil, errors.New("not available")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
|
||||
xaiRequest := ImageRequest{
|
||||
Model: request.Model,
|
||||
Prompt: request.Prompt,
|
||||
N: request.N,
|
||||
ResponseFormat: request.ResponseFormat,
|
||||
}
|
||||
return xaiRequest, nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
|
||||
}
|
||||
|
||||
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
|
||||
return relaycommon.GetFullRequestURL(info.BaseUrl, info.RequestURLPath, info.ChannelType), nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
|
||||
channel.SetupApiRequestHeader(info, c, req)
|
||||
req.Set("Authorization", "Bearer "+info.ApiKey)
|
||||
return nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayInfo, request *dto.GeneralOpenAIRequest) (any, error) {
|
||||
if request == nil {
|
||||
return nil, errors.New("request is nil")
|
||||
}
|
||||
if strings.HasPrefix(request.Model, "grok-3-mini") {
|
||||
if request.MaxCompletionTokens == 0 && request.MaxTokens != 0 {
|
||||
request.MaxCompletionTokens = request.MaxTokens
|
||||
request.MaxTokens = 0
|
||||
}
|
||||
if strings.HasSuffix(request.Model, "-high") {
|
||||
request.ReasoningEffort = "high"
|
||||
request.Model = strings.TrimSuffix(request.Model, "-high")
|
||||
} else if strings.HasSuffix(request.Model, "-low") {
|
||||
request.ReasoningEffort = "low"
|
||||
request.Model = strings.TrimSuffix(request.Model, "-low")
|
||||
} else if strings.HasSuffix(request.Model, "-medium") {
|
||||
request.ReasoningEffort = "medium"
|
||||
request.Model = strings.TrimSuffix(request.Model, "-medium")
|
||||
}
|
||||
info.ReasoningEffort = request.ReasoningEffort
|
||||
info.UpstreamModelName = request.Model
|
||||
}
|
||||
return request, nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dto.RerankRequest) (any, error) {
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error) {
|
||||
//not available
|
||||
return nil, errors.New("not available")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
|
||||
switch info.RelayMode {
|
||||
case constant.RelayModeImagesGenerations, constant.RelayModeImagesEdits:
|
||||
err, usage = openai.OpenaiHandlerWithUsage(c, resp, info)
|
||||
default:
|
||||
if info.IsStream {
|
||||
err, usage = xAIStreamHandler(c, resp, info)
|
||||
} else {
|
||||
err, usage = xAIHandler(c, resp, info)
|
||||
}
|
||||
}
|
||||
return
|
||||
}
|
||||
|
||||
func (a *Adaptor) GetModelList() []string {
|
||||
return ModelList
|
||||
}
|
||||
|
||||
func (a *Adaptor) GetChannelName() string {
|
||||
return ChannelName
|
||||
}
|
||||
18
relay/channel/xai/constants.go
Normal file
18
relay/channel/xai/constants.go
Normal file
@@ -0,0 +1,18 @@
|
||||
package xai
|
||||
|
||||
var ModelList = []string{
|
||||
// grok-3
|
||||
"grok-3-beta", "grok-3-mini-beta",
|
||||
// grok-3 mini
|
||||
"grok-3-fast-beta", "grok-3-mini-fast-beta",
|
||||
// extend grok-3-mini reasoning
|
||||
"grok-3-mini-beta-high", "grok-3-mini-beta-low", "grok-3-mini-beta-medium",
|
||||
"grok-3-mini-fast-beta-high", "grok-3-mini-fast-beta-low", "grok-3-mini-fast-beta-medium",
|
||||
// image model
|
||||
"grok-2-image",
|
||||
// legacy models
|
||||
"grok-2", "grok-2-vision",
|
||||
"grok-beta", "grok-vision-beta",
|
||||
}
|
||||
|
||||
var ChannelName = "xai"
|
||||
27
relay/channel/xai/dto.go
Normal file
27
relay/channel/xai/dto.go
Normal file
@@ -0,0 +1,27 @@
|
||||
package xai
|
||||
|
||||
import "one-api/dto"
|
||||
|
||||
// ChatCompletionResponse represents the response from XAI chat completion API
|
||||
type ChatCompletionResponse struct {
|
||||
Id string `json:"id"`
|
||||
Object string `json:"object"`
|
||||
Created int64 `json:"created"`
|
||||
Model string `json:"model"`
|
||||
Choices []dto.ChatCompletionsStreamResponseChoice
|
||||
Usage *dto.Usage `json:"usage"`
|
||||
SystemFingerprint string `json:"system_fingerprint"`
|
||||
}
|
||||
|
||||
// quality, size or style are not supported by xAI API at the moment.
|
||||
type ImageRequest struct {
|
||||
Model string `json:"model"`
|
||||
Prompt string `json:"prompt" binding:"required"`
|
||||
N int `json:"n,omitempty"`
|
||||
// Size string `json:"size,omitempty"`
|
||||
// Quality string `json:"quality,omitempty"`
|
||||
ResponseFormat string `json:"response_format,omitempty"`
|
||||
// Style string `json:"style,omitempty"`
|
||||
// User string `json:"user,omitempty"`
|
||||
// ExtraFields json.RawMessage `json:"extra_fields,omitempty"`
|
||||
}
|
||||
119
relay/channel/xai/text.go
Normal file
119
relay/channel/xai/text.go
Normal file
@@ -0,0 +1,119 @@
|
||||
package xai
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
"one-api/dto"
|
||||
"one-api/relay/channel/openai"
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/relay/helper"
|
||||
"one-api/service"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func streamResponseXAI2OpenAI(xAIResp *dto.ChatCompletionsStreamResponse, usage *dto.Usage) *dto.ChatCompletionsStreamResponse {
|
||||
if xAIResp == nil {
|
||||
return nil
|
||||
}
|
||||
if xAIResp.Usage != nil {
|
||||
xAIResp.Usage.CompletionTokens = usage.CompletionTokens
|
||||
}
|
||||
openAIResp := &dto.ChatCompletionsStreamResponse{
|
||||
Id: xAIResp.Id,
|
||||
Object: xAIResp.Object,
|
||||
Created: xAIResp.Created,
|
||||
Model: xAIResp.Model,
|
||||
Choices: xAIResp.Choices,
|
||||
Usage: xAIResp.Usage,
|
||||
}
|
||||
|
||||
return openAIResp
|
||||
}
|
||||
|
||||
func xAIStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
|
||||
usage := &dto.Usage{}
|
||||
var responseTextBuilder strings.Builder
|
||||
var toolCount int
|
||||
var containStreamUsage bool
|
||||
|
||||
helper.SetEventStreamHeaders(c)
|
||||
|
||||
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
|
||||
var xAIResp *dto.ChatCompletionsStreamResponse
|
||||
err := json.Unmarshal([]byte(data), &xAIResp)
|
||||
if err != nil {
|
||||
common.SysError("error unmarshalling stream response: " + err.Error())
|
||||
return true
|
||||
}
|
||||
|
||||
// 把 xAI 的usage转换为 OpenAI 的usage
|
||||
if xAIResp.Usage != nil {
|
||||
containStreamUsage = true
|
||||
usage.PromptTokens = xAIResp.Usage.PromptTokens
|
||||
usage.TotalTokens = xAIResp.Usage.TotalTokens
|
||||
usage.CompletionTokens = usage.TotalTokens - usage.PromptTokens
|
||||
}
|
||||
|
||||
openaiResponse := streamResponseXAI2OpenAI(xAIResp, usage)
|
||||
_ = openai.ProcessStreamResponse(*openaiResponse, &responseTextBuilder, &toolCount)
|
||||
err = helper.ObjectData(c, openaiResponse)
|
||||
if err != nil {
|
||||
common.SysError(err.Error())
|
||||
}
|
||||
return true
|
||||
})
|
||||
|
||||
if !containStreamUsage {
|
||||
usage, _ = service.ResponseText2Usage(responseTextBuilder.String(), info.UpstreamModelName, info.PromptTokens)
|
||||
usage.CompletionTokens += toolCount * 7
|
||||
}
|
||||
|
||||
helper.Done(c)
|
||||
err := resp.Body.Close()
|
||||
if err != nil {
|
||||
//return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
common.SysError("close_response_body_failed: " + err.Error())
|
||||
}
|
||||
return nil, usage
|
||||
}
|
||||
|
||||
func xAIHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
|
||||
responseBody, err := io.ReadAll(resp.Body)
|
||||
var response *dto.TextResponse
|
||||
err = common.DecodeJson(responseBody, &response)
|
||||
if err != nil {
|
||||
common.SysError("error unmarshalling stream response: " + err.Error())
|
||||
return nil, nil
|
||||
}
|
||||
response.Usage.CompletionTokens = response.Usage.TotalTokens - response.Usage.PromptTokens
|
||||
response.Usage.CompletionTokenDetails.TextTokens = response.Usage.CompletionTokens - response.Usage.CompletionTokenDetails.ReasoningTokens
|
||||
|
||||
// new body
|
||||
encodeJson, err := common.EncodeJson(response)
|
||||
if err != nil {
|
||||
common.SysError("error marshalling stream response: " + err.Error())
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
// set new body
|
||||
resp.Body = io.NopCloser(bytes.NewBuffer(encodeJson))
|
||||
|
||||
for k, v := range resp.Header {
|
||||
c.Writer.Header().Set(k, v[0])
|
||||
}
|
||||
c.Writer.WriteHeader(resp.StatusCode)
|
||||
_, err = io.Copy(c.Writer, resp.Body)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = resp.Body.Close()
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
|
||||
return nil, &response.Usage
|
||||
}
|
||||
@@ -2,7 +2,6 @@ package xunfei
|
||||
|
||||
import (
|
||||
"errors"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
@@ -10,6 +9,8 @@ import (
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/service"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -61,6 +62,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
|
||||
// xunfei's request is not http request, so we don't need to do anything here
|
||||
dummyResp := &http.Response{}
|
||||
|
||||
@@ -3,12 +3,13 @@ package zhipu
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
"one-api/relay/channel"
|
||||
relaycommon "one-api/relay/common"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -71,6 +72,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
|
||||
return channel.DoApiRequest(a, c, info, requestBody)
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
|
||||
if info.IsStream {
|
||||
err, usage = zhipuStreamHandler(c, resp)
|
||||
|
||||
@@ -3,13 +3,15 @@ package zhipu_4v
|
||||
import (
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/dto"
|
||||
"one-api/relay/channel"
|
||||
"one-api/relay/channel/openai"
|
||||
relaycommon "one-api/relay/common"
|
||||
relayconstant "one-api/relay/constant"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
type Adaptor struct {
|
||||
@@ -35,7 +37,13 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
|
||||
}
|
||||
|
||||
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
|
||||
return fmt.Sprintf("%s/api/paas/v4/chat/completions", info.BaseUrl), nil
|
||||
baseUrl := fmt.Sprintf("%s/api/paas/v4", info.BaseUrl)
|
||||
switch info.RelayMode {
|
||||
case relayconstant.RelayModeEmbeddings:
|
||||
return fmt.Sprintf("%s/embeddings", baseUrl), nil
|
||||
default:
|
||||
return fmt.Sprintf("%s/chat/completions", baseUrl), nil
|
||||
}
|
||||
}
|
||||
|
||||
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
|
||||
@@ -60,7 +68,11 @@ func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dt
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error) {
|
||||
//TODO implement me
|
||||
return request, nil
|
||||
}
|
||||
|
||||
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
|
||||
// TODO implement me
|
||||
return nil, errors.New("not implemented")
|
||||
}
|
||||
|
||||
|
||||
@@ -1,17 +1,9 @@
|
||||
package zhipu_4v
|
||||
|
||||
import (
|
||||
"bufio"
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"github.com/gin-gonic/gin"
|
||||
"github.com/golang-jwt/jwt"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
"one-api/dto"
|
||||
"one-api/relay/helper"
|
||||
"one-api/service"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
@@ -119,163 +111,3 @@ func requestOpenAI2Zhipu(request dto.GeneralOpenAIRequest) *dto.GeneralOpenAIReq
|
||||
ToolChoice: request.ToolChoice,
|
||||
}
|
||||
}
|
||||
|
||||
//func responseZhipu2OpenAI(response *dto.OpenAITextResponse) *dto.OpenAITextResponse {
|
||||
// fullTextResponse := dto.OpenAITextResponse{
|
||||
// Id: response.Id,
|
||||
// Object: "chat.completion",
|
||||
// Created: common.GetTimestamp(),
|
||||
// Choices: make([]dto.OpenAITextResponseChoice, 0, len(response.TextResponseChoices)),
|
||||
// Usage: response.Usage,
|
||||
// }
|
||||
// for i, choice := range response.TextResponseChoices {
|
||||
// content, _ := json.Marshal(strings.Trim(choice.Content, "\""))
|
||||
// openaiChoice := dto.OpenAITextResponseChoice{
|
||||
// Index: i,
|
||||
// Message: dto.Message{
|
||||
// Role: choice.Role,
|
||||
// Content: content,
|
||||
// },
|
||||
// FinishReason: "",
|
||||
// }
|
||||
// if i == len(response.TextResponseChoices)-1 {
|
||||
// openaiChoice.FinishReason = "stop"
|
||||
// }
|
||||
// fullTextResponse.Choices = append(fullTextResponse.Choices, openaiChoice)
|
||||
// }
|
||||
// return &fullTextResponse
|
||||
//}
|
||||
|
||||
func streamResponseZhipu2OpenAI(zhipuResponse *ZhipuV4StreamResponse) *dto.ChatCompletionsStreamResponse {
|
||||
var choice dto.ChatCompletionsStreamResponseChoice
|
||||
choice.Delta.Content = zhipuResponse.Choices[0].Delta.Content
|
||||
choice.Delta.Role = zhipuResponse.Choices[0].Delta.Role
|
||||
choice.Delta.ToolCalls = zhipuResponse.Choices[0].Delta.ToolCalls
|
||||
choice.Index = zhipuResponse.Choices[0].Index
|
||||
choice.FinishReason = zhipuResponse.Choices[0].FinishReason
|
||||
response := dto.ChatCompletionsStreamResponse{
|
||||
Id: zhipuResponse.Id,
|
||||
Object: "chat.completion.chunk",
|
||||
Created: zhipuResponse.Created,
|
||||
Model: "glm-4v",
|
||||
Choices: []dto.ChatCompletionsStreamResponseChoice{choice},
|
||||
}
|
||||
return &response
|
||||
}
|
||||
|
||||
func lastStreamResponseZhipuV42OpenAI(zhipuResponse *ZhipuV4StreamResponse) (*dto.ChatCompletionsStreamResponse, *dto.Usage) {
|
||||
response := streamResponseZhipu2OpenAI(zhipuResponse)
|
||||
return response, &zhipuResponse.Usage
|
||||
}
|
||||
|
||||
func zhipuStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
|
||||
var usage *dto.Usage
|
||||
scanner := bufio.NewScanner(resp.Body)
|
||||
scanner.Split(func(data []byte, atEOF bool) (advance int, token []byte, err error) {
|
||||
if atEOF && len(data) == 0 {
|
||||
return 0, nil, nil
|
||||
}
|
||||
if i := strings.Index(string(data), "\n"); i >= 0 {
|
||||
return i + 1, data[0:i], nil
|
||||
}
|
||||
if atEOF {
|
||||
return len(data), data, nil
|
||||
}
|
||||
return 0, nil, nil
|
||||
})
|
||||
dataChan := make(chan string)
|
||||
stopChan := make(chan bool)
|
||||
go func() {
|
||||
for scanner.Scan() {
|
||||
data := scanner.Text()
|
||||
if len(data) < 6 { // ignore blank line or wrong format
|
||||
continue
|
||||
}
|
||||
if data[:6] != "data: " && data[:6] != "[DONE]" {
|
||||
continue
|
||||
}
|
||||
dataChan <- data
|
||||
}
|
||||
stopChan <- true
|
||||
}()
|
||||
helper.SetEventStreamHeaders(c)
|
||||
c.Stream(func(w io.Writer) bool {
|
||||
select {
|
||||
case data := <-dataChan:
|
||||
if strings.HasPrefix(data, "data: [DONE]") {
|
||||
data = data[:12]
|
||||
}
|
||||
// some implementations may add \r at the end of data
|
||||
data = strings.TrimSuffix(data, "\r")
|
||||
|
||||
var streamResponse ZhipuV4StreamResponse
|
||||
err := json.Unmarshal([]byte(data), &streamResponse)
|
||||
if err != nil {
|
||||
common.SysError("error unmarshalling stream response: " + err.Error())
|
||||
}
|
||||
var response *dto.ChatCompletionsStreamResponse
|
||||
if strings.Contains(data, "prompt_tokens") {
|
||||
response, usage = lastStreamResponseZhipuV42OpenAI(&streamResponse)
|
||||
} else {
|
||||
response = streamResponseZhipu2OpenAI(&streamResponse)
|
||||
}
|
||||
jsonResponse, err := json.Marshal(response)
|
||||
if err != nil {
|
||||
common.SysError("error marshalling stream response: " + err.Error())
|
||||
return true
|
||||
}
|
||||
c.Render(-1, common.CustomEvent{Data: "data: " + string(jsonResponse)})
|
||||
return true
|
||||
case <-stopChan:
|
||||
return false
|
||||
}
|
||||
})
|
||||
err := resp.Body.Close()
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
return nil, usage
|
||||
}
|
||||
|
||||
func zhipuHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
|
||||
var textResponse ZhipuV4Response
|
||||
responseBody, err := io.ReadAll(resp.Body)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = resp.Body.Close()
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = json.Unmarshal(responseBody, &textResponse)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
if textResponse.Error.Type != "" {
|
||||
return &dto.OpenAIErrorWithStatusCode{
|
||||
Error: textResponse.Error,
|
||||
StatusCode: resp.StatusCode,
|
||||
}, nil
|
||||
}
|
||||
// Reset response body
|
||||
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
|
||||
|
||||
// We shouldn't set the header before we parse the response body, because the parse part may fail.
|
||||
// And then we will have to send an error response, but in this case, the header has already been set.
|
||||
// So the HTTPClient will be confused by the response.
|
||||
// For example, Postman will report error, and we cannot check the response at all.
|
||||
for k, v := range resp.Header {
|
||||
c.Writer.Header().Set(k, v[0])
|
||||
}
|
||||
c.Writer.WriteHeader(resp.StatusCode)
|
||||
_, err = io.Copy(c.Writer, resp.Body)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
err = resp.Body.Close()
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
|
||||
}
|
||||
|
||||
return nil, &textResponse.Usage
|
||||
}
|
||||
|
||||
@@ -19,18 +19,24 @@ type ThinkingContentInfo struct {
|
||||
}
|
||||
|
||||
const (
|
||||
LastMessageTypeText = "text"
|
||||
LastMessageTypeTools = "tools"
|
||||
LastMessageTypeNone = "none"
|
||||
LastMessageTypeText = "text"
|
||||
LastMessageTypeTools = "tools"
|
||||
LastMessageTypeThinking = "thinking"
|
||||
)
|
||||
|
||||
type ClaudeConvertInfo struct {
|
||||
LastMessagesType string
|
||||
Index int
|
||||
Usage *dto.Usage
|
||||
FinishReason string
|
||||
Done bool
|
||||
}
|
||||
|
||||
const (
|
||||
RelayFormatOpenAI = "openai"
|
||||
RelayFormatClaude = "claude"
|
||||
RelayFormatGemini = "gemini"
|
||||
)
|
||||
|
||||
type RerankerInfo struct {
|
||||
@@ -38,6 +44,16 @@ type RerankerInfo struct {
|
||||
ReturnDocuments bool
|
||||
}
|
||||
|
||||
type BuildInToolInfo struct {
|
||||
ToolName string
|
||||
CallCount int
|
||||
SearchContextSize string
|
||||
}
|
||||
|
||||
type ResponsesUsageInfo struct {
|
||||
BuiltInTools map[string]*BuildInToolInfo
|
||||
}
|
||||
|
||||
type RelayInfo struct {
|
||||
ChannelType int
|
||||
ChannelId int
|
||||
@@ -82,9 +98,11 @@ type RelayInfo struct {
|
||||
UserQuota int
|
||||
RelayFormat string
|
||||
SendResponseCount int
|
||||
ChannelCreateTime int64
|
||||
ThinkingContentInfo
|
||||
ClaudeConvertInfo
|
||||
*ClaudeConvertInfo
|
||||
*RerankerInfo
|
||||
*ResponsesUsageInfo
|
||||
}
|
||||
|
||||
// 定义支持流式选项的通道类型
|
||||
@@ -97,6 +115,9 @@ var streamSupportedChannels = map[int]bool{
|
||||
common.ChannelTypeAzure: true,
|
||||
common.ChannelTypeVolcEngine: true,
|
||||
common.ChannelTypeOllama: true,
|
||||
common.ChannelTypeXai: true,
|
||||
common.ChannelTypeDeepSeek: true,
|
||||
common.ChannelTypeBaiduV2: true,
|
||||
}
|
||||
|
||||
func GenRelayInfoWs(c *gin.Context, ws *websocket.Conn) *RelayInfo {
|
||||
@@ -112,8 +133,8 @@ func GenRelayInfoClaude(c *gin.Context) *RelayInfo {
|
||||
info := GenRelayInfo(c)
|
||||
info.RelayFormat = RelayFormatClaude
|
||||
info.ShouldIncludeUsage = false
|
||||
info.ClaudeConvertInfo = ClaudeConvertInfo{
|
||||
LastMessagesType: LastMessageTypeText,
|
||||
info.ClaudeConvertInfo = &ClaudeConvertInfo{
|
||||
LastMessagesType: LastMessageTypeNone,
|
||||
}
|
||||
return info
|
||||
}
|
||||
@@ -128,6 +149,31 @@ func GenRelayInfoRerank(c *gin.Context, req *dto.RerankRequest) *RelayInfo {
|
||||
return info
|
||||
}
|
||||
|
||||
func GenRelayInfoResponses(c *gin.Context, req *dto.OpenAIResponsesRequest) *RelayInfo {
|
||||
info := GenRelayInfo(c)
|
||||
info.RelayMode = relayconstant.RelayModeResponses
|
||||
info.ResponsesUsageInfo = &ResponsesUsageInfo{
|
||||
BuiltInTools: make(map[string]*BuildInToolInfo),
|
||||
}
|
||||
if len(req.Tools) > 0 {
|
||||
for _, tool := range req.Tools {
|
||||
info.ResponsesUsageInfo.BuiltInTools[tool.Type] = &BuildInToolInfo{
|
||||
ToolName: tool.Type,
|
||||
CallCount: 0,
|
||||
}
|
||||
switch tool.Type {
|
||||
case dto.BuildInToolWebSearchPreview:
|
||||
if tool.SearchContextSize == "" {
|
||||
tool.SearchContextSize = "medium"
|
||||
}
|
||||
info.ResponsesUsageInfo.BuiltInTools[tool.Type].SearchContextSize = tool.SearchContextSize
|
||||
}
|
||||
}
|
||||
}
|
||||
info.IsStream = req.Stream
|
||||
return info
|
||||
}
|
||||
|
||||
func GenRelayInfo(c *gin.Context) *RelayInfo {
|
||||
channelType := c.GetInt("channel_type")
|
||||
channelId := c.GetInt("channel_id")
|
||||
@@ -164,14 +210,15 @@ func GenRelayInfo(c *gin.Context) *RelayInfo {
|
||||
OriginModelName: c.GetString("original_model"),
|
||||
UpstreamModelName: c.GetString("original_model"),
|
||||
//RecodeModelName: c.GetString("original_model"),
|
||||
IsModelMapped: false,
|
||||
ApiType: apiType,
|
||||
ApiVersion: c.GetString("api_version"),
|
||||
ApiKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
|
||||
Organization: c.GetString("channel_organization"),
|
||||
ChannelSetting: channelSetting,
|
||||
ParamOverride: paramOverride,
|
||||
RelayFormat: RelayFormatOpenAI,
|
||||
IsModelMapped: false,
|
||||
ApiType: apiType,
|
||||
ApiVersion: c.GetString("api_version"),
|
||||
ApiKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
|
||||
Organization: c.GetString("channel_organization"),
|
||||
ChannelSetting: channelSetting,
|
||||
ChannelCreateTime: c.GetInt64("channel_create_time"),
|
||||
ParamOverride: paramOverride,
|
||||
RelayFormat: RelayFormatOpenAI,
|
||||
ThinkingContentInfo: ThinkingContentInfo{
|
||||
IsFirstThinkingContent: true,
|
||||
SendLastThinkingContent: false,
|
||||
@@ -194,6 +241,10 @@ func GenRelayInfo(c *gin.Context) *RelayInfo {
|
||||
if streamSupportedChannels[info.ChannelType] {
|
||||
info.SupportStreamOptions = true
|
||||
}
|
||||
// responses 模式不支持 StreamOptions
|
||||
if relayconstant.RelayModeResponses == info.RelayMode {
|
||||
info.SupportStreamOptions = false
|
||||
}
|
||||
return info
|
||||
}
|
||||
|
||||
@@ -212,6 +263,10 @@ func (info *RelayInfo) SetFirstResponseTime() {
|
||||
}
|
||||
}
|
||||
|
||||
func (info *RelayInfo) HasSendResponse() bool {
|
||||
return info.FirstResponseTime.After(info.StartTime)
|
||||
}
|
||||
|
||||
type TaskRelayInfo struct {
|
||||
*RelayInfo
|
||||
Action string
|
||||
|
||||
@@ -32,6 +32,7 @@ const (
|
||||
APITypeBaiduV2
|
||||
APITypeOpenRouter
|
||||
APITypeXinference
|
||||
APITypeXai
|
||||
APITypeDummy // this one is only for count, do not add any channel after this
|
||||
)
|
||||
|
||||
@@ -92,6 +93,8 @@ func ChannelType2APIType(channelType int) (int, bool) {
|
||||
apiType = APITypeOpenRouter
|
||||
case common.ChannelTypeXinference:
|
||||
apiType = APITypeXinference
|
||||
case common.ChannelTypeXai:
|
||||
apiType = APITypeXai
|
||||
}
|
||||
if apiType == -1 {
|
||||
return APITypeOpenAI, false
|
||||
|
||||
@@ -12,6 +12,7 @@ const (
|
||||
RelayModeEmbeddings
|
||||
RelayModeModerations
|
||||
RelayModeImagesGenerations
|
||||
RelayModeImagesEdits
|
||||
RelayModeEdits
|
||||
|
||||
RelayModeMidjourneyImagine
|
||||
@@ -39,6 +40,8 @@ const (
|
||||
|
||||
RelayModeRerank
|
||||
|
||||
RelayModeResponses
|
||||
|
||||
RelayModeRealtime
|
||||
)
|
||||
|
||||
@@ -56,8 +59,12 @@ func Path2RelayMode(path string) int {
|
||||
relayMode = RelayModeModerations
|
||||
} else if strings.HasPrefix(path, "/v1/images/generations") {
|
||||
relayMode = RelayModeImagesGenerations
|
||||
} else if strings.HasPrefix(path, "/v1/images/edits") {
|
||||
relayMode = RelayModeImagesEdits
|
||||
} else if strings.HasPrefix(path, "/v1/edits") {
|
||||
relayMode = RelayModeEdits
|
||||
} else if strings.HasPrefix(path, "/v1/responses") {
|
||||
relayMode = RelayModeResponses
|
||||
} else if strings.HasPrefix(path, "/v1/audio/speech") {
|
||||
relayMode = RelayModeAudioSpeech
|
||||
} else if strings.HasPrefix(path, "/v1/audio/transcriptions") {
|
||||
|
||||
@@ -43,6 +43,14 @@ func ClaudeChunkData(c *gin.Context, resp dto.ClaudeResponse, data string) {
|
||||
}
|
||||
}
|
||||
|
||||
func ResponseChunkData(c *gin.Context, resp dto.ResponsesStreamResponse, data string) {
|
||||
c.Render(-1, common.CustomEvent{Data: fmt.Sprintf("event: %s\n", resp.Type)})
|
||||
c.Render(-1, common.CustomEvent{Data: fmt.Sprintf("data: %s", data)})
|
||||
if flusher, ok := c.Writer.(http.Flusher); ok {
|
||||
flusher.Flush()
|
||||
}
|
||||
}
|
||||
|
||||
func StringData(c *gin.Context, str string) error {
|
||||
//str = strings.TrimPrefix(str, "data: ")
|
||||
//str = strings.TrimSuffix(str, "\r")
|
||||
@@ -55,7 +63,20 @@ func StringData(c *gin.Context, str string) error {
|
||||
return nil
|
||||
}
|
||||
|
||||
func PingData(c *gin.Context) error {
|
||||
c.Writer.Write([]byte(": PING\n\n"))
|
||||
if flusher, ok := c.Writer.(http.Flusher); ok {
|
||||
flusher.Flush()
|
||||
} else {
|
||||
return errors.New("streaming error: flusher not found")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func ObjectData(c *gin.Context, object interface{}) error {
|
||||
if object == nil {
|
||||
return errors.New("object is nil")
|
||||
}
|
||||
jsonData, err := json.Marshal(object)
|
||||
if err != nil {
|
||||
return fmt.Errorf("error marshalling object: %w", err)
|
||||
|
||||
@@ -2,9 +2,11 @@ package helper
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"one-api/relay/common"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
func ModelMappedHelper(c *gin.Context, info *common.RelayInfo) error {
|
||||
@@ -16,9 +18,36 @@ func ModelMappedHelper(c *gin.Context, info *common.RelayInfo) error {
|
||||
if err != nil {
|
||||
return fmt.Errorf("unmarshal_model_mapping_failed")
|
||||
}
|
||||
if modelMap[info.OriginModelName] != "" {
|
||||
info.UpstreamModelName = modelMap[info.OriginModelName]
|
||||
info.IsModelMapped = true
|
||||
|
||||
// 支持链式模型重定向,最终使用链尾的模型
|
||||
currentModel := info.OriginModelName
|
||||
visitedModels := map[string]bool{
|
||||
currentModel: true,
|
||||
}
|
||||
for {
|
||||
if mappedModel, exists := modelMap[currentModel]; exists && mappedModel != "" {
|
||||
// 模型重定向循环检测,避免无限循环
|
||||
if visitedModels[mappedModel] {
|
||||
if mappedModel == currentModel {
|
||||
if currentModel == info.OriginModelName {
|
||||
info.IsModelMapped = false
|
||||
return nil
|
||||
} else {
|
||||
info.IsModelMapped = true
|
||||
break
|
||||
}
|
||||
}
|
||||
return errors.New("model_mapping_contains_cycle")
|
||||
}
|
||||
visitedModels[mappedModel] = true
|
||||
currentModel = mappedModel
|
||||
info.IsModelMapped = true
|
||||
} else {
|
||||
break
|
||||
}
|
||||
}
|
||||
if info.IsModelMapped {
|
||||
info.UpstreamModelName = currentModel
|
||||
}
|
||||
}
|
||||
return nil
|
||||
|
||||
@@ -15,14 +15,15 @@ type PriceData struct {
|
||||
ModelRatio float64
|
||||
CompletionRatio float64
|
||||
CacheRatio float64
|
||||
CacheCreationRatio float64
|
||||
ImageRatio float64
|
||||
GroupRatio float64
|
||||
UsePrice bool
|
||||
CacheCreationRatio float64
|
||||
ShouldPreConsumedQuota int
|
||||
}
|
||||
|
||||
func (p PriceData) ToSetting() string {
|
||||
return fmt.Sprintf("ModelPrice: %f, ModelRatio: %f, CompletionRatio: %f, CacheRatio: %f, GroupRatio: %f, UsePrice: %t, CacheCreationRatio: %f, ShouldPreConsumedQuota: %d", p.ModelPrice, p.ModelRatio, p.CompletionRatio, p.CacheRatio, p.GroupRatio, p.UsePrice, p.CacheCreationRatio, p.ShouldPreConsumedQuota)
|
||||
return fmt.Sprintf("ModelPrice: %f, ModelRatio: %f, CompletionRatio: %f, CacheRatio: %f, GroupRatio: %f, UsePrice: %t, CacheCreationRatio: %f, ShouldPreConsumedQuota: %d, ImageRatio: %d", p.ModelPrice, p.ModelRatio, p.CompletionRatio, p.CacheRatio, p.GroupRatio, p.UsePrice, p.CacheCreationRatio, p.ShouldPreConsumedQuota, p.ImageRatio)
|
||||
}
|
||||
|
||||
func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens int, maxTokens int) (PriceData, error) {
|
||||
@@ -32,6 +33,7 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
|
||||
var modelRatio float64
|
||||
var completionRatio float64
|
||||
var cacheRatio float64
|
||||
var imageRatio float64
|
||||
var cacheCreationRatio float64
|
||||
if !usePrice {
|
||||
preConsumedTokens := common.PreConsumedQuota
|
||||
@@ -49,16 +51,13 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
|
||||
}
|
||||
}
|
||||
if !acceptUnsetRatio {
|
||||
if info.UserId == 1 {
|
||||
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置,请设置或开始自用模式;Model %s ratio or price not set, please set or start self-use mode", info.OriginModelName, info.OriginModelName)
|
||||
} else {
|
||||
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置, 请联系管理员设置;Model %s ratio or price not set, please contact administrator to set", info.OriginModelName, info.OriginModelName)
|
||||
}
|
||||
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置,请联系管理员设置或开始自用模式;Model %s ratio or price not set, please set or start self-use mode", info.OriginModelName, info.OriginModelName)
|
||||
}
|
||||
}
|
||||
completionRatio = operation_setting.GetCompletionRatio(info.OriginModelName)
|
||||
cacheRatio, _ = operation_setting.GetCacheRatio(info.OriginModelName)
|
||||
cacheCreationRatio, _ = operation_setting.GetCreateCacheRatio(info.OriginModelName)
|
||||
imageRatio, _ = operation_setting.GetImageRatio(info.OriginModelName)
|
||||
ratio := modelRatio * groupRatio
|
||||
preConsumedQuota = int(float64(preConsumedTokens) * ratio)
|
||||
} else {
|
||||
@@ -72,6 +71,7 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
|
||||
GroupRatio: groupRatio,
|
||||
UsePrice: usePrice,
|
||||
CacheRatio: cacheRatio,
|
||||
ImageRatio: imageRatio,
|
||||
CacheCreationRatio: cacheCreationRatio,
|
||||
ShouldPreConsumedQuota: preConsumedQuota,
|
||||
}
|
||||
@@ -82,3 +82,15 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
|
||||
|
||||
return priceData, nil
|
||||
}
|
||||
|
||||
func ContainPriceOrRatio(modelName string) bool {
|
||||
_, ok := operation_setting.GetModelPrice(modelName, false)
|
||||
if ok {
|
||||
return true
|
||||
}
|
||||
_, ok = operation_setting.GetModelRatio(modelName)
|
||||
if ok {
|
||||
return true
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
@@ -3,42 +3,67 @@ package helper
|
||||
import (
|
||||
"bufio"
|
||||
"context"
|
||||
"github.com/bytedance/gopkg/util/gopool"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
"one-api/constant"
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/setting/operation_setting"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
const (
|
||||
InitialScannerBufferSize = 1 << 20 // 1MB (1*1024*1024)
|
||||
MaxScannerBufferSize = 10 << 20 // 10MB (10*1024*1024)
|
||||
DefaultPingInterval = 10 * time.Second
|
||||
)
|
||||
|
||||
func StreamScannerHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, dataHandler func(data string) bool) {
|
||||
|
||||
if resp == nil {
|
||||
if resp == nil || dataHandler == nil {
|
||||
return
|
||||
}
|
||||
|
||||
defer resp.Body.Close()
|
||||
|
||||
streamingTimeout := time.Duration(constant.StreamingTimeout) * time.Second
|
||||
if strings.HasPrefix(info.UpstreamModelName, "o1") || strings.HasPrefix(info.UpstreamModelName, "o3") {
|
||||
if strings.HasPrefix(info.UpstreamModelName, "o") {
|
||||
// twice timeout for thinking model
|
||||
streamingTimeout *= 2
|
||||
}
|
||||
|
||||
var (
|
||||
stopChan = make(chan bool, 2)
|
||||
scanner = bufio.NewScanner(resp.Body)
|
||||
ticker = time.NewTicker(streamingTimeout)
|
||||
stopChan = make(chan bool, 2)
|
||||
scanner = bufio.NewScanner(resp.Body)
|
||||
ticker = time.NewTicker(streamingTimeout)
|
||||
pingTicker *time.Ticker
|
||||
writeMutex sync.Mutex // Mutex to protect concurrent writes
|
||||
)
|
||||
|
||||
generalSettings := operation_setting.GetGeneralSetting()
|
||||
pingEnabled := generalSettings.PingIntervalEnabled
|
||||
pingInterval := time.Duration(generalSettings.PingIntervalSeconds) * time.Second
|
||||
if pingInterval <= 0 {
|
||||
pingInterval = DefaultPingInterval
|
||||
}
|
||||
|
||||
if pingEnabled {
|
||||
pingTicker = time.NewTicker(pingInterval)
|
||||
}
|
||||
|
||||
defer func() {
|
||||
ticker.Stop()
|
||||
if pingTicker != nil {
|
||||
pingTicker.Stop()
|
||||
}
|
||||
close(stopChan)
|
||||
}()
|
||||
|
||||
scanner.Buffer(make([]byte, InitialScannerBufferSize), MaxScannerBufferSize)
|
||||
scanner.Split(bufio.ScanLines)
|
||||
SetEventStreamHeaders(c)
|
||||
|
||||
@@ -46,6 +71,34 @@ func StreamScannerHandler(c *gin.Context, resp *http.Response, info *relaycommon
|
||||
defer cancel()
|
||||
|
||||
ctx = context.WithValue(ctx, "stop_chan", stopChan)
|
||||
|
||||
// Handle ping data sending
|
||||
if pingEnabled && pingTicker != nil {
|
||||
gopool.Go(func() {
|
||||
for {
|
||||
select {
|
||||
case <-pingTicker.C:
|
||||
writeMutex.Lock() // Lock before writing
|
||||
err := PingData(c)
|
||||
writeMutex.Unlock() // Unlock after writing
|
||||
if err != nil {
|
||||
common.LogError(c, "ping data error: "+err.Error())
|
||||
common.SafeSendBool(stopChan, true)
|
||||
return
|
||||
}
|
||||
if common.DebugEnabled {
|
||||
println("ping data sent")
|
||||
}
|
||||
case <-ctx.Done():
|
||||
if common.DebugEnabled {
|
||||
println("ping data goroutine stopped")
|
||||
}
|
||||
return
|
||||
}
|
||||
}
|
||||
})
|
||||
}
|
||||
|
||||
common.RelayCtxGo(ctx, func() {
|
||||
for scanner.Scan() {
|
||||
ticker.Reset(streamingTimeout)
|
||||
@@ -62,10 +115,12 @@ func StreamScannerHandler(c *gin.Context, resp *http.Response, info *relaycommon
|
||||
}
|
||||
data = data[5:]
|
||||
data = strings.TrimLeft(data, " ")
|
||||
data = strings.TrimSuffix(data, "\"")
|
||||
data = strings.TrimSuffix(data, "\r")
|
||||
if !strings.HasPrefix(data, "[DONE]") {
|
||||
info.SetFirstResponseTime()
|
||||
writeMutex.Lock() // Lock before writing
|
||||
success := dataHandler(data)
|
||||
writeMutex.Unlock() // Unlock after writing
|
||||
if !success {
|
||||
break
|
||||
}
|
||||
@@ -85,7 +140,9 @@ func StreamScannerHandler(c *gin.Context, resp *http.Response, info *relaycommon
|
||||
case <-ticker.C:
|
||||
// 超时处理逻辑
|
||||
common.LogError(c, "streaming timeout")
|
||||
common.SafeSendBool(stopChan, true)
|
||||
case <-stopChan:
|
||||
// 正常结束
|
||||
common.LogInfo(c, "streaming finished")
|
||||
}
|
||||
}
|
||||
|
||||
@@ -5,21 +5,83 @@ import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"github.com/gin-gonic/gin"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
"one-api/dto"
|
||||
"one-api/model"
|
||||
relaycommon "one-api/relay/common"
|
||||
relayconstant "one-api/relay/constant"
|
||||
"one-api/relay/helper"
|
||||
"one-api/service"
|
||||
"one-api/setting"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
func getAndValidImageRequest(c *gin.Context, info *relaycommon.RelayInfo) (*dto.ImageRequest, error) {
|
||||
imageRequest := &dto.ImageRequest{}
|
||||
|
||||
switch info.RelayMode {
|
||||
case relayconstant.RelayModeImagesEdits:
|
||||
_, err := c.MultipartForm()
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
formData := c.Request.PostForm
|
||||
imageRequest.Prompt = formData.Get("prompt")
|
||||
imageRequest.Model = formData.Get("model")
|
||||
imageRequest.N = common.String2Int(formData.Get("n"))
|
||||
imageRequest.Quality = formData.Get("quality")
|
||||
imageRequest.Size = formData.Get("size")
|
||||
|
||||
if imageRequest.Model == "gpt-image-1" {
|
||||
if imageRequest.Quality == "" {
|
||||
imageRequest.Quality = "standard"
|
||||
}
|
||||
}
|
||||
default:
|
||||
err := common.UnmarshalBodyReusable(c, imageRequest)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
// Not "256x256", "512x512", or "1024x1024"
|
||||
if imageRequest.Model == "dall-e-2" || imageRequest.Model == "dall-e" {
|
||||
if imageRequest.Size != "" && imageRequest.Size != "256x256" && imageRequest.Size != "512x512" && imageRequest.Size != "1024x1024" {
|
||||
return nil, errors.New("size must be one of 256x256, 512x512, or 1024x1024 for dall-e-2 or dall-e")
|
||||
}
|
||||
} else if imageRequest.Model == "dall-e-3" {
|
||||
if imageRequest.Size != "" && imageRequest.Size != "1024x1024" && imageRequest.Size != "1024x1792" && imageRequest.Size != "1792x1024" {
|
||||
return nil, errors.New("size must be one of 1024x1024, 1024x1792 or 1792x1024 for dall-e-3")
|
||||
}
|
||||
if imageRequest.Quality == "" {
|
||||
imageRequest.Quality = "standard"
|
||||
}
|
||||
// N should between 1 and 10
|
||||
//if imageRequest.N != 0 && (imageRequest.N < 1 || imageRequest.N > 10) {
|
||||
// return service.OpenAIErrorWrapper(errors.New("n must be between 1 and 10"), "invalid_field_value", http.StatusBadRequest)
|
||||
//}
|
||||
}
|
||||
}
|
||||
|
||||
if imageRequest.Prompt == "" {
|
||||
return nil, errors.New("prompt is required")
|
||||
}
|
||||
|
||||
if imageRequest.Model == "" {
|
||||
imageRequest.Model = "dall-e-2"
|
||||
}
|
||||
if strings.Contains(imageRequest.Size, "×") {
|
||||
return nil, errors.New("size an unexpected error occurred in the parameter, please use 'x' instead of the multiplication sign '×'")
|
||||
}
|
||||
if imageRequest.N == 0 {
|
||||
imageRequest.N = 1
|
||||
}
|
||||
if imageRequest.Size == "" {
|
||||
imageRequest.Size = "1024x1024"
|
||||
}
|
||||
|
||||
err := common.UnmarshalBodyReusable(c, imageRequest)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -39,6 +101,10 @@ func getAndValidImageRequest(c *gin.Context, info *relaycommon.RelayInfo) (*dto.
|
||||
if imageRequest.Model == "" {
|
||||
imageRequest.Model = "dall-e-2"
|
||||
}
|
||||
// x.ai grok-2-image not support size, quality or style
|
||||
if imageRequest.Size == "empty" {
|
||||
imageRequest.Size = ""
|
||||
}
|
||||
|
||||
// Not "256x256", "512x512", or "1024x1024"
|
||||
if imageRequest.Model == "dall-e-2" || imageRequest.Model == "dall-e" {
|
||||
@@ -86,43 +152,59 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
|
||||
|
||||
imageRequest.Model = relayInfo.UpstreamModelName
|
||||
|
||||
priceData, err := helper.ModelPriceHelper(c, relayInfo, 0, 0)
|
||||
priceData, err := helper.ModelPriceHelper(c, relayInfo, len(imageRequest.Prompt), 0)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
|
||||
}
|
||||
var preConsumedQuota int
|
||||
var quota int
|
||||
var userQuota int
|
||||
if !priceData.UsePrice {
|
||||
// modelRatio 16 = modelPrice $0.04
|
||||
// per 1 modelRatio = $0.04 / 16
|
||||
priceData.ModelPrice = 0.0025 * priceData.ModelRatio
|
||||
}
|
||||
|
||||
userQuota, err := model.GetUserQuota(relayInfo.UserId, false)
|
||||
|
||||
sizeRatio := 1.0
|
||||
// Size
|
||||
if imageRequest.Size == "256x256" {
|
||||
sizeRatio = 0.4
|
||||
} else if imageRequest.Size == "512x512" {
|
||||
sizeRatio = 0.45
|
||||
} else if imageRequest.Size == "1024x1024" {
|
||||
sizeRatio = 1
|
||||
} else if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
|
||||
sizeRatio = 2
|
||||
}
|
||||
|
||||
qualityRatio := 1.0
|
||||
if imageRequest.Model == "dall-e-3" && imageRequest.Quality == "hd" {
|
||||
qualityRatio = 2.0
|
||||
if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
|
||||
qualityRatio = 1.5
|
||||
// priceData.ModelPrice = 0.0025 * priceData.ModelRatio
|
||||
var openaiErr *dto.OpenAIErrorWithStatusCode
|
||||
preConsumedQuota, userQuota, openaiErr = preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
|
||||
if openaiErr != nil {
|
||||
return openaiErr
|
||||
}
|
||||
}
|
||||
defer func() {
|
||||
if openaiErr != nil {
|
||||
returnPreConsumedQuota(c, relayInfo, userQuota, preConsumedQuota)
|
||||
}
|
||||
}()
|
||||
|
||||
priceData.ModelPrice *= sizeRatio * qualityRatio * float64(imageRequest.N)
|
||||
quota := int(priceData.ModelPrice * priceData.GroupRatio * common.QuotaPerUnit)
|
||||
} else {
|
||||
sizeRatio := 1.0
|
||||
// Size
|
||||
if imageRequest.Size == "256x256" {
|
||||
sizeRatio = 0.4
|
||||
} else if imageRequest.Size == "512x512" {
|
||||
sizeRatio = 0.45
|
||||
} else if imageRequest.Size == "1024x1024" {
|
||||
sizeRatio = 1
|
||||
} else if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
|
||||
sizeRatio = 2
|
||||
}
|
||||
|
||||
if userQuota-quota < 0 {
|
||||
return service.OpenAIErrorWrapperLocal(fmt.Errorf("image pre-consumed quota failed, user quota: %s, need quota: %s", common.FormatQuota(userQuota), common.FormatQuota(quota)), "insufficient_user_quota", http.StatusForbidden)
|
||||
qualityRatio := 1.0
|
||||
if imageRequest.Model == "dall-e-3" && imageRequest.Quality == "hd" {
|
||||
qualityRatio = 2.0
|
||||
if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
|
||||
qualityRatio = 1.5
|
||||
}
|
||||
}
|
||||
|
||||
// reset model price
|
||||
priceData.ModelPrice *= sizeRatio * qualityRatio * float64(imageRequest.N)
|
||||
quota = int(priceData.ModelPrice * priceData.GroupRatio * common.QuotaPerUnit)
|
||||
userQuota, err = model.GetUserQuota(relayInfo.UserId, false)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "get_user_quota_failed", http.StatusInternalServerError)
|
||||
}
|
||||
if userQuota-quota < 0 {
|
||||
return service.OpenAIErrorWrapperLocal(fmt.Errorf("image pre-consumed quota failed, user quota: %s, need quota: %s", common.FormatQuota(userQuota), common.FormatQuota(quota)), "insufficient_user_quota", http.StatusForbidden)
|
||||
}
|
||||
}
|
||||
|
||||
adaptor := GetAdaptor(relayInfo.ApiType)
|
||||
@@ -137,12 +219,15 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "convert_request_failed", http.StatusInternalServerError)
|
||||
}
|
||||
|
||||
jsonData, err := json.Marshal(convertedRequest)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "json_marshal_failed", http.StatusInternalServerError)
|
||||
if relayInfo.RelayMode == relayconstant.RelayModeImagesEdits {
|
||||
requestBody = convertedRequest.(io.Reader)
|
||||
} else {
|
||||
jsonData, err := json.Marshal(convertedRequest)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "json_marshal_failed", http.StatusInternalServerError)
|
||||
}
|
||||
requestBody = bytes.NewBuffer(jsonData)
|
||||
}
|
||||
requestBody = bytes.NewBuffer(jsonData)
|
||||
|
||||
statusCodeMappingStr := c.GetString("status_code_mapping")
|
||||
|
||||
@@ -162,24 +247,25 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
|
||||
}
|
||||
}
|
||||
|
||||
_, openaiErr := adaptor.DoResponse(c, httpResp, relayInfo)
|
||||
usage, openaiErr := adaptor.DoResponse(c, httpResp, relayInfo)
|
||||
if openaiErr != nil {
|
||||
// reset status code 重置状态码
|
||||
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
|
||||
return openaiErr
|
||||
}
|
||||
|
||||
usage := &dto.Usage{
|
||||
PromptTokens: imageRequest.N,
|
||||
TotalTokens: imageRequest.N,
|
||||
if usage.(*dto.Usage).TotalTokens == 0 {
|
||||
usage.(*dto.Usage).TotalTokens = imageRequest.N
|
||||
}
|
||||
if usage.(*dto.Usage).PromptTokens == 0 {
|
||||
usage.(*dto.Usage).PromptTokens = imageRequest.N
|
||||
}
|
||||
|
||||
quality := "standard"
|
||||
if imageRequest.Quality == "hd" {
|
||||
quality = "hd"
|
||||
}
|
||||
|
||||
logContent := fmt.Sprintf("大小 %s, 品质 %s", imageRequest.Size, quality)
|
||||
postConsumeQuota(c, relayInfo, usage, 0, userQuota, priceData, logContent)
|
||||
postConsumeQuota(c, relayInfo, usage.(*dto.Usage), preConsumedQuota, userQuota, priceData, logContent)
|
||||
return nil
|
||||
}
|
||||
|
||||
171
relay/relay-responses.go
Normal file
171
relay/relay-responses.go
Normal file
@@ -0,0 +1,171 @@
|
||||
package relay
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"one-api/common"
|
||||
"one-api/dto"
|
||||
relaycommon "one-api/relay/common"
|
||||
"one-api/relay/helper"
|
||||
"one-api/service"
|
||||
"one-api/setting"
|
||||
"one-api/setting/model_setting"
|
||||
"strings"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
func getAndValidateResponsesRequest(c *gin.Context) (*dto.OpenAIResponsesRequest, error) {
|
||||
request := &dto.OpenAIResponsesRequest{}
|
||||
err := common.UnmarshalBodyReusable(c, request)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
if request.Model == "" {
|
||||
return nil, errors.New("model is required")
|
||||
}
|
||||
if len(request.Input) == 0 {
|
||||
return nil, errors.New("input is required")
|
||||
}
|
||||
return request, nil
|
||||
|
||||
}
|
||||
|
||||
func checkInputSensitive(textRequest *dto.OpenAIResponsesRequest, info *relaycommon.RelayInfo) ([]string, error) {
|
||||
sensitiveWords, err := service.CheckSensitiveInput(textRequest.Input)
|
||||
return sensitiveWords, err
|
||||
}
|
||||
|
||||
func getInputTokens(req *dto.OpenAIResponsesRequest, info *relaycommon.RelayInfo) (int, error) {
|
||||
inputTokens, err := service.CountTokenInput(req.Input, req.Model)
|
||||
info.PromptTokens = inputTokens
|
||||
return inputTokens, err
|
||||
}
|
||||
|
||||
func ResponsesHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
|
||||
req, err := getAndValidateResponsesRequest(c)
|
||||
if err != nil {
|
||||
common.LogError(c, fmt.Sprintf("getAndValidateResponsesRequest error: %s", err.Error()))
|
||||
return service.OpenAIErrorWrapperLocal(err, "invalid_responses_request", http.StatusBadRequest)
|
||||
}
|
||||
|
||||
relayInfo := relaycommon.GenRelayInfoResponses(c, req)
|
||||
|
||||
if setting.ShouldCheckPromptSensitive() {
|
||||
sensitiveWords, err := checkInputSensitive(req, relayInfo)
|
||||
if err != nil {
|
||||
common.LogWarn(c, fmt.Sprintf("user sensitive words detected: %s", strings.Join(sensitiveWords, ", ")))
|
||||
return service.OpenAIErrorWrapperLocal(err, "check_request_sensitive_error", http.StatusBadRequest)
|
||||
}
|
||||
}
|
||||
|
||||
err = helper.ModelMappedHelper(c, relayInfo)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "model_mapped_error", http.StatusBadRequest)
|
||||
}
|
||||
req.Model = relayInfo.UpstreamModelName
|
||||
if value, exists := c.Get("prompt_tokens"); exists {
|
||||
promptTokens := value.(int)
|
||||
relayInfo.SetPromptTokens(promptTokens)
|
||||
} else {
|
||||
promptTokens, err := getInputTokens(req, relayInfo)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "count_input_tokens_error", http.StatusBadRequest)
|
||||
}
|
||||
c.Set("prompt_tokens", promptTokens)
|
||||
}
|
||||
|
||||
priceData, err := helper.ModelPriceHelper(c, relayInfo, relayInfo.PromptTokens, int(req.MaxOutputTokens))
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
|
||||
}
|
||||
// pre consume quota
|
||||
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
|
||||
if openaiErr != nil {
|
||||
return openaiErr
|
||||
}
|
||||
defer func() {
|
||||
if openaiErr != nil {
|
||||
returnPreConsumedQuota(c, relayInfo, userQuota, preConsumedQuota)
|
||||
}
|
||||
}()
|
||||
adaptor := GetAdaptor(relayInfo.ApiType)
|
||||
if adaptor == nil {
|
||||
return service.OpenAIErrorWrapperLocal(fmt.Errorf("invalid api type: %d", relayInfo.ApiType), "invalid_api_type", http.StatusBadRequest)
|
||||
}
|
||||
adaptor.Init(relayInfo)
|
||||
var requestBody io.Reader
|
||||
if model_setting.GetGlobalSettings().PassThroughRequestEnabled {
|
||||
body, err := common.GetRequestBody(c)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "get_request_body_error", http.StatusInternalServerError)
|
||||
}
|
||||
requestBody = bytes.NewBuffer(body)
|
||||
} else {
|
||||
convertedRequest, err := adaptor.ConvertOpenAIResponsesRequest(c, relayInfo, *req)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "convert_request_error", http.StatusBadRequest)
|
||||
}
|
||||
jsonData, err := json.Marshal(convertedRequest)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "marshal_request_error", http.StatusInternalServerError)
|
||||
}
|
||||
// apply param override
|
||||
if len(relayInfo.ParamOverride) > 0 {
|
||||
reqMap := make(map[string]interface{})
|
||||
err = json.Unmarshal(jsonData, &reqMap)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "param_override_unmarshal_failed", http.StatusInternalServerError)
|
||||
}
|
||||
for key, value := range relayInfo.ParamOverride {
|
||||
reqMap[key] = value
|
||||
}
|
||||
jsonData, err = json.Marshal(reqMap)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapperLocal(err, "param_override_marshal_failed", http.StatusInternalServerError)
|
||||
}
|
||||
}
|
||||
|
||||
if common.DebugEnabled {
|
||||
println("requestBody: ", string(jsonData))
|
||||
}
|
||||
requestBody = bytes.NewBuffer(jsonData)
|
||||
}
|
||||
|
||||
var httpResp *http.Response
|
||||
resp, err := adaptor.DoRequest(c, relayInfo, requestBody)
|
||||
if err != nil {
|
||||
return service.OpenAIErrorWrapper(err, "do_request_failed", http.StatusInternalServerError)
|
||||
}
|
||||
|
||||
statusCodeMappingStr := c.GetString("status_code_mapping")
|
||||
|
||||
if resp != nil {
|
||||
httpResp = resp.(*http.Response)
|
||||
|
||||
if httpResp.StatusCode != http.StatusOK {
|
||||
openaiErr = service.RelayErrorHandler(httpResp, false)
|
||||
// reset status code 重置状态码
|
||||
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
|
||||
return openaiErr
|
||||
}
|
||||
}
|
||||
|
||||
usage, openaiErr := adaptor.DoResponse(c, httpResp, relayInfo)
|
||||
if openaiErr != nil {
|
||||
// reset status code 重置状态码
|
||||
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
|
||||
return openaiErr
|
||||
}
|
||||
|
||||
if strings.HasPrefix(relayInfo.OriginModelName, "gpt-4o-audio") {
|
||||
service.PostAudioConsumeQuota(c, relayInfo, usage.(*dto.Usage), preConsumedQuota, userQuota, priceData, "")
|
||||
} else {
|
||||
postConsumeQuota(c, relayInfo, usage.(*dto.Usage), preConsumedQuota, userQuota, priceData, "")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
@@ -18,6 +18,7 @@ import (
|
||||
"one-api/service"
|
||||
"one-api/setting"
|
||||
"one-api/setting/model_setting"
|
||||
"one-api/setting/operation_setting"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
@@ -331,12 +332,14 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
|
||||
useTimeSeconds := time.Now().Unix() - relayInfo.StartTime.Unix()
|
||||
promptTokens := usage.PromptTokens
|
||||
cacheTokens := usage.PromptTokensDetails.CachedTokens
|
||||
imageTokens := usage.PromptTokensDetails.ImageTokens
|
||||
completionTokens := usage.CompletionTokens
|
||||
modelName := relayInfo.OriginModelName
|
||||
|
||||
tokenName := ctx.GetString("token_name")
|
||||
completionRatio := priceData.CompletionRatio
|
||||
cacheRatio := priceData.CacheRatio
|
||||
imageRatio := priceData.ImageRatio
|
||||
modelRatio := priceData.ModelRatio
|
||||
groupRatio := priceData.GroupRatio
|
||||
modelPrice := priceData.ModelPrice
|
||||
@@ -344,9 +347,11 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
|
||||
// Convert values to decimal for precise calculation
|
||||
dPromptTokens := decimal.NewFromInt(int64(promptTokens))
|
||||
dCacheTokens := decimal.NewFromInt(int64(cacheTokens))
|
||||
dImageTokens := decimal.NewFromInt(int64(imageTokens))
|
||||
dCompletionTokens := decimal.NewFromInt(int64(completionTokens))
|
||||
dCompletionRatio := decimal.NewFromFloat(completionRatio)
|
||||
dCacheRatio := decimal.NewFromFloat(cacheRatio)
|
||||
dImageRatio := decimal.NewFromFloat(imageRatio)
|
||||
dModelRatio := decimal.NewFromFloat(modelRatio)
|
||||
dGroupRatio := decimal.NewFromFloat(groupRatio)
|
||||
dModelPrice := decimal.NewFromFloat(modelPrice)
|
||||
@@ -354,11 +359,46 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
|
||||
|
||||
ratio := dModelRatio.Mul(dGroupRatio)
|
||||
|
||||
// openai web search 工具计费
|
||||
var dWebSearchQuota decimal.Decimal
|
||||
var webSearchPrice float64
|
||||
if relayInfo.ResponsesUsageInfo != nil {
|
||||
if webSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolWebSearchPreview]; exists && webSearchTool.CallCount > 0 {
|
||||
// 计算 web search 调用的配额 (配额 = 价格 * 调用次数 / 1000 * 分组倍率)
|
||||
webSearchPrice = operation_setting.GetWebSearchPricePerThousand(modelName, webSearchTool.SearchContextSize)
|
||||
dWebSearchQuota = decimal.NewFromFloat(webSearchPrice).
|
||||
Mul(decimal.NewFromInt(int64(webSearchTool.CallCount))).
|
||||
Div(decimal.NewFromInt(1000)).Mul(dGroupRatio).Mul(dQuotaPerUnit)
|
||||
extraContent += fmt.Sprintf("Web Search 调用 %d 次,上下文大小 %s,调用花费 $%s",
|
||||
webSearchTool.CallCount, webSearchTool.SearchContextSize, dWebSearchQuota.String())
|
||||
}
|
||||
}
|
||||
// file search tool 计费
|
||||
var dFileSearchQuota decimal.Decimal
|
||||
var fileSearchPrice float64
|
||||
if relayInfo.ResponsesUsageInfo != nil {
|
||||
if fileSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolFileSearch]; exists && fileSearchTool.CallCount > 0 {
|
||||
fileSearchPrice = operation_setting.GetFileSearchPricePerThousand()
|
||||
dFileSearchQuota = decimal.NewFromFloat(fileSearchPrice).
|
||||
Mul(decimal.NewFromInt(int64(fileSearchTool.CallCount))).
|
||||
Div(decimal.NewFromInt(1000)).Mul(dGroupRatio).Mul(dQuotaPerUnit)
|
||||
extraContent += fmt.Sprintf("File Search 调用 %d 次,调用花费 $%s",
|
||||
fileSearchTool.CallCount, dFileSearchQuota.String())
|
||||
}
|
||||
}
|
||||
|
||||
var quotaCalculateDecimal decimal.Decimal
|
||||
if !priceData.UsePrice {
|
||||
nonCachedTokens := dPromptTokens.Sub(dCacheTokens)
|
||||
cachedTokensWithRatio := dCacheTokens.Mul(dCacheRatio)
|
||||
|
||||
promptQuota := nonCachedTokens.Add(cachedTokensWithRatio)
|
||||
if imageTokens > 0 {
|
||||
nonImageTokens := dPromptTokens.Sub(dImageTokens)
|
||||
imageTokensWithRatio := dImageTokens.Mul(dImageRatio)
|
||||
promptQuota = nonImageTokens.Add(imageTokensWithRatio)
|
||||
}
|
||||
|
||||
completionQuota := dCompletionTokens.Mul(dCompletionRatio)
|
||||
|
||||
quotaCalculateDecimal = promptQuota.Add(completionQuota).Mul(ratio)
|
||||
@@ -369,6 +409,9 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
|
||||
} else {
|
||||
quotaCalculateDecimal = dModelPrice.Mul(dQuotaPerUnit).Mul(dGroupRatio)
|
||||
}
|
||||
// 添加 responses tools call 调用的配额
|
||||
quotaCalculateDecimal = quotaCalculateDecimal.Add(dWebSearchQuota)
|
||||
quotaCalculateDecimal = quotaCalculateDecimal.Add(dFileSearchQuota)
|
||||
|
||||
quota := int(quotaCalculateDecimal.Round(0).IntPart())
|
||||
totalTokens := promptTokens + completionTokens
|
||||
@@ -414,6 +457,25 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
|
||||
logContent += ", " + extraContent
|
||||
}
|
||||
other := service.GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, cacheTokens, cacheRatio, modelPrice)
|
||||
if imageTokens != 0 {
|
||||
other["image"] = true
|
||||
other["image_ratio"] = imageRatio
|
||||
other["image_output"] = imageTokens
|
||||
}
|
||||
if !dWebSearchQuota.IsZero() && relayInfo.ResponsesUsageInfo != nil {
|
||||
if webSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolWebSearchPreview]; exists {
|
||||
other["web_search"] = true
|
||||
other["web_search_call_count"] = webSearchTool.CallCount
|
||||
other["web_search_price"] = webSearchPrice
|
||||
}
|
||||
}
|
||||
if !dFileSearchQuota.IsZero() && relayInfo.ResponsesUsageInfo != nil {
|
||||
if fileSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolFileSearch]; exists {
|
||||
other["file_search"] = true
|
||||
other["file_search_call_count"] = fileSearchTool.CallCount
|
||||
other["file_search_price"] = fileSearchPrice
|
||||
}
|
||||
}
|
||||
model.RecordConsumeLog(ctx, relayInfo.UserId, relayInfo.ChannelId, promptTokens, completionTokens, logModel,
|
||||
tokenName, quota, logContent, relayInfo.TokenId, userQuota, int(useTimeSeconds), relayInfo.IsStream, relayInfo.Group, other)
|
||||
}
|
||||
|
||||
@@ -25,6 +25,7 @@ import (
|
||||
"one-api/relay/channel/tencent"
|
||||
"one-api/relay/channel/vertex"
|
||||
"one-api/relay/channel/volcengine"
|
||||
"one-api/relay/channel/xai"
|
||||
"one-api/relay/channel/xunfei"
|
||||
"one-api/relay/channel/zhipu"
|
||||
"one-api/relay/channel/zhipu_4v"
|
||||
@@ -85,6 +86,8 @@ func GetAdaptor(apiType int) channel.Adaptor {
|
||||
return &openai.Adaptor{}
|
||||
case constant.APITypeXinference:
|
||||
return &openai.Adaptor{}
|
||||
case constant.APITypeXai:
|
||||
return &xai.Adaptor{}
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
@@ -1,10 +1,11 @@
|
||||
package router
|
||||
|
||||
import (
|
||||
"github.com/gin-gonic/gin"
|
||||
"one-api/controller"
|
||||
"one-api/middleware"
|
||||
"one-api/relay"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
)
|
||||
|
||||
func SetRelayRouter(router *gin.Engine) {
|
||||
@@ -40,13 +41,14 @@ func SetRelayRouter(router *gin.Engine) {
|
||||
httpRouter.POST("/chat/completions", controller.Relay)
|
||||
httpRouter.POST("/edits", controller.Relay)
|
||||
httpRouter.POST("/images/generations", controller.Relay)
|
||||
httpRouter.POST("/images/edits", controller.RelayNotImplemented)
|
||||
httpRouter.POST("/images/edits", controller.Relay)
|
||||
httpRouter.POST("/images/variations", controller.RelayNotImplemented)
|
||||
httpRouter.POST("/embeddings", controller.Relay)
|
||||
httpRouter.POST("/engines/:model/embeddings", controller.Relay)
|
||||
httpRouter.POST("/audio/transcriptions", controller.Relay)
|
||||
httpRouter.POST("/audio/translations", controller.Relay)
|
||||
httpRouter.POST("/audio/speech", controller.Relay)
|
||||
httpRouter.POST("/responses", controller.Relay)
|
||||
httpRouter.GET("/files", controller.RelayNotImplemented)
|
||||
httpRouter.POST("/files", controller.RelayNotImplemented)
|
||||
httpRouter.DELETE("/files/:id", controller.RelayNotImplemented)
|
||||
|
||||
@@ -6,9 +6,10 @@ import (
|
||||
"one-api/common"
|
||||
"one-api/dto"
|
||||
relaycommon "one-api/relay/common"
|
||||
"strings"
|
||||
)
|
||||
|
||||
func ClaudeToOpenAIRequest(claudeRequest dto.ClaudeRequest) (*dto.GeneralOpenAIRequest, error) {
|
||||
func ClaudeToOpenAIRequest(claudeRequest dto.ClaudeRequest, info *relaycommon.RelayInfo) (*dto.GeneralOpenAIRequest, error) {
|
||||
openAIRequest := dto.GeneralOpenAIRequest{
|
||||
Model: claudeRequest.Model,
|
||||
MaxTokens: claudeRequest.MaxTokens,
|
||||
@@ -17,6 +18,13 @@ func ClaudeToOpenAIRequest(claudeRequest dto.ClaudeRequest) (*dto.GeneralOpenAIR
|
||||
Stream: claudeRequest.Stream,
|
||||
}
|
||||
|
||||
if claudeRequest.Thinking != nil {
|
||||
if strings.HasSuffix(info.OriginModelName, "-thinking") &&
|
||||
!strings.HasSuffix(claudeRequest.Model, "-thinking") {
|
||||
openAIRequest.Model = openAIRequest.Model + "-thinking"
|
||||
}
|
||||
}
|
||||
|
||||
// Convert stop sequences
|
||||
if len(claudeRequest.StopSequences) == 1 {
|
||||
openAIRequest.Stop = claudeRequest.StopSequences[0]
|
||||
@@ -45,7 +53,7 @@ func ClaudeToOpenAIRequest(claudeRequest dto.ClaudeRequest) (*dto.GeneralOpenAIR
|
||||
|
||||
// Add system message if present
|
||||
if claudeRequest.System != nil {
|
||||
if claudeRequest.IsStringSystem() {
|
||||
if claudeRequest.IsStringSystem() && claudeRequest.GetStringSystem() != "" {
|
||||
openAIMessage := dto.Message{
|
||||
Role: "system",
|
||||
}
|
||||
@@ -59,7 +67,9 @@ func ClaudeToOpenAIRequest(claudeRequest dto.ClaudeRequest) (*dto.GeneralOpenAIR
|
||||
Role: "system",
|
||||
}
|
||||
for _, system := range systems {
|
||||
systemStr += system.Type
|
||||
if system.Text != nil {
|
||||
systemStr += *system.Text
|
||||
}
|
||||
}
|
||||
openAIMessage.SetStringContent(systemStr)
|
||||
openAIMessages = append(openAIMessages, openAIMessage)
|
||||
@@ -122,23 +132,22 @@ func ClaudeToOpenAIRequest(claudeRequest dto.ClaudeRequest) (*dto.GeneralOpenAIR
|
||||
oaiToolMessage.SetStringContent(mediaMsg.GetStringContent())
|
||||
} else {
|
||||
mediaContents := mediaMsg.ParseMediaContent()
|
||||
if len(mediaContents) > 0 && mediaContents[0].Text != nil {
|
||||
oaiToolMessage.SetStringContent(*mediaContents[0].Text)
|
||||
}
|
||||
encodeJson, _ := common.EncodeJson(mediaContents)
|
||||
oaiToolMessage.SetStringContent(string(encodeJson))
|
||||
}
|
||||
openAIMessages = append(openAIMessages, oaiToolMessage)
|
||||
}
|
||||
}
|
||||
|
||||
if len(mediaMessages) > 0 {
|
||||
openAIMessage.SetMediaContent(mediaMessages)
|
||||
}
|
||||
|
||||
if len(toolCalls) > 0 {
|
||||
openAIMessage.SetToolCalls(toolCalls)
|
||||
}
|
||||
|
||||
if len(mediaMessages) > 0 && len(toolCalls) == 0 {
|
||||
openAIMessage.SetMediaContent(mediaMessages)
|
||||
}
|
||||
}
|
||||
if len(openAIMessage.ParseContent()) > 0 {
|
||||
if len(openAIMessage.ParseContent()) > 0 || len(openAIMessage.ToolCalls) > 0 {
|
||||
openAIMessages = append(openAIMessages, openAIMessage)
|
||||
}
|
||||
}
|
||||
@@ -211,15 +220,15 @@ func StreamResponseOpenAI2Claude(openAIResponse *dto.ChatCompletionsStreamRespon
|
||||
resp.SetIndex(0)
|
||||
claudeResponses = append(claudeResponses, resp)
|
||||
} else {
|
||||
resp := &dto.ClaudeResponse{
|
||||
Type: "content_block_start",
|
||||
ContentBlock: &dto.ClaudeMediaMessage{
|
||||
Type: "text",
|
||||
Text: common.GetPointer[string](""),
|
||||
},
|
||||
}
|
||||
resp.SetIndex(0)
|
||||
claudeResponses = append(claudeResponses, resp)
|
||||
//resp := &dto.ClaudeResponse{
|
||||
// Type: "content_block_start",
|
||||
// ContentBlock: &dto.ClaudeMediaMessage{
|
||||
// Type: "text",
|
||||
// Text: common.GetPointer[string](""),
|
||||
// },
|
||||
//}
|
||||
//resp.SetIndex(0)
|
||||
//claudeResponses = append(claudeResponses, resp)
|
||||
}
|
||||
return claudeResponses
|
||||
}
|
||||
@@ -232,16 +241,20 @@ func StreamResponseOpenAI2Claude(openAIResponse *dto.ChatCompletionsStreamRespon
|
||||
chosenChoice := openAIResponse.Choices[0]
|
||||
if chosenChoice.FinishReason != nil && *chosenChoice.FinishReason != "" {
|
||||
// should be done
|
||||
info.FinishReason = *chosenChoice.FinishReason
|
||||
return claudeResponses
|
||||
}
|
||||
if info.Done {
|
||||
claudeResponses = append(claudeResponses, generateStopBlock(info.ClaudeConvertInfo.Index))
|
||||
if openAIResponse.Usage != nil {
|
||||
if info.ClaudeConvertInfo.Usage != nil {
|
||||
claudeResponses = append(claudeResponses, &dto.ClaudeResponse{
|
||||
Type: "message_delta",
|
||||
Usage: &dto.ClaudeUsage{
|
||||
InputTokens: openAIResponse.Usage.PromptTokens,
|
||||
OutputTokens: openAIResponse.Usage.CompletionTokens,
|
||||
InputTokens: info.ClaudeConvertInfo.Usage.PromptTokens,
|
||||
OutputTokens: info.ClaudeConvertInfo.Usage.CompletionTokens,
|
||||
},
|
||||
Delta: &dto.ClaudeMediaMessage{
|
||||
StopReason: common.GetPointer[string](stopReasonOpenAI2Claude(*chosenChoice.FinishReason)),
|
||||
StopReason: common.GetPointer[string](stopReasonOpenAI2Claude(info.FinishReason)),
|
||||
},
|
||||
})
|
||||
}
|
||||
@@ -250,10 +263,10 @@ func StreamResponseOpenAI2Claude(openAIResponse *dto.ChatCompletionsStreamRespon
|
||||
})
|
||||
} else {
|
||||
var claudeResponse dto.ClaudeResponse
|
||||
claudeResponse.SetIndex(0)
|
||||
var isEmpty bool
|
||||
claudeResponse.Type = "content_block_delta"
|
||||
if len(chosenChoice.Delta.ToolCalls) > 0 {
|
||||
if info.ClaudeConvertInfo.LastMessagesType == relaycommon.LastMessageTypeText {
|
||||
if info.ClaudeConvertInfo.LastMessagesType != relaycommon.LastMessageTypeTools {
|
||||
claudeResponses = append(claudeResponses, generateStopBlock(info.ClaudeConvertInfo.Index))
|
||||
info.ClaudeConvertInfo.Index++
|
||||
claudeResponses = append(claudeResponses, &dto.ClaudeResponse{
|
||||
@@ -274,15 +287,57 @@ func StreamResponseOpenAI2Claude(openAIResponse *dto.ChatCompletionsStreamRespon
|
||||
PartialJson: &chosenChoice.Delta.ToolCalls[0].Function.Arguments,
|
||||
}
|
||||
} else {
|
||||
info.ClaudeConvertInfo.LastMessagesType = relaycommon.LastMessageTypeText
|
||||
// text delta
|
||||
claudeResponse.Delta = &dto.ClaudeMediaMessage{
|
||||
Type: "text_delta",
|
||||
Text: common.GetPointer[string](chosenChoice.Delta.GetContentString()),
|
||||
reasoning := chosenChoice.Delta.GetReasoningContent()
|
||||
textContent := chosenChoice.Delta.GetContentString()
|
||||
if reasoning != "" || textContent != "" {
|
||||
if reasoning != "" {
|
||||
if info.ClaudeConvertInfo.LastMessagesType != relaycommon.LastMessageTypeThinking {
|
||||
//info.ClaudeConvertInfo.Index++
|
||||
claudeResponses = append(claudeResponses, &dto.ClaudeResponse{
|
||||
Index: &info.ClaudeConvertInfo.Index,
|
||||
Type: "content_block_start",
|
||||
ContentBlock: &dto.ClaudeMediaMessage{
|
||||
Type: "thinking",
|
||||
Thinking: "",
|
||||
},
|
||||
})
|
||||
}
|
||||
info.ClaudeConvertInfo.LastMessagesType = relaycommon.LastMessageTypeThinking
|
||||
// text delta
|
||||
claudeResponse.Delta = &dto.ClaudeMediaMessage{
|
||||
Type: "thinking_delta",
|
||||
Thinking: reasoning,
|
||||
}
|
||||
} else {
|
||||
if info.ClaudeConvertInfo.LastMessagesType != relaycommon.LastMessageTypeText {
|
||||
if info.LastMessagesType == relaycommon.LastMessageTypeThinking || info.LastMessagesType == relaycommon.LastMessageTypeTools {
|
||||
claudeResponses = append(claudeResponses, generateStopBlock(info.ClaudeConvertInfo.Index))
|
||||
info.ClaudeConvertInfo.Index++
|
||||
}
|
||||
claudeResponses = append(claudeResponses, &dto.ClaudeResponse{
|
||||
Index: &info.ClaudeConvertInfo.Index,
|
||||
Type: "content_block_start",
|
||||
ContentBlock: &dto.ClaudeMediaMessage{
|
||||
Type: "text",
|
||||
Text: common.GetPointer[string](""),
|
||||
},
|
||||
})
|
||||
}
|
||||
info.ClaudeConvertInfo.LastMessagesType = relaycommon.LastMessageTypeText
|
||||
// text delta
|
||||
claudeResponse.Delta = &dto.ClaudeMediaMessage{
|
||||
Type: "text_delta",
|
||||
Text: common.GetPointer[string](textContent),
|
||||
}
|
||||
}
|
||||
} else {
|
||||
isEmpty = true
|
||||
}
|
||||
}
|
||||
claudeResponse.Index = &info.ClaudeConvertInfo.Index
|
||||
claudeResponses = append(claudeResponses, &claudeResponse)
|
||||
if !isEmpty {
|
||||
claudeResponses = append(claudeResponses, &claudeResponse)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
@@ -8,9 +8,9 @@ import (
|
||||
"one-api/dto"
|
||||
)
|
||||
|
||||
var maxFileSize = constant.MaxFileDownloadMB * 1024 * 1024
|
||||
|
||||
func GetFileBase64FromUrl(url string) (*dto.LocalFileData, error) {
|
||||
var maxFileSize = constant.MaxFileDownloadMB * 1024 * 1024
|
||||
|
||||
resp, err := DoDownloadRequest(url)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -22,7 +22,6 @@ func GetFileBase64FromUrl(url string) (*dto.LocalFileData, error) {
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Check actual size after reading
|
||||
if len(fileBytes) > maxFileSize {
|
||||
return nil, fmt.Errorf("file size exceeds maximum allowed size: %dMB", constant.MaxFileDownloadMB)
|
||||
|
||||
@@ -43,7 +43,7 @@ func InitTokenEncoders() {
|
||||
} else {
|
||||
tokenEncoderMap[model] = defaultTokenEncoder
|
||||
}
|
||||
} else if strings.HasPrefix(model, "o1") {
|
||||
} else if strings.HasPrefix(model, "o") {
|
||||
tokenEncoderMap[model] = o200kTokenEncoder
|
||||
} else {
|
||||
tokenEncoderMap[model] = defaultTokenEncoder
|
||||
@@ -398,6 +398,10 @@ func CountTokenMessages(info *relaycommon.RelayInfo, messages []dto.Message, mod
|
||||
} else if m.Type == dto.ContentTypeInputAudio {
|
||||
// TODO: 音频token数量计算
|
||||
tokenNum += 100
|
||||
} else if m.Type == dto.ContentTypeFile {
|
||||
tokenNum += 5000
|
||||
} else if m.Type == dto.ContentTypeVideoUrl {
|
||||
tokenNum += 5000
|
||||
} else {
|
||||
tokenNum += getTokenNum(tokenEncoder, m.Text)
|
||||
}
|
||||
|
||||
@@ -4,6 +4,7 @@ import (
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"one-api/common"
|
||||
"sync"
|
||||
)
|
||||
|
||||
var groupRatio = map[string]float64{
|
||||
@@ -11,8 +12,12 @@ var groupRatio = map[string]float64{
|
||||
"vip": 1,
|
||||
"svip": 1,
|
||||
}
|
||||
var groupRatioMutex sync.RWMutex
|
||||
|
||||
func GetGroupRatioCopy() map[string]float64 {
|
||||
groupRatioMutex.RLock()
|
||||
defer groupRatioMutex.RUnlock()
|
||||
|
||||
groupRatioCopy := make(map[string]float64)
|
||||
for k, v := range groupRatio {
|
||||
groupRatioCopy[k] = v
|
||||
@@ -21,11 +26,17 @@ func GetGroupRatioCopy() map[string]float64 {
|
||||
}
|
||||
|
||||
func ContainsGroupRatio(name string) bool {
|
||||
groupRatioMutex.RLock()
|
||||
defer groupRatioMutex.RUnlock()
|
||||
|
||||
_, ok := groupRatio[name]
|
||||
return ok
|
||||
}
|
||||
|
||||
func GroupRatio2JSONString() string {
|
||||
groupRatioMutex.RLock()
|
||||
defer groupRatioMutex.RUnlock()
|
||||
|
||||
jsonBytes, err := json.Marshal(groupRatio)
|
||||
if err != nil {
|
||||
common.SysError("error marshalling model ratio: " + err.Error())
|
||||
@@ -34,11 +45,17 @@ func GroupRatio2JSONString() string {
|
||||
}
|
||||
|
||||
func UpdateGroupRatioByJSONString(jsonStr string) error {
|
||||
groupRatioMutex.Lock()
|
||||
defer groupRatioMutex.Unlock()
|
||||
|
||||
groupRatio = make(map[string]float64)
|
||||
return json.Unmarshal([]byte(jsonStr), &groupRatio)
|
||||
}
|
||||
|
||||
func GetGroupRatio(name string) float64 {
|
||||
groupRatioMutex.RLock()
|
||||
defer groupRatioMutex.RUnlock()
|
||||
|
||||
ratio, ok := groupRatio[name]
|
||||
if !ok {
|
||||
common.SysError("group ratio not found: " + name)
|
||||
|
||||
@@ -6,8 +6,11 @@ import (
|
||||
|
||||
// GeminiSettings 定义Gemini模型的配置
|
||||
type GeminiSettings struct {
|
||||
SafetySettings map[string]string `json:"safety_settings"`
|
||||
VersionSettings map[string]string `json:"version_settings"`
|
||||
SafetySettings map[string]string `json:"safety_settings"`
|
||||
VersionSettings map[string]string `json:"version_settings"`
|
||||
SupportedImagineModels []string `json:"supported_imagine_models"`
|
||||
ThinkingAdapterEnabled bool `json:"thinking_adapter_enabled"`
|
||||
ThinkingAdapterBudgetTokensPercentage float64 `json:"thinking_adapter_budget_tokens_percentage"`
|
||||
}
|
||||
|
||||
// 默认配置
|
||||
@@ -20,6 +23,12 @@ var defaultGeminiSettings = GeminiSettings{
|
||||
"default": "v1beta",
|
||||
"gemini-1.0-pro": "v1",
|
||||
},
|
||||
SupportedImagineModels: []string{
|
||||
"gemini-2.0-flash-exp-image-generation",
|
||||
"gemini-2.0-flash-exp",
|
||||
},
|
||||
ThinkingAdapterEnabled: false,
|
||||
ThinkingAdapterBudgetTokensPercentage: 0.6,
|
||||
}
|
||||
|
||||
// 全局实例
|
||||
@@ -50,3 +59,12 @@ func GetGeminiVersionSetting(key string) string {
|
||||
}
|
||||
return geminiSettings.VersionSettings["default"]
|
||||
}
|
||||
|
||||
func IsGeminiModelSupportImagine(model string) bool {
|
||||
for _, v := range geminiSettings.SupportedImagineModels {
|
||||
if v == model {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
@@ -56,17 +56,15 @@ var cacheRatioMapMutex sync.RWMutex
|
||||
|
||||
// GetCacheRatioMap returns the cache ratio map
|
||||
func GetCacheRatioMap() map[string]float64 {
|
||||
cacheRatioMapMutex.Lock()
|
||||
defer cacheRatioMapMutex.Unlock()
|
||||
if cacheRatioMap == nil {
|
||||
cacheRatioMap = defaultCacheRatio
|
||||
}
|
||||
cacheRatioMapMutex.RLock()
|
||||
defer cacheRatioMapMutex.RUnlock()
|
||||
return cacheRatioMap
|
||||
}
|
||||
|
||||
// CacheRatio2JSONString converts the cache ratio map to a JSON string
|
||||
func CacheRatio2JSONString() string {
|
||||
GetCacheRatioMap()
|
||||
cacheRatioMapMutex.RLock()
|
||||
defer cacheRatioMapMutex.RUnlock()
|
||||
jsonBytes, err := json.Marshal(cacheRatioMap)
|
||||
if err != nil {
|
||||
common.SysError("error marshalling cache ratio: " + err.Error())
|
||||
@@ -84,10 +82,11 @@ func UpdateCacheRatioByJSONString(jsonStr string) error {
|
||||
|
||||
// GetCacheRatio returns the cache ratio for a model
|
||||
func GetCacheRatio(name string) (float64, bool) {
|
||||
GetCacheRatioMap()
|
||||
cacheRatioMapMutex.RLock()
|
||||
defer cacheRatioMapMutex.RUnlock()
|
||||
ratio, ok := cacheRatioMap[name]
|
||||
if !ok {
|
||||
return 1, false // Default to 0.5 if not found
|
||||
return 1, false // Default to 1 if not found
|
||||
}
|
||||
return ratio, true
|
||||
}
|
||||
|
||||
@@ -3,12 +3,16 @@ package operation_setting
|
||||
import "one-api/setting/config"
|
||||
|
||||
type GeneralSetting struct {
|
||||
DocsLink string `json:"docs_link"`
|
||||
DocsLink string `json:"docs_link"`
|
||||
PingIntervalEnabled bool `json:"ping_interval_enabled"`
|
||||
PingIntervalSeconds int `json:"ping_interval_seconds"`
|
||||
}
|
||||
|
||||
// 默认配置
|
||||
var generalSetting = GeneralSetting{
|
||||
DocsLink: "https://docs.newapi.pro",
|
||||
DocsLink: "https://docs.newapi.pro",
|
||||
PingIntervalEnabled: false,
|
||||
PingIntervalSeconds: 60,
|
||||
}
|
||||
|
||||
func init() {
|
||||
|
||||
@@ -51,26 +51,27 @@ var defaultModelRatio = map[string]float64{
|
||||
"gpt-4o-realtime-preview-2024-12-17": 2.5,
|
||||
"gpt-4o-mini-realtime-preview": 0.3,
|
||||
"gpt-4o-mini-realtime-preview-2024-12-17": 0.3,
|
||||
"o1": 7.5,
|
||||
"o1-2024-12-17": 7.5,
|
||||
"o1-preview": 7.5,
|
||||
"o1-preview-2024-09-12": 7.5,
|
||||
"o1-mini": 0.55,
|
||||
"o1-mini-2024-09-12": 0.55,
|
||||
"o3-mini": 0.55,
|
||||
"o3-mini-2025-01-31": 0.55,
|
||||
"o3-mini-high": 0.55,
|
||||
"o3-mini-2025-01-31-high": 0.55,
|
||||
"o3-mini-low": 0.55,
|
||||
"o3-mini-2025-01-31-low": 0.55,
|
||||
"o3-mini-medium": 0.55,
|
||||
"o3-mini-2025-01-31-medium": 0.55,
|
||||
"gpt-4o-mini": 0.075,
|
||||
"gpt-4o-mini-2024-07-18": 0.075,
|
||||
"gpt-4-turbo": 5, // $0.01 / 1K tokens
|
||||
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
|
||||
"gpt-4.5-preview": 37.5,
|
||||
"gpt-4.5-preview-2025-02-27": 37.5,
|
||||
"gpt-image-1": 2.5,
|
||||
"o1": 7.5,
|
||||
"o1-2024-12-17": 7.5,
|
||||
"o1-preview": 7.5,
|
||||
"o1-preview-2024-09-12": 7.5,
|
||||
"o1-mini": 0.55,
|
||||
"o1-mini-2024-09-12": 0.55,
|
||||
"o3-mini": 0.55,
|
||||
"o3-mini-2025-01-31": 0.55,
|
||||
"o3-mini-high": 0.55,
|
||||
"o3-mini-2025-01-31-high": 0.55,
|
||||
"o3-mini-low": 0.55,
|
||||
"o3-mini-2025-01-31-low": 0.55,
|
||||
"o3-mini-medium": 0.55,
|
||||
"o3-mini-2025-01-31-medium": 0.55,
|
||||
"gpt-4o-mini": 0.075,
|
||||
"gpt-4o-mini-2024-07-18": 0.075,
|
||||
"gpt-4-turbo": 5, // $0.01 / 1K tokens
|
||||
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
|
||||
"gpt-4.5-preview": 37.5,
|
||||
"gpt-4.5-preview-2025-02-27": 37.5,
|
||||
//"gpt-3.5-turbo-0301": 0.75, //deprecated
|
||||
"gpt-3.5-turbo": 0.25,
|
||||
"gpt-3.5-turbo-0613": 0.75,
|
||||
@@ -86,89 +87,92 @@ var defaultModelRatio = map[string]float64{
|
||||
"text-curie-001": 1,
|
||||
//"text-davinci-002": 10,
|
||||
//"text-davinci-003": 10,
|
||||
"text-davinci-edit-001": 10,
|
||||
"code-davinci-edit-001": 10,
|
||||
"whisper-1": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
|
||||
"tts-1": 7.5, // 1k characters -> $0.015
|
||||
"tts-1-1106": 7.5, // 1k characters -> $0.015
|
||||
"tts-1-hd": 15, // 1k characters -> $0.03
|
||||
"tts-1-hd-1106": 15, // 1k characters -> $0.03
|
||||
"davinci": 10,
|
||||
"curie": 10,
|
||||
"babbage": 10,
|
||||
"ada": 10,
|
||||
"text-embedding-3-small": 0.01,
|
||||
"text-embedding-3-large": 0.065,
|
||||
"text-embedding-ada-002": 0.05,
|
||||
"text-search-ada-doc-001": 10,
|
||||
"text-moderation-stable": 0.1,
|
||||
"text-moderation-latest": 0.1,
|
||||
"claude-instant-1": 0.4, // $0.8 / 1M tokens
|
||||
"claude-2.0": 4, // $8 / 1M tokens
|
||||
"claude-2.1": 4, // $8 / 1M tokens
|
||||
"claude-3-haiku-20240307": 0.125, // $0.25 / 1M tokens
|
||||
"claude-3-5-haiku-20241022": 0.5, // $1 / 1M tokens
|
||||
"claude-3-sonnet-20240229": 1.5, // $3 / 1M tokens
|
||||
"claude-3-5-sonnet-20240620": 1.5,
|
||||
"claude-3-5-sonnet-20241022": 1.5,
|
||||
"claude-3-7-sonnet-20250219": 1.5,
|
||||
"claude-3-7-sonnet-20250219-thinking": 1.5,
|
||||
"claude-3-opus-20240229": 7.5, // $15 / 1M tokens
|
||||
"ERNIE-4.0-8K": 0.120 * RMB,
|
||||
"ERNIE-3.5-8K": 0.012 * RMB,
|
||||
"ERNIE-3.5-8K-0205": 0.024 * RMB,
|
||||
"ERNIE-3.5-8K-1222": 0.012 * RMB,
|
||||
"ERNIE-Bot-8K": 0.024 * RMB,
|
||||
"ERNIE-3.5-4K-0205": 0.012 * RMB,
|
||||
"ERNIE-Speed-8K": 0.004 * RMB,
|
||||
"ERNIE-Speed-128K": 0.004 * RMB,
|
||||
"ERNIE-Lite-8K-0922": 0.008 * RMB,
|
||||
"ERNIE-Lite-8K-0308": 0.003 * RMB,
|
||||
"ERNIE-Tiny-8K": 0.001 * RMB,
|
||||
"BLOOMZ-7B": 0.004 * RMB,
|
||||
"Embedding-V1": 0.002 * RMB,
|
||||
"bge-large-zh": 0.002 * RMB,
|
||||
"bge-large-en": 0.002 * RMB,
|
||||
"tao-8k": 0.002 * RMB,
|
||||
"PaLM-2": 1,
|
||||
"gemini-1.5-pro-latest": 1.25, // $3.5 / 1M tokens
|
||||
"gemini-1.5-flash-latest": 0.075,
|
||||
"gemini-2.0-flash": 0.05,
|
||||
"gemini-2.5-pro-exp-03-25": 1.25,
|
||||
"gemini-2.5-pro-preview-03-25": 1.25,
|
||||
"text-embedding-004": 0.001,
|
||||
"chatglm_turbo": 0.3572, // ¥0.005 / 1k tokens
|
||||
"chatglm_pro": 0.7143, // ¥0.01 / 1k tokens
|
||||
"chatglm_std": 0.3572, // ¥0.005 / 1k tokens
|
||||
"chatglm_lite": 0.1429, // ¥0.002 / 1k tokens
|
||||
"glm-4": 7.143, // ¥0.1 / 1k tokens
|
||||
"glm-4v": 0.05 * RMB, // ¥0.05 / 1k tokens
|
||||
"glm-4-alltools": 0.1 * RMB, // ¥0.1 / 1k tokens
|
||||
"glm-3-turbo": 0.3572,
|
||||
"glm-4-plus": 0.05 * RMB,
|
||||
"glm-4-0520": 0.1 * RMB,
|
||||
"glm-4-air": 0.001 * RMB,
|
||||
"glm-4-airx": 0.01 * RMB,
|
||||
"glm-4-long": 0.001 * RMB,
|
||||
"glm-4-flash": 0,
|
||||
"glm-4v-plus": 0.01 * RMB,
|
||||
"qwen-turbo": 0.8572, // ¥0.012 / 1k tokens
|
||||
"qwen-plus": 10, // ¥0.14 / 1k tokens
|
||||
"text-embedding-v1": 0.05, // ¥0.0007 / 1k tokens
|
||||
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v4.0": 1.2858,
|
||||
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
|
||||
"360gpt-turbo": 0.0858, // ¥0.0012 / 1k tokens
|
||||
"360gpt-turbo-responsibility-8k": 0.8572, // ¥0.012 / 1k tokens
|
||||
"360gpt-pro": 0.8572, // ¥0.012 / 1k tokens
|
||||
"360gpt2-pro": 0.8572, // ¥0.012 / 1k tokens
|
||||
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
|
||||
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
|
||||
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
|
||||
"hunyuan": 7.143, // ¥0.1 / 1k tokens // https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
|
||||
"text-davinci-edit-001": 10,
|
||||
"code-davinci-edit-001": 10,
|
||||
"whisper-1": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
|
||||
"tts-1": 7.5, // 1k characters -> $0.015
|
||||
"tts-1-1106": 7.5, // 1k characters -> $0.015
|
||||
"tts-1-hd": 15, // 1k characters -> $0.03
|
||||
"tts-1-hd-1106": 15, // 1k characters -> $0.03
|
||||
"davinci": 10,
|
||||
"curie": 10,
|
||||
"babbage": 10,
|
||||
"ada": 10,
|
||||
"text-embedding-3-small": 0.01,
|
||||
"text-embedding-3-large": 0.065,
|
||||
"text-embedding-ada-002": 0.05,
|
||||
"text-search-ada-doc-001": 10,
|
||||
"text-moderation-stable": 0.1,
|
||||
"text-moderation-latest": 0.1,
|
||||
"claude-instant-1": 0.4, // $0.8 / 1M tokens
|
||||
"claude-2.0": 4, // $8 / 1M tokens
|
||||
"claude-2.1": 4, // $8 / 1M tokens
|
||||
"claude-3-haiku-20240307": 0.125, // $0.25 / 1M tokens
|
||||
"claude-3-5-haiku-20241022": 0.5, // $1 / 1M tokens
|
||||
"claude-3-sonnet-20240229": 1.5, // $3 / 1M tokens
|
||||
"claude-3-5-sonnet-20240620": 1.5,
|
||||
"claude-3-5-sonnet-20241022": 1.5,
|
||||
"claude-3-7-sonnet-20250219": 1.5,
|
||||
"claude-3-7-sonnet-20250219-thinking": 1.5,
|
||||
"claude-3-opus-20240229": 7.5, // $15 / 1M tokens
|
||||
"ERNIE-4.0-8K": 0.120 * RMB,
|
||||
"ERNIE-3.5-8K": 0.012 * RMB,
|
||||
"ERNIE-3.5-8K-0205": 0.024 * RMB,
|
||||
"ERNIE-3.5-8K-1222": 0.012 * RMB,
|
||||
"ERNIE-Bot-8K": 0.024 * RMB,
|
||||
"ERNIE-3.5-4K-0205": 0.012 * RMB,
|
||||
"ERNIE-Speed-8K": 0.004 * RMB,
|
||||
"ERNIE-Speed-128K": 0.004 * RMB,
|
||||
"ERNIE-Lite-8K-0922": 0.008 * RMB,
|
||||
"ERNIE-Lite-8K-0308": 0.003 * RMB,
|
||||
"ERNIE-Tiny-8K": 0.001 * RMB,
|
||||
"BLOOMZ-7B": 0.004 * RMB,
|
||||
"Embedding-V1": 0.002 * RMB,
|
||||
"bge-large-zh": 0.002 * RMB,
|
||||
"bge-large-en": 0.002 * RMB,
|
||||
"tao-8k": 0.002 * RMB,
|
||||
"PaLM-2": 1,
|
||||
"gemini-1.5-pro-latest": 1.25, // $3.5 / 1M tokens
|
||||
"gemini-1.5-flash-latest": 0.075,
|
||||
"gemini-2.0-flash": 0.05,
|
||||
"gemini-2.5-pro-exp-03-25": 0.625,
|
||||
"gemini-2.5-pro-preview-03-25": 0.625,
|
||||
"gemini-2.5-flash-preview-04-17": 0.075,
|
||||
"gemini-2.5-flash-preview-04-17-thinking": 0.075,
|
||||
"gemini-2.5-flash-preview-04-17-nothinking": 0.075,
|
||||
"text-embedding-004": 0.001,
|
||||
"chatglm_turbo": 0.3572, // ¥0.005 / 1k tokens
|
||||
"chatglm_pro": 0.7143, // ¥0.01 / 1k tokens
|
||||
"chatglm_std": 0.3572, // ¥0.005 / 1k tokens
|
||||
"chatglm_lite": 0.1429, // ¥0.002 / 1k tokens
|
||||
"glm-4": 7.143, // ¥0.1 / 1k tokens
|
||||
"glm-4v": 0.05 * RMB, // ¥0.05 / 1k tokens
|
||||
"glm-4-alltools": 0.1 * RMB, // ¥0.1 / 1k tokens
|
||||
"glm-3-turbo": 0.3572,
|
||||
"glm-4-plus": 0.05 * RMB,
|
||||
"glm-4-0520": 0.1 * RMB,
|
||||
"glm-4-air": 0.001 * RMB,
|
||||
"glm-4-airx": 0.01 * RMB,
|
||||
"glm-4-long": 0.001 * RMB,
|
||||
"glm-4-flash": 0,
|
||||
"glm-4v-plus": 0.01 * RMB,
|
||||
"qwen-turbo": 0.8572, // ¥0.012 / 1k tokens
|
||||
"qwen-plus": 10, // ¥0.14 / 1k tokens
|
||||
"text-embedding-v1": 0.05, // ¥0.0007 / 1k tokens
|
||||
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
|
||||
"SparkDesk-v4.0": 1.2858,
|
||||
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
|
||||
"360gpt-turbo": 0.0858, // ¥0.0012 / 1k tokens
|
||||
"360gpt-turbo-responsibility-8k": 0.8572, // ¥0.012 / 1k tokens
|
||||
"360gpt-pro": 0.8572, // ¥0.012 / 1k tokens
|
||||
"360gpt2-pro": 0.8572, // ¥0.012 / 1k tokens
|
||||
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
|
||||
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
|
||||
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
|
||||
"hunyuan": 7.143, // ¥0.1 / 1k tokens // https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
|
||||
// https://platform.lingyiwanwu.com/docs#-计费单元
|
||||
// 已经按照 7.2 来换算美元价格
|
||||
"yi-34b-chat-0205": 0.18,
|
||||
@@ -199,6 +203,15 @@ var defaultModelRatio = map[string]float64{
|
||||
"llama-3-sonar-small-32k-online": 0.2 / 1000 * USD,
|
||||
"llama-3-sonar-large-32k-chat": 1 / 1000 * USD,
|
||||
"llama-3-sonar-large-32k-online": 1 / 1000 * USD,
|
||||
// grok
|
||||
"grok-3-beta": 1.5,
|
||||
"grok-3-mini-beta": 0.15,
|
||||
"grok-2": 1,
|
||||
"grok-2-vision": 1,
|
||||
"grok-beta": 2.5,
|
||||
"grok-vision-beta": 2.5,
|
||||
"grok-3-fast-beta": 2.5,
|
||||
"grok-3-mini-fast-beta": 0.3,
|
||||
}
|
||||
|
||||
var defaultModelPrice = map[string]float64{
|
||||
@@ -243,19 +256,48 @@ var defaultCompletionRatio = map[string]float64{
|
||||
"gpt-4-gizmo-*": 2,
|
||||
"gpt-4o-gizmo-*": 3,
|
||||
"gpt-4-all": 2,
|
||||
"gpt-image-1": 8,
|
||||
}
|
||||
|
||||
+// InitRatioSettings initializes all model related settings maps
+func InitRatioSettings() {
+	// Initialize modelPriceMap
+	modelPriceMapMutex.Lock()
+	modelPriceMap = defaultModelPrice
+	modelPriceMapMutex.Unlock()
+
+	// Initialize modelRatioMap
+	modelRatioMapMutex.Lock()
+	modelRatioMap = defaultModelRatio
+	modelRatioMapMutex.Unlock()
+
+	// Initialize CompletionRatio
+	CompletionRatioMutex.Lock()
+	CompletionRatio = defaultCompletionRatio
+	CompletionRatioMutex.Unlock()
+
+	// Initialize cacheRatioMap
+	cacheRatioMapMutex.Lock()
+	cacheRatioMap = defaultCacheRatio
+	cacheRatioMapMutex.Unlock()
+
+	// Initialize imageRatioMap
+	imageRatioMapMutex.Lock()
+	imageRatioMap = defaultImageRatio
+	imageRatioMapMutex.Unlock()
+}
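InitRatioSettings replaces the lazy nil-checks that previously lived inside every getter: each map is populated once under its write lock at startup, so the hot read paths below only ever take the read lock. A minimal sketch of the same pattern, with hypothetical names:

package main

import (
	"fmt"
	"sync"
)

var (
	ratios   map[string]float64
	ratiosMu sync.RWMutex
)

// initRatios runs once at startup, like InitRatioSettings above.
func initRatios() {
	ratiosMu.Lock()
	defer ratiosMu.Unlock()
	ratios = map[string]float64{"example-model": 1.25} // hypothetical entry
}

// getRatio is the hot read path: a read lock suffices once initRatios has run.
func getRatio(name string) (float64, bool) {
	ratiosMu.RLock()
	defer ratiosMu.RUnlock()
	r, ok := ratios[name]
	return r, ok
}

func main() {
	initRatios()
	fmt.Println(getRatio("example-model"))
}
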
func GetModelPriceMap() map[string]float64 {
-	modelPriceMapMutex.Lock()
-	defer modelPriceMapMutex.Unlock()
-	if modelPriceMap == nil {
-		modelPriceMap = defaultModelPrice
-	}
+	modelPriceMapMutex.RLock()
+	defer modelPriceMapMutex.RUnlock()
	return modelPriceMap
}

func ModelPrice2JSONString() string {
-	GetModelPriceMap()
+	modelPriceMapMutex.RLock()
+	defer modelPriceMapMutex.RUnlock()
+
	jsonBytes, err := json.Marshal(modelPriceMap)
	if err != nil {
		common.SysError("error marshalling model price: " + err.Error())
@@ -272,7 +314,9 @@ func UpdateModelPriceByJSONString(jsonStr string) error {

// GetModelPrice returns the model's price; if the model does not exist it returns -1, false
func GetModelPrice(name string, printErr bool) (float64, bool) {
-	GetModelPriceMap()
+	modelPriceMapMutex.RLock()
+	defer modelPriceMapMutex.RUnlock()
+
	if strings.HasPrefix(name, "gpt-4-gizmo") {
		name = "gpt-4-gizmo-*"
	}
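GetModelPrice serves fixed per-call pricing, while GetModelRatio below serves token-based pricing; callers try the price first and fall back to the ratio. A hedged usage sketch, assuming this repository's one-api module path:

package main

import (
	"fmt"

	"one-api/setting/operation_setting" // assumed import path
)

func main() {
	operation_setting.InitRatioSettings()
	// Fixed-price models are billed per call (here assuming a "gpt-4-gizmo-*"
	// entry exists in the price map); everything else falls back to ratios.
	if price, ok := operation_setting.GetModelPrice("gpt-4-gizmo-example", false); ok {
		fmt.Printf("fixed price per call: %v\n", price)
	} else if ratio, ok := operation_setting.GetModelRatio("gpt-4o"); ok {
		fmt.Printf("token ratio: %v\n", ratio)
	}
}
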
@@ -289,24 +333,6 @@ func GetModelPrice(name string, printErr bool) (float64, bool) {
	return price, true
}

-func GetModelRatioMap() map[string]float64 {
-	modelRatioMapMutex.Lock()
-	defer modelRatioMapMutex.Unlock()
-	if modelRatioMap == nil {
-		modelRatioMap = defaultModelRatio
-	}
-	return modelRatioMap
-}
-
-func ModelRatio2JSONString() string {
-	GetModelRatioMap()
-	jsonBytes, err := json.Marshal(modelRatioMap)
-	if err != nil {
-		common.SysError("error marshalling model ratio: " + err.Error())
-	}
-	return string(jsonBytes)
-}
-
func UpdateModelRatioByJSONString(jsonStr string) error {
	modelRatioMapMutex.Lock()
	defer modelRatioMapMutex.Unlock()
@@ -315,7 +341,9 @@ func UpdateModelRatioByJSONString(jsonStr string) error {
}

func GetModelRatio(name string) (float64, bool) {
-	GetModelRatioMap()
+	modelRatioMapMutex.RLock()
+	defer modelRatioMapMutex.RUnlock()
+
	if strings.HasPrefix(name, "gpt-4-gizmo") {
		name = "gpt-4-gizmo-*"
	}
@@ -339,16 +367,15 @@ func GetDefaultModelRatioMap() map[string]float64 {
}

func GetCompletionRatioMap() map[string]float64 {
-	CompletionRatioMutex.Lock()
-	defer CompletionRatioMutex.Unlock()
-	if CompletionRatio == nil {
-		CompletionRatio = defaultCompletionRatio
-	}
+	CompletionRatioMutex.RLock()
+	defer CompletionRatioMutex.RUnlock()
	return CompletionRatio
}

func CompletionRatio2JSONString() string {
-	GetCompletionRatioMap()
+	CompletionRatioMutex.RLock()
+	defer CompletionRatioMutex.RUnlock()
+
	jsonBytes, err := json.Marshal(CompletionRatio)
	if err != nil {
		common.SysError("error marshalling completion ratio: " + err.Error())
@@ -364,7 +391,8 @@ func UpdateCompletionRatioByJSONString(jsonStr string) error {
}

func GetCompletionRatio(name string) float64 {
-	GetCompletionRatioMap()
+	CompletionRatioMutex.RLock()
+	defer CompletionRatioMutex.RUnlock()

	if strings.Contains(name, "/") {
		if ratio, ok := CompletionRatio[name]; ok {
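For context on how these values combine: in one-api-style billing, the charged quota is roughly (promptTokens + completionTokens × completionRatio) × modelRatio × groupRatio, so a completion ratio of 4 means output tokens cost four times input tokens. A worked sketch, treating that formula as an assumption about this fork:

package main

import "fmt"

func main() {
	const (
		modelRatio      = 1.25 // hypothetical input-token ratio
		completionRatio = 4.0  // output tokens billed at 4x the input rate
		groupRatio      = 1.0  // per-user-group multiplier
	)
	promptTokens, completionTokens := 1000.0, 500.0
	quota := (promptTokens + completionTokens*completionRatio) * modelRatio * groupRatio
	fmt.Printf("charged quota: %.0f\n", quota) // 3750
}
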
@@ -434,12 +462,18 @@ func getHardcodedCompletionModelRatio(name string) (float64, bool) {
		return 3, true
	}
	if strings.HasPrefix(name, "gemini-") {
-		if strings.HasPrefix(name, "gemini-1.5-pro") {
+		if strings.HasPrefix(name, "gemini-1.5") {
			return 4, true
+		} else if strings.HasPrefix(name, "gemini-2.0") {
+			return 4, true
		} else if strings.HasPrefix(name, "gemini-2.5-pro-preview") {
-			return 6, true
+			return 8, true
+		} else if strings.HasPrefix(name, "gemini-2.5-flash-preview") {
+			if strings.HasSuffix(name, "-nothinking") {
+				return 4, false
+			} else {
+				return 3.5 / 0.6, false
+			}
		}
		return 4, false
	}
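The fractional return keeps the price derivation visible. One plausible reading, assuming Google's preview list prices of $0.15/1M input, $0.60/1M output without thinking, and $3.50/1M output with thinking: the -nothinking branch's 4 is 0.60/0.15, and 3.5/0.6 ≈ 5.83 is the thinking-to-non-thinking output quotient. A quick check of that arithmetic, with the list prices treated as assumptions:

package main

import "fmt"

func main() {
	// Assumed Gemini 2.5 Flash preview list prices, USD per 1M tokens.
	input, outNoThink, outThink := 0.15, 0.60, 3.50
	fmt.Println(outNoThink / input)    // 4, the -nothinking completion ratio
	fmt.Println(outThink / outNoThink) // ≈5.83, matching the 3.5 / 0.6 above
}
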
@@ -483,18 +517,18 @@ func getHardcodedCompletionModelRatio(name string) (float64, bool) {

func GetAudioRatio(name string) float64 {
	if strings.Contains(name, "-realtime") {
-		if strings.HasSuffix(name, "gpt-4o-realtime-preview-2024-12-17") {
+		if strings.HasSuffix(name, "gpt-4o-realtime-preview") {
			return 8
-		} else if strings.Contains(name, "mini") {
+		} else if strings.Contains(name, "gpt-4o-mini-realtime-preview") {
			return 10 / 0.6
		} else {
			return 20
		}
	}
	if strings.Contains(name, "-audio") {
-		if strings.HasSuffix(name, "gpt-4o-audio-preview-2024-12-17") {
-			return 16
-		} else if strings.Contains(name, "mini") {
+		if strings.HasPrefix(name, "gpt-4o-audio-preview") {
+			return 40 / 2.5
+		} else if strings.HasPrefix(name, "gpt-4o-mini-audio-preview") {
			return 10 / 0.15
		} else {
			return 40
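GetAudioRatio expresses how much more audio input tokens cost than text input tokens, and the fractions again read as list-price quotients: assuming OpenAI's published per-1M prices, $40 audio over $5 text gives 8 for gpt-4o-realtime-preview, $10 over $0.60 gives 10/0.6 for the mini realtime model, and $40 over $2.50 gives 40/2.5 for gpt-4o-audio-preview. A sketch of how such a ratio would weight a mixed request:

package main

import "fmt"

func main() {
	// Hypothetical request against gpt-4o-realtime-preview, where GetAudioRatio returns 8.
	audioRatio := 8.0
	textTokens, audioTokens := 200.0, 1000.0
	// Audio tokens are billed at audioRatio times the text-token rate.
	billedInput := textTokens + audioTokens*audioRatio
	fmt.Printf("billed as %.0f text-equivalent input tokens\n", billedInput) // 8200
}
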
@@ -511,3 +545,47 @@ func GetAudioCompletionRatio(name string) float64 {
	}
	return 2
}
+
+func ModelRatio2JSONString() string {
+	modelRatioMapMutex.RLock()
+	defer modelRatioMapMutex.RUnlock()
+
+	jsonBytes, err := json.Marshal(modelRatioMap)
+	if err != nil {
+		common.SysError("error marshalling model ratio: " + err.Error())
+	}
+	return string(jsonBytes)
+}
+
+var defaultImageRatio = map[string]float64{
+	"gpt-image-1": 2,
+}
+var imageRatioMap map[string]float64
+var imageRatioMapMutex sync.RWMutex
+
+func ImageRatio2JSONString() string {
+	imageRatioMapMutex.RLock()
+	defer imageRatioMapMutex.RUnlock()
+	jsonBytes, err := json.Marshal(imageRatioMap)
+	if err != nil {
+		common.SysError("error marshalling image ratio: " + err.Error())
+	}
+	return string(jsonBytes)
+}
+
+func UpdateImageRatioByJSONString(jsonStr string) error {
+	imageRatioMapMutex.Lock()
+	defer imageRatioMapMutex.Unlock()
+	imageRatioMap = make(map[string]float64)
+	return json.Unmarshal([]byte(jsonStr), &imageRatioMap)
+}
+
+func GetImageRatio(name string) (float64, bool) {
+	imageRatioMapMutex.RLock()
+	defer imageRatioMapMutex.RUnlock()
+	ratio, ok := imageRatioMap[name]
+	if !ok {
+		return 1, false // default to 1 if not found
+	}
+	return ratio, true
+}
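GetImageRatio follows the same getter shape as the others but also reports whether the model was configured at all, falling back to 1 otherwise. A hedged usage sketch, again assuming the one-api module path:

package main

import (
	"fmt"

	"one-api/setting/operation_setting" // assumed import path
)

func main() {
	operation_setting.InitRatioSettings()
	ratio, ok := operation_setting.GetImageRatio("gpt-image-1")
	fmt.Printf("image tokens billed at %.1fx the text input rate (configured: %v)\n", ratio, ok) // 2.0x, true
}
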
setting/operation_setting/tools.go (new file, 57 lines)
@@ -0,0 +1,57 @@
+package operation_setting
+
+import "strings"
+
+const (
+	// Web search
+	WebSearchHighTierModelPriceLow    = 30.00
+	WebSearchHighTierModelPriceMedium = 35.00
+	WebSearchHighTierModelPriceHigh   = 50.00
+	WebSearchPriceLow                 = 25.00
+	WebSearchPriceMedium              = 27.50
+	WebSearchPriceHigh                = 30.00
+	// File search
+	FileSearchPrice = 2.5
+)
+
+func GetWebSearchPricePerThousand(modelName string, contextSize string) float64 {
+	// Determine the model tier.
+	// Per https://platform.openai.com/docs/pricing, web search is billed by model tier and search context size:
+	// gpt-4.1, gpt-4o, and gpt-4o-search-preview are pricier; gpt-4.1-mini, gpt-4o-mini, and gpt-4o-mini-search-preview are cheaper.
+	isHighTierModel := (strings.HasPrefix(modelName, "gpt-4.1") || strings.HasPrefix(modelName, "gpt-4o")) &&
+		!strings.Contains(modelName, "mini")
+	// Pick the price for the given search context size.
+	var priceWebSearchPerThousandCalls float64
+	switch contextSize {
+	case "low":
+		if isHighTierModel {
+			priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceLow
+		} else {
+			priceWebSearchPerThousandCalls = WebSearchPriceLow
+		}
+	case "medium":
+		if isHighTierModel {
+			priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceMedium
+		} else {
+			priceWebSearchPerThousandCalls = WebSearchPriceMedium
+		}
+	case "high":
+		if isHighTierModel {
+			priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceHigh
+		} else {
+			priceWebSearchPerThousandCalls = WebSearchPriceHigh
+		}
+	default:
+		// search context size defaults to medium
+		if isHighTierModel {
+			priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceMedium
+		} else {
+			priceWebSearchPerThousandCalls = WebSearchPriceMedium
+		}
+	}
+	return priceWebSearchPerThousandCalls
+}
+
+func GetFileSearchPricePerThousand() float64 {
+	return FileSearchPrice
+}
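The constants are USD per 1,000 tool calls, so a single call costs the returned value divided by 1,000. A small usage sketch, assuming the one-api module path:

package main

import (
	"fmt"

	"one-api/setting/operation_setting" // assumed import path
)

func main() {
	// gpt-4o-mini contains "mini", so it falls into the cheaper tier: $30 per 1K calls at "high".
	perThousand := operation_setting.GetWebSearchPricePerThousand("gpt-4o-mini", "high")
	fmt.Printf("one web search call: $%.4f\n", perThousand/1000) // $0.0300
	fmt.Printf("one file search call: $%.4f\n", operation_setting.GetFileSearchPricePerThousand()/1000)
}
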
@@ -1 +1 @@
-module.exports = require("@so1ve/prettier-config");
+module.exports = require('@so1ve/prettier-config');
@@ -23,7 +23,7 @@
	"react-turnstile": "^1.0.5",
	"semantic-ui-offline": "^2.5.0",
	"semantic-ui-react": "^2.1.3",
-	"sse": "github:mpetazzoni/sse.js",
+	"sse": "https://github.com/mpetazzoni/sse.js",
	"i18next": "^23.16.8",
	"react-i18next": "^13.0.0",
	"i18next-browser-languagedetector": "^7.2.0"
web/pnpm-lock.yaml (generated, 2633 lines): file diff suppressed because it is too large
web/public/azure_model_name.png (new binary file, 251 KiB): binary file not shown
@@ -21,11 +21,12 @@ import Chat2Link from './pages/Chat2Link';
import { Layout } from '@douyinfe/semi-ui';
import Midjourney from './pages/Midjourney';
import Pricing from './pages/Pricing/index.js';
-import Task from "./pages/Task/index.js";
+import Task from './pages/Task/index.js';
import Playground from './pages/Playground/Playground.js';
-import OAuth2Callback from "./components/OAuth2Callback.js";
+import OAuth2Callback from './components/OAuth2Callback.js';
import PersonalSetting from './components/PersonalSetting.js';
+import Setup from './pages/Setup/index.js';
+import SetupCheck from './components/SetupCheck';

const Home = lazy(() => import('./pages/Home'));
const Detail = lazy(() => import('./pages/Detail'));
@@ -33,9 +34,9 @@ const About = lazy(() => import('./pages/About'));

function App() {
  const location = useLocation();


  return (
-    <>
+    <SetupCheck>
      <Routes>
        <Route
          path='/'
@@ -166,18 +167,18 @@
          }
        />
        <Route
-          path='/oauth/oidc'
-          element={
-            <Suspense fallback={<Loading></Loading>}>
-              <OAuth2Callback type='oidc'></OAuth2Callback>
-            </Suspense>
-          }
+          path='/oauth/oidc'
+          element={
+            <Suspense fallback={<Loading></Loading>}>
+              <OAuth2Callback type='oidc'></OAuth2Callback>
+            </Suspense>
+          }
        />
        <Route
          path='/oauth/linuxdo'
          element={
            <Suspense fallback={<Loading></Loading>} key={location.pathname}>
-              <OAuth2Callback type='linuxdo'></OAuth2Callback>
+              <OAuth2Callback type='linuxdo'></OAuth2Callback>
            </Suspense>
          }
        />
@@ -274,19 +275,19 @@
          }
        />
        {/* Convenience route: jump straight into chat via chat2link... */}
-        <Route
-          path='/chat2link'
-          element={
-            <PrivateRoute>
-              <Suspense fallback={<Loading></Loading>} key={location.pathname}>
-                <Chat2Link />
-              </Suspense>
-            </PrivateRoute>
-          }
-        />
-        <Route path='*' element={<NotFound />} />
-      </Routes>
-    </>
+        <Route
+          path='/chat2link'
+          element={
+            <PrivateRoute>
+              <Suspense fallback={<Loading></Loading>} key={location.pathname}>
+                <Chat2Link />
+              </Suspense>
+            </PrivateRoute>
+          }
+        />
+        <Route path='*' element={<NotFound />} />
+      </Routes>
+    </SetupCheck>
  );
}
File diff suppressed because it is too large
Some files were not shown because too many files have changed in this diff