Compare commits

...

91 Commits

Author SHA1 Message Date
CaIon
90d85a6f0a feat: add AzureNoRemoveDotTime constant and update channel handling #1044
- Introduced a new constant `AzureNoRemoveDotTime` in `constant/azure.go` to manage model name formatting for channels created after May 10, 2025.
- Updated `distributor.go` to set `channel_create_time` in the context.
- Modified `adaptor.go` to conditionally remove dots from model names based on the channel creation time.
- Enhanced `relay_info.go` to include `ChannelCreateTime` in the `RelayInfo` struct.
- Updated English localization files to reflect changes in model name handling for new channels.
2025-05-08 22:39:55 +08:00
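A minimal sketch of the cutoff behaviour this commit describes, assuming a hypothetical `removeDotsIfLegacy` helper; the actual field and function names in `adaptor.go` may differ:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// Hypothetical cutoff mirroring constant.AzureNoRemoveDotTime: channels created
// on or after this time keep the dots in their model names.
var azureNoRemoveDotTime = time.Date(2025, time.May, 10, 0, 0, 0, 0, time.UTC).Unix()

// removeDotsIfLegacy strips dots from the model name only for channels created
// before the cutoff, preserving the old Azure deployment-name behaviour.
func removeDotsIfLegacy(modelName string, channelCreateTime int64) string {
	if channelCreateTime >= azureNoRemoveDotTime {
		return modelName // new channels: keep the name as-is
	}
	return strings.ReplaceAll(modelName, ".", "")
}

func main() {
	oldChannel := time.Date(2025, 4, 1, 0, 0, 0, 0, time.UTC).Unix()
	newChannel := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC).Unix()
	fmt.Println(removeDotsIfLegacy("gpt-3.5-turbo", oldChannel)) // gpt-35-turbo
	fmt.Println(removeDotsIfLegacy("gpt-3.5-turbo", newChannel)) // gpt-3.5-turbo
}
```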
CaIon
d40429ad93 fix: update OpenAI request handling to include 'o1-preview' model support #1029 2025-05-08 21:34:31 +08:00
Calcium-Ion
30806ef270 Merge pull request #1040 from QuantumNous/responses-quota
fix: tool quota calculate
2025-05-08 01:21:34 +08:00
IcedTangerine
3458476115 Merge pull request #1039 from liusanp/main
Fix grok-2-image request error
2025-05-07 22:06:51 +08:00
IcedTangerine
61c685ad79 Merge pull request #1032 from feitianbubu/upstream
fix: correct error messages for dall-e models size parameters
2025-05-07 20:56:36 +08:00
IcedTangerine
0121795a84 Merge pull request #1037 from LarchLiu/main
fix: gemini response json schema
2025-05-07 20:53:45 +08:00
creamlike1024
ae254f5368 fix: tool quota calculate 2025-05-07 19:33:32 +08:00
liusanp
562448b441 fix: xAi response 2025-05-07 18:59:27 +08:00
liusanp
04f7d89399 fix: xAi requestUrl 2025-05-07 18:32:59 +08:00
Alex Liu
0d456df588 fix: gemini response json schema 2025-05-07 18:08:56 +08:00
CaIon
dc3b453b05 fix: update ResponseChunkData to format data correctly without newline 2025-05-07 17:02:47 +08:00
CaIon
b19e1b8207 feat: add support for BaiduV2 channel in relay info 2025-05-07 16:30:32 +08:00
liusanp
97b5ca8099 fix: quality, size or style are not supported by xAI API 2025-05-07 16:17:22 +08:00
CaIon
4ecf5dde14 Merge remote-tracking branch 'origin/main' 2025-05-07 16:16:19 +08:00
joey
65ccfd0848 feat: support model mapping chain
#1033
2025-05-07 16:00:35 +08:00
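A rough sketch of what resolving a model through a mapping chain (the feature referenced in #1033) could look like; the map shape and the depth guard are assumptions, not the project's actual implementation:

```go
package main

import "fmt"

// resolveModelMapping follows a model-mapping chain (a -> b -> c) until no
// further mapping exists, with a small depth limit to guard against cycles.
func resolveModelMapping(mapping map[string]string, model string) string {
	const maxDepth = 10 // assumed guard; the real limit may differ
	for i := 0; i < maxDepth; i++ {
		next, ok := mapping[model]
		if !ok || next == model {
			break
		}
		model = next
	}
	return model
}

func main() {
	mapping := map[string]string{
		"gpt-4o-alias": "gpt-4o-mini",
		"gpt-4o-mini":  "gpt-4o-mini-2024-07-18",
	}
	fmt.Println(resolveModelMapping(mapping, "gpt-4o-alias")) // gpt-4o-mini-2024-07-18
}
```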
skynono
2621b77f9a fix: correct error messages for dall-e models size parameters
(cherry picked from commit 149d06850c10cc6cdb3291164e3e46f99ca59abc)
2025-05-07 11:21:19 +08:00
Calcium-Ion
65a15dbc17 Merge pull request #1025 from QuantumNous/responses_buildin_tools
feat: implement OpenAI responses built-in tool tracking
2025-05-07 02:25:47 +08:00
creamlike1024
c0095d4521 feat: add frontend display for built-in tools billing 2025-05-07 01:08:20 +08:00
creamlike1024
5043075135 chore: move file search tool price to operation_setting 2025-05-06 23:57:22 +08:00
creamlike1024
10ef61eedb chore: move web search tool price to operation_setting 2025-05-06 23:25:16 +08:00
IcedTangerine
dc9e3b4139 Merge pull request #1026 from tbphp/tbphp_fix_redis_limit
fix: Redis limit ignoring max eq 0
2025-05-06 22:36:13 +08:00
creamlike1024
27e3aa828c Merge branch 'feitianbubu-upstream' 2025-05-06 22:31:39 +08:00
creamlike1024
d859e3fa64 fix: stop showing a success message when no new password was entered 2025-05-06 22:28:32 +08:00
creamlike1024
459c277c94 feat: add billing for built-in tools
- Count tool calls for non-streaming responses
- Add billing for web search and file search
2025-05-06 21:58:01 +08:00
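A rough sketch of the billing idea described above, counting `web_search_call` output items in a non-streaming response and applying a per-call price; the price value and helper names are placeholders, and the real settings live in `operation_setting`:

```go
package main

import "fmt"

// ResponsesOutput mirrors the shape of an output item in a /v1/responses result.
type ResponsesOutput struct {
	Type string
}

// countWebSearchCalls counts built-in web search tool invocations in a
// non-streaming response so they can be billed per call.
func countWebSearchCalls(outputs []ResponsesOutput) int {
	n := 0
	for _, o := range outputs {
		if o.Type == "web_search_call" {
			n++
		}
	}
	return n
}

func main() {
	outputs := []ResponsesOutput{{Type: "web_search_call"}, {Type: "message"}}
	const pricePerCallUSD = 0.01 // hypothetical price; configured in operation settings
	calls := countWebSearchCalls(outputs)
	fmt.Printf("tool calls: %d, extra charge: $%.2f\n", calls, float64(calls)*pricePerCallUSD)
}
```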
CaIon
5639f1c2d8 feat: add support for DeepSeek channel in streamSupportedChannels 2025-05-06 18:41:01 +08:00
skynono
0cf4c59d22 feat: add original password verification when changing password 2025-05-06 14:28:27 +08:00
Apple\Apple
1c67dd3c31 📕docs: Update the content in README.en.md and the structure of the docs directory 2025-05-05 23:44:30 +08:00
tbphp
b7fd1e4a20 fix: Redis limit ignoring max eq 0 2025-05-05 12:55:48 +08:00
CaIon
18b3300ff1 feat: implement OpenAI responses handling and streaming support with built-in tool tracking 2025-05-05 00:40:16 +08:00
Calcium-Ion
bae57c05c1 Merge pull request #1024 from tbphp/fix-edituser-text
fix: EditUser text error
2025-05-04 18:30:32 +08:00
tbphp
3def2bbd30 fix: EditUser text error 2025-05-04 18:26:18 +08:00
CaIon
419a056fbf refactor: remove unnecessary call to helper.Done and adjust data rendering in ClaudeChunkData 2025-05-04 17:35:45 +08:00
Calcium-Ion
48af027903 Merge pull request #1020 from QuantumNous/v1responses
feat: support /v1/responses API
2025-05-04 17:13:39 +08:00
Calcium-Ion
9bf90c3baf Merge pull request #1012 from tbphp/vertex_thinking_support
feat: support thinking suffix for vertex gemini channel
2025-05-04 17:11:27 +08:00
CaIon
fe3232bf23 feat: enhance OaiResponsesStreamHandler to handle output text and improve response streaming 2025-05-04 17:09:37 +08:00
creamlike1024
1236fa8fe4 add OaiResponsesStreamHandler 2025-05-03 22:36:27 +08:00
CaIon
e097d5a538 feat: add video URL support in MediaContent and update token counting logic 2025-05-03 21:12:07 +08:00
creamlike1024
425feb88d8 feat: support /v1/responses API 2025-05-02 13:59:46 +08:00
CaIon
fd6838e690 feat: enable error logging configuration in docker-compose and application 2025-04-29 16:26:55 +08:00
CaIon
b64480b750 fix: gemini thinking tokens count #1014 2025-04-29 16:21:54 +08:00
CaIon
da6423de33 refactor: Reducing the lock duration to the minimum necessary time in CacheGetRandomSatisfiedChannel function 2025-04-29 15:57:21 +08:00
tbphp
efc9d200b1 feat: support thinking suffix for vertex gemini channel 2025-04-29 13:30:03 +08:00
CaIon
fe37718259 fix: update audio ratio logic for model names in GetAudioRatio function 2025-04-28 20:55:40 +08:00
IcedTangerine
c412fd9cde Merge pull request #1008 from JoeyLearnsToCode/feat-search-channel-by-url
feat: support searching channels by base url
2025-04-28 13:15:49 +08:00
creamlike1024
54f5b1a951 Merge branch 'wzxjohn-feature/wellknown' 2025-04-28 12:55:06 +08:00
JoeyLearnsToCode
a9b9d23586 feat: support searching channels by base url 2025-04-28 11:38:53 +08:00
wzxjohn
168226ba10 fix: remove custom header in oidc well known request 2025-04-28 11:25:04 +08:00
wzxjohn
1a8fd61a98 feat: support empty well known url 2025-04-28 11:25:04 +08:00
wzxjohn
2bd2d73d33 feat: improve log delete api 2025-04-28 11:25:04 +08:00
creamlike1024
62da481dc6 Merge branch 'error-logs' of github.com:zenghongtu/new-api into zenghongtu-error-logs 2025-04-28 11:06:32 +08:00
CaIon
4217358de7 feat: add image preview functionality and update model name instructions in EditChannel 2025-04-27 17:20:49 +08:00
CaIon
bb9f5a4a6d refactor: rename InitModelSettings to InitRatioSettings 2025-04-26 17:15:34 +08:00
CaIon
935acccca4 fix: update cacheRatioMap initialization in InitModelSettings function 2025-04-26 17:09:23 +08:00
CaIon
453a42fad9 feat: initialize cacheRatioMap in InitModelSettings function 2025-04-26 17:06:03 +08:00
CaIon
58101328c5 fix: handle optional user_group_ratio in LogsTable and render helper 2025-04-26 15:59:49 +08:00
CaIon
a03c615fa4 Merge remote-tracking branch 'new-api/main' into gpt-image
# Conflicts:
#	relay/relay-image.go
2025-04-26 15:54:08 +08:00
CaIon
487ef35c58 feat: support image edit model mapping
(cherry picked from commit 1a869d8ad77f262ee27675ec2deaf451b1743eb7)
2025-04-26 15:48:59 +08:00
xyfacai
f9f32a0158 feat: support /images/edit
(cherry picked from commit 1c0a1238787d490f02dd9269b616580a16604180)
2025-04-26 15:44:56 +08:00
IcedTangerine
ea10806cf9 Merge pull request #950 from datehoer/main
fix: update getAndValidImageRequest function in relay/relay-image.go to support grok-2-image model
2025-04-26 15:34:15 +08:00
IcedTangerine
1a9ebb54b2 Merge pull request #843 from IllTamer/pr
fix: the pricing available popover display anyway
2025-04-25 18:27:45 +08:00
IcedTangerine
6de3857150 Merge branch 'main' into pr 2025-04-25 18:27:11 +08:00
han shi
32cd890b6e feat: add sendcloud mail server support (#947)
* Add support for the sendcloud mail server

* Adjust code structure

* Use the slices.Contains function

---------

Co-authored-by: shih <shih@knownsec.com>
2025-04-25 18:17:46 +08:00
creamlike1024
f968d77365 fix: remove apikey from test channel log, close #1000 2025-04-25 17:08:26 +08:00
CaIon
dc22f7d32f refactor: update deepseek beta api 2025-04-25 16:26:16 +08:00
creamlike1024
c2b33e3b23 fix: GetMaxUserId use Unscope, close #987 2025-04-25 16:13:11 +08:00
IcedTangerine
db3326deae Merge pull request #975 from asjfoajs/qn-main
[#969] Refactor: Optimize the request rate limiting for ModelRequestRateLimi…
2025-04-25 11:59:05 +08:00
CaIon
25ae077ac9 refactor: update claude media source handling 2025-04-24 15:59:43 +08:00
CaIon
aaa41a8074 refactor: update ClaudeMessageSource struct to include optional Url field and adjust media source handling in relay-claude #993 2025-04-24 00:39:09 +08:00
CaIon
26f5b954c5 f*** gemini 2025-04-19 18:07:51 +08:00
CaIon
79c6dd08c9 refactor: enhance SystemSetting submission logic and handle empty WorkerUrl 2025-04-19 00:20:25 +08:00
CaIon
17e8a3432a refactor: update GeminiThinkingConfig initialization 2025-04-18 23:13:28 +08:00
CaIon
790af65b2c refactor: remove unsupported 'exclusiveMinimum' field from cleanFunctionParameters 2025-04-18 22:40:05 +08:00
CaIon
6522147183 refactor: remove unsupported root-level fields from cleanFunctionParameters 2025-04-18 21:38:12 +08:00
CaIon
0755ac9991 refactor: streamline value assignment in SettingGeminiModel 2025-04-18 20:08:26 +08:00
CaIon
4c4dc6e8b4 feat: add gemini thinking suffix support #981 2025-04-18 19:36:18 +08:00
CaIon
1eebdc4773 refactor: remove reasoning field from GeneralOpenAIRequest struct 2025-04-17 17:11:42 +08:00
CaIon
9b6c898675 feat: add reasoning field to GeneralOpenAIRequest 2025-04-17 17:09:46 +08:00
CaIon
ee4f27d01b refactor: simplify model prefix checks and update message role for o-series models 2025-04-17 16:50:52 +08:00
Apple\Apple
995c19a997 🐛fix: Fix the issue where new whitelist email domain names cannot be added in the system settings 2025-04-16 17:11:59 +08:00
霍雨佳
e385e347ea Refactor: Optimize the token bucket algorithm, specifically the New method in common/limiter/limiter.go.
Solution: Remove Redis ping. When printing exceptions, use SysLog to print and add additional logging information.
2025-04-16 16:36:07 +08:00
Apple\Apple
71d0d759da Merge pull request #927 from QuentinHsu/refactor-system-setting
# Conflicts:
#	web/src/App.js
#	web/src/components/ModelSetting.js
#	web/src/components/PersonalSetting.js
#	web/src/components/SystemSetting.js
#	web/src/pages/Channel/EditChannel.js
2025-04-16 16:27:11 +08:00
霍雨佳
eb75ff232f Refactor: Optimize the request rate limiting for ModelRequestRateLimitCount.
Reason: The original steps 1 and 3 in the redisRateLimitHandler method were not atomic, leading to poor precision under high concurrent requests. For example, with a rate limit set to 60, sending 200 concurrent requests would result in none being blocked, whereas theoretically around 140 should be intercepted.
Solution: I chose not to merge steps 1 and 3 into a single Lua script because a single atomic operation involving read, write, and delete operations could suffer from performance issues under high concurrency. Instead, I implemented a token bucket algorithm to optimize this, reducing the atomic operation to just read and write steps while significantly decreasing the memory footprint.
2025-04-16 10:33:43 +08:00
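A self-contained sketch of the token-bucket refill math described above; the real implementation is the Redis Lua script shown later in this diff, so the in-memory state here is purely illustrative:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// bucket holds the minimal token-bucket state: remaining tokens and the last refill time.
type bucket struct {
	tokens   float64
	lastTime time.Time
}

// allow refills the bucket based on elapsed time and rate, then tries to take
// `requested` tokens; this is the read-then-write step the Lua script keeps atomic.
func (b *bucket) allow(now time.Time, rate, capacity, requested float64) bool {
	elapsed := now.Sub(b.lastTime).Seconds()
	b.tokens = math.Min(capacity, b.tokens+elapsed*rate)
	b.lastTime = now
	if b.tokens >= requested {
		b.tokens -= requested
		return true
	}
	return false
}

func main() {
	b := &bucket{tokens: 60, lastTime: time.Now()}
	allowed := 0
	for i := 0; i < 200; i++ {
		if b.allow(time.Now(), 1, 60, 1) {
			allowed++
		}
	}
	// With capacity 60 and a 1 token/s refill, roughly 60 of the 200 burst requests pass.
	fmt.Println("allowed:", allowed)
}
```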
CaIon
272662089d refactor: remove unused mutex from RelayInfo struct 2025-04-15 23:06:32 +08:00
CaIon
214ca4db56 fix: claude parallel function calling 2025-04-15 04:52:33 +08:00
CaIon
473e8e0eaf feat: support gemini output text and inline images. (close #866) 2025-04-15 02:32:51 +08:00
jasonzeng
97bc2b4474 feat: add error logging functionality to relay and update logs table for error type display 2025-04-12 00:43:34 +08:00
datehoer
c5f1a0c712 Add support for grok-2-image. Currently, grok-2-image doesn't support the size, quality, or style parameters. Set 'size'='empty' to use grok-2-image 2025-04-09 15:05:00 +08:00
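A hedged sketch of how a relay might clear those unsupported parameters before forwarding to xAI; the field names follow the `ImageRequest` DTO shown later in this diff, but the clearing logic itself is only an illustration:

```go
package main

import "fmt"

// ImageRequest carries only the fields relevant to this example.
type ImageRequest struct {
	Model   string
	Size    string
	Quality string
	Style   string
}

// sanitizeForGrokImage drops parameters that the xAI grok-2-image endpoint rejects.
func sanitizeForGrokImage(req *ImageRequest) {
	if req.Model != "grok-2-image" {
		return
	}
	req.Size, req.Quality, req.Style = "", "", ""
}

func main() {
	req := &ImageRequest{Model: "grok-2-image", Size: "1024x1024", Quality: "hd"}
	sanitizeForGrokImage(req)
	fmt.Printf("%+v\n", req)
}
```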
QuentinHsu
09adc6f201 refactor(web): systemSetting component to enhance UI structure and add new configuration options
- Wrapped form sections in Card components for better visual separation
- Added new configuration options for payment settings, email domain whitelist, SMTP, OIDC, GitHub OAuth, Linux DO OAuth, WeChat, and Telegram
- Improved layout with responsive design using Row and Col components
- Updated button actions for saving settings in new sections
2025-04-04 17:46:34 +08:00
QuentinHsu
6b79b89dc0 style(web): format code 2025-04-04 17:37:27 +08:00
IllTamer
3223c7e181 feat & fix: fix the pricing available sort, set defaultSortOrder descend 2025-03-10 22:39:21 +08:00
IllTamer
ccfac06645 fix: the pricing available popover display anyway 2025-03-10 22:16:02 +08:00
153 changed files with 9458 additions and 4624 deletions

View File

@@ -1,10 +1,13 @@
<p align="right">
<a href="./README.md">中文</a> | <strong>English</strong>
</p>
<div align="center">
![new-api](/web/public/logo.png)
# New API
🍥 Next Generation LLM Gateway and AI Asset Management System
🍥 Next-Generation Large Model Gateway and AI Asset Management System
<a href="https://trendshift.io/repositories/8227" target="_blank"><img src="https://trendshift.io/api/badge/repositories/8227" alt="Calcium-Ion%2Fnew-api | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
@@ -33,171 +36,155 @@
> This is an open-source project developed based on [One API](https://github.com/songquanpeng/one-api)
> [!IMPORTANT]
> - Users must comply with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and relevant laws and regulations. Not to be used for illegal purposes.
> - This project is for personal learning only. Stability is not guaranteed, and no technical support is provided.
> - This project is for personal learning purposes only, with no guarantee of stability or technical support.
> - Users must comply with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and **applicable laws and regulations**, and must not use it for illegal purposes.
> - According to the [《Interim Measures for the Management of Generative Artificial Intelligence Services》](http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm), please do not provide any unregistered generative AI services to the public in China.
## 📚 Documentation
For detailed documentation, please visit our official Wiki: [https://docs.newapi.pro/](https://docs.newapi.pro/)
## ✨ Key Features
1. 🎨 New UI interface (some interfaces pending update)
2. 🌍 Multi-language support (work in progress)
3. 🎨 Added [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) interface support, [Integration Guide](Midjourney.md)
4. 💰 Online recharge support, configurable in system settings:
- [x] EasyPay
5. 🔍 Query usage quota by key:
- Works with [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool)
6. 📑 Configurable items per page in pagination
7. 🔄 Compatible with original One API database (one-api.db)
8. 💵 Support per-request model pricing, configurable in System Settings - Operation Settings
9. ⚖️ Support channel **weighted random** selection
10. 📈 Data dashboard (console)
11. 🔒 Configurable model access per token
12. 🤖 Telegram authorization login support:
1. System Settings - Configure Login Registration - Allow Telegram Login
2. Send /setdomain command to [@Botfather](https://t.me/botfather)
3. Select your bot, then enter http(s)://your-website/login
4. Telegram Bot name is the bot username without @
13. 🎵 Added [Suno API](https://github.com/Suno-API/Suno-API) interface support, [Integration Guide](Suno.md)
14. 🔄 Support for Rerank models, compatible with Cohere and Jina, can integrate with Dify, [Integration Guide](Rerank.md)
15. **[OpenAI Realtime API](https://platform.openai.com/docs/guides/realtime/integration)** - Support for OpenAI's Realtime API, including Azure channels
16. 🧠 Support for setting reasoning effort through model name suffix:
- Add suffix `-high` to set high reasoning effort (e.g., `o3-mini-high`)
- Add suffix `-medium` to set medium reasoning effort
- Add suffix `-low` to set low reasoning effort
17. 🔄 Thinking-to-content option `thinking_to_content` in `Channel->Edit->Channel Extra Settings`, default `false`; when `true`, the thinking content in `reasoning_content` is converted to `<think>` tags and appended to the returned content.
18. 🔄 Model rate limiting: total request limits and successful request limits can be set in `System Settings->Rate Limit Settings`
19. 💰 Cache billing support, when enabled can charge a configurable ratio for cache hits:
1. Set `Prompt Cache Ratio` in `System Settings -> Operation Settings`
2. Set `Prompt Cache Ratio` in channel settings, range 0-1 (e.g., 0.5 means 50% charge on cache hits)
New API offers a wide range of features, please refer to [Features Introduction](https://docs.newapi.pro/wiki/features-introduction) for details:
1. 🎨 Brand new UI interface
2. 🌍 Multi-language support
3. 💰 Online recharge functionality (YiPay)
4. 🔍 Support for querying usage quotas with keys (works with [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool))
5. 🔄 Compatible with the original One API database
6. 💵 Support for pay-per-use model pricing
7. ⚖️ Support for weighted random channel selection
8. 📈 Data dashboard (console)
9. 🔒 Token grouping and model restrictions
10. 🤖 Support for more authorization login methods (LinuxDO, Telegram, OIDC)
11. 🔄 Support for Rerank models (Cohere and Jina), [API Documentation](https://docs.newapi.pro/api/jinaai-rerank)
12. ⚡ Support for OpenAI Realtime API (including Azure channels), [API Documentation](https://docs.newapi.pro/api/openai-realtime)
13. ⚡ Support for Claude Messages format, [API Documentation](https://docs.newapi.pro/api/anthropic-chat)
14. Support for entering chat interface via /chat2link route
15. 🧠 Support for setting reasoning effort through model name suffixes (see the sketch after this feature list):
1. OpenAI o-series models
- Add `-high` suffix for high reasoning effort (e.g.: `o3-mini-high`)
- Add `-medium` suffix for medium reasoning effort (e.g.: `o3-mini-medium`)
- Add `-low` suffix for low reasoning effort (e.g.: `o3-mini-low`)
2. Claude thinking models
- Add `-thinking` suffix to enable thinking mode (e.g.: `claude-3-7-sonnet-20250219-thinking`)
16. 🔄 Thinking-to-content functionality
17. 🔄 Model rate limiting for users
18. 💰 Cache billing support, which allows billing at a set ratio when cache is hit:
1. Set the `Prompt Cache Ratio` option in `System Settings-Operation Settings`
2. Set `Prompt Cache Ratio` in the channel, range 0-1, e.g., setting to 0.5 means billing at 50% when cache is hit
3. Supported channels:
- [x] OpenAI
- [x] Azure
- [x] DeepSeek
- [ ] Claude
- [x] Claude
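Item 15 above describes suffix-based reasoning controls. A minimal sketch of the suffix parsing, assuming the gateway simply strips the suffix and records the effort; the project's actual routing code may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// parseEffortSuffix strips a trailing -high/-medium/-low suffix from an
// o-series model name and returns the base model plus the reasoning effort.
func parseEffortSuffix(model string) (base, effort string) {
	for _, e := range []string{"high", "medium", "low"} {
		if strings.HasSuffix(model, "-"+e) {
			return strings.TrimSuffix(model, "-"+e), e
		}
	}
	return model, ""
}

// parseThinkingSuffix reports whether a Claude model name carries the
// -thinking suffix and returns the base model name.
func parseThinkingSuffix(model string) (base string, thinking bool) {
	if strings.HasSuffix(model, "-thinking") {
		return strings.TrimSuffix(model, "-thinking"), true
	}
	return model, false
}

func main() {
	fmt.Println(parseEffortSuffix("o3-mini-high"))                          // o3-mini high
	fmt.Println(parseThinkingSuffix("claude-3-7-sonnet-20250219-thinking")) // claude-3-7-sonnet-20250219 true
}
```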
## Model Support
This version additionally supports:
1. Third-party model **gpts** (gpt-4-gizmo-*)
2. [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) interface, [Integration Guide](Midjourney.md)
3. Custom channels with full API URL support
4. [Suno API](https://github.com/Suno-API/Suno-API) interface, [Integration Guide](Suno.md)
5. Rerank models, supporting [Cohere](https://cohere.ai/) and [Jina](https://jina.ai/), [Integration Guide](Rerank.md)
6. Dify
You can add custom models gpt-4-gizmo-* in channels. These are third-party models and cannot be called with official OpenAI keys.
This version supports multiple models, please refer to [API Documentation-Relay Interface](https://docs.newapi.pro/api) for details:
## Additional Configurations Beyond One API
- `GENERATE_DEFAULT_TOKEN`: Generate initial token for new users, default `false`
- `STREAMING_TIMEOUT`: Set streaming response timeout, default 60 seconds
- `DIFY_DEBUG`: Output workflow and node info to client for Dify channel, default `true`
- `FORCE_STREAM_OPTION`: Override client stream_options parameter, default `true`
- `GET_MEDIA_TOKEN`: Calculate image tokens, default `true`
- `GET_MEDIA_TOKEN_NOT_STREAM`: Calculate image tokens in non-stream mode, default `true`
- `UPDATE_TASK`: Update async tasks (Midjourney, Suno), default `true`
- `GEMINI_MODEL_MAP`: Specify Gemini model versions (v1/v1beta), format: "model:version", comma-separated
- `COHERE_SAFETY_SETTING`: Cohere model [safety settings](https://docs.cohere.com/docs/safety-modes#overview), options: `NONE`, `CONTEXTUAL`, `STRICT`, default `NONE`
- `GEMINI_VISION_MAX_IMAGE_NUM`: Gemini model maximum image number, default `16`, set to `-1` to disable
- `MAX_FILE_DOWNLOAD_MB`: Maximum file download size in MB, default `20`
- `CRYPTO_SECRET`: Encryption key for encrypting database content
- `AZURE_DEFAULT_API_VERSION`: Azure channel default API version, if not specified in channel settings, use this version, default `2024-12-01-preview`
- `NOTIFICATION_LIMIT_DURATION_MINUTE`: Duration of notification limit in minutes, default `10`
- `NOTIFY_LIMIT_COUNT`: Maximum number of user notifications in the specified duration, default `2`
1. Third-party models **gpts** (gpt-4-gizmo-*)
2. Third-party channel [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) interface, [API Documentation](https://docs.newapi.pro/api/midjourney-proxy-image)
3. Third-party channel [Suno API](https://github.com/Suno-API/Suno-API) interface, [API Documentation](https://docs.newapi.pro/api/suno-music)
4. Custom channels, supporting full call address input
5. Rerank models ([Cohere](https://cohere.ai/) and [Jina](https://jina.ai/)), [API Documentation](https://docs.newapi.pro/api/jinaai-rerank)
6. Claude Messages format, [API Documentation](https://docs.newapi.pro/api/anthropic-chat)
7. Dify, currently only supports chatflow
## Environment Variable Configuration
For detailed configuration instructions, please refer to [Installation Guide-Environment Variables Configuration](https://docs.newapi.pro/installation/environment-variables):
- `GENERATE_DEFAULT_TOKEN`: Whether to generate initial tokens for newly registered users, default is `false`
- `STREAMING_TIMEOUT`: Streaming response timeout, default is 60 seconds
- `DIFY_DEBUG`: Whether to output workflow and node information for Dify channels, default is `true`
- `FORCE_STREAM_OPTION`: Whether to override client stream_options parameter, default is `true`
- `GET_MEDIA_TOKEN`: Whether to count image tokens, default is `true`
- `GET_MEDIA_TOKEN_NOT_STREAM`: Whether to count image tokens in non-streaming cases, default is `true`
- `UPDATE_TASK`: Whether to update asynchronous tasks (Midjourney, Suno), default is `true`
- `COHERE_SAFETY_SETTING`: Cohere model safety settings, options are `NONE`, `CONTEXTUAL`, `STRICT`, default is `NONE`
- `GEMINI_VISION_MAX_IMAGE_NUM`: Maximum number of images for Gemini models, default is `16`
- `MAX_FILE_DOWNLOAD_MB`: Maximum file download size in MB, default is `20`
- `CRYPTO_SECRET`: Encryption key used for encrypting database content
- `AZURE_DEFAULT_API_VERSION`: Azure channel default API version, default is `2024-12-01-preview`
- `NOTIFICATION_LIMIT_DURATION_MINUTE`: Notification limit duration, default is `10` minutes
- `NOTIFY_LIMIT_COUNT`: Maximum number of user notifications within the specified duration, default is `2`
## Deployment
For detailed deployment guides, please refer to [Installation Guide-Deployment Methods](https://docs.newapi.pro/installation):
> [!TIP]
> Latest Docker image: `calciumion/new-api:latest`
> Default account: root, password: 123456
> Latest Docker image: `calciumion/new-api:latest`
### Multi-Server Deployment
- The `SESSION_SECRET` environment variable must be set, otherwise login state will not be consistent across multiple servers.
- If using a shared Redis, the `CRYPTO_SECRET` environment variable must be set, otherwise Redis content cannot be read in a multi-server deployment.
### Multi-machine Deployment Considerations
- Environment variable `SESSION_SECRET` must be set, otherwise login status will be inconsistent across multiple machines
- If sharing Redis, `CRYPTO_SECRET` must be set, otherwise Redis content cannot be accessed across multiple machines
### Requirements
- Local database (default): SQLite (Docker deployment must mount `/data` directory)
- Remote database: MySQL >= 5.7.8, PgSQL >= 9.6
### Deployment Requirements
- Local database (default): SQLite (Docker deployment must mount the `/data` directory)
- Remote database: MySQL version >= 5.7.8, PgSQL version >= 9.6
### Deployment with BT Panel
Install BT Panel (**version 9.2.0** or above) from [BT Panel Official Website](https://www.bt.cn/new/download.html), choose the stable version script to download and install.
After installation, log in to BT Panel and click Docker in the menu bar. First-time access will prompt to install Docker service. Click Install Now and follow the prompts to complete installation.
After installation, find **New-API** in the app store, click install, configure basic options to complete installation.
[Pictorial Guide](BT.md)
### Deployment Methods
### Docker Deployment
#### Using BaoTa Panel Docker Feature
Install BaoTa Panel (version **9.2.0** or above), find **New-API** in the application store and install it.
[Tutorial with images](./docs/BT.md)
### Using Docker Compose (Recommended)
#### Using Docker Compose (Recommended)
```shell
# Clone project
# Download the project
git clone https://github.com/Calcium-Ion/new-api.git
cd new-api
# Edit docker-compose.yml as needed
# nano docker-compose.yml
# vim docker-compose.yml
# Start
docker-compose up -d
```
#### Update Version
#### Using Docker Image Directly
```shell
docker-compose pull
docker-compose up -d
```
### Direct Docker Image Usage
```shell
# SQLite deployment:
# Using SQLite
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
# MySQL deployment (add -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi"), modify database connection parameters as needed
# Example:
# Using MySQL
docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:123456@tcp(localhost:3306)/oneapi" -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
```
#### Update Version
```shell
# Pull the latest image
docker pull calciumion/new-api:latest
# Stop and remove the old container
docker stop new-api
docker rm new-api
# Run the new container with the same parameters as before
docker run --name new-api -d --restart always -p 3000:3000 -e TZ=Asia/Shanghai -v /home/ubuntu/data/new-api:/data calciumion/new-api:latest
```
## Channel Retry and Cache
Channel retry functionality has been implemented; the number of retries can be set in `Settings->Operation Settings->General Settings`. It is **recommended to enable caching**.
Alternatively, you can use Watchtower for automatic updates (not recommended, may cause database incompatibility):
```shell
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtower -cR
```
### Cache Configuration Method
1. `REDIS_CONN_STRING`: Set Redis as cache
2. `MEMORY_CACHE_ENABLED`: Enable memory cache (no need to set manually if Redis is set)
## Channel Retry
Channel retry is implemented, configurable in `Settings->Operation Settings->General Settings`. **Cache recommended**.
If retry is enabled, the system will automatically use the next priority channel for the same request after a failed request.
## API Documentation
### Cache Configuration
1. `REDIS_CONN_STRING`: Use Redis as cache
+ Example: `REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
2. `MEMORY_CACHE_ENABLED`: Enable memory cache, default `false`
+ Example: `MEMORY_CACHE_ENABLED=true`
For detailed API documentation, please refer to [API Documentation](https://docs.newapi.pro/api):
### Why Some Errors Don't Retry
Requests that fail with error codes 400, 504, or 524 are not retried
### To Enable Retry for 400
In `Channel->Edit`, set `Status Code Override` to:
```json
{
"400": "500"
}
```
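A sketch of how such an override map could be applied before the retry decision; `applyStatusCodeOverride` is a hypothetical helper, not the project's actual function:

```go
package main

import "fmt"

// applyStatusCodeOverride rewrites an upstream status code according to a
// per-channel override map, e.g. {"400": "500"} so that 400s become retryable.
func applyStatusCodeOverride(overrides map[string]string, statusCode int) int {
	if mapped, ok := overrides[fmt.Sprint(statusCode)]; ok {
		var code int
		if _, err := fmt.Sscanf(mapped, "%d", &code); err == nil {
			return code
		}
	}
	return statusCode
}

func main() {
	overrides := map[string]string{"400": "500"}
	fmt.Println(applyStatusCodeOverride(overrides, 400)) // 500
	fmt.Println(applyStatusCodeOverride(overrides, 429)) // 429
}
```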
## Integration Guides
- [Midjourney Integration](Midjourney.md)
- [Suno Integration](Suno.md)
- [Chat API](https://docs.newapi.pro/api/openai-chat)
- [Image API](https://docs.newapi.pro/api/openai-image)
- [Rerank API](https://docs.newapi.pro/api/jinaai-rerank)
- [Realtime API](https://docs.newapi.pro/api/openai-realtime)
- [Claude Chat API (messages)](https://docs.newapi.pro/api/anthropic-chat)
## Related Projects
- [One API](https://github.com/songquanpeng/one-api): Original project
- [Midjourney-Proxy](https://github.com/novicezk/midjourney-proxy): Midjourney interface support
- [chatnio](https://github.com/Deeptrain-Community/chatnio): Next-gen AI B/C solution
- [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool): Query usage quota by key
- [chatnio](https://github.com/Deeptrain-Community/chatnio): Next-generation AI one-stop B/C-end solution
- [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool): Query usage quota with key
Other projects based on New API:
- [new-api-horizon](https://github.com/Calcium-Ion/new-api-horizon): High-performance optimized version of New API
- [VoAPI](https://github.com/VoAPI/VoAPI): Frontend beautified version based on New API
## Help and Support
If you have any questions, please refer to [Help and Support](https://docs.newapi.pro/support):
- [Community Interaction](https://docs.newapi.pro/support/community-interaction)
- [Issue Feedback](https://docs.newapi.pro/support/feedback-issues)
- [FAQ](https://docs.newapi.pro/support/faq)
## 🌟 Star History
[![Star History Chart](https://api.star-history.com/svg?repos=Calcium-Ion/new-api&type=Date)](https://star-history.com/#Calcium-Ion/new-api&Date)

View File

@@ -130,7 +130,7 @@ New API提供了丰富的功能详细特性请参考[特性说明](https://do
#### 使用宝塔面板Docker功能部署
安装宝塔面板(**9.2.0版本**及以上),在应用商店中找到**New-API**安装即可。
[图文教程](BT.md)
[图文教程](./docs/BT.md)
#### 使用Docker Compose部署推荐
```shell

View File

@@ -62,6 +62,10 @@ var EmailDomainWhitelist = []string{
"yahoo.com",
"foxmail.com",
}
var EmailLoginAuthServerList = []string{
"smtp.sendcloud.net",
"smtp.azurecomm.net",
}
var DebugEnabled bool
var MemoryCacheEnabled bool

View File

@@ -5,6 +5,7 @@ import (
"encoding/base64"
"fmt"
"net/smtp"
"slices"
"strings"
"time"
)
@@ -79,7 +80,7 @@ func SendEmail(subject string, receiver string, content string) error {
if err != nil {
return err
}
} else if isOutlookServer(SMTPAccount) || SMTPServer == "smtp.azurecomm.net" {
} else if isOutlookServer(SMTPAccount) || slices.Contains(EmailLoginAuthServerList, SMTPServer) {
auth = LoginAuth(SMTPAccount, SMTPToken)
err = smtp.SendMail(addr, auth, SMTPFrom, to, mail)
} else {

common/limiter/limiter.go (new file, 89 lines)
View File

@@ -0,0 +1,89 @@
package limiter
import (
"context"
_ "embed"
"fmt"
"github.com/go-redis/redis/v8"
"one-api/common"
"sync"
)
//go:embed lua/rate_limit.lua
var rateLimitScript string
type RedisLimiter struct {
client *redis.Client
limitScriptSHA string
}
var (
instance *RedisLimiter
once sync.Once
)
func New(ctx context.Context, r *redis.Client) *RedisLimiter {
once.Do(func() {
// Preload the script
limitSHA, err := r.ScriptLoad(ctx, rateLimitScript).Result()
if err != nil {
common.SysLog(fmt.Sprintf("Failed to load rate limit script: %v", err))
}
instance = &RedisLimiter{
client: r,
limitScriptSHA: limitSHA,
}
})
return instance
}
func (rl *RedisLimiter) Allow(ctx context.Context, key string, opts ...Option) (bool, error) {
// Default configuration
config := &Config{
Capacity: 10,
Rate: 1,
Requested: 1,
}
// Apply the option pattern
for _, opt := range opts {
opt(config)
}
// Execute the rate limit check
result, err := rl.client.EvalSha(
ctx,
rl.limitScriptSHA,
[]string{key},
config.Requested,
config.Rate,
config.Capacity,
).Int()
if err != nil {
return false, fmt.Errorf("rate limit failed: %w", err)
}
return result == 1, nil
}
// Config holds the limiter options
type Config struct {
Capacity int64
Rate int64
Requested int64
}
type Option func(*Config)
func WithCapacity(c int64) Option {
return func(cfg *Config) { cfg.Capacity = c }
}
func WithRate(r int64) Option {
return func(cfg *Config) { cfg.Rate = r }
}
func WithRequested(n int64) Option {
return func(cfg *Config) { cfg.Requested = n }
}
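For context, a usage sketch of the limiter above, wired the same way the rate-limit middleware later in this diff does (capacity = count × window seconds, refill = count per second, each request consumes one window's worth of tokens); the Redis address is a placeholder:

```go
package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"

	"one-api/common/limiter"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"}) // placeholder address

	tb := limiter.New(ctx, rdb)
	// 60 requests per 60-second window: capacity 60*60 tokens, refill 60 tokens/s,
	// and each request consumes 60 tokens (one window's worth divided by the count).
	allowed, err := tb.Allow(ctx, "rateLimit:user:1",
		limiter.WithCapacity(60*60),
		limiter.WithRate(60),
		limiter.WithRequested(60),
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", allowed)
}
```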

View File

@@ -0,0 +1,44 @@
-- Token bucket rate limiter
-- KEYS[1]: unique identifier of the limiter
-- ARGV[1]: number of tokens requested (usually 1)
-- ARGV[2]: token generation rate (per second)
-- ARGV[3]: bucket capacity
local key = KEYS[1]
local requested = tonumber(ARGV[1])
local rate = tonumber(ARGV[2])
local capacity = tonumber(ARGV[3])
-- Get the current time (Redis server time)
local now = redis.call('TIME')
local nowInSeconds = tonumber(now[1])
-- Get the bucket state
local bucket = redis.call('HMGET', key, 'tokens', 'last_time')
local tokens = tonumber(bucket[1])
local last_time = tonumber(bucket[2])
-- Initialize the bucket (first request or expired)
if not tokens or not last_time then
tokens = capacity
last_time = nowInSeconds
else
-- Calculate newly generated tokens
local elapsed = nowInSeconds - last_time
local add_tokens = elapsed * rate
tokens = math.min(capacity, tokens + add_tokens)
last_time = nowInSeconds
end
-- Decide whether to allow the request
local allowed = false
if tokens >= requested then
tokens = tokens - requested
allowed = true
end
-- Update the bucket state and set the expiry
redis.call('HMSET', key, 'tokens', tokens, 'last_time', last_time)
--redis.call('EXPIRE', key, math.ceil(capacity / rate) + 60) -- extend the expiry appropriately
return allowed and 1 or 0

View File

@@ -7,7 +7,6 @@ import (
"encoding/base64"
"encoding/json"
"fmt"
"github.com/pkg/errors"
"html/template"
"io"
"log"
@@ -22,6 +21,7 @@ import (
"time"
"github.com/google/uuid"
"github.com/pkg/errors"
)
func OpenBrowser(url string) {

constant/azure.go (new file, 5 lines)
View File

@@ -0,0 +1,5 @@
package constant
import "time"
var AzureNoRemoveDotTime = time.Date(2025, time.May, 10, 0, 0, 0, 0, time.UTC).Unix()

View File

@@ -16,6 +16,7 @@ var GeminiVisionMaxImageNum int
var NotifyLimitCount int
var NotificationLimitDurationMinute int
var GenerateDefaultToken bool
var ErrorLogEnabled bool
//var GeminiModelMap = map[string]string{
// "gemini-1.0-pro": "v1",
@@ -36,6 +37,8 @@ func InitEnv() {
NotificationLimitDurationMinute = common.GetEnvOrDefault("NOTIFICATION_LIMIT_DURATION_MINUTE", 10)
// GenerateDefaultToken: whether to generate an initial token for new users, disabled by default.
GenerateDefaultToken = common.GetEnvOrDefaultBool("GENERATE_DEFAULT_TOKEN", false)
// Whether to enable error logging
ErrorLogEnabled = common.GetEnvOrDefaultBool("ERROR_LOG_ENABLED", false)
//modelVersionMapStr := strings.TrimSpace(os.Getenv("GEMINI_MODEL_MAP"))
//if modelVersionMapStr == "" {

View File

@@ -103,7 +103,10 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
}
request := buildTestRequest(testModel)
common.SysLog(fmt.Sprintf("testing channel %d with model %s , info %v ", channel.Id, testModel, info))
// Create a copy of info for logging, with the ApiKey removed
logInfo := *info
logInfo.ApiKey = ""
common.SysLog(fmt.Sprintf("testing channel %d with model %s , info %+v ", channel.Id, testModel, logInfo))
priceData, err := helper.ModelPriceHelper(c, info, 0, int(request.MaxTokens))
if err != nil {
@@ -186,7 +189,7 @@ func buildTestRequest(model string) *dto.GeneralOpenAIRequest {
return testRequest
}
// Not an embedding model
if strings.HasPrefix(model, "o1") || strings.HasPrefix(model, "o3") {
if strings.HasPrefix(model, "o") {
testRequest.MaxCompletionTokens = 10
} else if strings.Contains(model, "thinking") {
if !strings.Contains(model, "claude") {

View File

@@ -196,7 +196,7 @@ func DeleteHistoryLogs(c *gin.Context) {
})
return
}
count, err := model.DeleteOldLog(targetTimestamp)
count, err := model.DeleteOldLog(c.Request.Context(), targetTimestamp, 100)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,

View File

@@ -4,12 +4,11 @@ import (
"bytes"
"errors"
"fmt"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
"io"
"log"
"net/http"
"one-api/common"
constant2 "one-api/constant"
"one-api/dto"
"one-api/middleware"
"one-api/model"
@@ -19,12 +18,15 @@ import (
"one-api/relay/helper"
"one-api/service"
"strings"
"github.com/gin-gonic/gin"
"github.com/gorilla/websocket"
)
func relayHandler(c *gin.Context, relayMode int) *dto.OpenAIErrorWithStatusCode {
var err *dto.OpenAIErrorWithStatusCode
switch relayMode {
case relayconstant.RelayModeImagesGenerations:
case relayconstant.RelayModeImagesGenerations, relayconstant.RelayModeImagesEdits:
err = relay.ImageHelper(c)
case relayconstant.RelayModeAudioSpeech:
fallthrough
@@ -36,9 +38,31 @@ func relayHandler(c *gin.Context, relayMode int) *dto.OpenAIErrorWithStatusCode
err = relay.RerankHelper(c, relayMode)
case relayconstant.RelayModeEmbeddings:
err = relay.EmbeddingHelper(c)
case relayconstant.RelayModeResponses:
err = relay.ResponsesHelper(c)
default:
err = relay.TextHelper(c)
}
if constant2.ErrorLogEnabled && err != nil {
// Save the error log to MySQL
userId := c.GetInt("id")
tokenName := c.GetString("token_name")
modelName := c.GetString("original_model")
tokenId := c.GetInt("token_id")
userGroup := c.GetString("group")
channelId := c.GetInt("channel_id")
other := make(map[string]interface{})
other["error_type"] = err.Error.Type
other["error_code"] = err.Error.Code
other["status_code"] = err.StatusCode
other["channel_id"] = channelId
other["channel_name"] = c.GetString("channel_name")
other["channel_type"] = c.GetInt("channel_type")
model.RecordErrorLog(c, userId, channelId, modelName, tokenName, err.Error.Message, tokenId, 0, false, userGroup, other)
}
return err
}

View File

@@ -592,7 +592,14 @@ func UpdateSelf(c *gin.Context) {
user.Password = "" // rollback to what it should be
cleanUser.Password = ""
}
updatePassword := user.Password != ""
updatePassword, err := checkUpdatePassword(user.OriginalPassword, user.Password, cleanUser.Id)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
})
return
}
if err := cleanUser.Update(updatePassword); err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
@@ -608,6 +615,23 @@ func UpdateSelf(c *gin.Context) {
return
}
func checkUpdatePassword(originalPassword string, newPassword string, userId int) (updatePassword bool, err error) {
var currentUser *model.User
currentUser, err = model.GetUserById(userId, true)
if err != nil {
return
}
if !common.ValidatePasswordAndHash(originalPassword, currentUser.Password) {
err = fmt.Errorf("原密码错误")
return
}
if newPassword == "" {
return
}
updatePassword = true
return
}
func DeleteUser(c *gin.Context) {
id, err := strconv.Atoi(c.Param("id"))
if err != nil {

View File

@@ -15,6 +15,7 @@ services:
- SQL_DSN=root:123456@tcp(mysql:3306)/new-api # Point to the mysql service
- REDIS_CONN_STRING=redis://redis
- TZ=Asia/Shanghai
- ERROR_LOG_ENABLED=true # whether to enable error log recording
# - TIKTOKEN_CACHE_DIR=./tiktoken_cache # uncomment if you need to use tiktoken_cache
# - SESSION_SECRET=random_string # set for multi-node deployment; you must change this random string!
# - NODE_TYPE=slave # Uncomment for slave node in multi-node deployment

View File

@@ -1,3 +1,3 @@
密钥为环境变量SESSION_SECRET
![8285bba413e770fe9620f1bf9b40d44e](https://github.com/user-attachments/assets/7a6fc03e-c457-45e4-b8f9-184508fc26b0)
密钥为环境变量SESSION_SECRET
![8285bba413e770fe9620f1bf9b40d44e](https://github.com/user-attachments/assets/7a6fc03e-c457-45e4-b8f9-184508fc26b0)

View File

@@ -70,8 +70,9 @@ func (c *ClaudeMediaMessage) ParseMediaContent() []ClaudeMediaMessage {
type ClaudeMessageSource struct {
Type string `json:"type"`
MediaType string `json:"media_type"`
Data any `json:"data"`
MediaType string `json:"media_type,omitempty"`
Data any `json:"data,omitempty"`
Url string `json:"url,omitempty"`
}
type ClaudeMessage struct {

View File

@@ -1,14 +1,17 @@
package dto
import "encoding/json"
type ImageRequest struct {
Model string `json:"model"`
Prompt string `json:"prompt" binding:"required"`
N int `json:"n,omitempty"`
Size string `json:"size,omitempty"`
Quality string `json:"quality,omitempty"`
ResponseFormat string `json:"response_format,omitempty"`
Style string `json:"style,omitempty"`
User string `json:"user,omitempty"`
Model string `json:"model"`
Prompt string `json:"prompt" binding:"required"`
N int `json:"n,omitempty"`
Size string `json:"size,omitempty"`
Quality string `json:"quality,omitempty"`
ResponseFormat string `json:"response_format,omitempty"`
Style string `json:"style,omitempty"`
User string `json:"user,omitempty"`
ExtraFields json.RawMessage `json:"extra_fields,omitempty"`
}
type ImageResponse struct {

View File

@@ -18,39 +18,41 @@ type FormatJsonSchema struct {
}
type GeneralOpenAIRequest struct {
Model string `json:"model,omitempty"`
Messages []Message `json:"messages,omitempty"`
Prompt any `json:"prompt,omitempty"`
Prefix any `json:"prefix,omitempty"`
Suffix any `json:"suffix,omitempty"`
Stream bool `json:"stream,omitempty"`
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
MaxTokens uint `json:"max_tokens,omitempty"`
MaxCompletionTokens uint `json:"max_completion_tokens,omitempty"`
ReasoningEffort string `json:"reasoning_effort,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
Stop any `json:"stop,omitempty"`
N int `json:"n,omitempty"`
Input any `json:"input,omitempty"`
Instruction string `json:"instruction,omitempty"`
Size string `json:"size,omitempty"`
Functions any `json:"functions,omitempty"`
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
PresencePenalty float64 `json:"presence_penalty,omitempty"`
ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
EncodingFormat any `json:"encoding_format,omitempty"`
Seed float64 `json:"seed,omitempty"`
Tools []ToolCallRequest `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
User string `json:"user,omitempty"`
LogProbs bool `json:"logprobs,omitempty"`
TopLogProbs int `json:"top_logprobs,omitempty"`
Dimensions int `json:"dimensions,omitempty"`
Modalities any `json:"modalities,omitempty"`
Audio any `json:"audio,omitempty"`
ExtraBody any `json:"extra_body,omitempty"`
Model string `json:"model,omitempty"`
Messages []Message `json:"messages,omitempty"`
Prompt any `json:"prompt,omitempty"`
Prefix any `json:"prefix,omitempty"`
Suffix any `json:"suffix,omitempty"`
Stream bool `json:"stream,omitempty"`
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
MaxTokens uint `json:"max_tokens,omitempty"`
MaxCompletionTokens uint `json:"max_completion_tokens,omitempty"`
ReasoningEffort string `json:"reasoning_effort,omitempty"`
//Reasoning json.RawMessage `json:"reasoning,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
Stop any `json:"stop,omitempty"`
N int `json:"n,omitempty"`
Input any `json:"input,omitempty"`
Instruction string `json:"instruction,omitempty"`
Size string `json:"size,omitempty"`
Functions any `json:"functions,omitempty"`
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
PresencePenalty float64 `json:"presence_penalty,omitempty"`
ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
EncodingFormat any `json:"encoding_format,omitempty"`
Seed float64 `json:"seed,omitempty"`
Tools []ToolCallRequest `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
User string `json:"user,omitempty"`
LogProbs bool `json:"logprobs,omitempty"`
TopLogProbs int `json:"top_logprobs,omitempty"`
Dimensions int `json:"dimensions,omitempty"`
Modalities any `json:"modalities,omitempty"`
Audio any `json:"audio,omitempty"`
EnableThinking any `json:"enable_thinking,omitempty"` // ali
ExtraBody any `json:"extra_body,omitempty"`
}
type ToolCallRequest struct {
@@ -112,6 +114,7 @@ type MediaContent struct {
ImageUrl any `json:"image_url,omitempty"`
InputAudio any `json:"input_audio,omitempty"`
File any `json:"file,omitempty"`
VideoUrl any `json:"video_url,omitempty"`
}
func (m *MediaContent) GetImageMedia() *MessageImageUrl {
@@ -156,11 +159,16 @@ type MessageFile struct {
FileId string `json:"file_id,omitempty"`
}
type MessageVideoUrl struct {
Url string `json:"url"`
}
const (
ContentTypeText = "text"
ContentTypeImageURL = "image_url"
ContentTypeInputAudio = "input_audio"
ContentTypeFile = "file"
ContentTypeVideoUrl = "video_url" // Alibaba Bailian video recognition
)
func (m *Message) GetPrefix() bool {
@@ -344,6 +352,15 @@ func (m *Message) ParseContent() []MediaContent {
}
}
}
case ContentTypeVideoUrl:
if videoUrl, ok := contentItem["video_url"].(string); ok {
contentList = append(contentList, MediaContent{
Type: ContentTypeVideoUrl,
VideoUrl: &MessageVideoUrl{
Url: videoUrl,
},
})
}
}
}
}
@@ -353,3 +370,49 @@ func (m *Message) ParseContent() []MediaContent {
}
return contentList
}
type OpenAIResponsesRequest struct {
Model string `json:"model"`
Input json.RawMessage `json:"input,omitempty"`
Include json.RawMessage `json:"include,omitempty"`
Instructions json.RawMessage `json:"instructions,omitempty"`
MaxOutputTokens uint `json:"max_output_tokens,omitempty"`
Metadata json.RawMessage `json:"metadata,omitempty"`
ParallelToolCalls bool `json:"parallel_tool_calls,omitempty"`
PreviousResponseID string `json:"previous_response_id,omitempty"`
Reasoning *Reasoning `json:"reasoning,omitempty"`
ServiceTier string `json:"service_tier,omitempty"`
Store bool `json:"store,omitempty"`
Stream bool `json:"stream,omitempty"`
Temperature float64 `json:"temperature,omitempty"`
Text json.RawMessage `json:"text,omitempty"`
ToolChoice json.RawMessage `json:"tool_choice,omitempty"`
Tools []ResponsesToolsCall `json:"tools,omitempty"`
TopP float64 `json:"top_p,omitempty"`
Truncation string `json:"truncation,omitempty"`
User string `json:"user,omitempty"`
}
type Reasoning struct {
Effort string `json:"effort,omitempty"`
Summary string `json:"summary,omitempty"`
}
type ResponsesToolsCall struct {
Type string `json:"type"`
// Web Search
UserLocation json.RawMessage `json:"user_location,omitempty"`
SearchContextSize string `json:"search_context_size,omitempty"`
// File Search
VectorStoreIds []string `json:"vector_store_ids,omitempty"`
MaxNumResults uint `json:"max_num_results,omitempty"`
Filters json.RawMessage `json:"filters,omitempty"`
// Computer Use
DisplayWidth uint `json:"display_width,omitempty"`
DisplayHeight uint `json:"display_height,omitempty"`
Environment string `json:"environment,omitempty"`
// Function
Name string `json:"name,omitempty"`
Description string `json:"description,omitempty"`
Parameters json.RawMessage `json:"parameters,omitempty"`
}
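An illustrative construction of the request DTO above with a built-in web search tool; the model name and input are placeholders, and the structs are trimmed copies so the example stays self-contained:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed copies of the DTOs above, kept here so the example compiles on its own.
type ResponsesToolsCall struct {
	Type              string `json:"type"`
	SearchContextSize string `json:"search_context_size,omitempty"`
}

type OpenAIResponsesRequest struct {
	Model  string               `json:"model"`
	Input  json.RawMessage      `json:"input,omitempty"`
	Tools  []ResponsesToolsCall `json:"tools,omitempty"`
	Stream bool                 `json:"stream,omitempty"`
}

func main() {
	req := OpenAIResponsesRequest{
		Model:  "gpt-4o-mini", // placeholder model
		Input:  json.RawMessage(`"What changed in the latest release?"`),
		Tools:  []ResponsesToolsCall{{Type: "web_search_preview", SearchContextSize: "medium"}},
		Stream: true,
	}
	out, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(out))
}
```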

View File

@@ -1,5 +1,7 @@
package dto
import "encoding/json"
type SimpleResponse struct {
Usage `json:"usage"`
Error *OpenAIError `json:"error"`
@@ -166,10 +168,93 @@ type CompletionsStreamResponse struct {
}
type Usage struct {
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
PromptCacheHitTokens int `json:"prompt_cache_hit_tokens,omitempty"`
PromptTokens int `json:"prompt_tokens"`
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
PromptCacheHitTokens int `json:"prompt_cache_hit_tokens,omitempty"`
PromptTokensDetails InputTokenDetails `json:"prompt_tokens_details"`
CompletionTokenDetails OutputTokenDetails `json:"completion_tokens_details"`
InputTokens int `json:"input_tokens"`
OutputTokens int `json:"output_tokens"`
InputTokensDetails *InputTokenDetails `json:"input_tokens_details"`
}
type InputTokenDetails struct {
CachedTokens int `json:"cached_tokens"`
CachedCreationTokens int `json:"-"`
TextTokens int `json:"text_tokens"`
AudioTokens int `json:"audio_tokens"`
ImageTokens int `json:"image_tokens"`
}
type OutputTokenDetails struct {
TextTokens int `json:"text_tokens"`
AudioTokens int `json:"audio_tokens"`
ReasoningTokens int `json:"reasoning_tokens"`
}
type OpenAIResponsesResponse struct {
ID string `json:"id"`
Object string `json:"object"`
CreatedAt int `json:"created_at"`
Status string `json:"status"`
Error *OpenAIError `json:"error,omitempty"`
IncompleteDetails *IncompleteDetails `json:"incomplete_details,omitempty"`
Instructions string `json:"instructions"`
MaxOutputTokens int `json:"max_output_tokens"`
Model string `json:"model"`
Output []ResponsesOutput `json:"output"`
ParallelToolCalls bool `json:"parallel_tool_calls"`
PreviousResponseID string `json:"previous_response_id"`
Reasoning *Reasoning `json:"reasoning"`
Store bool `json:"store"`
Temperature float64 `json:"temperature"`
ToolChoice string `json:"tool_choice"`
Tools []ResponsesToolsCall `json:"tools"`
TopP float64 `json:"top_p"`
Truncation string `json:"truncation"`
Usage *Usage `json:"usage"`
User json.RawMessage `json:"user"`
Metadata json.RawMessage `json:"metadata"`
}
type IncompleteDetails struct {
Reasoning string `json:"reasoning"`
}
type ResponsesOutput struct {
Type string `json:"type"`
ID string `json:"id"`
Status string `json:"status"`
Role string `json:"role"`
Content []ResponsesOutputContent `json:"content"`
}
type ResponsesOutputContent struct {
Type string `json:"type"`
Text string `json:"text"`
Annotations []interface{} `json:"annotations"`
}
const (
BuildInToolWebSearchPreview = "web_search_preview"
BuildInToolFileSearch = "file_search"
)
const (
BuildInCallWebSearchCall = "web_search_call"
)
const (
ResponsesOutputTypeItemAdded = "response.output_item.added"
ResponsesOutputTypeItemDone = "response.output_item.done"
)
// ResponsesStreamResponse handles /v1/responses streaming responses
type ResponsesStreamResponse struct {
Type string `json:"type"`
Response *OpenAIResponsesResponse `json:"response,omitempty"`
Delta string `json:"delta,omitempty"`
Item *ResponsesOutput `json:"item,omitempty"`
}
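A rough sketch of how a stream handler might dispatch on these event types, in the spirit of `OaiResponsesStreamHandler`; the `response.output_text.delta` event name follows the OpenAI Responses streaming format, and the accounting shown is an assumption based on the constants above:

```go
package main

import "fmt"

// Trimmed stand-ins for the DTOs above.
type ResponsesOutput struct {
	Type string `json:"type"`
}

type ResponsesStreamResponse struct {
	Type  string           `json:"type"`
	Delta string           `json:"delta,omitempty"`
	Item  *ResponsesOutput `json:"item,omitempty"`
}

// handleEvent accumulates output text from delta events and counts built-in
// web search calls as they complete.
func handleEvent(ev ResponsesStreamResponse, text *string, webSearchCalls *int) {
	switch ev.Type {
	case "response.output_text.delta":
		*text += ev.Delta
	case "response.output_item.done":
		if ev.Item != nil && ev.Item.Type == "web_search_call" {
			*webSearchCalls++
		}
	}
}

func main() {
	events := []ResponsesStreamResponse{
		{Type: "response.output_item.done", Item: &ResponsesOutput{Type: "web_search_call"}},
		{Type: "response.output_text.delta", Delta: "Hello"},
		{Type: "response.output_text.delta", Delta: ", world"},
	}
	var text string
	var calls int
	for _, ev := range events {
		handleEvent(ev, &text, &calls)
	}
	fmt.Printf("text=%q webSearchCalls=%d\n", text, calls)
}
```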

View File

@@ -43,20 +43,6 @@ type RealtimeUsage struct {
OutputTokenDetails OutputTokenDetails `json:"output_token_details"`
}
type InputTokenDetails struct {
CachedTokens int `json:"cached_tokens"`
CachedCreationTokens int `json:"-"`
TextTokens int `json:"text_tokens"`
AudioTokens int `json:"audio_tokens"`
ImageTokens int `json:"image_tokens"`
}
type OutputTokenDetails struct {
TextTokens int `json:"text_tokens"`
AudioTokens int `json:"audio_tokens"`
ReasoningTokens int `json:"reasoning_tokens"`
}
type RealtimeSession struct {
Modalities []string `json:"modalities"`
Instructions string `json:"instructions"`

View File

@@ -74,12 +74,14 @@ func main() {
}
// Initialize model settings
operation_setting.InitModelSettings()
operation_setting.InitRatioSettings()
// Initialize constants
constant.InitEnv()
// Initialize options
model.InitOptionMap()
service.InitTokenEncoders()
if common.RedisEnabled {
// for compatibility with old versions
common.MemoryCacheEnabled = true
@@ -133,8 +135,6 @@ func main() {
common.SysLog("pprof enabled")
}
service.InitTokenEncoders()
// Initialize HTTP server
server := gin.New()
server.Use(gin.CustomRecovery(func(c *gin.Context, err any) {

View File

@@ -162,7 +162,7 @@ func getModelRequest(c *gin.Context) (*ModelRequest, bool, error) {
}
c.Set("platform", string(constant.TaskPlatformSuno))
c.Set("relay_mode", relayMode)
} else if !strings.HasPrefix(c.Request.URL.Path, "/v1/audio/transcriptions") {
} else if !strings.HasPrefix(c.Request.URL.Path, "/v1/audio/transcriptions") && !strings.HasPrefix(c.Request.URL.Path, "/v1/images/edits") {
err = common.UnmarshalBodyReusable(c, &modelRequest)
}
if err != nil {
@@ -184,6 +184,8 @@ func getModelRequest(c *gin.Context) (*ModelRequest, bool, error) {
}
if strings.HasPrefix(c.Request.URL.Path, "/v1/images/generations") {
modelRequest.Model = common.GetStringIfEmpty(modelRequest.Model, "dall-e")
} else if strings.HasPrefix(c.Request.URL.Path, "/v1/images/edits") {
modelRequest.Model = common.GetStringIfEmpty(modelRequest.Model, "gpt-image-1")
}
if strings.HasPrefix(c.Request.URL.Path, "/v1/audio") {
relayMode := relayconstant.RelayModeAudioSpeech
@@ -211,6 +213,7 @@ func SetupContextForSelectedChannel(c *gin.Context, channel *model.Channel, mode
c.Set("channel_id", channel.Id)
c.Set("channel_name", channel.Name)
c.Set("channel_type", channel.Type)
c.Set("channel_create_time", channel.CreatedTime)
c.Set("channel_setting", channel.GetSetting())
c.Set("param_override", channel.GetParamOverride())
if nil != channel.OpenAIOrganization && "" != *channel.OpenAIOrganization {

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"one-api/common"
"one-api/common/limiter"
"one-api/setting"
"strconv"
"time"
@@ -78,21 +79,9 @@ func redisRateLimitHandler(duration int64, totalMaxCount, successMaxCount int) g
ctx := context.Background()
rdb := common.RDB
// 1. Check the total request limit (skipped automatically when totalMaxCount is 0)
totalKey := fmt.Sprintf("rateLimit:%s:%s", ModelRequestRateLimitCountMark, userId)
allowed, err := checkRedisRateLimit(ctx, rdb, totalKey, totalMaxCount, duration)
if err != nil {
fmt.Println("检查总请求数限制失败:", err.Error())
abortWithOpenAiMessage(c, http.StatusInternalServerError, "rate_limit_check_failed")
return
}
if !allowed {
abortWithOpenAiMessage(c, http.StatusTooManyRequests, fmt.Sprintf("您已达到总请求数限制:%d分钟内最多请求%d次包括失败次数请检查您的请求是否正确", setting.ModelRequestRateLimitDurationMinutes, totalMaxCount))
}
// 2. Check the successful request limit
// 1. Check the successful request limit
successKey := fmt.Sprintf("rateLimit:%s:%s", ModelRequestRateLimitSuccessCountMark, userId)
allowed, err = checkRedisRateLimit(ctx, rdb, successKey, successMaxCount, duration)
allowed, err := checkRedisRateLimit(ctx, rdb, successKey, successMaxCount, duration)
if err != nil {
fmt.Println("检查成功请求数限制失败:", err.Error())
abortWithOpenAiMessage(c, http.StatusInternalServerError, "rate_limit_check_failed")
@@ -103,8 +92,29 @@ func redisRateLimitHandler(duration int64, totalMaxCount, successMaxCount int) g
return
}
// 3. Record the total request (skipped automatically when totalMaxCount is 0)
recordRedisRequest(ctx, rdb, totalKey, totalMaxCount)
// 2. Check the total request limit and record the request using the token bucket limiter (skipped automatically when totalMaxCount is 0)
if totalMaxCount > 0 {
totalKey := fmt.Sprintf("rateLimit:%s", userId)
// Initialize the limiter
tb := limiter.New(ctx, rdb)
allowed, err = tb.Allow(
ctx,
totalKey,
limiter.WithCapacity(int64(totalMaxCount)*duration),
limiter.WithRate(int64(totalMaxCount)),
limiter.WithRequested(duration),
)
if err != nil {
fmt.Println("检查总请求数限制失败:", err.Error())
abortWithOpenAiMessage(c, http.StatusInternalServerError, "rate_limit_check_failed")
return
}
if !allowed {
abortWithOpenAiMessage(c, http.StatusTooManyRequests, fmt.Sprintf("您已达到总请求数限制:%d分钟内最多请求%d次包括失败次数请检查您的请求是否正确", setting.ModelRequestRateLimitDurationMinutes, totalMaxCount))
}
}
// 4. Handle the request
c.Next()

View File

@@ -84,9 +84,11 @@ func CacheGetRandomSatisfiedChannel(group string, model string, retry int) (*Cha
if !common.MemoryCacheEnabled {
return GetRandomSatisfiedChannel(group, model, retry)
}
channelSyncLock.RLock()
defer channelSyncLock.RUnlock()
channels := group2model2channels[group][model]
channelSyncLock.RUnlock()
if len(channels) == 0 {
return nil, errors.New("channel not found")
}

View File

@@ -119,10 +119,15 @@ func SearchChannels(keyword string, group string, model string, idSort bool) ([]
// Use double quotes for PostgreSQL
if common.UsingPostgreSQL {
keyCol = `"key"`
modelsCol = `"models"`
}
baseURLCol := "`base_url`"
// Use double quotes for PostgreSQL
if common.UsingPostgreSQL {
baseURLCol = `"base_url"`
}
order := "priority desc"
if idSort {
order = "id desc"
@@ -142,11 +147,11 @@ func SearchChannels(keyword string, group string, model string, idSort bool) ([]
// sqlite, PostgreSQL
groupCondition = `(',' || ` + groupCol + ` || ',') LIKE ?`
}
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%", "%,"+group+",%")
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%", "%,"+group+",%")
} else {
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + " LIKE ?"
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%")
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + " LIKE ?"
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%")
}
// Execute the query
@@ -450,6 +455,12 @@ func SearchTags(keyword string, group string, model string, idSort bool) ([]*str
modelsCol = `"models"`
}
baseURLCol := "`base_url`"
// Use double quotes for PostgreSQL
if common.UsingPostgreSQL {
baseURLCol = `"base_url"`
}
order := "priority desc"
if idSort {
order = "id desc"
@@ -469,11 +480,11 @@ func SearchTags(keyword string, group string, model string, idSort bool) ([]*str
// sqlite, PostgreSQL
groupCondition = `(',' || ` + groupCol + ` || ',') LIKE ?`
}
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%", "%,"+group+",%")
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + ` LIKE ? AND ` + groupCondition
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%", "%,"+group+",%")
} else {
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ?) AND " + modelsCol + " LIKE ?"
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+model+"%")
whereClause = "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + " LIKE ?"
args = append(args, common.String2Int(keyword), "%"+keyword+"%", keyword, "%"+keyword+"%", "%"+model+"%")
}
subQuery := baseQuery.Where(whereClause, args...).
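After this change the keyword also matches base_url. A sketch of the clause produced by the non-group branch above; column names are assumed to be pre-quoted per database dialect, and the literal 0 stands in for common.String2Int(keyword).

package main

import "fmt"

// buildSearchWhere mirrors the non-group branch above: the keyword matches id, name,
// key and base_url, while model filters the models column.
func buildSearchWhere(keyCol, modelsCol, baseURLCol, keyword, model string) (string, []interface{}) {
	where := "(id = ? OR name LIKE ? OR " + keyCol + " = ? OR " + baseURLCol + " LIKE ?) AND " + modelsCol + " LIKE ?"
	args := []interface{}{0 /* String2Int(keyword) */, "%" + keyword + "%", keyword, "%" + keyword + "%", "%" + model + "%"}
	return where, args
}

func main() {
	where, args := buildSearchWhere("`key`", "`models`", "`base_url`", "example.com", "gpt-4o")
	fmt.Println(where)
	fmt.Println(args)
}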

View File

@@ -1,6 +1,7 @@
package model
import (
"context"
"fmt"
"one-api/common"
"os"
@@ -40,6 +41,7 @@ const (
LogTypeConsume
LogTypeManage
LogTypeSystem
LogTypeError
)
func formatUserLogs(logs []*Log) {
@@ -88,6 +90,35 @@ func RecordLog(userId int, logType int, content string) {
}
}
func RecordErrorLog(c *gin.Context, userId int, channelId int, modelName string, tokenName string, content string, tokenId int, useTimeSeconds int,
isStream bool, group string, other map[string]interface{}) {
common.LogInfo(c, fmt.Sprintf("record error log: userId=%d, channelId=%d, modelName=%s, tokenName=%s, content=%s", userId, channelId, modelName, tokenName, content))
username := c.GetString("username")
otherStr := common.MapToJsonStr(other)
log := &Log{
UserId: userId,
Username: username,
CreatedAt: common.GetTimestamp(),
Type: LogTypeError,
Content: content,
PromptTokens: 0,
CompletionTokens: 0,
TokenName: tokenName,
ModelName: modelName,
Quota: 0,
ChannelId: channelId,
TokenId: tokenId,
UseTime: useTimeSeconds,
IsStream: isStream,
Group: group,
Other: otherStr,
}
err := LOG_DB.Create(log).Error
if err != nil {
common.LogError(c, "failed to record log: "+err.Error())
}
}
func RecordConsumeLog(c *gin.Context, userId int, channelId int, promptTokens int, completionTokens int,
modelName string, tokenName string, quota int, content string, tokenId int, userQuota int, useTimeSeconds int,
isStream bool, group string, other map[string]interface{}) {
@@ -310,7 +341,25 @@ func SumUsedToken(logType int, startTimestamp int64, endTimestamp int64, modelNa
return token
}
func DeleteOldLog(targetTimestamp int64) (int64, error) {
result := LOG_DB.Where("created_at < ?", targetTimestamp).Delete(&Log{})
return result.RowsAffected, result.Error
func DeleteOldLog(ctx context.Context, targetTimestamp int64, limit int) (int64, error) {
var total int64 = 0
for {
if nil != ctx.Err() {
return total, ctx.Err()
}
result := LOG_DB.Where("created_at < ?", targetTimestamp).Limit(limit).Delete(&Log{})
if nil != result.Error {
return total, result.Error
}
total += result.RowsAffected
if result.RowsAffected < int64(limit) {
break
}
}
return total, nil
}
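The rewritten DeleteOldLog deletes in batches of limit rows and stops early when the context is cancelled. A standalone sketch of the same loop; deleteBatch is a hypothetical stand-in for the GORM delete, and the timeout and batch size are illustrative.

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// deleteBatch is a hypothetical stand-in for LOG_DB.Where(...).Limit(limit).Delete(...);
// it reports how many rows it removed.
func deleteBatch(limit int) (int64, error) { return 0, nil }

// deleteOldLogs mirrors the batched loop above: delete `limit` rows at a time until a
// batch comes back short, or the context is cancelled.
func deleteOldLogs(ctx context.Context, limit int) (int64, error) {
	var total int64
	for {
		if err := ctx.Err(); err != nil {
			return total, err
		}
		n, err := deleteBatch(limit)
		if err != nil {
			return total, err
		}
		total += n
		if n < int64(limit) {
			return total, nil
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	total, err := deleteOldLogs(ctx, 1000)
	if err != nil && !errors.Is(err, context.DeadlineExceeded) {
		fmt.Println("cleanup failed:", err)
		return
	}
	fmt.Println("deleted", total, "rows")
}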

View File

@@ -18,6 +18,7 @@ type User struct {
Id int `json:"id"`
Username string `json:"username" gorm:"unique;index" validate:"max=12"`
Password string `json:"password" gorm:"not null;" validate:"min=8,max=20"`
OriginalPassword string `json:"original_password" gorm:"-:all"` // this field is only for Password change verification, don't save it to database!
DisplayName string `json:"display_name" gorm:"index" validate:"max=20"`
Role int `json:"role" gorm:"type:int;default:1"` // admin, common
Status int `json:"status" gorm:"type:int;default:1"` // enabled, disabled
@@ -108,7 +109,7 @@ func CheckUserExistOrDeleted(username string, email string) (bool, error) {
func GetMaxUserId() int {
var user User
DB.Last(&user)
DB.Unscoped().Last(&user)
return user.Id
}

View File

@@ -1,11 +1,12 @@
package channel
import (
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
relaycommon "one-api/relay/common"
"github.com/gin-gonic/gin"
)
type Adaptor interface {
@@ -18,6 +19,7 @@ type Adaptor interface {
ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error)
ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error)
ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error)
ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error)
DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error)
DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode)
GetModelList() []string

View File

@@ -3,7 +3,6 @@ package ali
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -11,6 +10,8 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -79,6 +80,11 @@ func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInf
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -1,7 +1,12 @@
package ali
var ModelList = []string{
"qwen-turbo", "qwen-plus", "qwen-max", "qwen-max-longcontext",
"qwen-turbo",
"qwen-plus",
"qwen-max",
"qwen-max-longcontext",
"qwq-32b",
"qwen3-235b-a22b",
"text-embedding-v1",
}

View File

@@ -2,13 +2,14 @@ package aws
import (
"errors"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel/claude"
relaycommon "one-api/relay/common"
"one-api/setting/model_setting"
"github.com/gin-gonic/gin"
)
const (
@@ -74,6 +75,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return nil, nil
}

View File

@@ -3,7 +3,6 @@ package baidu
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -11,6 +10,8 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"strings"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -130,6 +131,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return baiduEmbeddingRequest, nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,13 +3,14 @@ package baidu_v2
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -60,6 +61,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,7 +3,6 @@ package claude
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -11,6 +10,8 @@ import (
relaycommon "one-api/relay/common"
"one-api/setting/model_setting"
"strings"
"github.com/gin-gonic/gin"
)
const (
@@ -84,6 +85,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -300,6 +300,13 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *dto.ClaudeResponse
response.Model = claudeResponse.Model
response.Choices = make([]dto.ChatCompletionsStreamResponseChoice, 0)
tools := make([]dto.ToolCallResponse, 0)
fcIdx := 0
if claudeResponse.Index != nil {
fcIdx = *claudeResponse.Index - 1
if fcIdx < 0 {
fcIdx = 0
}
}
var choice dto.ChatCompletionsStreamResponseChoice
if reqMode == RequestModeCompletion {
choice.Delta.SetContentString(claudeResponse.Completion)
@@ -319,7 +326,7 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *dto.ClaudeResponse
//choice.Delta.SetContentString(claudeResponse.ContentBlock.Text)
if claudeResponse.ContentBlock.Type == "tool_use" {
tools = append(tools, dto.ToolCallResponse{
Index: common.GetPointer(0),
Index: common.GetPointer(fcIdx),
ID: claudeResponse.ContentBlock.Id,
Type: "function",
Function: dto.FunctionResponse{
@@ -338,7 +345,7 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *dto.ClaudeResponse
case "input_json_delta":
tools = append(tools, dto.ToolCallResponse{
Type: "function",
Index: common.GetPointer(0),
Index: common.GetPointer(fcIdx),
Function: dto.FunctionResponse{
Arguments: *claudeResponse.Delta.PartialJson,
},
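The fcIdx logic above replaces the hard-coded tool-call index 0 with one derived from Claude's content block index, clamped at zero. A tiny standalone sketch of that mapping:

package main

import "fmt"

// toolCallIndex mirrors fcIdx above: Claude block index minus one, never below zero.
func toolCallIndex(claudeIndex *int) int {
	idx := 0
	if claudeIndex != nil {
		idx = *claudeIndex - 1
		if idx < 0 {
			idx = 0
		}
	}
	return idx
}

func main() {
	one, two := 1, 2
	fmt.Println(toolCallIndex(nil), toolCallIndex(&one), toolCallIndex(&two)) // 0 0 1
}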

View File

@@ -55,6 +55,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
}
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,13 +3,14 @@ package cohere
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -52,6 +53,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
return requestOpenAI2Cohere(*request), nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,7 +3,6 @@ package deepseek
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -11,6 +10,9 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"strings"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -36,9 +38,13 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
}
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
fimBaseUrl := info.BaseUrl
if !strings.HasSuffix(info.BaseUrl, "/beta") {
fimBaseUrl += "/beta"
}
switch info.RelayMode {
case constant.RelayModeCompletions:
return fmt.Sprintf("%s/beta/completions", info.BaseUrl), nil
return fmt.Sprintf("%s/completions", fimBaseUrl), nil
default:
return fmt.Sprintf("%s/v1/chat/completions", info.BaseUrl), nil
}
@@ -66,6 +72,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}
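The URL fix above avoids producing a doubled /beta/beta path when the channel's base URL already ends in /beta. A quick sketch of the resulting FIM completion URLs (hosts illustrative):

package main

import (
	"fmt"
	"strings"
)

// fimCompletionsURL mirrors the logic above: append /beta only when it is not
// already present, then add the completions path.
func fimCompletionsURL(baseURL string) string {
	if !strings.HasSuffix(baseURL, "/beta") {
		baseURL += "/beta"
	}
	return fmt.Sprintf("%s/completions", baseURL)
}

func main() {
	fmt.Println(fimCompletionsURL("https://api.deepseek.com"))      // https://api.deepseek.com/beta/completions
	fmt.Println(fimCompletionsURL("https://api.deepseek.com/beta")) // https://api.deepseek.com/beta/completions
}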

View File

@@ -3,12 +3,13 @@ package dify
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"github.com/gin-gonic/gin"
)
const (
@@ -86,6 +87,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -12,7 +12,6 @@ import (
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/setting/model_setting"
"strings"
"github.com/gin-gonic/gin"
@@ -70,6 +69,16 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
}
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
// suffix -thinking and -nothinking
if strings.HasSuffix(info.OriginModelName, "-thinking") {
info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-thinking")
} else if strings.HasSuffix(info.OriginModelName, "-nothinking") {
info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-nothinking")
}
}
version := model_setting.GetGeminiVersionSetting(info.UpstreamModelName)
if strings.HasPrefix(info.UpstreamModelName, "imagen") {
@@ -99,11 +108,13 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
if request == nil {
return nil, errors.New("request is nil")
}
ai, err := CovertGemini2OpenAI(*request)
geminiRequest, err := CovertGemini2OpenAI(*request, info)
if err != nil {
return nil, err
}
return ai, nil
return geminiRequest, nil
}
func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dto.RerankRequest) (any, error) {
@@ -144,6 +155,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return geminiRequest, nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}
@@ -165,6 +181,18 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
} else {
err, usage = GeminiChatHandler(c, resp, info)
}
//if usage.(*dto.Usage).CompletionTokenDetails.ReasoningTokens > 100 {
// // If thinking tokens are produced without -thinking being requested, bill as the thinking model
// if !strings.HasSuffix(info.OriginModelName, "-thinking") &&
// !strings.HasSuffix(info.OriginModelName, "-nothinking") {
// thinkingModelName := info.OriginModelName + "-thinking"
// if operation_setting.SelfUseModeEnabled || helper.ContainPriceOrRatio(thinkingModelName) {
// info.OriginModelName = thinkingModelName
// }
// }
//}
return
}

View File

@@ -8,6 +8,15 @@ type GeminiChatRequest struct {
SystemInstructions *GeminiChatContent `json:"system_instruction,omitempty"`
}
type GeminiThinkingConfig struct {
IncludeThoughts bool `json:"includeThoughts,omitempty"`
ThinkingBudget *int `json:"thinkingBudget,omitempty"`
}
func (c *GeminiThinkingConfig) SetThinkingBudget(budget int) {
c.ThinkingBudget = &budget
}
type GeminiInlineData struct {
MimeType string `json:"mimeType"`
Data string `json:"data"`
@@ -71,15 +80,17 @@ type GeminiChatTool struct {
}
type GeminiChatGenerationConfig struct {
Temperature *float64 `json:"temperature,omitempty"`
TopP float64 `json:"topP,omitempty"`
TopK float64 `json:"topK,omitempty"`
MaxOutputTokens uint `json:"maxOutputTokens,omitempty"`
CandidateCount int `json:"candidateCount,omitempty"`
StopSequences []string `json:"stopSequences,omitempty"`
ResponseMimeType string `json:"responseMimeType,omitempty"`
ResponseSchema any `json:"responseSchema,omitempty"`
Seed int64 `json:"seed,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP float64 `json:"topP,omitempty"`
TopK float64 `json:"topK,omitempty"`
MaxOutputTokens uint `json:"maxOutputTokens,omitempty"`
CandidateCount int `json:"candidateCount,omitempty"`
StopSequences []string `json:"stopSequences,omitempty"`
ResponseMimeType string `json:"responseMimeType,omitempty"`
ResponseSchema any `json:"responseSchema,omitempty"`
Seed int64 `json:"seed,omitempty"`
ResponseModalities []string `json:"responseModalities,omitempty"`
ThinkingConfig *GeminiThinkingConfig `json:"thinkingConfig,omitempty"`
}
type GeminiChatCandidate struct {
@@ -108,6 +119,7 @@ type GeminiUsageMetadata struct {
PromptTokenCount int `json:"promptTokenCount"`
CandidatesTokenCount int `json:"candidatesTokenCount"`
TotalTokenCount int `json:"totalTokenCount"`
ThoughtsTokenCount int `json:"thoughtsTokenCount"`
}
// Imagen related structs

View File

@@ -19,11 +19,10 @@ import (
)
// Setting safety to the lowest possible values since Gemini is already powerless enough
func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatRequest, error) {
func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest, info *relaycommon.RelayInfo) (*GeminiChatRequest, error) {
geminiRequest := GeminiChatRequest{
Contents: make([]GeminiChatContent, 0, len(textRequest.Messages)),
//SafetySettings: []GeminiChatSafetySettings{},
GenerationConfig: GeminiChatGenerationConfig{
Temperature: textRequest.Temperature,
TopP: textRequest.TopP,
@@ -32,6 +31,30 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
},
}
if model_setting.IsGeminiModelSupportImagine(info.UpstreamModelName) {
geminiRequest.GenerationConfig.ResponseModalities = []string{
"TEXT",
"IMAGE",
}
}
if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
if strings.HasSuffix(info.OriginModelName, "-thinking") {
budgetTokens := model_setting.GetGeminiSettings().ThinkingAdapterBudgetTokensPercentage * float64(geminiRequest.GenerationConfig.MaxOutputTokens)
if budgetTokens == 0 || budgetTokens > 24576 {
budgetTokens = 24576
}
geminiRequest.GenerationConfig.ThinkingConfig = &GeminiThinkingConfig{
ThinkingBudget: common.GetPointer(int(budgetTokens)),
IncludeThoughts: true,
}
} else if strings.HasSuffix(info.OriginModelName, "-nothinking") {
geminiRequest.GenerationConfig.ThinkingConfig = &GeminiThinkingConfig{
ThinkingBudget: common.GetPointer(0),
}
}
}
safetySettings := make([]GeminiChatSafetySettings, 0, len(SafetySettingList))
for _, category := range SafetySettingList {
safetySettings = append(safetySettings, GeminiChatSafetySettings{
@@ -279,6 +302,13 @@ func cleanFunctionParameters(params interface{}) interface{} {
cleanedMap[k] = v
}
// Remove unsupported root-level fields
delete(cleanedMap, "default")
delete(cleanedMap, "exclusiveMaximum")
delete(cleanedMap, "exclusiveMinimum")
delete(cleanedMap, "$schema")
delete(cleanedMap, "additionalProperties")
// Clean properties
if props, ok := cleanedMap["properties"].(map[string]interface{}); ok && props != nil {
cleanedProps := make(map[string]interface{})
@@ -299,6 +329,8 @@ func cleanFunctionParameters(params interface{}) interface{} {
delete(cleanedPropMap, "default")
delete(cleanedPropMap, "exclusiveMaximum")
delete(cleanedPropMap, "exclusiveMinimum")
delete(cleanedPropMap, "$schema")
delete(cleanedPropMap, "additionalProperties")
// Check and clean 'format' for string types
if propType, typeExists := cleanedPropMap["type"].(string); typeExists && propType == "string" {
@@ -359,6 +391,7 @@ func removeAdditionalPropertiesWithDepth(schema interface{}, depth int) interfac
}
// Remove all title fields
delete(v, "title")
delete(v, "$schema")
// If type is neither object nor array, return as-is
if typeVal, exists := v["type"]; !exists || (typeVal != "object" && typeVal != "array") {
return schema
@@ -546,9 +579,10 @@ func responseGeminiChat2OpenAI(response *GeminiChatResponse) *dto.OpenAITextResp
return &fullTextResponse
}
func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.ChatCompletionsStreamResponse, bool) {
func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.ChatCompletionsStreamResponse, bool, bool) {
choices := make([]dto.ChatCompletionsStreamResponseChoice, 0, len(geminiResponse.Candidates))
isStop := false
hasImage := false
for _, candidate := range geminiResponse.Candidates {
if candidate.FinishReason != nil && *candidate.FinishReason == "STOP" {
isStop = true
@@ -574,7 +608,13 @@ func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.C
}
}
for _, part := range candidate.Content.Parts {
if part.FunctionCall != nil {
if part.InlineData != nil {
if strings.HasPrefix(part.InlineData.MimeType, "image") {
imgText := "![image](data:" + part.InlineData.MimeType + ";base64," + part.InlineData.Data + ")"
texts = append(texts, imgText)
hasImage = true
}
} else if part.FunctionCall != nil {
isTools = true
if call := getResponseToolCall(&part); call != nil {
call.SetIndex(len(choice.Delta.ToolCalls))
@@ -602,7 +642,7 @@ func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.C
var response dto.ChatCompletionsStreamResponse
response.Object = "chat.completion.chunk"
response.Choices = choices
return &response, isStop
return &response, isStop, hasImage
}
func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
@@ -610,23 +650,28 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
id := fmt.Sprintf("chatcmpl-%s", common.GetUUID())
createAt := common.GetTimestamp()
var usage = &dto.Usage{}
var imageCount int
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
var geminiResponse GeminiChatResponse
err := json.Unmarshal([]byte(data), &geminiResponse)
err := common.DecodeJsonStr(data, &geminiResponse)
if err != nil {
common.LogError(c, "error unmarshalling stream response: "+err.Error())
return false
}
response, isStop := streamResponseGeminiChat2OpenAI(&geminiResponse)
response, isStop, hasImage := streamResponseGeminiChat2OpenAI(&geminiResponse)
if hasImage {
imageCount++
}
response.Id = id
response.Created = createAt
response.Model = info.UpstreamModelName
// responseText += response.Choices[0].Delta.GetContentString()
if geminiResponse.UsageMetadata.TotalTokenCount != 0 {
usage.PromptTokens = geminiResponse.UsageMetadata.PromptTokenCount
usage.CompletionTokens = geminiResponse.UsageMetadata.CandidatesTokenCount
usage.CompletionTokenDetails.ReasoningTokens = geminiResponse.UsageMetadata.ThoughtsTokenCount
usage.TotalTokens = geminiResponse.UsageMetadata.TotalTokenCount
}
err = helper.ObjectData(c, response)
if err != nil {
@@ -641,9 +686,14 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
var response *dto.ChatCompletionsStreamResponse
usage.TotalTokens = usage.PromptTokens + usage.CompletionTokens
if imageCount != 0 {
if usage.CompletionTokens == 0 {
usage.CompletionTokens = imageCount * 258
}
}
usage.PromptTokensDetails.TextTokens = usage.PromptTokens
usage.CompletionTokenDetails.TextTokens = usage.CompletionTokens
usage.CompletionTokens = usage.TotalTokens - usage.PromptTokens
if info.ShouldIncludeUsage {
response = helper.GenerateFinalUsageResponse(id, createAt, info.UpstreamModelName, *usage)
@@ -689,6 +739,10 @@ func GeminiChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
CompletionTokens: geminiResponse.UsageMetadata.CandidatesTokenCount,
TotalTokens: geminiResponse.UsageMetadata.TotalTokenCount,
}
usage.CompletionTokenDetails.ReasoningTokens = geminiResponse.UsageMetadata.ThoughtsTokenCount
usage.CompletionTokens = usage.TotalTokens - usage.PromptTokens
fullTextResponse.Usage = usage
jsonResponse, err := json.Marshal(fullTextResponse)
if err != nil {
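Summarising the -thinking budget logic earlier in this file: the budget is a configured percentage of maxOutputTokens, falling back to the 24576 cap when the product is zero or exceeds the cap. A standalone sketch of that calculation, with illustrative percentage values:

package main

import "fmt"

// thinkingBudget mirrors the clamping above: percentage * maxOutputTokens,
// falling back to 24576 when the result is 0 or exceeds the cap.
func thinkingBudget(percentage float64, maxOutputTokens uint) int {
	budget := percentage * float64(maxOutputTokens)
	if budget == 0 || budget > 24576 {
		budget = 24576
	}
	return int(budget)
}

func main() {
	fmt.Println(thinkingBudget(0.5, 8192))  // 4096
	fmt.Println(thinkingBudget(0.5, 0))     // 24576 (no max output tokens set)
	fmt.Println(thinkingBudget(0.8, 65536)) // 24576 (capped)
}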

View File

@@ -3,7 +3,6 @@ package jina
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -12,6 +11,8 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/common_handler"
"one-api/relay/constant"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -55,6 +56,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
return request, nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -2,13 +2,14 @@ package mistral
import (
"errors"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -59,6 +60,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,7 +3,6 @@ package mokaai
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -11,6 +10,8 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"strings"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -74,6 +75,11 @@ func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dt
return nil, nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -2,7 +2,6 @@ package ollama
import (
"errors"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -10,6 +9,8 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -64,6 +65,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return requestOpenAI2Embeddings(request), nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -8,6 +8,7 @@ import (
"io"
"mime/multipart"
"net/http"
"net/textproto"
"one-api/common"
constant2 "one-api/constant"
"one-api/dto"
@@ -22,6 +23,7 @@ import (
"one-api/relay/common_handler"
"one-api/relay/constant"
"one-api/service"
"path/filepath"
"strings"
"github.com/gin-gonic/gin"
@@ -65,6 +67,9 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
if info.RelayFormat == relaycommon.RelayFormatClaude {
return fmt.Sprintf("%s/v1/chat/completions", info.BaseUrl), nil
}
if info.RelayMode == constant.RelayModeResponses {
return fmt.Sprintf("%s/v1/responses", info.BaseUrl), nil
}
if info.RelayMode == constant.RelayModeRealtime {
if strings.HasPrefix(info.BaseUrl, "https://") {
baseUrl := strings.TrimPrefix(info.BaseUrl, "https://")
@@ -87,7 +92,10 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
requestURL = fmt.Sprintf("%s?api-version=%s", requestURL, apiVersion)
task := strings.TrimPrefix(requestURL, "/v1/")
model_ := info.UpstreamModelName
model_ = strings.Replace(model_, ".", "", -1)
// Do not remove dots for channels created after May 10, 2025.
if info.ChannelCreateTime < constant2.AzureNoRemoveDotTime {
model_ = strings.Replace(model_, ".", "", -1)
}
// https://github.com/songquanpeng/one-api/issues/67
requestURL = fmt.Sprintf("/openai/deployments/%s/%s", model_, task)
if info.RelayMode == constant.RelayModeRealtime {
@@ -147,14 +155,12 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
if info.ChannelType != common.ChannelTypeOpenAI && info.ChannelType != common.ChannelTypeAzure {
request.StreamOptions = nil
}
if strings.HasPrefix(request.Model, "o1") || strings.HasPrefix(request.Model, "o3") {
if strings.HasPrefix(request.Model, "o") {
if request.MaxCompletionTokens == 0 && request.MaxTokens != 0 {
request.MaxCompletionTokens = request.MaxTokens
request.MaxTokens = 0
}
if strings.HasPrefix(request.Model, "o3") || strings.HasPrefix(request.Model, "o1") {
request.Temperature = nil
}
request.Temperature = nil
if strings.HasSuffix(request.Model, "-high") {
request.ReasoningEffort = "high"
request.Model = strings.TrimSuffix(request.Model, "-high")
@@ -167,11 +173,13 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
}
info.ReasoningEffort = request.ReasoningEffort
info.UpstreamModelName = request.Model
}
if request.Model == "o1" || request.Model == "o1-2024-12-17" || strings.HasPrefix(request.Model, "o3") {
// Change the first message's role from system to developer
if len(request.Messages) > 0 && request.Messages[0].Role == "system" {
request.Messages[0].Role = "developer"
// Developer-role adaptation for o-series models, o1-mini excluded
if !strings.HasPrefix(request.Model, "o1-mini") && !strings.HasPrefix(request.Model, "o1-preview") {
// Change the first message's role from system to developer
if len(request.Messages) > 0 && request.Messages[0].Role == "system" {
request.Messages[0].Role = "developer"
}
}
}
@@ -236,11 +244,167 @@ func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInf
}
func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
switch info.RelayMode {
case constant.RelayModeImagesEdits:
var requestBody bytes.Buffer
writer := multipart.NewWriter(&requestBody)
writer.WriteField("model", request.Model)
// Get all form fields
formData := c.Request.PostForm
// Iterate over the form fields and copy them into the new request body
for key, values := range formData {
if key == "model" {
continue
}
for _, value := range values {
writer.WriteField(key, value)
}
}
// Parse the multipart form to handle both single image and multiple images
if err := c.Request.ParseMultipartForm(32 << 20); err != nil { // 32MB max memory
return nil, errors.New("failed to parse multipart form")
}
if c.Request.MultipartForm != nil && c.Request.MultipartForm.File != nil {
// Check if "image" field exists in any form, including array notation
var imageFiles []*multipart.FileHeader
var exists bool
// First check for standard "image" field
if imageFiles, exists = c.Request.MultipartForm.File["image"]; !exists || len(imageFiles) == 0 {
// If not found, check for "image[]" field
if imageFiles, exists = c.Request.MultipartForm.File["image[]"]; !exists || len(imageFiles) == 0 {
// If still not found, iterate through all fields to find any that start with "image["
foundArrayImages := false
for fieldName, files := range c.Request.MultipartForm.File {
if strings.HasPrefix(fieldName, "image[") && len(files) > 0 {
foundArrayImages = true
for _, file := range files {
imageFiles = append(imageFiles, file)
}
}
}
// If no image fields found at all
if !foundArrayImages && (len(imageFiles) == 0) {
return nil, errors.New("image is required")
}
}
}
// Process all image files
for i, fileHeader := range imageFiles {
file, err := fileHeader.Open()
if err != nil {
return nil, fmt.Errorf("failed to open image file %d: %w", i, err)
}
defer file.Close()
// If multiple images, use image[] as the field name
fieldName := "image"
if len(imageFiles) > 1 {
fieldName = "image[]"
}
// Determine MIME type based on file extension
mimeType := detectImageMimeType(fileHeader.Filename)
// Create a form file with the appropriate content type
h := make(textproto.MIMEHeader)
h.Set("Content-Disposition", fmt.Sprintf(`form-data; name="%s"; filename="%s"`, fieldName, fileHeader.Filename))
h.Set("Content-Type", mimeType)
part, err := writer.CreatePart(h)
if err != nil {
return nil, fmt.Errorf("create form part failed for image %d: %w", i, err)
}
if _, err := io.Copy(part, file); err != nil {
return nil, fmt.Errorf("copy file failed for image %d: %w", i, err)
}
}
// Handle mask file if present
if maskFiles, exists := c.Request.MultipartForm.File["mask"]; exists && len(maskFiles) > 0 {
maskFile, err := maskFiles[0].Open()
if err != nil {
return nil, errors.New("failed to open mask file")
}
defer maskFile.Close()
// Determine MIME type for mask file
mimeType := detectImageMimeType(maskFiles[0].Filename)
// Create a form file with the appropriate content type
h := make(textproto.MIMEHeader)
h.Set("Content-Disposition", fmt.Sprintf(`form-data; name="mask"; filename="%s"`, maskFiles[0].Filename))
h.Set("Content-Type", mimeType)
maskPart, err := writer.CreatePart(h)
if err != nil {
return nil, errors.New("create form file failed for mask")
}
if _, err := io.Copy(maskPart, maskFile); err != nil {
return nil, errors.New("copy mask file failed")
}
}
} else {
return nil, errors.New("no multipart form data found")
}
// Close the multipart writer to finalize the boundary
writer.Close()
c.Request.Header.Set("Content-Type", writer.FormDataContentType())
return bytes.NewReader(requestBody.Bytes()), nil
default:
return request, nil
}
}
// detectImageMimeType determines the MIME type based on the file extension
func detectImageMimeType(filename string) string {
ext := strings.ToLower(filepath.Ext(filename))
switch ext {
case ".jpg", ".jpeg":
return "image/jpeg"
case ".png":
return "image/png"
case ".webp":
return "image/webp"
default:
// Try to detect from extension if possible
if strings.HasPrefix(ext, ".jp") {
return "image/jpeg"
}
// Default to png as a fallback
return "image/png"
}
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// Map the model name suffix to reasoning effort
if strings.HasSuffix(request.Model, "-high") {
request.Reasoning.Effort = "high"
request.Model = strings.TrimSuffix(request.Model, "-high")
} else if strings.HasSuffix(request.Model, "-low") {
request.Reasoning.Effort = "low"
request.Model = strings.TrimSuffix(request.Model, "-low")
} else if strings.HasSuffix(request.Model, "-medium") {
request.Reasoning.Effort = "medium"
request.Model = strings.TrimSuffix(request.Model, "-medium")
}
return request, nil
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
if info.RelayMode == constant.RelayModeAudioTranscription || info.RelayMode == constant.RelayModeAudioTranslation {
if info.RelayMode == constant.RelayModeAudioTranscription ||
info.RelayMode == constant.RelayModeAudioTranslation ||
info.RelayMode == constant.RelayModeImagesEdits {
return channel.DoFormRequest(a, c, info, requestBody)
} else if info.RelayMode == constant.RelayModeRealtime {
return channel.DoWssRequest(a, c, info, requestBody)
@@ -259,10 +423,16 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
fallthrough
case constant.RelayModeAudioTranscription:
err, usage = OpenaiSTTHandler(c, resp, info, a.ResponseFormat)
case constant.RelayModeImagesGenerations:
err, usage = OpenaiTTSHandler(c, resp, info)
case constant.RelayModeImagesGenerations, constant.RelayModeImagesEdits:
err, usage = OpenaiHandlerWithUsage(c, resp, info)
case constant.RelayModeRerank:
err, usage = common_handler.RerankHandler(c, info, resp)
case constant.RelayModeResponses:
if info.IsStream {
err, usage = OaiResponsesStreamHandler(c, resp, info)
} else {
err, usage = OaiResponsesHandler(c, resp, info)
}
default:
if info.IsStream {
err, usage = OaiStreamHandler(c, resp, info)
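Both ConvertOpenAIRequest and ConvertOpenAIResponsesRequest above fold a -high/-medium/-low model suffix into the reasoning effort and strip it from the model name. A minimal sketch of that mapping:

package main

import (
	"fmt"
	"strings"
)

// splitReasoningEffort strips a trailing -high/-medium/-low suffix and returns the
// bare model name plus the effort it implies (empty when no suffix is present).
func splitReasoningEffort(model string) (string, string) {
	for _, effort := range []string{"high", "medium", "low"} {
		suffix := "-" + effort
		if strings.HasSuffix(model, suffix) {
			return strings.TrimSuffix(model, suffix), effort
		}
	}
	return model, ""
}

func main() {
	fmt.Println(splitReasoningEffort("o3-mini-high")) // o3-mini high
	fmt.Println(splitReasoningEffort("o1"))           // o1
}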

View File

@@ -187,3 +187,10 @@ func handleFinalResponse(c *gin.Context, info *relaycommon.RelayInfo, lastStream
}
}
}
func sendResponsesStreamData(c *gin.Context, streamResponse dto.ResponsesStreamResponse, data string) {
if data == "" {
return
}
helper.ResponseChunkData(c, streamResponse, data)
}

View File

@@ -595,3 +595,52 @@ func preConsumeUsage(ctx *gin.Context, info *relaycommon.RelayInfo, usage *dto.R
err := service.PreWssConsumeQuota(ctx, info, usage)
return err
}
func OpenaiHandlerWithUsage(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
// Reset response body
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
// We shouldn't set the header before we parse the response body, because the parse part may fail.
// And then we will have to send an error response, but in this case, the header has already been set.
// So the httpClient will be confused by the response.
// For example, Postman will report an error, and we cannot inspect the response at all.
for k, v := range resp.Header {
c.Writer.Header().Set(k, v[0])
}
// reset content length
c.Writer.Header().Set("Content-Length", fmt.Sprintf("%d", len(responseBody)))
c.Writer.WriteHeader(resp.StatusCode)
_, err = io.Copy(c.Writer, resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "copy_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
var usageResp dto.SimpleResponse
err = json.Unmarshal(responseBody, &usageResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "parse_response_body_failed", http.StatusInternalServerError), nil
}
// format
if usageResp.InputTokens > 0 {
usageResp.PromptTokens += usageResp.InputTokens
}
if usageResp.OutputTokens > 0 {
usageResp.CompletionTokens += usageResp.OutputTokens
}
if usageResp.InputTokensDetails != nil {
usageResp.PromptTokensDetails.ImageTokens += usageResp.InputTokensDetails.ImageTokens
usageResp.PromptTokensDetails.TextTokens += usageResp.InputTokensDetails.TextTokens
}
return nil, &usageResp.Usage
}
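OpenaiHandlerWithUsage above folds the input_tokens/output_tokens counters returned by endpoints such as image generation into the usual prompt/completion fields so downstream billing sees a normal Usage struct. A sketch of that mapping with simplified stand-in types:

package main

import "fmt"

// simplified stand-ins for the dto.Usage / dto.SimpleResponse fields used above
type usage struct {
	PromptTokens     int
	CompletionTokens int
	TotalTokens      int
}

type imageUsage struct {
	InputTokens  int
	OutputTokens int
	TotalTokens  int
}

// normalize folds the image-style counters into the chat-style usage fields,
// as the handler above does before returning usage for billing.
func normalize(in imageUsage) usage {
	u := usage{TotalTokens: in.TotalTokens}
	if in.InputTokens > 0 {
		u.PromptTokens += in.InputTokens
	}
	if in.OutputTokens > 0 {
		u.CompletionTokens += in.OutputTokens
	}
	return u
}

func main() {
	fmt.Printf("%+v\n", normalize(imageUsage{InputTokens: 120, OutputTokens: 6240, TotalTokens: 6360}))
}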

View File

@@ -0,0 +1,119 @@
package openai
import (
"bytes"
"fmt"
"io"
"net/http"
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"strings"
"github.com/gin-gonic/gin"
)
func OaiResponsesHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
// read response body
var responsesResponse dto.OpenAIResponsesResponse
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
}
err = resp.Body.Close()
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
err = common.DecodeJson(responseBody, &responsesResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
if responsesResponse.Error != nil {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: responsesResponse.Error.Message,
Type: "openai_error",
Code: responsesResponse.Error.Code,
},
StatusCode: resp.StatusCode,
}, nil
}
// reset response body
resp.Body = io.NopCloser(bytes.NewBuffer(responseBody))
// We shouldn't set the header before we parse the response body, because the parse part may fail.
// And then we will have to send an error response, but in this case, the header has already been set.
// So the httpClient will be confused by the response.
// For example, Postman will report an error, and we cannot inspect the response at all.
for k, v := range resp.Header {
c.Writer.Header().Set(k, v[0])
}
c.Writer.WriteHeader(resp.StatusCode)
// copy response body
_, err = io.Copy(c.Writer, resp.Body)
if err != nil {
common.SysError("error copying response body: " + err.Error())
}
resp.Body.Close()
// compute usage
usage := dto.Usage{}
usage.PromptTokens = responsesResponse.Usage.InputTokens
usage.CompletionTokens = responsesResponse.Usage.OutputTokens
usage.TotalTokens = responsesResponse.Usage.TotalTokens
// Parse built-in tool usage
for _, tool := range responsesResponse.Tools {
info.ResponsesUsageInfo.BuiltInTools[tool.Type].CallCount++
}
return nil, &usage
}
func OaiResponsesStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
if resp == nil || resp.Body == nil {
common.LogError(c, "invalid response or response body")
return service.OpenAIErrorWrapper(fmt.Errorf("invalid response"), "invalid_response", http.StatusInternalServerError), nil
}
var usage = &dto.Usage{}
var responseTextBuilder strings.Builder
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
// Check whether the current chunk contains the completed status and usage info
var streamResponse dto.ResponsesStreamResponse
if err := common.DecodeJsonStr(data, &streamResponse); err == nil {
sendResponsesStreamData(c, streamResponse, data)
switch streamResponse.Type {
case "response.completed":
usage.PromptTokens = streamResponse.Response.Usage.InputTokens
usage.CompletionTokens = streamResponse.Response.Usage.OutputTokens
usage.TotalTokens = streamResponse.Response.Usage.TotalTokens
case "response.output_text.delta":
// Handle the output text
responseTextBuilder.WriteString(streamResponse.Delta)
case dto.ResponsesOutputTypeItemDone:
// Handle function call items
if streamResponse.Item != nil {
switch streamResponse.Item.Type {
case dto.BuildInCallWebSearchCall:
info.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolWebSearchPreview].CallCount++
}
}
}
}
return true
})
if usage.CompletionTokens == 0 {
// Count the tokens of the output text
tempStr := responseTextBuilder.String()
if len(tempStr) > 0 {
// Stream did not end normally; fall back to the token count of the output text
completionTokens, _ := service.CountTextToken(tempStr, info.UpstreamModelName)
usage.CompletionTokens = completionTokens
}
}
return nil, usage
}

View File

@@ -3,13 +3,14 @@ package palm
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/service"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -60,6 +61,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,13 +3,14 @@ package perplexity
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -63,6 +64,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,7 +3,6 @@ package siliconflow
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -11,6 +10,8 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -58,6 +59,11 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
return request, nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}
@@ -74,13 +80,9 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
switch info.RelayMode {
case constant.RelayModeRerank:
err, usage = siliconflowRerankHandler(c, resp)
case constant.RelayModeChatCompletions:
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
} else {
err, usage = openai.OpenaiHandler(c, resp, info)
}
case constant.RelayModeCompletions:
fallthrough
case constant.RelayModeChatCompletions:
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
} else {

View File

@@ -3,7 +3,6 @@ package tencent
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
@@ -13,6 +12,8 @@ import (
"one-api/service"
"strconv"
"strings"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -84,6 +85,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -4,7 +4,6 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -13,7 +12,10 @@ import (
"one-api/relay/channel/gemini"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"one-api/setting/model_setting"
"strings"
"github.com/gin-gonic/gin"
)
const (
@@ -77,6 +79,15 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
a.AccountCredentials = *adc
suffix := ""
if a.RequestMode == RequestModeGemini {
if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
// suffix -thinking and -nothinking
if strings.HasSuffix(info.OriginModelName, "-thinking") {
info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-thinking")
} else if strings.HasSuffix(info.OriginModelName, "-nothinking") {
info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-nothinking")
}
}
if info.IsStream {
suffix = "streamGenerateContent?alt=sse"
} else {
@@ -143,7 +154,7 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
info.UpstreamModelName = claudeReq.Model
return vertexClaudeReq, nil
} else if a.RequestMode == RequestModeGemini {
geminiRequest, err := gemini.CovertGemini2OpenAI(*request)
geminiRequest, err := gemini.CovertGemini2OpenAI(*request, info)
if err != nil {
return nil, err
}
@@ -164,6 +175,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -3,7 +3,6 @@ package volcengine
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -12,6 +11,8 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/constant"
"strings"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -71,6 +72,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return request, nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -2,14 +2,17 @@ package xai
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
"strings"
"one-api/relay/constant"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -27,15 +30,20 @@ func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInf
}
func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
request.Size = ""
return request, nil
xaiRequest := ImageRequest{
Model: request.Model,
Prompt: request.Prompt,
N: request.N,
ResponseFormat: request.ResponseFormat,
}
return xaiRequest, nil
}
func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
}
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
return fmt.Sprintf("%s/v1/chat/completions", info.BaseUrl), nil
return relaycommon.GetFullRequestURL(info.BaseUrl, info.RequestURLPath, info.ChannelType), nil
}
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
@@ -78,20 +86,26 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not available")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
if info.IsStream {
err, usage = xAIStreamHandler(c, resp, info)
} else {
err, usage = xAIHandler(c, resp, info)
switch info.RelayMode {
case constant.RelayModeImagesGenerations, constant.RelayModeImagesEdits:
err, usage = openai.OpenaiHandlerWithUsage(c, resp, info)
default:
if info.IsStream {
err, usage = xAIStreamHandler(c, resp, info)
} else {
err, usage = xAIHandler(c, resp, info)
}
}
//if _, ok := usage.(*dto.Usage); ok && usage != nil {
// usage.(*dto.Usage).CompletionTokens = usage.(*dto.Usage).TotalTokens - usage.(*dto.Usage).PromptTokens
//}
return
}

View File

@@ -12,3 +12,16 @@ type ChatCompletionResponse struct {
Usage *dto.Usage `json:"usage"`
SystemFingerprint string `json:"system_fingerprint"`
}
// quality, size and style are not supported by the xAI API at the moment.
type ImageRequest struct {
Model string `json:"model"`
Prompt string `json:"prompt" binding:"required"`
N int `json:"n,omitempty"`
// Size string `json:"size,omitempty"`
// Quality string `json:"quality,omitempty"`
ResponseFormat string `json:"response_format,omitempty"`
// Style string `json:"style,omitempty"`
// User string `json:"user,omitempty"`
// ExtraFields json.RawMessage `json:"extra_fields,omitempty"`
}

View File

@@ -2,7 +2,6 @@ package xunfei
import (
"errors"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -10,6 +9,8 @@ import (
relaycommon "one-api/relay/common"
"one-api/service"
"strings"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -61,6 +62,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
// xunfei's request is not an HTTP request, so we don't need to do anything here
dummyResp := &http.Response{}

View File

@@ -3,12 +3,13 @@ package zhipu
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -71,6 +72,11 @@ func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, request
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
if info.IsStream {
err, usage = zhipuStreamHandler(c, resp)

View File

@@ -3,7 +3,6 @@ package zhipu_4v
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
@@ -11,6 +10,8 @@ import (
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
@@ -70,6 +71,11 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return request, nil
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
// TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}

View File

@@ -6,7 +6,6 @@ import (
"one-api/dto"
relayconstant "one-api/relay/constant"
"strings"
"sync"
"time"
"github.com/gin-gonic/gin"
@@ -37,6 +36,7 @@ type ClaudeConvertInfo struct {
const (
RelayFormatOpenAI = "openai"
RelayFormatClaude = "claude"
RelayFormatGemini = "gemini"
)
type RerankerInfo struct {
@@ -44,6 +44,16 @@ type RerankerInfo struct {
ReturnDocuments bool
}
type BuildInToolInfo struct {
ToolName string
CallCount int
SearchContextSize string
}
type ResponsesUsageInfo struct {
BuiltInTools map[string]*BuildInToolInfo
}
type RelayInfo struct {
ChannelType int
ChannelId int
@@ -55,7 +65,6 @@ type RelayInfo struct {
StartTime time.Time
FirstResponseTime time.Time
isFirstResponse bool
responseMutex sync.Mutex // Add mutex for protecting concurrent access
//SendLastReasoningResponse bool
ApiType int
IsStream bool
@@ -89,9 +98,11 @@ type RelayInfo struct {
UserQuota int
RelayFormat string
SendResponseCount int
ChannelCreateTime int64
ThinkingContentInfo
ClaudeConvertInfo
*ClaudeConvertInfo
*RerankerInfo
*ResponsesUsageInfo
}
// Channel types that support stream options
@@ -105,6 +116,8 @@ var streamSupportedChannels = map[int]bool{
common.ChannelTypeVolcEngine: true,
common.ChannelTypeOllama: true,
common.ChannelTypeXai: true,
common.ChannelTypeDeepSeek: true,
common.ChannelTypeBaiduV2: true,
}
func GenRelayInfoWs(c *gin.Context, ws *websocket.Conn) *RelayInfo {
@@ -120,7 +133,7 @@ func GenRelayInfoClaude(c *gin.Context) *RelayInfo {
info := GenRelayInfo(c)
info.RelayFormat = RelayFormatClaude
info.ShouldIncludeUsage = false
info.ClaudeConvertInfo = ClaudeConvertInfo{
info.ClaudeConvertInfo = &ClaudeConvertInfo{
LastMessagesType: LastMessageTypeNone,
}
return info
@@ -136,6 +149,31 @@ func GenRelayInfoRerank(c *gin.Context, req *dto.RerankRequest) *RelayInfo {
return info
}
func GenRelayInfoResponses(c *gin.Context, req *dto.OpenAIResponsesRequest) *RelayInfo {
info := GenRelayInfo(c)
info.RelayMode = relayconstant.RelayModeResponses
info.ResponsesUsageInfo = &ResponsesUsageInfo{
BuiltInTools: make(map[string]*BuildInToolInfo),
}
if len(req.Tools) > 0 {
for _, tool := range req.Tools {
info.ResponsesUsageInfo.BuiltInTools[tool.Type] = &BuildInToolInfo{
ToolName: tool.Type,
CallCount: 0,
}
switch tool.Type {
case dto.BuildInToolWebSearchPreview:
if tool.SearchContextSize == "" {
tool.SearchContextSize = "medium"
}
info.ResponsesUsageInfo.BuiltInTools[tool.Type].SearchContextSize = tool.SearchContextSize
}
}
}
info.IsStream = req.Stream
return info
}
func GenRelayInfo(c *gin.Context) *RelayInfo {
channelType := c.GetInt("channel_type")
channelId := c.GetInt("channel_id")
@@ -172,14 +210,15 @@ func GenRelayInfo(c *gin.Context) *RelayInfo {
OriginModelName: c.GetString("original_model"),
UpstreamModelName: c.GetString("original_model"),
//RecodeModelName: c.GetString("original_model"),
IsModelMapped: false,
ApiType: apiType,
ApiVersion: c.GetString("api_version"),
ApiKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
Organization: c.GetString("channel_organization"),
ChannelSetting: channelSetting,
ParamOverride: paramOverride,
RelayFormat: RelayFormatOpenAI,
IsModelMapped: false,
ApiType: apiType,
ApiVersion: c.GetString("api_version"),
ApiKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
Organization: c.GetString("channel_organization"),
ChannelSetting: channelSetting,
ChannelCreateTime: c.GetInt64("channel_create_time"),
ParamOverride: paramOverride,
RelayFormat: RelayFormatOpenAI,
ThinkingContentInfo: ThinkingContentInfo{
IsFirstThinkingContent: true,
SendLastThinkingContent: false,
@@ -202,6 +241,10 @@ func GenRelayInfo(c *gin.Context) *RelayInfo {
if streamSupportedChannels[info.ChannelType] {
info.SupportStreamOptions = true
}
// responses 模式不支持 StreamOptions (the Responses relay mode does not support stream_options)
if relayconstant.RelayModeResponses == info.RelayMode {
info.SupportStreamOptions = false
}
return info
}
@@ -214,9 +257,6 @@ func (info *RelayInfo) SetIsStream(isStream bool) {
}
func (info *RelayInfo) SetFirstResponseTime() {
info.responseMutex.Lock()
defer info.responseMutex.Unlock()
if info.isFirstResponse {
info.FirstResponseTime = time.Now()
info.isFirstResponse = false

View File

@@ -12,6 +12,7 @@ const (
RelayModeEmbeddings
RelayModeModerations
RelayModeImagesGenerations
RelayModeImagesEdits
RelayModeEdits
RelayModeMidjourneyImagine
@@ -39,6 +40,8 @@ const (
RelayModeRerank
RelayModeResponses
RelayModeRealtime
)
@@ -56,8 +59,12 @@ func Path2RelayMode(path string) int {
relayMode = RelayModeModerations
} else if strings.HasPrefix(path, "/v1/images/generations") {
relayMode = RelayModeImagesGenerations
} else if strings.HasPrefix(path, "/v1/images/edits") {
relayMode = RelayModeImagesEdits
} else if strings.HasPrefix(path, "/v1/edits") {
relayMode = RelayModeEdits
} else if strings.HasPrefix(path, "/v1/responses") {
relayMode = RelayModeResponses
} else if strings.HasPrefix(path, "/v1/audio/speech") {
relayMode = RelayModeAudioSpeech
} else if strings.HasPrefix(path, "/v1/audio/transcriptions") {

View File

@@ -43,6 +43,14 @@ func ClaudeChunkData(c *gin.Context, resp dto.ClaudeResponse, data string) {
}
}
func ResponseChunkData(c *gin.Context, resp dto.ResponsesStreamResponse, data string) {
c.Render(-1, common.CustomEvent{Data: fmt.Sprintf("event: %s\n", resp.Type)})
c.Render(-1, common.CustomEvent{Data: fmt.Sprintf("data: %s", data)})
if flusher, ok := c.Writer.(http.Flusher); ok {
flusher.Flush()
}
}
func StringData(c *gin.Context, str string) error {
//str = strings.TrimPrefix(str, "data: ")
//str = strings.TrimSuffix(str, "\r")
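The new ResponseChunkData above emits Responses-style server-sent events: an event: line taken from the chunk's Type field, then a data: line carrying the chunk JSON with no extra trailing newline (the custom event renderer supplies the frame separator), followed by an explicit flush so each event reaches the client immediately. On the wire a single chunk looks roughly like this (the event name and payload are an illustrative example of the OpenAI Responses stream format, not output captured from this code):

event: response.output_text.delta
data: {"type":"response.output_text.delta","delta":"Hel"}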

View File

@@ -2,9 +2,11 @@ package helper
import (
"encoding/json"
"errors"
"fmt"
"github.com/gin-gonic/gin"
"one-api/relay/common"
"github.com/gin-gonic/gin"
)
func ModelMappedHelper(c *gin.Context, info *common.RelayInfo) error {
@@ -16,9 +18,36 @@ func ModelMappedHelper(c *gin.Context, info *common.RelayInfo) error {
if err != nil {
return fmt.Errorf("unmarshal_model_mapping_failed")
}
if modelMap[info.OriginModelName] != "" {
info.UpstreamModelName = modelMap[info.OriginModelName]
info.IsModelMapped = true
// 支持链式模型重定向,最终使用链尾的模型 (support chained model redirects; the model at the tail of the chain wins)
currentModel := info.OriginModelName
visitedModels := map[string]bool{
currentModel: true,
}
for {
if mappedModel, exists := modelMap[currentModel]; exists && mappedModel != "" {
// 模型重定向循环检测,避免无限循环 (cycle detection to avoid an infinite redirect loop)
if visitedModels[mappedModel] {
if mappedModel == currentModel {
if currentModel == info.OriginModelName {
info.IsModelMapped = false
return nil
} else {
info.IsModelMapped = true
break
}
}
return errors.New("model_mapping_contains_cycle")
}
visitedModels[mappedModel] = true
currentModel = mappedModel
info.IsModelMapped = true
} else {
break
}
}
if info.IsModelMapped {
info.UpstreamModelName = currentModel
}
}
return nil
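The chained redirect above keeps following the mapping until it reaches the tail of the chain, treats a model that maps to itself as not redirected, and rejects genuine cycles. A minimal standalone sketch of the same resolution logic, with a made-up mapping rather than any real channel configuration:

package main

import (
	"errors"
	"fmt"
)

// resolveModel follows modelMap from origin to the end of the chain.
func resolveModel(origin string, modelMap map[string]string) (string, error) {
	current := origin
	visited := map[string]bool{current: true}
	for {
		next, ok := modelMap[current]
		if !ok || next == "" {
			break
		}
		if next == current {
			// self-mapping: stop without treating it as a cycle
			break
		}
		if visited[next] {
			return "", errors.New("model_mapping_contains_cycle")
		}
		visited[next] = true
		current = next
	}
	return current, nil
}

func main() {
	mapping := map[string]string{
		"gpt-4":       "gpt-4-turbo",
		"gpt-4-turbo": "gpt-4-turbo-2024-04-09",
	}
	upstream, err := resolveModel("gpt-4", mapping)
	fmt.Println(upstream, err) // gpt-4-turbo-2024-04-09 <nil>
}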

View File

@@ -15,14 +15,15 @@ type PriceData struct {
ModelRatio float64
CompletionRatio float64
CacheRatio float64
CacheCreationRatio float64
ImageRatio float64
GroupRatio float64
UsePrice bool
CacheCreationRatio float64
ShouldPreConsumedQuota int
}
func (p PriceData) ToSetting() string {
return fmt.Sprintf("ModelPrice: %f, ModelRatio: %f, CompletionRatio: %f, CacheRatio: %f, GroupRatio: %f, UsePrice: %t, CacheCreationRatio: %f, ShouldPreConsumedQuota: %d", p.ModelPrice, p.ModelRatio, p.CompletionRatio, p.CacheRatio, p.GroupRatio, p.UsePrice, p.CacheCreationRatio, p.ShouldPreConsumedQuota)
return fmt.Sprintf("ModelPrice: %f, ModelRatio: %f, CompletionRatio: %f, CacheRatio: %f, GroupRatio: %f, UsePrice: %t, CacheCreationRatio: %f, ShouldPreConsumedQuota: %d, ImageRatio: %d", p.ModelPrice, p.ModelRatio, p.CompletionRatio, p.CacheRatio, p.GroupRatio, p.UsePrice, p.CacheCreationRatio, p.ShouldPreConsumedQuota, p.ImageRatio)
}
func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens int, maxTokens int) (PriceData, error) {
@@ -32,6 +33,7 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
var modelRatio float64
var completionRatio float64
var cacheRatio float64
var imageRatio float64
var cacheCreationRatio float64
if !usePrice {
preConsumedTokens := common.PreConsumedQuota
@@ -49,16 +51,13 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
}
}
if !acceptUnsetRatio {
if info.UserId == 1 {
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置请设置或开始自用模式Model %s ratio or price not set, please set or start self-use mode", info.OriginModelName, info.OriginModelName)
} else {
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置, 请联系管理员设置Model %s ratio or price not set, please contact administrator to set", info.OriginModelName, info.OriginModelName)
}
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置请联系管理员设置或开始自用模式Model %s ratio or price not set, please set or start self-use mode", info.OriginModelName, info.OriginModelName)
}
}
completionRatio = operation_setting.GetCompletionRatio(info.OriginModelName)
cacheRatio, _ = operation_setting.GetCacheRatio(info.OriginModelName)
cacheCreationRatio, _ = operation_setting.GetCreateCacheRatio(info.OriginModelName)
imageRatio, _ = operation_setting.GetImageRatio(info.OriginModelName)
ratio := modelRatio * groupRatio
preConsumedQuota = int(float64(preConsumedTokens) * ratio)
} else {
@@ -72,6 +71,7 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
GroupRatio: groupRatio,
UsePrice: usePrice,
CacheRatio: cacheRatio,
ImageRatio: imageRatio,
CacheCreationRatio: cacheCreationRatio,
ShouldPreConsumedQuota: preConsumedQuota,
}
@@ -82,3 +82,15 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
return priceData, nil
}
func ContainPriceOrRatio(modelName string) bool {
_, ok := operation_setting.GetModelPrice(modelName, false)
if ok {
return true
}
_, ok = operation_setting.GetModelRatio(modelName)
if ok {
return true
}
return false
}

View File

@@ -32,7 +32,7 @@ func StreamScannerHandler(c *gin.Context, resp *http.Response, info *relaycommon
defer resp.Body.Close()
streamingTimeout := time.Duration(constant.StreamingTimeout) * time.Second
if strings.HasPrefix(info.UpstreamModelName, "o1") || strings.HasPrefix(info.UpstreamModelName, "o3") {
if strings.HasPrefix(info.UpstreamModelName, "o") {
// twice timeout for thinking model
streamingTimeout *= 2
}
@@ -115,7 +115,7 @@ func StreamScannerHandler(c *gin.Context, resp *http.Response, info *relaycommon
}
data = data[5:]
data = strings.TrimLeft(data, " ")
data = strings.TrimSuffix(data, "\"")
data = strings.TrimSuffix(data, "\r")
if !strings.HasPrefix(data, "[DONE]") {
info.SetFirstResponseTime()
writeMutex.Lock() // Lock before writing

View File

@@ -5,21 +5,83 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
"one-api/dto"
"one-api/model"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
"one-api/relay/helper"
"one-api/service"
"one-api/setting"
"strings"
"github.com/gin-gonic/gin"
)
func getAndValidImageRequest(c *gin.Context, info *relaycommon.RelayInfo) (*dto.ImageRequest, error) {
imageRequest := &dto.ImageRequest{}
switch info.RelayMode {
case relayconstant.RelayModeImagesEdits:
_, err := c.MultipartForm()
if err != nil {
return nil, err
}
formData := c.Request.PostForm
imageRequest.Prompt = formData.Get("prompt")
imageRequest.Model = formData.Get("model")
imageRequest.N = common.String2Int(formData.Get("n"))
imageRequest.Quality = formData.Get("quality")
imageRequest.Size = formData.Get("size")
if imageRequest.Model == "gpt-image-1" {
if imageRequest.Quality == "" {
imageRequest.Quality = "standard"
}
}
default:
err := common.UnmarshalBodyReusable(c, imageRequest)
if err != nil {
return nil, err
}
// Not "256x256", "512x512", or "1024x1024"
if imageRequest.Model == "dall-e-2" || imageRequest.Model == "dall-e" {
if imageRequest.Size != "" && imageRequest.Size != "256x256" && imageRequest.Size != "512x512" && imageRequest.Size != "1024x1024" {
return nil, errors.New("size must be one of 256x256, 512x512, or 1024x1024 for dall-e-2 or dall-e")
}
} else if imageRequest.Model == "dall-e-3" {
if imageRequest.Size != "" && imageRequest.Size != "1024x1024" && imageRequest.Size != "1024x1792" && imageRequest.Size != "1792x1024" {
return nil, errors.New("size must be one of 1024x1024, 1024x1792 or 1792x1024 for dall-e-3")
}
if imageRequest.Quality == "" {
imageRequest.Quality = "standard"
}
// N should between 1 and 10
//if imageRequest.N != 0 && (imageRequest.N < 1 || imageRequest.N > 10) {
// return service.OpenAIErrorWrapper(errors.New("n must be between 1 and 10"), "invalid_field_value", http.StatusBadRequest)
//}
}
}
if imageRequest.Prompt == "" {
return nil, errors.New("prompt is required")
}
if imageRequest.Model == "" {
imageRequest.Model = "dall-e-2"
}
if strings.Contains(imageRequest.Size, "×") {
return nil, errors.New("size an unexpected error occurred in the parameter, please use 'x' instead of the multiplication sign '×'")
}
if imageRequest.N == 0 {
imageRequest.N = 1
}
if imageRequest.Size == "" {
imageRequest.Size = "1024x1024"
}
err := common.UnmarshalBodyReusable(c, imageRequest)
if err != nil {
return nil, err
@@ -39,6 +101,10 @@ func getAndValidImageRequest(c *gin.Context, info *relaycommon.RelayInfo) (*dto.
if imageRequest.Model == "" {
imageRequest.Model = "dall-e-2"
}
// x.ai grok-2-image not support size, quality or style
if imageRequest.Size == "empty" {
imageRequest.Size = ""
}
// Not "256x256", "512x512", or "1024x1024"
if imageRequest.Model == "dall-e-2" || imageRequest.Model == "dall-e" {
@@ -86,43 +152,59 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
imageRequest.Model = relayInfo.UpstreamModelName
priceData, err := helper.ModelPriceHelper(c, relayInfo, 0, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, len(imageRequest.Prompt), 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
var preConsumedQuota int
var quota int
var userQuota int
if !priceData.UsePrice {
// modelRatio 16 = modelPrice $0.04
// per 1 modelRatio = $0.04 / 16
priceData.ModelPrice = 0.0025 * priceData.ModelRatio
}
userQuota, err := model.GetUserQuota(relayInfo.UserId, false)
sizeRatio := 1.0
// Size
if imageRequest.Size == "256x256" {
sizeRatio = 0.4
} else if imageRequest.Size == "512x512" {
sizeRatio = 0.45
} else if imageRequest.Size == "1024x1024" {
sizeRatio = 1
} else if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
sizeRatio = 2
}
qualityRatio := 1.0
if imageRequest.Model == "dall-e-3" && imageRequest.Quality == "hd" {
qualityRatio = 2.0
if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
qualityRatio = 1.5
// priceData.ModelPrice = 0.0025 * priceData.ModelRatio
var openaiErr *dto.OpenAIErrorWithStatusCode
preConsumedQuota, userQuota, openaiErr = preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {
return openaiErr
}
}
defer func() {
if openaiErr != nil {
returnPreConsumedQuota(c, relayInfo, userQuota, preConsumedQuota)
}
}()
priceData.ModelPrice *= sizeRatio * qualityRatio * float64(imageRequest.N)
quota := int(priceData.ModelPrice * priceData.GroupRatio * common.QuotaPerUnit)
} else {
sizeRatio := 1.0
// Size
if imageRequest.Size == "256x256" {
sizeRatio = 0.4
} else if imageRequest.Size == "512x512" {
sizeRatio = 0.45
} else if imageRequest.Size == "1024x1024" {
sizeRatio = 1
} else if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
sizeRatio = 2
}
if userQuota-quota < 0 {
return service.OpenAIErrorWrapperLocal(fmt.Errorf("image pre-consumed quota failed, user quota: %s, need quota: %s", common.FormatQuota(userQuota), common.FormatQuota(quota)), "insufficient_user_quota", http.StatusForbidden)
qualityRatio := 1.0
if imageRequest.Model == "dall-e-3" && imageRequest.Quality == "hd" {
qualityRatio = 2.0
if imageRequest.Size == "1024x1792" || imageRequest.Size == "1792x1024" {
qualityRatio = 1.5
}
}
// reset model price
priceData.ModelPrice *= sizeRatio * qualityRatio * float64(imageRequest.N)
quota = int(priceData.ModelPrice * priceData.GroupRatio * common.QuotaPerUnit)
userQuota, err = model.GetUserQuota(relayInfo.UserId, false)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "get_user_quota_failed", http.StatusInternalServerError)
}
if userQuota-quota < 0 {
return service.OpenAIErrorWrapperLocal(fmt.Errorf("image pre-consumed quota failed, user quota: %s, need quota: %s", common.FormatQuota(userQuota), common.FormatQuota(quota)), "insufficient_user_quota", http.StatusForbidden)
}
}
adaptor := GetAdaptor(relayInfo.ApiType)
@@ -137,12 +219,15 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "convert_request_failed", http.StatusInternalServerError)
}
jsonData, err := json.Marshal(convertedRequest)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "json_marshal_failed", http.StatusInternalServerError)
if relayInfo.RelayMode == relayconstant.RelayModeImagesEdits {
requestBody = convertedRequest.(io.Reader)
} else {
jsonData, err := json.Marshal(convertedRequest)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "json_marshal_failed", http.StatusInternalServerError)
}
requestBody = bytes.NewBuffer(jsonData)
}
requestBody = bytes.NewBuffer(jsonData)
statusCodeMappingStr := c.GetString("status_code_mapping")
@@ -162,24 +247,25 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
}
}
_, openaiErr := adaptor.DoResponse(c, httpResp, relayInfo)
usage, openaiErr := adaptor.DoResponse(c, httpResp, relayInfo)
if openaiErr != nil {
// reset status code 重置状态码
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
return openaiErr
}
usage := &dto.Usage{
PromptTokens: imageRequest.N,
TotalTokens: imageRequest.N,
if usage.(*dto.Usage).TotalTokens == 0 {
usage.(*dto.Usage).TotalTokens = imageRequest.N
}
if usage.(*dto.Usage).PromptTokens == 0 {
usage.(*dto.Usage).PromptTokens = imageRequest.N
}
quality := "standard"
if imageRequest.Quality == "hd" {
quality = "hd"
}
logContent := fmt.Sprintf("大小 %s, 品质 %s", imageRequest.Size, quality)
postConsumeQuota(c, relayInfo, usage, 0, userQuota, priceData, logContent)
postConsumeQuota(c, relayInfo, usage.(*dto.Usage), preConsumedQuota, userQuota, priceData, logContent)
return nil
}
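To make the pricing in ImageHelper concrete: when a fixed per-image price is configured (the UsePrice branch), a dall-e-3 request at 1024x1792 with quality hd and n = 1 gets sizeRatio 2 and qualityRatio 1.5, so the configured price is multiplied by 2 * 1.5 * 1 = 3; assuming an illustrative base price of $0.04 and groupRatio 1, the charge is $0.12 and quota = 0.12 * QuotaPerUnit, checked against the user's remaining quota before the upstream call. When no fixed price is set, the request is pre-consumed via preConsumeQuota instead, and in both branches the final settlement now uses the usage returned by DoResponse, falling back to imageRequest.N for prompt and total tokens only when the upstream reports zero.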

relay/relay-responses.go (new file, 171 lines)
View File

@@ -0,0 +1,171 @@
package relay
import (
"bytes"
"encoding/json"
"errors"
"fmt"
"io"
"net/http"
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/setting"
"one-api/setting/model_setting"
"strings"
"github.com/gin-gonic/gin"
)
func getAndValidateResponsesRequest(c *gin.Context) (*dto.OpenAIResponsesRequest, error) {
request := &dto.OpenAIResponsesRequest{}
err := common.UnmarshalBodyReusable(c, request)
if err != nil {
return nil, err
}
if request.Model == "" {
return nil, errors.New("model is required")
}
if len(request.Input) == 0 {
return nil, errors.New("input is required")
}
return request, nil
}
func checkInputSensitive(textRequest *dto.OpenAIResponsesRequest, info *relaycommon.RelayInfo) ([]string, error) {
sensitiveWords, err := service.CheckSensitiveInput(textRequest.Input)
return sensitiveWords, err
}
func getInputTokens(req *dto.OpenAIResponsesRequest, info *relaycommon.RelayInfo) (int, error) {
inputTokens, err := service.CountTokenInput(req.Input, req.Model)
info.PromptTokens = inputTokens
return inputTokens, err
}
func ResponsesHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
req, err := getAndValidateResponsesRequest(c)
if err != nil {
common.LogError(c, fmt.Sprintf("getAndValidateResponsesRequest error: %s", err.Error()))
return service.OpenAIErrorWrapperLocal(err, "invalid_responses_request", http.StatusBadRequest)
}
relayInfo := relaycommon.GenRelayInfoResponses(c, req)
if setting.ShouldCheckPromptSensitive() {
sensitiveWords, err := checkInputSensitive(req, relayInfo)
if err != nil {
common.LogWarn(c, fmt.Sprintf("user sensitive words detected: %s", strings.Join(sensitiveWords, ", ")))
return service.OpenAIErrorWrapperLocal(err, "check_request_sensitive_error", http.StatusBadRequest)
}
}
err = helper.ModelMappedHelper(c, relayInfo)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_mapped_error", http.StatusBadRequest)
}
req.Model = relayInfo.UpstreamModelName
if value, exists := c.Get("prompt_tokens"); exists {
promptTokens := value.(int)
relayInfo.SetPromptTokens(promptTokens)
} else {
promptTokens, err := getInputTokens(req, relayInfo)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "count_input_tokens_error", http.StatusBadRequest)
}
c.Set("prompt_tokens", promptTokens)
}
priceData, err := helper.ModelPriceHelper(c, relayInfo, relayInfo.PromptTokens, int(req.MaxOutputTokens))
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
// pre consume quota
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {
return openaiErr
}
defer func() {
if openaiErr != nil {
returnPreConsumedQuota(c, relayInfo, userQuota, preConsumedQuota)
}
}()
adaptor := GetAdaptor(relayInfo.ApiType)
if adaptor == nil {
return service.OpenAIErrorWrapperLocal(fmt.Errorf("invalid api type: %d", relayInfo.ApiType), "invalid_api_type", http.StatusBadRequest)
}
adaptor.Init(relayInfo)
var requestBody io.Reader
if model_setting.GetGlobalSettings().PassThroughRequestEnabled {
body, err := common.GetRequestBody(c)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "get_request_body_error", http.StatusInternalServerError)
}
requestBody = bytes.NewBuffer(body)
} else {
convertedRequest, err := adaptor.ConvertOpenAIResponsesRequest(c, relayInfo, *req)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "convert_request_error", http.StatusBadRequest)
}
jsonData, err := json.Marshal(convertedRequest)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "marshal_request_error", http.StatusInternalServerError)
}
// apply param override
if len(relayInfo.ParamOverride) > 0 {
reqMap := make(map[string]interface{})
err = json.Unmarshal(jsonData, &reqMap)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "param_override_unmarshal_failed", http.StatusInternalServerError)
}
for key, value := range relayInfo.ParamOverride {
reqMap[key] = value
}
jsonData, err = json.Marshal(reqMap)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "param_override_marshal_failed", http.StatusInternalServerError)
}
}
if common.DebugEnabled {
println("requestBody: ", string(jsonData))
}
requestBody = bytes.NewBuffer(jsonData)
}
var httpResp *http.Response
resp, err := adaptor.DoRequest(c, relayInfo, requestBody)
if err != nil {
return service.OpenAIErrorWrapper(err, "do_request_failed", http.StatusInternalServerError)
}
statusCodeMappingStr := c.GetString("status_code_mapping")
if resp != nil {
httpResp = resp.(*http.Response)
if httpResp.StatusCode != http.StatusOK {
openaiErr = service.RelayErrorHandler(httpResp, false)
// reset status code 重置状态码
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
return openaiErr
}
}
usage, openaiErr := adaptor.DoResponse(c, httpResp, relayInfo)
if openaiErr != nil {
// reset status code 重置状态码
service.ResetStatusCode(openaiErr, statusCodeMappingStr)
return openaiErr
}
if strings.HasPrefix(relayInfo.OriginModelName, "gpt-4o-audio") {
service.PostAudioConsumeQuota(c, relayInfo, usage.(*dto.Usage), preConsumedQuota, userQuota, priceData, "")
} else {
postConsumeQuota(c, relayInfo, usage.(*dto.Usage), preConsumedQuota, userQuota, priceData, "")
}
return nil
}
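One step of ResponsesHelper worth illustrating is the param override: after the adaptor converts the request, the JSON is unmarshalled into a map, every key from the channel's param_override is written over the map at the top level, and the result is re-marshalled. With purely illustrative keys and values, a converted request of

{"model": "gpt-4o", "input": "Hello", "stream": false}

combined with a channel param_override of

{"temperature": 0.2, "store": false}

is sent upstream as

{"model": "gpt-4o", "input": "Hello", "stream": false, "temperature": 0.2, "store": false}

Overrides are not merged recursively; an override key replaces the entire top-level value of the same name.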

View File

@@ -18,6 +18,7 @@ import (
"one-api/service"
"one-api/setting"
"one-api/setting/model_setting"
"one-api/setting/operation_setting"
"strings"
"time"
@@ -331,12 +332,14 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
useTimeSeconds := time.Now().Unix() - relayInfo.StartTime.Unix()
promptTokens := usage.PromptTokens
cacheTokens := usage.PromptTokensDetails.CachedTokens
imageTokens := usage.PromptTokensDetails.ImageTokens
completionTokens := usage.CompletionTokens
modelName := relayInfo.OriginModelName
tokenName := ctx.GetString("token_name")
completionRatio := priceData.CompletionRatio
cacheRatio := priceData.CacheRatio
imageRatio := priceData.ImageRatio
modelRatio := priceData.ModelRatio
groupRatio := priceData.GroupRatio
modelPrice := priceData.ModelPrice
@@ -344,9 +347,11 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
// Convert values to decimal for precise calculation
dPromptTokens := decimal.NewFromInt(int64(promptTokens))
dCacheTokens := decimal.NewFromInt(int64(cacheTokens))
dImageTokens := decimal.NewFromInt(int64(imageTokens))
dCompletionTokens := decimal.NewFromInt(int64(completionTokens))
dCompletionRatio := decimal.NewFromFloat(completionRatio)
dCacheRatio := decimal.NewFromFloat(cacheRatio)
dImageRatio := decimal.NewFromFloat(imageRatio)
dModelRatio := decimal.NewFromFloat(modelRatio)
dGroupRatio := decimal.NewFromFloat(groupRatio)
dModelPrice := decimal.NewFromFloat(modelPrice)
@@ -354,11 +359,46 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
ratio := dModelRatio.Mul(dGroupRatio)
// openai web search 工具计费 (billing for the OpenAI web search built-in tool)
var dWebSearchQuota decimal.Decimal
var webSearchPrice float64
if relayInfo.ResponsesUsageInfo != nil {
if webSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolWebSearchPreview]; exists && webSearchTool.CallCount > 0 {
// 计算 web search 调用的配额 (配额 = 价格 * 调用次数 / 1000 * 分组倍率) (quota = price * call count / 1000 * group ratio)
webSearchPrice = operation_setting.GetWebSearchPricePerThousand(modelName, webSearchTool.SearchContextSize)
dWebSearchQuota = decimal.NewFromFloat(webSearchPrice).
Mul(decimal.NewFromInt(int64(webSearchTool.CallCount))).
Div(decimal.NewFromInt(1000)).Mul(dGroupRatio).Mul(dQuotaPerUnit)
extraContent += fmt.Sprintf("Web Search 调用 %d 次,上下文大小 %s调用花费 $%s",
webSearchTool.CallCount, webSearchTool.SearchContextSize, dWebSearchQuota.String())
}
}
// file search tool 计费 (billing for the file search built-in tool)
var dFileSearchQuota decimal.Decimal
var fileSearchPrice float64
if relayInfo.ResponsesUsageInfo != nil {
if fileSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolFileSearch]; exists && fileSearchTool.CallCount > 0 {
fileSearchPrice = operation_setting.GetFileSearchPricePerThousand()
dFileSearchQuota = decimal.NewFromFloat(fileSearchPrice).
Mul(decimal.NewFromInt(int64(fileSearchTool.CallCount))).
Div(decimal.NewFromInt(1000)).Mul(dGroupRatio).Mul(dQuotaPerUnit)
extraContent += fmt.Sprintf("File Search 调用 %d 次,调用花费 $%s",
fileSearchTool.CallCount, dFileSearchQuota.String())
}
}
var quotaCalculateDecimal decimal.Decimal
if !priceData.UsePrice {
nonCachedTokens := dPromptTokens.Sub(dCacheTokens)
cachedTokensWithRatio := dCacheTokens.Mul(dCacheRatio)
promptQuota := nonCachedTokens.Add(cachedTokensWithRatio)
if imageTokens > 0 {
nonImageTokens := dPromptTokens.Sub(dImageTokens)
imageTokensWithRatio := dImageTokens.Mul(dImageRatio)
promptQuota = nonImageTokens.Add(imageTokensWithRatio)
}
completionQuota := dCompletionTokens.Mul(dCompletionRatio)
quotaCalculateDecimal = promptQuota.Add(completionQuota).Mul(ratio)
@@ -369,6 +409,9 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
} else {
quotaCalculateDecimal = dModelPrice.Mul(dQuotaPerUnit).Mul(dGroupRatio)
}
// 添加 responses tools call 调用的配额 (add the quota from Responses built-in tool calls)
quotaCalculateDecimal = quotaCalculateDecimal.Add(dWebSearchQuota)
quotaCalculateDecimal = quotaCalculateDecimal.Add(dFileSearchQuota)
quota := int(quotaCalculateDecimal.Round(0).IntPart())
totalTokens := promptTokens + completionTokens
@@ -414,6 +457,25 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
logContent += ", " + extraContent
}
other := service.GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, cacheTokens, cacheRatio, modelPrice)
if imageTokens != 0 {
other["image"] = true
other["image_ratio"] = imageRatio
other["image_output"] = imageTokens
}
if !dWebSearchQuota.IsZero() && relayInfo.ResponsesUsageInfo != nil {
if webSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolWebSearchPreview]; exists {
other["web_search"] = true
other["web_search_call_count"] = webSearchTool.CallCount
other["web_search_price"] = webSearchPrice
}
}
if !dFileSearchQuota.IsZero() && relayInfo.ResponsesUsageInfo != nil {
if fileSearchTool, exists := relayInfo.ResponsesUsageInfo.BuiltInTools[dto.BuildInToolFileSearch]; exists {
other["file_search"] = true
other["file_search_call_count"] = fileSearchTool.CallCount
other["file_search_price"] = fileSearchPrice
}
}
model.RecordConsumeLog(ctx, relayInfo.UserId, relayInfo.ChannelId, promptTokens, completionTokens, logModel,
tokenName, quota, logContent, relayInfo.TokenId, userQuota, int(useTimeSeconds), relayInfo.IsStream, relayInfo.Group, other)
}
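A worked example of the extended quota calculation above, using assumed numbers rather than real ratios: promptTokens = 1000 of which imageTokens = 200, completionTokens = 500, cacheTokens = 0, imageRatio = 2, completionRatio = 4, modelRatio = 2.5, groupRatio = 1. Because imageTokens > 0, promptQuota = (1000 - 200) + 200 * 2 = 1200; completionQuota = 500 * 4 = 2000; the token-based quota is (1200 + 2000) * 2.5 * 1 = 8000. If the Responses web search tool was called 2 times at medium context on a high-tier model ($35.00 per 1000 calls), an extra 35 * 2 / 1000 * groupRatio * QuotaPerUnit is added on top, and file search follows the same pattern at its flat per-thousand price.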

View File

@@ -1,10 +1,11 @@
package router
import (
"github.com/gin-gonic/gin"
"one-api/controller"
"one-api/middleware"
"one-api/relay"
"github.com/gin-gonic/gin"
)
func SetRelayRouter(router *gin.Engine) {
@@ -40,13 +41,14 @@ func SetRelayRouter(router *gin.Engine) {
httpRouter.POST("/chat/completions", controller.Relay)
httpRouter.POST("/edits", controller.Relay)
httpRouter.POST("/images/generations", controller.Relay)
httpRouter.POST("/images/edits", controller.RelayNotImplemented)
httpRouter.POST("/images/edits", controller.Relay)
httpRouter.POST("/images/variations", controller.RelayNotImplemented)
httpRouter.POST("/embeddings", controller.Relay)
httpRouter.POST("/engines/:model/embeddings", controller.Relay)
httpRouter.POST("/audio/transcriptions", controller.Relay)
httpRouter.POST("/audio/translations", controller.Relay)
httpRouter.POST("/audio/speech", controller.Relay)
httpRouter.POST("/responses", controller.Relay)
httpRouter.GET("/files", controller.RelayNotImplemented)
httpRouter.POST("/files", controller.RelayNotImplemented)
httpRouter.DELETE("/files/:id", controller.RelayNotImplemented)

View File

@@ -43,7 +43,7 @@ func InitTokenEncoders() {
} else {
tokenEncoderMap[model] = defaultTokenEncoder
}
} else if strings.HasPrefix(model, "o1") {
} else if strings.HasPrefix(model, "o") {
tokenEncoderMap[model] = o200kTokenEncoder
} else {
tokenEncoderMap[model] = defaultTokenEncoder
@@ -400,6 +400,8 @@ func CountTokenMessages(info *relaycommon.RelayInfo, messages []dto.Message, mod
tokenNum += 100
} else if m.Type == dto.ContentTypeFile {
tokenNum += 5000
} else if m.Type == dto.ContentTypeVideoUrl {
tokenNum += 5000
} else {
tokenNum += getTokenNum(tokenEncoder, m.Text)
}

View File

@@ -6,8 +6,11 @@ import (
// GeminiSettings 定义Gemini模型的配置
type GeminiSettings struct {
SafetySettings map[string]string `json:"safety_settings"`
VersionSettings map[string]string `json:"version_settings"`
SafetySettings map[string]string `json:"safety_settings"`
VersionSettings map[string]string `json:"version_settings"`
SupportedImagineModels []string `json:"supported_imagine_models"`
ThinkingAdapterEnabled bool `json:"thinking_adapter_enabled"`
ThinkingAdapterBudgetTokensPercentage float64 `json:"thinking_adapter_budget_tokens_percentage"`
}
// 默认配置
@@ -20,6 +23,12 @@ var defaultGeminiSettings = GeminiSettings{
"default": "v1beta",
"gemini-1.0-pro": "v1",
},
SupportedImagineModels: []string{
"gemini-2.0-flash-exp-image-generation",
"gemini-2.0-flash-exp",
},
ThinkingAdapterEnabled: false,
ThinkingAdapterBudgetTokensPercentage: 0.6,
}
// 全局实例
@@ -50,3 +59,12 @@ func GetGeminiVersionSetting(key string) string {
}
return geminiSettings.VersionSettings["default"]
}
func IsGeminiModelSupportImagine(model string) bool {
for _, v := range geminiSettings.SupportedImagineModels {
if v == model {
return true
}
}
return false
}
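Based on the json tags above, a Gemini settings payload that enables the new fields might look like the following sketch (the safety_settings values are placeholders, and the option name under which this JSON is stored is not shown in this diff):

{
  "safety_settings": { "default": "BLOCK_NONE" },
  "version_settings": { "default": "v1beta", "gemini-1.0-pro": "v1" },
  "supported_imagine_models": [
    "gemini-2.0-flash-exp-image-generation",
    "gemini-2.0-flash-exp"
  ],
  "thinking_adapter_enabled": true,
  "thinking_adapter_budget_tokens_percentage": 0.6
}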

View File

@@ -51,26 +51,27 @@ var defaultModelRatio = map[string]float64{
"gpt-4o-realtime-preview-2024-12-17": 2.5,
"gpt-4o-mini-realtime-preview": 0.3,
"gpt-4o-mini-realtime-preview-2024-12-17": 0.3,
"o1": 7.5,
"o1-2024-12-17": 7.5,
"o1-preview": 7.5,
"o1-preview-2024-09-12": 7.5,
"o1-mini": 0.55,
"o1-mini-2024-09-12": 0.55,
"o3-mini": 0.55,
"o3-mini-2025-01-31": 0.55,
"o3-mini-high": 0.55,
"o3-mini-2025-01-31-high": 0.55,
"o3-mini-low": 0.55,
"o3-mini-2025-01-31-low": 0.55,
"o3-mini-medium": 0.55,
"o3-mini-2025-01-31-medium": 0.55,
"gpt-4o-mini": 0.075,
"gpt-4o-mini-2024-07-18": 0.075,
"gpt-4-turbo": 5, // $0.01 / 1K tokens
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
"gpt-4.5-preview": 37.5,
"gpt-4.5-preview-2025-02-27": 37.5,
"gpt-image-1": 2.5,
"o1": 7.5,
"o1-2024-12-17": 7.5,
"o1-preview": 7.5,
"o1-preview-2024-09-12": 7.5,
"o1-mini": 0.55,
"o1-mini-2024-09-12": 0.55,
"o3-mini": 0.55,
"o3-mini-2025-01-31": 0.55,
"o3-mini-high": 0.55,
"o3-mini-2025-01-31-high": 0.55,
"o3-mini-low": 0.55,
"o3-mini-2025-01-31-low": 0.55,
"o3-mini-medium": 0.55,
"o3-mini-2025-01-31-medium": 0.55,
"gpt-4o-mini": 0.075,
"gpt-4o-mini-2024-07-18": 0.075,
"gpt-4-turbo": 5, // $0.01 / 1K tokens
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
"gpt-4.5-preview": 37.5,
"gpt-4.5-preview-2025-02-27": 37.5,
//"gpt-3.5-turbo-0301": 0.75, //deprecated
"gpt-3.5-turbo": 0.25,
"gpt-3.5-turbo-0613": 0.75,
@@ -86,89 +87,92 @@ var defaultModelRatio = map[string]float64{
"text-curie-001": 1,
//"text-davinci-002": 10,
//"text-davinci-003": 10,
"text-davinci-edit-001": 10,
"code-davinci-edit-001": 10,
"whisper-1": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
"tts-1": 7.5, // 1k characters -> $0.015
"tts-1-1106": 7.5, // 1k characters -> $0.015
"tts-1-hd": 15, // 1k characters -> $0.03
"tts-1-hd-1106": 15, // 1k characters -> $0.03
"davinci": 10,
"curie": 10,
"babbage": 10,
"ada": 10,
"text-embedding-3-small": 0.01,
"text-embedding-3-large": 0.065,
"text-embedding-ada-002": 0.05,
"text-search-ada-doc-001": 10,
"text-moderation-stable": 0.1,
"text-moderation-latest": 0.1,
"claude-instant-1": 0.4, // $0.8 / 1M tokens
"claude-2.0": 4, // $8 / 1M tokens
"claude-2.1": 4, // $8 / 1M tokens
"claude-3-haiku-20240307": 0.125, // $0.25 / 1M tokens
"claude-3-5-haiku-20241022": 0.5, // $1 / 1M tokens
"claude-3-sonnet-20240229": 1.5, // $3 / 1M tokens
"claude-3-5-sonnet-20240620": 1.5,
"claude-3-5-sonnet-20241022": 1.5,
"claude-3-7-sonnet-20250219": 1.5,
"claude-3-7-sonnet-20250219-thinking": 1.5,
"claude-3-opus-20240229": 7.5, // $15 / 1M tokens
"ERNIE-4.0-8K": 0.120 * RMB,
"ERNIE-3.5-8K": 0.012 * RMB,
"ERNIE-3.5-8K-0205": 0.024 * RMB,
"ERNIE-3.5-8K-1222": 0.012 * RMB,
"ERNIE-Bot-8K": 0.024 * RMB,
"ERNIE-3.5-4K-0205": 0.012 * RMB,
"ERNIE-Speed-8K": 0.004 * RMB,
"ERNIE-Speed-128K": 0.004 * RMB,
"ERNIE-Lite-8K-0922": 0.008 * RMB,
"ERNIE-Lite-8K-0308": 0.003 * RMB,
"ERNIE-Tiny-8K": 0.001 * RMB,
"BLOOMZ-7B": 0.004 * RMB,
"Embedding-V1": 0.002 * RMB,
"bge-large-zh": 0.002 * RMB,
"bge-large-en": 0.002 * RMB,
"tao-8k": 0.002 * RMB,
"PaLM-2": 1,
"gemini-1.5-pro-latest": 1.25, // $3.5 / 1M tokens
"gemini-1.5-flash-latest": 0.075,
"gemini-2.0-flash": 0.05,
"gemini-2.5-pro-exp-03-25": 0.625,
"gemini-2.5-pro-preview-03-25": 0.625,
"text-embedding-004": 0.001,
"chatglm_turbo": 0.3572, // ¥0.005 / 1k tokens
"chatglm_pro": 0.7143, // ¥0.01 / 1k tokens
"chatglm_std": 0.3572, // ¥0.005 / 1k tokens
"chatglm_lite": 0.1429, // ¥0.002 / 1k tokens
"glm-4": 7.143, // ¥0.1 / 1k tokens
"glm-4v": 0.05 * RMB, // ¥0.05 / 1k tokens
"glm-4-alltools": 0.1 * RMB, // ¥0.1 / 1k tokens
"glm-3-turbo": 0.3572,
"glm-4-plus": 0.05 * RMB,
"glm-4-0520": 0.1 * RMB,
"glm-4-air": 0.001 * RMB,
"glm-4-airx": 0.01 * RMB,
"glm-4-long": 0.001 * RMB,
"glm-4-flash": 0,
"glm-4v-plus": 0.01 * RMB,
"qwen-turbo": 0.8572, // ¥0.012 / 1k tokens
"qwen-plus": 10, // ¥0.14 / 1k tokens
"text-embedding-v1": 0.05, // ¥0.0007 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v4.0": 1.2858,
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"360gpt-turbo": 0.0858, // ¥0.0012 / 1k tokens
"360gpt-turbo-responsibility-8k": 0.8572, // ¥0.012 / 1k tokens
"360gpt-pro": 0.8572, // ¥0.012 / 1k tokens
"360gpt2-pro": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"hunyuan": 7.143, // ¥0.1 / 1k tokens // https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
"text-davinci-edit-001": 10,
"code-davinci-edit-001": 10,
"whisper-1": 15, // $0.006 / minute -> $0.006 / 150 words -> $0.006 / 200 tokens -> $0.03 / 1k tokens
"tts-1": 7.5, // 1k characters -> $0.015
"tts-1-1106": 7.5, // 1k characters -> $0.015
"tts-1-hd": 15, // 1k characters -> $0.03
"tts-1-hd-1106": 15, // 1k characters -> $0.03
"davinci": 10,
"curie": 10,
"babbage": 10,
"ada": 10,
"text-embedding-3-small": 0.01,
"text-embedding-3-large": 0.065,
"text-embedding-ada-002": 0.05,
"text-search-ada-doc-001": 10,
"text-moderation-stable": 0.1,
"text-moderation-latest": 0.1,
"claude-instant-1": 0.4, // $0.8 / 1M tokens
"claude-2.0": 4, // $8 / 1M tokens
"claude-2.1": 4, // $8 / 1M tokens
"claude-3-haiku-20240307": 0.125, // $0.25 / 1M tokens
"claude-3-5-haiku-20241022": 0.5, // $1 / 1M tokens
"claude-3-sonnet-20240229": 1.5, // $3 / 1M tokens
"claude-3-5-sonnet-20240620": 1.5,
"claude-3-5-sonnet-20241022": 1.5,
"claude-3-7-sonnet-20250219": 1.5,
"claude-3-7-sonnet-20250219-thinking": 1.5,
"claude-3-opus-20240229": 7.5, // $15 / 1M tokens
"ERNIE-4.0-8K": 0.120 * RMB,
"ERNIE-3.5-8K": 0.012 * RMB,
"ERNIE-3.5-8K-0205": 0.024 * RMB,
"ERNIE-3.5-8K-1222": 0.012 * RMB,
"ERNIE-Bot-8K": 0.024 * RMB,
"ERNIE-3.5-4K-0205": 0.012 * RMB,
"ERNIE-Speed-8K": 0.004 * RMB,
"ERNIE-Speed-128K": 0.004 * RMB,
"ERNIE-Lite-8K-0922": 0.008 * RMB,
"ERNIE-Lite-8K-0308": 0.003 * RMB,
"ERNIE-Tiny-8K": 0.001 * RMB,
"BLOOMZ-7B": 0.004 * RMB,
"Embedding-V1": 0.002 * RMB,
"bge-large-zh": 0.002 * RMB,
"bge-large-en": 0.002 * RMB,
"tao-8k": 0.002 * RMB,
"PaLM-2": 1,
"gemini-1.5-pro-latest": 1.25, // $3.5 / 1M tokens
"gemini-1.5-flash-latest": 0.075,
"gemini-2.0-flash": 0.05,
"gemini-2.5-pro-exp-03-25": 0.625,
"gemini-2.5-pro-preview-03-25": 0.625,
"gemini-2.5-flash-preview-04-17": 0.075,
"gemini-2.5-flash-preview-04-17-thinking": 0.075,
"gemini-2.5-flash-preview-04-17-nothinking": 0.075,
"text-embedding-004": 0.001,
"chatglm_turbo": 0.3572, // ¥0.005 / 1k tokens
"chatglm_pro": 0.7143, // ¥0.01 / 1k tokens
"chatglm_std": 0.3572, // ¥0.005 / 1k tokens
"chatglm_lite": 0.1429, // ¥0.002 / 1k tokens
"glm-4": 7.143, // ¥0.1 / 1k tokens
"glm-4v": 0.05 * RMB, // ¥0.05 / 1k tokens
"glm-4-alltools": 0.1 * RMB, // ¥0.1 / 1k tokens
"glm-3-turbo": 0.3572,
"glm-4-plus": 0.05 * RMB,
"glm-4-0520": 0.1 * RMB,
"glm-4-air": 0.001 * RMB,
"glm-4-airx": 0.01 * RMB,
"glm-4-long": 0.001 * RMB,
"glm-4-flash": 0,
"glm-4v-plus": 0.01 * RMB,
"qwen-turbo": 0.8572, // ¥0.012 / 1k tokens
"qwen-plus": 10, // ¥0.14 / 1k tokens
"text-embedding-v1": 0.05, // ¥0.0007 / 1k tokens
"SparkDesk-v1.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v2.1": 1.2858, // ¥0.018 / 1k tokens
"SparkDesk-v3.1": 1.2858, // 0.018 / 1k tokens
"SparkDesk-v3.5": 1.2858, // 0.018 / 1k tokens
"SparkDesk-v4.0": 1.2858,
"360GPT_S2_V9": 0.8572, // ¥0.012 / 1k tokens
"360gpt-turbo": 0.0858, // ¥0.0012 / 1k tokens
"360gpt-turbo-responsibility-8k": 0.8572, // ¥0.012 / 1k tokens
"360gpt-pro": 0.8572, // ¥0.012 / 1k tokens
"360gpt2-pro": 0.8572, // ¥0.012 / 1k tokens
"embedding-bert-512-v1": 0.0715, // ¥0.001 / 1k tokens
"embedding_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"semantic_similarity_s1_v1": 0.0715, // ¥0.001 / 1k tokens
"hunyuan": 7.143, // ¥0.1 / 1k tokens // https://cloud.tencent.com/document/product/1729/97731#e0e6be58-60c8-469f-bdeb-6c264ce3b4d0
// https://platform.lingyiwanwu.com/docs#-计费单元
// 已经按照 7.2 来换算美元价格
"yi-34b-chat-0205": 0.18,
@@ -252,10 +256,11 @@ var defaultCompletionRatio = map[string]float64{
"gpt-4-gizmo-*": 2,
"gpt-4o-gizmo-*": 3,
"gpt-4-all": 2,
"gpt-image-1": 8,
}
// InitModelSettings initializes all model related settings maps
func InitModelSettings() {
// InitRatioSettings initializes all model related settings maps
func InitRatioSettings() {
// Initialize modelPriceMap
modelPriceMapMutex.Lock()
modelPriceMap = defaultModelPrice
@@ -276,7 +281,11 @@ func InitModelSettings() {
cacheRatioMap = defaultCacheRatio
cacheRatioMapMutex.Unlock()
common.SysLog("model settings initialized")
// initialize imageRatioMap
imageRatioMapMutex.Lock()
imageRatioMap = defaultImageRatio
imageRatioMapMutex.Unlock()
}
func GetModelPriceMap() map[string]float64 {
@@ -459,6 +468,12 @@ func getHardcodedCompletionModelRatio(name string) (float64, bool) {
return 4, true
} else if strings.HasPrefix(name, "gemini-2.5-pro-preview") {
return 8, true
} else if strings.HasPrefix(name, "gemini-2.5-flash-preview") {
if strings.HasSuffix(name, "-nothinking") {
return 4, false
} else {
return 3.5 / 0.6, false
}
}
return 4, false
}
@@ -502,18 +517,18 @@ func getHardcodedCompletionModelRatio(name string) (float64, bool) {
func GetAudioRatio(name string) float64 {
if strings.Contains(name, "-realtime") {
if strings.HasSuffix(name, "gpt-4o-realtime-preview-2024-12-17") {
if strings.HasSuffix(name, "gpt-4o-realtime-preview") {
return 8
} else if strings.Contains(name, "mini") {
} else if strings.Contains(name, "gpt-4o-mini-realtime-preview") {
return 10 / 0.6
} else {
return 20
}
}
if strings.Contains(name, "-audio") {
if strings.HasSuffix(name, "gpt-4o-audio-preview-2024-12-17") {
return 16
} else if strings.Contains(name, "mini") {
if strings.HasPrefix(name, "gpt-4o-audio-preview") {
return 40 / 2.5
} else if strings.HasPrefix(name, "gpt-4o-mini-audio-preview") {
return 10 / 0.15
} else {
return 40
@@ -541,3 +556,36 @@ func ModelRatio2JSONString() string {
}
return string(jsonBytes)
}
var defaultImageRatio = map[string]float64{
"gpt-image-1": 2,
}
var imageRatioMap map[string]float64
var imageRatioMapMutex sync.RWMutex
func ImageRatio2JSONString() string {
imageRatioMapMutex.RLock()
defer imageRatioMapMutex.RUnlock()
jsonBytes, err := json.Marshal(imageRatioMap)
if err != nil {
common.SysError("error marshalling image ratio: " + err.Error())
}
return string(jsonBytes)
}
func UpdateImageRatioByJSONString(jsonStr string) error {
imageRatioMapMutex.Lock()
defer imageRatioMapMutex.Unlock()
imageRatioMap = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &imageRatioMap)
}
func GetImageRatio(name string) (float64, bool) {
imageRatioMapMutex.RLock()
defer imageRatioMapMutex.RUnlock()
ratio, ok := imageRatioMap[name]
if !ok {
return 1, false // Default to 1 if not found
}
return ratio, true
}
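A short usage fragment for the new image-ratio lookup (not a complete program; it assumes the project's operation_setting import and the defaults above):

// returns 2, true for gpt-image-1 with defaultImageRatio; unknown models return 1, false
imageRatio, configured := operation_setting.GetImageRatio("gpt-image-1")
if !configured {
	// ratio falls back to 1, so image tokens are billed like ordinary prompt tokens
}
_ = imageRatio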

View File

@@ -0,0 +1,57 @@
package operation_setting
import "strings"
const (
// Web search
WebSearchHighTierModelPriceLow = 30.00
WebSearchHighTierModelPriceMedium = 35.00
WebSearchHighTierModelPriceHigh = 50.00
WebSearchPriceLow = 25.00
WebSearchPriceMedium = 27.50
WebSearchPriceHigh = 30.00
// File search
FileSearchPrice = 2.5
)
func GetWebSearchPricePerThousand(modelName string, contextSize string) float64 {
// 确定模型类型 (determine the model tier)
// https://platform.openai.com/docs/pricing Web search 价格按模型类型和 search context size 收费 (web search is billed by model tier and search context size)
// gpt-4.1, gpt-4o, or gpt-4o-search-preview 更贵;gpt-4.1-mini, gpt-4o-mini, gpt-4o-mini-search-preview 更便宜 (the first group is the pricier high tier; the mini variants are cheaper)
isHighTierModel := (strings.HasPrefix(modelName, "gpt-4.1") || strings.HasPrefix(modelName, "gpt-4o")) &&
!strings.Contains(modelName, "mini")
// 确定 search context size 对应的价格 (pick the price for the requested search context size)
var priceWebSearchPerThousandCalls float64
switch contextSize {
case "low":
if isHighTierModel {
priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceLow
} else {
priceWebSearchPerThousandCalls = WebSearchPriceLow
}
case "medium":
if isHighTierModel {
priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceMedium
} else {
priceWebSearchPerThousandCalls = WebSearchPriceMedium
}
case "high":
if isHighTierModel {
priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceHigh
} else {
priceWebSearchPerThousandCalls = WebSearchPriceHigh
}
default:
// search context size 默认为 medium (search context size defaults to medium)
if isHighTierModel {
priceWebSearchPerThousandCalls = WebSearchHighTierModelPriceMedium
} else {
priceWebSearchPerThousandCalls = WebSearchPriceMedium
}
}
return priceWebSearchPerThousandCalls
}
func GetFileSearchPricePerThousand() float64 {
return FileSearchPrice
}
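Worked example of the pricing table above: gpt-4o counts as a high-tier model (prefix gpt-4o, no "mini"), so a medium search context is billed at $35.00 per 1000 calls, while gpt-4o-mini falls to the standard tier at $27.50. Three medium-context calls on gpt-4o therefore cost 35 * 3 / 1000 = $0.105 before the group ratio, and postConsumeQuota converts that to quota as 0.105 * groupRatio * QuotaPerUnit. File search is flat: $2.50 per 1000 calls regardless of model.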

View File

@@ -1 +1 @@
module.exports = require("@so1ve/prettier-config");
module.exports = require('@so1ve/prettier-config');

web/pnpm-lock.yaml (generated, 2633 lines changed)

File diff suppressed because it is too large.

Binary file not shown (image, 251 KiB).

View File

@@ -21,9 +21,9 @@ import Chat2Link from './pages/Chat2Link';
import { Layout } from '@douyinfe/semi-ui';
import Midjourney from './pages/Midjourney';
import Pricing from './pages/Pricing/index.js';
import Task from "./pages/Task/index.js";
import Task from './pages/Task/index.js';
import Playground from './pages/Playground/Playground.js';
import OAuth2Callback from "./components/OAuth2Callback.js";
import OAuth2Callback from './components/OAuth2Callback.js';
import PersonalSetting from './components/PersonalSetting.js';
import Setup from './pages/Setup/index.js';
import SetupCheck from './components/SetupCheck';
@@ -34,7 +34,7 @@ const About = lazy(() => import('./pages/About'));
function App() {
const location = useLocation();
return (
<SetupCheck>
<Routes>
@@ -167,18 +167,18 @@ function App() {
}
/>
<Route
path='/oauth/oidc'
element={
<Suspense fallback={<Loading></Loading>}>
<OAuth2Callback type='oidc'></OAuth2Callback>
</Suspense>
}
path='/oauth/oidc'
element={
<Suspense fallback={<Loading></Loading>}>
<OAuth2Callback type='oidc'></OAuth2Callback>
</Suspense>
}
/>
<Route
path='/oauth/linuxdo'
element={
<Suspense fallback={<Loading></Loading>} key={location.pathname}>
<OAuth2Callback type='linuxdo'></OAuth2Callback>
<OAuth2Callback type='linuxdo'></OAuth2Callback>
</Suspense>
}
/>
@@ -275,19 +275,19 @@ function App() {
}
/>
{/* 方便使用chat2link直接跳转聊天... */}
<Route
path='/chat2link'
element={
<PrivateRoute>
<Suspense fallback={<Loading></Loading>} key={location.pathname}>
<Chat2Link />
</Suspense>
</PrivateRoute>
}
/>
<Route path='*' element={<NotFound />} />
</Routes>
</SetupCheck>
<Route
path='/chat2link'
element={
<PrivateRoute>
<Suspense fallback={<Loading></Loading>} key={location.pathname}>
<Chat2Link />
</Suspense>
</PrivateRoute>
}
/>
<Route path='*' element={<NotFound />} />
</Routes>
</SetupCheck>
);
}

File diff suppressed because it is too large Load Diff

View File

@@ -28,11 +28,7 @@ const FooterBar = () => {
New API {import.meta.env.VITE_REACT_APP_VERSION}{' '}
</a>
{t('由')}{' '}
<a
href='https://github.com/Calcium-Ion'
target='_blank'
rel='noreferrer'
>
<a href='https://github.com/Calcium-Ion' target='_blank' rel='noreferrer'>
Calcium-Ion
</a>{' '}
{t('开发,基于')}{' '}
@@ -59,10 +55,12 @@ const FooterBar = () => {
}, []);
return (
<div style={{
textAlign: 'center',
paddingBottom: '5px',
}}>
<div
style={{
textAlign: 'center',
paddingBottom: '5px',
}}
>
{footer ? (
<div
className='custom-footer'

View File

@@ -13,18 +13,28 @@ import {
IconClose,
IconHelpCircle,
IconHome,
IconHomeStroked, IconIndentLeft,
IconHomeStroked,
IconIndentLeft,
IconComment,
IconKey, IconMenu,
IconKey,
IconMenu,
IconNoteMoneyStroked,
IconPriceTag,
IconUser,
IconLanguage,
IconInfoCircle,
IconCreditCard,
IconTerminal
IconTerminal,
} from '@douyinfe/semi-icons';
import { Avatar, Button, Dropdown, Layout, Nav, Switch, Tag } from '@douyinfe/semi-ui';
import {
Avatar,
Button,
Dropdown,
Layout,
Nav,
Switch,
Tag,
} from '@douyinfe/semi-ui';
import { stringToColor } from '../helpers/render';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
import { StyleContext } from '../context/Style/index.js';
@@ -36,20 +46,20 @@ const headerStyle = {
borderBottom: '1px solid var(--semi-color-border)',
background: 'var(--semi-color-bg-0)',
transition: 'all 0.3s ease',
width: '100%'
width: '100%',
};
// 自定义顶部栏按钮样式
const headerItemStyle = {
borderRadius: '4px',
margin: '0 4px',
transition: 'all 0.3s ease'
transition: 'all 0.3s ease',
};
// 自定义顶部栏按钮悬停样式
const headerItemHoverStyle = {
backgroundColor: 'var(--semi-color-primary-light-default)',
color: 'var(--semi-color-primary)'
color: 'var(--semi-color-primary)',
};
// 自定义顶部栏Logo样式
@@ -58,23 +68,24 @@ const logoStyle = {
alignItems: 'center',
gap: '10px',
padding: '0 10px',
height: '100%'
height: '100%',
};
// 自定义顶部栏系统名称样式
const systemNameStyle = {
fontWeight: 'bold',
fontSize: '18px',
background: 'linear-gradient(45deg, var(--semi-color-primary), var(--semi-color-secondary))',
background:
'linear-gradient(45deg, var(--semi-color-primary), var(--semi-color-secondary))',
WebkitBackgroundClip: 'text',
WebkitTextFillColor: 'transparent',
padding: '0 5px'
padding: '0 5px',
};
// 自定义顶部栏按钮图标样式
const headerIconStyle = {
fontSize: '18px',
transition: 'all 0.3s ease'
transition: 'all 0.3s ease',
};
// 自定义头像样式
@@ -82,19 +93,19 @@ const avatarStyle = {
margin: '4px',
cursor: 'pointer',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)',
transition: 'all 0.3s ease'
transition: 'all 0.3s ease',
};
// 自定义下拉菜单样式
const dropdownStyle = {
borderRadius: '8px',
boxShadow: '0 4px 12px rgba(0, 0, 0, 0.15)',
overflow: 'hidden'
overflow: 'hidden',
};
// 自定义主题切换开关样式
const switchStyle = {
margin: '0 8px'
margin: '0 8px',
};
const HeaderBar = () => {
@@ -109,8 +120,7 @@ const HeaderBar = () => {
const logo = getLogo();
const currentDate = new Date();
// enable fireworks on new year(1.1 and 2.9-2.24)
const isNewYear =
(currentDate.getMonth() === 0 && currentDate.getDate() === 1);
const isNewYear = currentDate.getMonth() === 0 && currentDate.getDate() === 1;
// Check if self-use mode is enabled
const isSelfUseMode = statusState?.status?.self_use_mode_enabled || false;
@@ -137,13 +147,17 @@ const HeaderBar = () => {
icon: <IconPriceTag style={headerIconStyle} />,
},
// Only include the docs button if docsLink exists
...(docsLink ? [{
text: t('文档'),
itemKey: 'docs',
isExternal: true,
externalLink: docsLink,
icon: <IconHelpCircle style={headerIconStyle} />,
}] : []),
...(docsLink
? [
{
text: t('文档'),
itemKey: 'docs',
isExternal: true,
externalLink: docsLink,
icon: <IconHelpCircle style={headerIconStyle} />,
},
]
: []),
{
text: t('关于'),
itemKey: 'about',
@@ -232,30 +246,38 @@ const HeaderBar = () => {
chat: '/chat',
};
return (
<div onClick={(e) => {
if (props.itemKey === 'home') {
styleDispatch({ type: 'SET_INNER_PADDING', payload: false });
styleDispatch({ type: 'SET_SIDER', payload: false });
} else {
styleDispatch({ type: 'SET_INNER_PADDING', payload: true });
if (!styleState.isMobile) {
styleDispatch({ type: 'SET_SIDER', payload: true });
<div
onClick={(e) => {
if (props.itemKey === 'home') {
styleDispatch({
type: 'SET_INNER_PADDING',
payload: false,
});
styleDispatch({ type: 'SET_SIDER', payload: false });
} else {
styleDispatch({
type: 'SET_INNER_PADDING',
payload: true,
});
if (!styleState.isMobile) {
styleDispatch({ type: 'SET_SIDER', payload: true });
}
}
}
}}>
}}
>
{props.isExternal ? (
<a
className="header-bar-text"
className='header-bar-text'
style={{ textDecoration: 'none' }}
href={props.externalLink}
target="_blank"
rel="noopener noreferrer"
target='_blank'
rel='noopener noreferrer'
>
{itemElement}
</a>
) : (
<Link
className="header-bar-text"
className='header-bar-text'
style={{ textDecoration: 'none' }}
to={routerMap[props.itemKey]}
>
@@ -268,67 +290,98 @@ const HeaderBar = () => {
selectedKeys={[]}
// items={headerButtons}
onSelect={(key) => {}}
header={styleState.isMobile?{
logo: (
<div style={{ display: 'flex', alignItems: 'center', position: 'relative' }}>
{
!styleState.showSider ?
<Button icon={<IconMenu />} theme="light" aria-label={t('展开侧边栏')} onClick={
() => styleDispatch({ type: 'SET_SIDER', payload: true })
} />:
<Button icon={<IconIndentLeft />} theme="light" aria-label={t('闭侧边栏')} onClick={
() => styleDispatch({ type: 'SET_SIDER', payload: false })
} />
header={
styleState.isMobile
? {
logo: (
<div
style={{
display: 'flex',
alignItems: 'center',
position: 'relative',
}}
>
{!styleState.showSider ? (
<Button
icon={<IconMenu />}
theme='light'
aria-label={t('展开侧边栏')}
onClick={() =>
styleDispatch({
type: 'SET_SIDER',
payload: true,
})
}
/>
) : (
<Button
icon={<IconIndentLeft />}
theme='light'
aria-label={t('闭侧边栏')}
onClick={() =>
styleDispatch({
type: 'SET_SIDER',
payload: false,
})
}
/>
)}
{(isSelfUseMode || isDemoSiteMode) && (
<Tag
color={isSelfUseMode ? 'purple' : 'blue'}
style={{
position: 'absolute',
top: '-8px',
right: '-15px',
fontSize: '0.7rem',
padding: '0 4px',
height: 'auto',
lineHeight: '1.2',
zIndex: 1,
pointerEvents: 'none',
}}
>
{isSelfUseMode ? t('自用模式') : t('演示站点')}
</Tag>
)}
</div>
),
}
{(isSelfUseMode || isDemoSiteMode) && (
<Tag
color={isSelfUseMode ? 'purple' : 'blue'}
style={{
position: 'absolute',
top: '-8px',
right: '-15px',
fontSize: '0.7rem',
padding: '0 4px',
height: 'auto',
lineHeight: '1.2',
zIndex: 1,
pointerEvents: 'none'
}}
>
{isSelfUseMode ? t('自用模式') : t('演示站点')}
</Tag>
)}
</div>
),
}:{
logo: (
<div style={logoStyle}>
<img src={logo} alt='logo' style={{ height: '28px' }} />
</div>
),
text: (
<div style={{ position: 'relative', display: 'inline-block' }}>
<span style={systemNameStyle}>{systemName}</span>
{(isSelfUseMode || isDemoSiteMode) && (
<Tag
color={isSelfUseMode ? 'purple' : 'blue'}
style={{
position: 'absolute',
top: '-10px',
right: '-25px',
fontSize: '0.7rem',
padding: '0 4px',
whiteSpace: 'nowrap',
zIndex: 1,
boxShadow: '0 0 3px rgba(255, 255, 255, 0.7)'
}}
>
{isSelfUseMode ? t('自用模式') : t('演示站点')}
</Tag>
)}
</div>
),
}}
: {
logo: (
<div style={logoStyle}>
<img src={logo} alt='logo' style={{ height: '28px' }} />
</div>
),
text: (
<div
style={{
position: 'relative',
display: 'inline-block',
}}
>
<span style={systemNameStyle}>{systemName}</span>
{(isSelfUseMode || isDemoSiteMode) && (
<Tag
color={isSelfUseMode ? 'purple' : 'blue'}
style={{
position: 'absolute',
top: '-10px',
right: '-25px',
fontSize: '0.7rem',
padding: '0 4px',
whiteSpace: 'nowrap',
zIndex: 1,
boxShadow: '0 0 3px rgba(255, 255, 255, 0.7)',
}}
>
{isSelfUseMode ? t('自用模式') : t('演示站点')}
</Tag>
)}
</div>
),
}
}
items={buttons}
footer={
<>
@@ -351,7 +404,7 @@ const HeaderBar = () => {
<>
<Switch
checkedText='🌞'
size={styleState.isMobile?'default':'large'}
size={styleState.isMobile ? 'default' : 'large'}
checked={theme === 'dark'}
uncheckedText='🌙'
style={switchStyle}
@@ -390,7 +443,9 @@ const HeaderBar = () => {
position='bottomRight'
render={
<Dropdown.Menu style={dropdownStyle}>
<Dropdown.Item onClick={logout}>{t('退出')}</Dropdown.Item>
<Dropdown.Item onClick={logout}>
{t('退出')}
</Dropdown.Item>
</Dropdown.Menu>
}
>
@@ -401,14 +456,18 @@ const HeaderBar = () => {
>
{userState.user.username[0]}
</Avatar>
{styleState.isMobile?null:<Text style={{ marginLeft: '4px', fontWeight: '500' }}>{userState.user.username}</Text>}
{styleState.isMobile ? null : (
<Text style={{ marginLeft: '4px', fontWeight: '500' }}>
{userState.user.username}
</Text>
)}
</Dropdown>
</>
) : (
<>
<Nav.Item
itemKey={'login'}
text={!styleState.isMobile?t('登录'):null}
text={!styleState.isMobile ? t('登录') : null}
icon={<IconUser style={headerIconStyle} />}
/>
{

View File

@@ -9,7 +9,11 @@ import {
showSuccess,
updateAPI,
} from '../helpers';
import {onGitHubOAuthClicked, onOIDCClicked, onLinuxDOOAuthClicked} from './utils';
import {
onGitHubOAuthClicked,
onOIDCClicked,
onLinuxDOOAuthClicked,
} from './utils';
import Turnstile from 'react-turnstile';
import {
Button,
@@ -71,7 +75,6 @@ const LoginForm = () => {
}
}, []);
const onWeChatLoginClicked = () => {
setShowWeChatLoginModal(true);
};
@@ -223,7 +226,8 @@ const LoginForm = () => {
}}
>
<Text>
{t('没有账户?')} <Link to='/register'>{t('点击注册')}</Link>
{t('没有账户?')}{' '}
<Link to='/register'>{t('点击注册')}</Link>
</Text>
<Text>
{t('忘记密码?')} <Link to='/reset'>{t('点击重置')}</Link>
@@ -257,15 +261,18 @@ const LoginForm = () => {
<></>
)}
{status.oidc_enabled ? (
<Button
type='primary'
icon={<OIDCIcon />}
onClick={() =>
onOIDCClicked(status.oidc_authorization_endpoint, status.oidc_client_id)
}
/>
<Button
type='primary'
icon={<OIDCIcon />}
onClick={() =>
onOIDCClicked(
status.oidc_authorization_endpoint,
status.oidc_client_id,
)
}
/>
) : (
<></>
<></>
)}
{status.linuxdo_oauth ? (
<Button
@@ -331,7 +338,9 @@ const LoginForm = () => {
</div>
<div style={{ textAlign: 'center' }}>
<p>
{t('微信扫码关注公众号,输入「验证码」获取验证码(三分钟内有效)')}
{t(
'微信扫码关注公众号,输入「验证码」获取验证码(三分钟内有效)',
)}
</p>
</div>
<Form size='large'>

View File

@@ -12,17 +12,19 @@ import {
import {
Avatar,
Button, Descriptions,
Button,
Descriptions,
Form,
Layout,
Modal, Popover,
Modal,
Popover,
Select,
Space,
Spin,
Table,
Tag,
Tooltip,
Checkbox
Checkbox,
} from '@douyinfe/semi-ui';
import { ITEMS_PER_PAGE } from '../constants';
import {
@@ -36,7 +38,7 @@ import {
renderModelPriceSimple,
renderNumber,
renderQuota,
stringToColor
stringToColor,
} from '../helpers/render';
import Paragraph from '@douyinfe/semi-ui/lib/es/typography/paragraph';
import { getLogOther } from '../helpers/other.js';
@@ -78,23 +80,57 @@ const LogsTable = () => {
function renderType(type) {
switch (type) {
case 1:
return <Tag color='cyan' size='large'>{t('充值')}</Tag>;
return (
<Tag color='cyan' size='large'>
{t('充值')}
</Tag>
);
case 2:
return <Tag color='lime' size='large'>{t('消费')}</Tag>;
return (
<Tag color='lime' size='large'>
{t('消费')}
</Tag>
);
case 3:
return <Tag color='orange' size='large'>{t('管理')}</Tag>;
return (
<Tag color='orange' size='large'>
{t('管理')}
</Tag>
);
case 4:
return <Tag color='purple' size='large'>{t('系统')}</Tag>;
return (
<Tag color='purple' size='large'>
{t('系统')}
</Tag>
);
case 5:
return (
<Tag color='red' size='large'>
{t('错误')}
</Tag>
);
default:
return <Tag color='black' size='large'>{t('未知')}</Tag>;
return (
<Tag color='grey' size='large'>
{t('未知')}
</Tag>
);
}
}
function renderIsStream(bool) {
if (bool) {
return <Tag color='blue' size='large'>{t('流')}</Tag>;
return (
<Tag color='blue' size='large'>
{t('流')}
</Tag>
);
} else {
return <Tag color='purple' size='large'>{t('非流')}</Tag>;
return (
<Tag color='purple' size='large'>
{t('非流')}
</Tag>
);
}
}
@@ -152,56 +188,70 @@ const LogsTable = () => {
}
function renderModelName(record) {
let other = getLogOther(record.other);
let modelMapped = other?.is_model_mapped && other?.upstream_model_name && other?.upstream_model_name !== '';
let modelMapped =
other?.is_model_mapped &&
other?.upstream_model_name &&
other?.upstream_model_name !== '';
if (!modelMapped) {
return <Tag
color={stringToColor(record.model_name)}
size='large'
onClick={(event) => {
copyText(event, record.model_name).then(r => {});
}}
>
{' '}{record.model_name}{' '}
</Tag>;
return (
<Tag
color={stringToColor(record.model_name)}
size='large'
onClick={(event) => {
copyText(event, record.model_name).then((r) => {});
}}
>
{' '}
{record.model_name}{' '}
</Tag>
);
} else {
return (
<>
<Space vertical align={'start'}>
<Popover content={
<div style={{padding: 10}}>
<Space vertical align={'start'}>
<Tag
color={stringToColor(record.model_name)}
size='large'
onClick={(event) => {
copyText(event, record.model_name).then(r => {});
}}
>
{t('请求并计费模型')}{' '}{record.model_name}{' '}
</Tag>
<Tag
color={stringToColor(other.upstream_model_name)}
size='large'
onClick={(event) => {
copyText(event, other.upstream_model_name).then(r => {});
}}
>
{t('实际模型')}{' '}{other.upstream_model_name}{' '}
</Tag>
</Space>
</div>
}>
<Popover
content={
<div style={{ padding: 10 }}>
<Space vertical align={'start'}>
<Tag
color={stringToColor(record.model_name)}
size='large'
onClick={(event) => {
copyText(event, record.model_name).then((r) => {});
}}
>
{t('请求并计费模型')} {record.model_name}{' '}
</Tag>
<Tag
color={stringToColor(other.upstream_model_name)}
size='large'
onClick={(event) => {
copyText(event, other.upstream_model_name).then(
(r) => {},
);
}}
>
{t('实际模型')} {other.upstream_model_name}{' '}
</Tag>
</Space>
</div>
}
>
<Tag
color={stringToColor(record.model_name)}
size='large'
onClick={(event) => {
copyText(event, record.model_name).then(r => {});
copyText(event, record.model_name).then((r) => {});
}}
suffixIcon={<IconRefresh style={{width: '0.8em', height: '0.8em', opacity: 0.6}} />}
suffixIcon={
<IconRefresh
style={{ width: '0.8em', height: '0.8em', opacity: 0.6 }}
/>
}
>
{' '}{record.model_name}{' '}
{' '}
{record.model_name}{' '}
</Tag>
</Popover>
{/*<Tooltip content={t('实际模型')}>*/}
@@ -219,7 +269,6 @@ const LogsTable = () => {
</>
);
}
}
// Define column keys for selection
@@ -236,7 +285,7 @@ const LogsTable = () => {
COMPLETION: 'completion',
COST: 'cost',
RETRY: 'retry',
DETAILS: 'details'
DETAILS: 'details',
};
// State for column visibility
@@ -277,7 +326,7 @@ const LogsTable = () => {
[COLUMN_KEYS.COMPLETION]: true,
[COLUMN_KEYS.COST]: true,
[COLUMN_KEYS.RETRY]: isAdminUser,
[COLUMN_KEYS.DETAILS]: true
[COLUMN_KEYS.DETAILS]: true,
};
};
@@ -296,18 +345,23 @@ const LogsTable = () => {
// Handle "Select All" checkbox
const handleSelectAll = (checked) => {
const allKeys = Object.keys(COLUMN_KEYS).map(key => COLUMN_KEYS[key]);
const allKeys = Object.keys(COLUMN_KEYS).map((key) => COLUMN_KEYS[key]);
const updatedColumns = {};
allKeys.forEach(key => {
allKeys.forEach((key) => {
// For admin-only columns, only enable them if user is admin
if ((key === COLUMN_KEYS.CHANNEL || key === COLUMN_KEYS.USERNAME || key === COLUMN_KEYS.RETRY) && !isAdminUser) {
if (
(key === COLUMN_KEYS.CHANNEL ||
key === COLUMN_KEYS.USERNAME ||
key === COLUMN_KEYS.RETRY) &&
!isAdminUser
) {
updatedColumns[key] = false;
} else {
updatedColumns[key] = checked;
}
});
setVisibleColumns(updatedColumns);
};
@@ -325,7 +379,7 @@ const LogsTable = () => {
className: isAdmin() ? 'tableShow' : 'tableHiddle',
render: (text, record, index) => {
return isAdminUser ? (
record.type === 0 || record.type === 2 ? (
record.type === 0 || record.type === 2 || record.type === 5 ? (
<div>
{
<Tooltip content={record.channel_name || '[未知]'}>
@@ -361,7 +415,7 @@ const LogsTable = () => {
style={{ marginRight: 4 }}
onClick={(event) => {
event.stopPropagation();
showUserInfo(record.user_id)
showUserInfo(record.user_id);
}}
>
{typeof text === 'string' && text.slice(0, 1)}
@@ -378,7 +432,7 @@ const LogsTable = () => {
title: t('令牌'),
dataIndex: 'token_name',
render: (text, record, index) => {
return record.type === 0 || record.type === 2 ? (
return record.type === 0 || record.type === 2 || record.type === 5 ? (
<div>
<Tag
color='grey'
@@ -402,33 +456,28 @@ const LogsTable = () => {
title: t('分组'),
dataIndex: 'group',
render: (text, record, index) => {
if (record.type === 0 || record.type === 2) {
if (record.group) {
return (
<>
{renderGroup(record.group)}
</>
);
} else {
let other = null;
try {
other = JSON.parse(record.other);
} catch (e) {
console.error(`Failed to parse record.other: "${record.other}".`, e);
}
if (other === null) {
return <></>;
}
if (other.group !== undefined) {
return (
<>
{renderGroup(other.group)}
</>
);
} else {
return <></>;
}
}
if (record.type === 0 || record.type === 2 || record.type === 5) {
if (record.group) {
return <>{renderGroup(record.group)}</>;
} else {
let other = null;
try {
other = JSON.parse(record.other);
} catch (e) {
console.error(
`Failed to parse record.other: "${record.other}".`,
e,
);
}
if (other === null) {
return <></>;
}
if (other.group !== undefined) {
return <>{renderGroup(other.group)}</>;
} else {
return <></>;
}
}
} else {
return <></>;
}
@@ -447,7 +496,7 @@ const LogsTable = () => {
title: t('模型'),
dataIndex: 'model_name',
render: (text, record, index) => {
return record.type === 0 || record.type === 2 ? (
return record.type === 0 || record.type === 2 || record.type === 5 ? (
<>{renderModelName(record)}</>
) : (
<></>
@@ -487,7 +536,7 @@ const LogsTable = () => {
title: t('提示'),
dataIndex: 'prompt_tokens',
render: (text, record, index) => {
return record.type === 0 || record.type === 2 ? (
return record.type === 0 || record.type === 2 || record.type === 5 ? (
<>{<span> {text} </span>}</>
) : (
<></>
@@ -500,7 +549,7 @@ const LogsTable = () => {
dataIndex: 'completion_tokens',
render: (text, record, index) => {
return parseInt(text) > 0 &&
(record.type === 0 || record.type === 2) ? (
(record.type === 0 || record.type === 2 || record.type === 5) ? (
<>{<span> {text} </span>}</>
) : (
<></>
@@ -512,7 +561,7 @@ const LogsTable = () => {
title: t('花费'),
dataIndex: 'quota',
render: (text, record, index) => {
return record.type === 0 || record.type === 2 ? (
return record.type === 0 || record.type === 2 || record.type === 5 ? (
<>{renderQuota(text, 6)}</>
) : (
<></>
@@ -569,33 +618,32 @@ const LogsTable = () => {
</Paragraph>
);
}
let content = other?.claude
? renderClaudeModelPriceSimple(
other.model_ratio,
other.model_price,
other.group_ratio,
other.cache_tokens || 0,
other.cache_ratio || 1.0,
other.cache_creation_tokens || 0,
other.cache_creation_ratio || 1.0,
)
other.model_ratio,
other.model_price,
other.group_ratio,
other.cache_tokens || 0,
other.cache_ratio || 1.0,
other.cache_creation_tokens || 0,
other.cache_creation_ratio || 1.0,
)
: renderModelPriceSimple(
other.model_ratio,
other.model_price,
other.group_ratio,
other.cache_tokens || 0,
other.cache_ratio || 1.0,
);
other.model_ratio,
other.model_price,
other.group_ratio,
other.cache_tokens || 0,
other.cache_ratio || 1.0,
);
return (
<Paragraph
ellipsis={{
rows: 2,
}}
style={{ maxWidth: 240 }}
>
{content}
</Paragraph>
<Paragraph
ellipsis={{
rows: 2,
}}
style={{ maxWidth: 240 }}
>
{content}
</Paragraph>
);
},
},
@@ -605,13 +653,16 @@ const LogsTable = () => {
useEffect(() => {
if (Object.keys(visibleColumns).length > 0) {
// Save to localStorage
localStorage.setItem('logs-table-columns', JSON.stringify(visibleColumns));
localStorage.setItem(
'logs-table-columns',
JSON.stringify(visibleColumns),
);
}
}, [visibleColumns]);
// Filter columns based on visibility settings
const getVisibleColumns = () => {
return allColumns.filter(column => visibleColumns[column.key]);
return allColumns.filter((column) => visibleColumns[column.key]);
};
// Column selector modal
@@ -624,42 +675,59 @@ const LogsTable = () => {
footer={
<>
<Button onClick={() => initDefaultColumns()}>{t('重置')}</Button>
<Button onClick={() => setShowColumnSelector(false)}>{t('取消')}</Button>
<Button type="primary" onClick={() => setShowColumnSelector(false)}>{t('确定')}</Button>
<Button onClick={() => setShowColumnSelector(false)}>
{t('取消')}
</Button>
<Button type='primary' onClick={() => setShowColumnSelector(false)}>
{t('确定')}
</Button>
</>
}
>
<div style={{ marginBottom: 20 }}>
<Checkbox
checked={Object.values(visibleColumns).every(v => v === true)}
indeterminate={Object.values(visibleColumns).some(v => v === true) && !Object.values(visibleColumns).every(v => v === true)}
onChange={e => handleSelectAll(e.target.checked)}
checked={Object.values(visibleColumns).every((v) => v === true)}
indeterminate={
Object.values(visibleColumns).some((v) => v === true) &&
!Object.values(visibleColumns).every((v) => v === true)
}
onChange={(e) => handleSelectAll(e.target.checked)}
>
{t('全选')}
</Checkbox>
</div>
<div style={{
display: 'flex',
flexWrap: 'wrap',
maxHeight: '400px',
overflowY: 'auto',
border: '1px solid var(--semi-color-border)',
borderRadius: '6px',
padding: '16px'
}}>
{allColumns.map(column => {
<div
style={{
display: 'flex',
flexWrap: 'wrap',
maxHeight: '400px',
overflowY: 'auto',
border: '1px solid var(--semi-color-border)',
borderRadius: '6px',
padding: '16px',
}}
>
{allColumns.map((column) => {
// Skip admin-only columns for non-admin users
if (!isAdminUser && (column.key === COLUMN_KEYS.CHANNEL ||
column.key === COLUMN_KEYS.USERNAME ||
column.key === COLUMN_KEYS.RETRY)) {
if (
!isAdminUser &&
(column.key === COLUMN_KEYS.CHANNEL ||
column.key === COLUMN_KEYS.USERNAME ||
column.key === COLUMN_KEYS.RETRY)
) {
return null;
}
return (
<div key={column.key} style={{ width: '50%', marginBottom: 16, paddingRight: 8 }}>
<div
key={column.key}
style={{ width: '50%', marginBottom: 16, paddingRight: 8 }}
>
<Checkbox
checked={!!visibleColumns[column.key]}
onChange={e => handleColumnVisibilityChange(column.key, e.target.checked)}
onChange={(e) =>
handleColumnVisibilityChange(column.key, e.target.checked)
}
>
{column.title}
</Checkbox>
@@ -709,7 +777,7 @@ const LogsTable = () => {
});
const handleInputChange = (value, name) => {
setInputs(inputs => ({ ...inputs, [name]: value }));
setInputs((inputs) => ({ ...inputs, [name]: value }));
};
const getLogSelfStat = async () => {
@@ -765,10 +833,18 @@ const LogsTable = () => {
title: t('用户信息'),
content: (
<div style={{ padding: 12 }}>
<p>{t('用户名')}: {data.username}</p>
<p>{t('余额')}: {renderQuota(data.quota)}</p>
<p>{t('已用额度')}{renderQuota(data.used_quota)}</p>
<p>{t('请求次数')}{renderNumber(data.request_count)}</p>
<p>
{t('用户名')}: {data.username}
</p>
<p>
{t('余额')}: {renderQuota(data.quota)}
</p>
<p>
{t('已用额度')}{renderQuota(data.used_quota)}
</p>
<p>
{t('请求次数')}{renderNumber(data.request_count)}
</p>
</div>
),
centered: true,
@@ -803,11 +879,11 @@ const LogsTable = () => {
// key: '渠道重试',
// value: content,
// })
}
}
if (isAdminUser && (logs[i].type === 0 || logs[i].type === 2)) {
expandDataLocal.push({
key: t('渠道信息'),
value: `${logs[i].channel} - ${logs[i].channel_name || '[未知]'}`
value: `${logs[i].channel} - ${logs[i].channel_name || '[未知]'}`,
});
}
if (other?.ws || other?.audio) {
@@ -845,25 +921,34 @@ const LogsTable = () => {
key: t('日志详情'),
value: other?.claude
? renderClaudeLogContent(
other?.model_ratio,
other.completion_ratio,
other.model_price,
other.group_ratio,
other.user_group_ratio,
other.cache_ratio || 1.0,
other.cache_creation_ratio || 1.0
)
other?.model_ratio,
other.completion_ratio,
other.model_price,
other.group_ratio,
other.cache_ratio || 1.0,
other.cache_creation_ratio || 1.0,
)
: renderLogContent(
other?.model_ratio,
other.completion_ratio,
other.model_price,
other.group_ratio,
other.user_group_ratio
),
other?.model_ratio,
other.completion_ratio,
other.model_price,
other.group_ratio,
other?.user_group_ratio,
false,
1.0,
undefined,
other.web_search || false,
other.web_search_call_count || 0,
other.file_search || false,
other.file_search_call_count || 0,
),
});
}
if (logs[i].type === 2) {
let modelMapped = other?.is_model_mapped && other?.upstream_model_name && other?.upstream_model_name !== '';
let modelMapped =
other?.is_model_mapped &&
other?.upstream_model_name &&
other?.upstream_model_name !== '';
if (modelMapped) {
expandDataLocal.push({
key: t('请求并计费模型'),
@@ -913,6 +998,15 @@ const LogsTable = () => {
other?.group_ratio,
other?.cache_tokens || 0,
other?.cache_ratio || 1.0,
other?.image || false,
other?.image_ratio || 0,
other?.image_output || 0,
other?.web_search || false,
other?.web_search_call_count || 0,
other?.web_search_price || 0,
other?.file_search || false,
other?.file_search_call_count || 0,
other?.file_search_price || 0,
);
}
expandDataLocal.push({
@@ -1014,29 +1108,41 @@ const LogsTable = () => {
<Header>
<Spin spinning={loadingStat}>
<Space>
<Tag color='blue' size='large' style={{
padding: 15,
borderRadius: '8px',
fontWeight: 500,
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)'
}}>
<Tag
color='blue'
size='large'
style={{
padding: 15,
borderRadius: '8px',
fontWeight: 500,
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)',
}}
>
{t('消耗额度')}: {renderQuota(stat.quota)}
</Tag>
<Tag color='pink' size='large' style={{
padding: 15,
borderRadius: '8px',
fontWeight: 500,
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)'
}}>
<Tag
color='pink'
size='large'
style={{
padding: 15,
borderRadius: '8px',
fontWeight: 500,
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)',
}}
>
RPM: {stat.rpm}
</Tag>
<Tag color='white' size='large' style={{
padding: 15,
border: 'none',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)',
borderRadius: '8px',
fontWeight: 500,
}}>
<Tag
color='white'
size='large'
style={{
padding: 15,
border: 'none',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)',
borderRadius: '8px',
fontWeight: 500,
}}
>
TPM: {stat.tpm}
</Tag>
</Space>
@@ -1046,46 +1152,46 @@ const LogsTable = () => {
<>
<Form.Section>
<div style={{ marginBottom: 10 }}>
{
styleState.isMobile ? (
<div>
<Form.DatePicker
field='start_timestamp'
label={t('起始时间')}
style={{ width: 272 }}
initValue={start_timestamp}
type='dateTime'
onChange={(value) => {
console.log(value);
handleInputChange(value, 'start_timestamp')
}}
/>
<Form.DatePicker
field='end_timestamp'
fluid
label={t('结束时间')}
style={{ width: 272 }}
initValue={end_timestamp}
type='dateTime'
onChange={(value) => handleInputChange(value, 'end_timestamp')}
/>
</div>
) : (
{styleState.isMobile ? (
<div>
<Form.DatePicker
field="range_timestamp"
label={t('时间范围')}
initValue={[start_timestamp, end_timestamp]}
type="dateTimeRange"
name="range_timestamp"
field='start_timestamp'
label={t('起始时间')}
style={{ width: 272 }}
initValue={start_timestamp}
type='dateTime'
onChange={(value) => {
if (Array.isArray(value) && value.length === 2) {
handleInputChange(value[0], 'start_timestamp');
handleInputChange(value[1], 'end_timestamp');
}
console.log(value);
handleInputChange(value, 'start_timestamp');
}}
/>
)
}
<Form.DatePicker
field='end_timestamp'
fluid
label={t('结束时间')}
style={{ width: 272 }}
initValue={end_timestamp}
type='dateTime'
onChange={(value) =>
handleInputChange(value, 'end_timestamp')
}
/>
</div>
) : (
<Form.DatePicker
field='range_timestamp'
label={t('时间范围')}
initValue={[start_timestamp, end_timestamp]}
type='dateTimeRange'
name='range_timestamp'
onChange={(value) => {
if (Array.isArray(value) && value.length === 2) {
handleInputChange(value[0], 'start_timestamp');
handleInputChange(value[1], 'end_timestamp');
}
}}
/>
)}
</div>
</Form.Section>
<Form.Input
@@ -1146,20 +1252,21 @@ const LogsTable = () => {
<Form.Section></Form.Section>
</>
</Form>
<div style={{marginTop:10}}>
<div style={{ marginTop: 10 }}>
<Select
defaultValue='0'
style={{ width: 120 }}
onChange={(value) => {
setLogType(parseInt(value));
loadLogs(0, pageSize, parseInt(value));
}}
defaultValue='0'
style={{ width: 120 }}
onChange={(value) => {
setLogType(parseInt(value));
loadLogs(0, pageSize, parseInt(value));
}}
>
<Select.Option value='0'>{t('全部')}</Select.Option>
<Select.Option value='1'>{t('充值')}</Select.Option>
<Select.Option value='2'>{t('消费')}</Select.Option>
<Select.Option value='3'>{t('管理')}</Select.Option>
<Select.Option value='4'>{t('系统')}</Select.Option>
<Select.Option value='5'>{t('错误')}</Select.Option>
</Select>
<Button
theme='light'
@@ -1177,13 +1284,13 @@ const LogsTable = () => {
expandedRowRender={expandRowRender}
expandRowByClick={true}
dataSource={logs}
rowKey="key"
rowKey='key'
pagination={{
formatPageText: (page) =>
t('第 {{start}} - {{end}} 条,共 {{total}} 条', {
start: page.currentStart,
end: page.currentEnd,
total: logCount
total: logCount,
}),
currentPage: activePage,
pageSize: pageSize,
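
Throughout this file the change widens the usage-column condition from record.type === 0 || record.type === 2 to also accept record.type === 5, matching the new error tag in renderType() and the extra '5' option in the log-type Select. A minimal sketch of that condition as a standalone predicate (the helper itself is hypothetical, not part of the change):

// Illustrative only: a helper capturing the condition repeated in the
// channel, token, group, model, prompt, completion and quota column
// renderers above. Types follow renderType(): 2 = consume ('消费'),
// 5 = the newly added error ('错误'); type 0 is also accepted by the
// existing code, though renderType() maps it to "unknown".
function showsUsageColumns(type) {
  return type === 0 || type === 2 || type === 5;
}

console.log(showsUsageColumns(2)); // true  (consumption log)
console.log(showsUsageColumns(5)); // true  (error log, newly included)
console.log(showsUsageColumns(3)); // false (admin log)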

View File

@@ -46,7 +46,6 @@ const LogsTable = () => {
const [isModalOpen, setIsModalOpen] = useState(false);
const [modalContent, setModalContent] = useState('');
function renderType(type) {
switch (type) {
case 'IMAGINE':
return (
@@ -98,9 +97,9 @@ const LogsTable = () => {
);
case 'UPLOAD':
return (
<Tag color='blue' size='large'>
上传文件
</Tag>
<Tag color='blue' size='large'>
上传文件
</Tag>
);
case 'SHORTEN':
return (
@@ -152,9 +151,8 @@ const LogsTable = () => {
);
}
}
function renderCode(code) {
switch (code) {
case 1:
return (
@@ -188,9 +186,8 @@ const LogsTable = () => {
);
}
}
function renderStatus(type) {
switch (type) {
case 'SUCCESS':
return (
@@ -236,22 +233,21 @@ const LogsTable = () => {
);
}
}
const renderTimestamp = (timestampInSeconds) => {
const date = new Date(timestampInSeconds * 1000); // 从秒转换为毫秒
const year = date.getFullYear(); // 获取年份
const month = ('0' + (date.getMonth() + 1)).slice(-2); // 获取月份从0开始需要+1并保证两位数
const day = ('0' + date.getDate()).slice(-2); // 获取日期,并保证两位数
const hours = ('0' + date.getHours()).slice(-2); // 获取小时,并保证两位数
const minutes = ('0' + date.getMinutes()).slice(-2); // 获取分钟,并保证两位数
const seconds = ('0' + date.getSeconds()).slice(-2); // 获取秒钟,并保证两位数
return `${year}-${month}-${day} ${hours}:${minutes}:${seconds}`; // 格式化输出
};
// 修改renderDuration函数以包含颜色逻辑
function renderDuration(submit_time, finishTime) {
if (!submit_time || !finishTime) return 'N/A';
const start = new Date(submit_time);
@@ -261,7 +257,7 @@ const LogsTable = () => {
const color = durationSec > 60 ? 'red' : 'green';
return (
<Tag color={color} size="large">
<Tag color={color} size='large'>
{durationSec} {t('秒')}
</Tag>
);
@@ -560,7 +556,9 @@ const LogsTable = () => {
{isAdminUser && showBanner ? (
<Banner
type='info'
description={t('当前未开启Midjourney回调部分项目可能无法获得绘图结果可在运营设置中开启。')}
description={t(
'当前未开启Midjourney回调部分项目可能无法获得绘图结果可在运营设置中开启。',
)}
/>
) : (
<></>
@@ -634,7 +632,7 @@ const LogsTable = () => {
t('第 {{start}} - {{end}} 条,共 {{total}} 条', {
start: page.currentStart,
end: page.currentEnd,
total: logCount
total: logCount,
}),
}}
loading={loading}
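
For reference, renderTimestamp above zero-pads each date field with the ('0' + n).slice(-2) idiom before joining them into "YYYY-MM-DD HH:mm:ss". A behaviourally equivalent sketch using padStart (an alternative for illustration, not what the file uses):

// Sketch only: same seconds-to-string formatting as renderTimestamp,
// written with String.prototype.padStart. Output uses local time.
function formatTimestamp(timestampInSeconds) {
  const d = new Date(timestampInSeconds * 1000); // seconds -> milliseconds
  const pad = (n) => String(n).padStart(2, '0');
  return `${d.getFullYear()}-${pad(d.getMonth() + 1)}-${pad(d.getDate())} ` +
         `${pad(d.getHours())}:${pad(d.getMinutes())}:${pad(d.getSeconds())}`;
}

console.log(formatTimestamp(1700000000)); // e.g. "2023-11-14 22:13:20", depending on local timezone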

View File

@@ -34,12 +34,12 @@ const ModelPricing = () => {
const [selectedGroup, setSelectedGroup] = useState('default');
const rowSelection = useMemo(
() => ({
onChange: (selectedRowKeys, selectedRows) => {
setSelectedRowKeys(selectedRowKeys);
},
}),
[]
() => ({
onChange: (selectedRowKeys, selectedRows) => {
setSelectedRowKeys(selectedRowKeys);
},
}),
[],
);
const handleChange = (value) => {
@@ -59,7 +59,7 @@ const ModelPricing = () => {
const newFilteredValue = value ? [value] : [];
setFilteredValue(newFilteredValue);
};
function renderQuotaType(type) {
// Ensure all cases are string literals by adding quotes.
switch (type) {
@@ -79,9 +79,9 @@ const ModelPricing = () => {
return t('未知');
}
}
function renderAvailable(available) {
return (
return available ? (
<Popover
content={
<div style={{ padding: 8 }}>{t('您的分组可以使用该模型')}</div>
@@ -96,9 +96,9 @@ const ModelPricing = () => {
borderStyle: 'solid',
}}
>
<IconVerify style={{ color: 'green' }} size="large" />
<IconVerify style={{ color: 'green' }} size='large' />
</Popover>
)
) : null;
}
const columns = [
@@ -106,10 +106,15 @@ const ModelPricing = () => {
title: t('可用性'),
dataIndex: 'available',
render: (text, record, index) => {
// if record.enable_groups contains selectedGroup, then available is true
// if record.enable_groups contains selectedGroup, then available is true
return renderAvailable(record.enable_groups.includes(selectedGroup));
},
sorter: (a, b) => a.available - b.available,
sorter: (a, b) => {
const aAvailable = a.enable_groups.includes(selectedGroup);
const bAvailable = b.enable_groups.includes(selectedGroup);
return Number(aAvailable) - Number(bAvailable);
},
defaultSortOrder: 'descend',
},
{
title: t('模型名称'),
@@ -145,7 +150,6 @@ const ModelPricing = () => {
title: t('可用分组'),
dataIndex: 'enable_groups',
render: (text, record, index) => {
// enable_groups is a string array
return (
<Space>
@@ -153,11 +157,7 @@ const ModelPricing = () => {
if (usableGroup[group]) {
if (group === selectedGroup) {
return (
<Tag
color='blue'
size='large'
prefixIcon={<IconVerify />}
>
<Tag color='blue' size='large' prefixIcon={<IconVerify />}>
{group}
</Tag>
);
@@ -168,10 +168,12 @@ const ModelPricing = () => {
size='large'
onClick={() => {
setSelectedGroup(group);
showInfo(t('当前查看的分组为:{{group}},倍率为:{{ratio}}', {
group: group,
ratio: groupRatio[group]
}));
showInfo(
t('当前查看的分组为:{{group}},倍率为:{{ratio}}', {
group: group,
ratio: groupRatio[group],
}),
);
}}
>
{group}
@@ -186,22 +188,23 @@ const ModelPricing = () => {
},
{
title: () => (
<span style={{'display':'flex','alignItems':'center'}}>
<span style={{ display: 'flex', alignItems: 'center' }}>
{t('倍率')}
<Popover
content={
<div style={{ padding: 8 }}>
{t('倍率是为了方便换算不同价格的模型')}<br/>
{t('倍率是为了方便换算不同价格的模型')}
<br />
{t('点击查看倍率说明')}
</div>
}
position='top'
style={{
backgroundColor: 'rgba(var(--semi-blue-4),1)',
borderColor: 'rgba(var(--semi-blue-4),1)',
color: 'var(--semi-color-white)',
borderWidth: 1,
borderStyle: 'solid',
backgroundColor: 'rgba(var(--semi-blue-4),1)',
borderColor: 'rgba(var(--semi-blue-4),1)',
color: 'var(--semi-color-white)',
borderWidth: 1,
borderStyle: 'solid',
}}
>
<IconHelpCircle
@@ -219,11 +222,18 @@ const ModelPricing = () => {
let completionRatio = parseFloat(record.completion_ratio.toFixed(3));
content = (
<>
<Text>{t('模型倍率')}{record.quota_type === 0 ? text : t('无')}</Text>
<Text>
{t('模型倍率')}{record.quota_type === 0 ? text : t('无')}
</Text>
<br />
<Text>{t('补全倍率')}{record.quota_type === 0 ? completionRatio : t('无')}</Text>
<Text>
{t('补全倍率')}
{record.quota_type === 0 ? completionRatio : t('无')}
</Text>
<br />
<Text>{t('分组倍率')}{groupRatio[selectedGroup]}</Text>
<Text>
{t('分组倍率')}{groupRatio[selectedGroup]}
</Text>
</>
);
return <div>{content}</div>;
@@ -236,21 +246,31 @@ const ModelPricing = () => {
let content = text;
if (record.quota_type === 0) {
// 这里的 *2 是因为 1倍率=0.002刀,请勿删除
let inputRatioPrice = record.model_ratio * 2 * groupRatio[selectedGroup];
let inputRatioPrice =
record.model_ratio * 2 * groupRatio[selectedGroup];
let completionRatioPrice =
record.model_ratio *
record.completion_ratio * 2 *
record.completion_ratio *
2 *
groupRatio[selectedGroup];
content = (
<>
<Text>{t('提示')} ${inputRatioPrice} / 1M tokens</Text>
<Text>
{t('提示')} ${inputRatioPrice} / 1M tokens
</Text>
<br />
<Text>{t('补全')} ${completionRatioPrice} / 1M tokens</Text>
<Text>
{t('补全')} ${completionRatioPrice} / 1M tokens
</Text>
</>
);
} else {
let price = parseFloat(text) * groupRatio[selectedGroup];
content = <>${t('模型价格')}${price}</>;
content = (
<>
${t('模型价格')}${price}
</>
);
}
return <div>{content}</div>;
},
@@ -300,7 +320,7 @@ const ModelPricing = () => {
if (success) {
setGroupRatio(group_ratio);
setUsableGroup(usable_group);
setSelectedGroup(userState.user ? userState.user.group : 'default')
setSelectedGroup(userState.user ? userState.user.group : 'default');
setModelsFormat(data, group_ratio);
} else {
showError(message);
@@ -330,32 +350,38 @@ const ModelPricing = () => {
<Layout>
{userState.user ? (
<Banner
type="success"
type='success'
fullMode={false}
closeIcon="null"
closeIcon='null'
description={t('您的默认分组为:{{group}},分组倍率为:{{ratio}}', {
group: userState.user.group,
ratio: groupRatio[userState.user.group]
ratio: groupRatio[userState.user.group],
})}
/>
) : (
<Banner
type='warning'
fullMode={false}
closeIcon="null"
closeIcon='null'
description={t('您还未登陆,显示的价格为默认分组倍率: {{ratio}}', {
ratio: groupRatio['default']
ratio: groupRatio['default'],
})}
/>
)}
<br/>
<Banner
type="info"
fullMode={false}
description={<div>{t('按量计费费用 = 分组倍率 × 模型倍率 × 提示token数 + 补全token数 × 补全倍率)/ 500000 (单位:美元)')}</div>}
closeIcon="null"
<br />
<Banner
type='info'
fullMode={false}
description={
<div>
{t(
'按量计费费用 = 分组倍率 × 模型倍率 × 提示token数 + 补全token数 × 补全倍率)/ 500000 (单位:美元)',
)}
</div>
}
closeIcon='null'
/>
<br/>
<br />
<Space style={{ marginBottom: 16 }}>
<Input
placeholder={t('模糊搜索模型名称')}
@@ -368,11 +394,11 @@ const ModelPricing = () => {
<Button
theme='light'
type='tertiary'
style={{width: 150}}
style={{ width: 150 }}
onClick={() => {
copyText(selectedRowKeys);
}}
disabled={selectedRowKeys == ""}
disabled={selectedRowKeys == ''}
>
{t('复制选中模型')}
</Button>
@@ -387,7 +413,7 @@ const ModelPricing = () => {
t('第 {{start}} - {{end}} 条,共 {{total}} 条', {
start: page.currentStart,
end: page.currentEnd,
total: models.length
total: models.length,
}),
pageSize: models.length,
showSizeChanger: false,
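
The ratio column above derives USD prices per 1M tokens from the model ratio, completion ratio and group ratio; per the in-code comment, a ratio of 1 corresponds to $0.002 per 1K tokens, i.e. $2 per 1M tokens, which is where the "* 2" comes from. A worked sketch of that arithmetic (the example ratios are made up, not taken from any real model entry):

// Sketch of the per-1M-token prices computed in the column renderer above.
function perMillionTokenPrice(modelRatio, completionRatio, groupRatio) {
  const input = modelRatio * 2 * groupRatio;                        // $ per 1M prompt tokens
  const completion = modelRatio * completionRatio * 2 * groupRatio; // $ per 1M completion tokens
  return { input, completion };
}

// Assumed example: modelRatio 2.5, completionRatio 4, groupRatio 1
console.log(perMillionTokenPrice(2.5, 4, 1)); // { input: 5, completion: 20 }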

View File

@@ -1,7 +1,6 @@
import React, { useEffect, useState } from 'react';
import { Card, Spin, Tabs } from '@douyinfe/semi-ui';
import { API, showError, showSuccess } from '../helpers';
import { useTranslation } from 'react-i18next';
import SettingGeminiModel from '../pages/Setting/Model/SettingGeminiModel.js';
@@ -13,6 +12,7 @@ const ModelSetting = () => {
let [inputs, setInputs] = useState({
'gemini.safety_settings': '',
'gemini.version_settings': '',
'gemini.supported_imagine_models': '',
'claude.model_headers_settings': '',
'claude.thinking_adapter_enabled': true,
'claude.default_max_tokens': '',
@@ -20,6 +20,8 @@ const ModelSetting = () => {
'global.pass_through_request_enabled': false,
'general_setting.ping_interval_enabled': false,
'general_setting.ping_interval_seconds': 60,
'gemini.thinking_adapter_enabled': false,
'gemini.thinking_adapter_budget_tokens_percentage': 0.6,
});
let [loading, setLoading] = useState(false);
@@ -33,14 +35,13 @@ const ModelSetting = () => {
if (
item.key === 'gemini.safety_settings' ||
item.key === 'gemini.version_settings' ||
item.key === 'claude.model_headers_settings'||
item.key === 'claude.default_max_tokens'
item.key === 'claude.model_headers_settings' ||
item.key === 'claude.default_max_tokens' ||
item.key === 'gemini.supported_imagine_models'
) {
item.value = JSON.stringify(JSON.parse(item.value), null, 2);
}
if (
item.key.endsWith('Enabled') || item.key.endsWith('enabled')
) {
if (item.key.endsWith('Enabled') || item.key.endsWith('enabled')) {
newInputs[item.key] = item.value === 'true' ? true : false;
} else {
newInputs[item.key] = item.value;
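
The loading code above now also treats gemini.supported_imagine_models as a JSON-valued option and simplifies the boolean check. Roughly, the normalization works like this (sketch only, with the key list mirroring the condition in the diff):

// Sketch of the option normalization performed when options are loaded:
// selected keys are re-serialized as pretty-printed JSON for the editor,
// keys ending in Enabled / enabled become booleans, everything else stays
// a raw string.
const JSON_KEYS = new Set([
  'gemini.safety_settings',
  'gemini.version_settings',
  'claude.model_headers_settings',
  'claude.default_max_tokens',
  'gemini.supported_imagine_models',
]);

function normalizeOption(key, value) {
  if (JSON_KEYS.has(key)) {
    return JSON.stringify(JSON.parse(value), null, 2); // pretty-print for the textarea
  }
  if (key.endsWith('Enabled') || key.endsWith('enabled')) {
    return value === 'true'; // stored as the strings "true" / "false"
  }
  return value;
}

console.log(normalizeOption('gemini.thinking_adapter_enabled', 'false')); // false
console.log(normalizeOption('gemini.supported_imagine_models', '["a","b"]')); // pretty-printed JSON string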

View File

@@ -6,56 +6,58 @@ import { UserContext } from '../context/User';
import { setUserData } from '../helpers/data.js';
const OAuth2Callback = (props) => {
const [searchParams, setSearchParams] = useSearchParams();
const [searchParams, setSearchParams] = useSearchParams();
const [userState, userDispatch] = useContext(UserContext);
const [prompt, setPrompt] = useState('处理中...');
const [processing, setProcessing] = useState(true);
const [userState, userDispatch] = useContext(UserContext);
const [prompt, setPrompt] = useState('处理中...');
const [processing, setProcessing] = useState(true);
let navigate = useNavigate();
let navigate = useNavigate();
const sendCode = async (code, state, count) => {
const res = await API.get(`/api/oauth/${props.type}?code=${code}&state=${state}`);
const { success, message, data } = res.data;
if (success) {
if (message === 'bind') {
showSuccess('绑定成功!');
navigate('/setting');
} else {
userDispatch({ type: 'login', payload: data });
localStorage.setItem('user', JSON.stringify(data));
setUserData(data);
updateAPI()
showSuccess('登录成功!');
navigate('/token');
}
} else {
showError(message);
if (count === 0) {
setPrompt(`操作失败,重定向至登录界面中...`);
navigate('/setting'); // in case this is failed to bind GitHub
return;
}
count++;
setPrompt(`出现错误,第 ${count} 次重试中...`);
await new Promise((resolve) => setTimeout(resolve, count * 2000));
await sendCode(code, state, count);
}
};
useEffect(() => {
let code = searchParams.get('code');
let state = searchParams.get('state');
sendCode(code, state, 0).then();
}, []);
return (
<Segment style={{ minHeight: '300px' }}>
<Dimmer active inverted>
<Loader size='large'>{prompt}</Loader>
</Dimmer>
</Segment>
const sendCode = async (code, state, count) => {
const res = await API.get(
`/api/oauth/${props.type}?code=${code}&state=${state}`,
);
const { success, message, data } = res.data;
if (success) {
if (message === 'bind') {
showSuccess('绑定成功!');
navigate('/setting');
} else {
userDispatch({ type: 'login', payload: data });
localStorage.setItem('user', JSON.stringify(data));
setUserData(data);
updateAPI();
showSuccess('登录成功!');
navigate('/token');
}
} else {
showError(message);
if (count === 0) {
setPrompt(`操作失败,重定向至登录界面中...`);
navigate('/setting'); // in case this is failed to bind GitHub
return;
}
count++;
setPrompt(`出现错误,第 ${count} 次重试中...`);
await new Promise((resolve) => setTimeout(resolve, count * 2000));
await sendCode(code, state, count);
}
};
useEffect(() => {
let code = searchParams.get('code');
let state = searchParams.get('state');
sendCode(code, state, 0).then();
}, []);
return (
<Segment style={{ minHeight: '300px' }}>
<Dimmer active inverted>
<Loader size='large'>{prompt}</Loader>
</Dimmer>
</Segment>
);
};
export default OAuth2Callback;
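
The OAuth callback above exchanges the code for a session and, on failure, either redirects back to /setting or waits count * 2000 ms before calling sendCode again. A generic sketch of that retry-with-growing-delay pattern (purely illustrative; this helper is not part of the component):

// Hypothetical helper using the same linearly growing delay between
// attempts as sendCode above (count * 2000 ms).
async function retryWithGrowingDelay(fn, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn(attempt);
    } catch (err) {
      if (attempt === maxAttempts) throw err; // out of attempts, surface the error
      const delayMs = attempt * 2000;         // 2s, 4s, 6s, ...
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Usage sketch:
// await retryWithGrowingDelay(() => fetch('/api/oauth/github?code=...').then((r) => r.json()));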

View File

@@ -2,21 +2,37 @@ import React from 'react';
import { Icon } from '@douyinfe/semi-ui';
const OIDCIcon = (props) => {
function CustomIcon() {
return (
<svg t="1723135116886" className="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg"
p-id="10969" width="1em" height="1em">
<path
d="M512 960C265 960 64 759 64 512S265 64 512 64s448 201 448 448-201 448-448 448z m0-882.6c-239.7 0-434.6 195-434.6 434.6s195 434.6 434.6 434.6 434.6-195 434.6-434.6S751.7 77.4 512 77.4z"
p-id="10970" fill="#2c2c2c" stroke="#2c2c2c" stroke-width="60"></path>
<path
d="M197.7 512c0-78.3 31.6-98.8 87.2-98.8 56.2 0 87.2 20.5 87.2 98.8s-31 98.8-87.2 98.8c-55.7 0-87.2-20.5-87.2-98.8z m130.4 0c0-46.8-7.8-64.5-43.2-64.5-35.2 0-42.9 17.7-42.9 64.5 0 47.1 7.8 63.7 42.9 63.7 35.4 0 43.2-16.6 43.2-63.7zM409.7 415.9h42.1V608h-42.1V415.9zM653.9 512c0 74.2-37.1 96.1-93.6 96.1h-65.9V415.9h65.9c56.5 0 93.6 16.1 93.6 96.1z m-43.5 0c0-49.3-17.7-60.6-52.3-60.6h-21.6v120.7h21.6c35.4 0 52.3-13.3 52.3-60.1zM686.5 512c0-74.2 36.3-98.8 92.7-98.8 18.3 0 33.2 2.2 44.8 6.4v36.3c-11.9-4.2-26-6.6-42.1-6.6-34.6 0-49.8 15.5-49.8 62.6 0 50.1 15.2 62.6 49.3 62.6 15.8 0 30.2-2.2 44.8-7.5v36c-11.3 4.7-28.5 8-46.8 8-56.1-0.2-92.9-18.7-92.9-99z"
p-id="10971" fill="#2c2c2c" stroke="#2c2c2c" stroke-width="20"></path>
</svg>
);
}
function CustomIcon() {
return (
<svg
t='1723135116886'
className='icon'
viewBox='0 0 1024 1024'
version='1.1'
xmlns='http://www.w3.org/2000/svg'
p-id='10969'
width='1em'
height='1em'
>
<path
d='M512 960C265 960 64 759 64 512S265 64 512 64s448 201 448 448-201 448-448 448z m0-882.6c-239.7 0-434.6 195-434.6 434.6s195 434.6 434.6 434.6 434.6-195 434.6-434.6S751.7 77.4 512 77.4z'
p-id='10970'
fill='#2c2c2c'
stroke='#2c2c2c'
stroke-width='60'
></path>
<path
d='M197.7 512c0-78.3 31.6-98.8 87.2-98.8 56.2 0 87.2 20.5 87.2 98.8s-31 98.8-87.2 98.8c-55.7 0-87.2-20.5-87.2-98.8z m130.4 0c0-46.8-7.8-64.5-43.2-64.5-35.2 0-42.9 17.7-42.9 64.5 0 47.1 7.8 63.7 42.9 63.7 35.4 0 43.2-16.6 43.2-63.7zM409.7 415.9h42.1V608h-42.1V415.9zM653.9 512c0 74.2-37.1 96.1-93.6 96.1h-65.9V415.9h65.9c56.5 0 93.6 16.1 93.6 96.1z m-43.5 0c0-49.3-17.7-60.6-52.3-60.6h-21.6v120.7h21.6c35.4 0 52.3-13.3 52.3-60.1zM686.5 512c0-74.2 36.3-98.8 92.7-98.8 18.3 0 33.2 2.2 44.8 6.4v36.3c-11.9-4.2-26-6.6-42.1-6.6-34.6 0-49.8 15.5-49.8 62.6 0 50.1 15.2 62.6 49.3 62.6 15.8 0 30.2-2.2 44.8-7.5v36c-11.3 4.7-28.5 8-46.8 8-56.1-0.2-92.9-18.7-92.9-99z'
p-id='10971'
fill='#2c2c2c'
stroke='#2c2c2c'
stroke-width='20'
></path>
</svg>
);
}
return <Icon svg={<CustomIcon />} />;
return <Icon svg={<CustomIcon />} />;
};
export default OIDCIcon;
export default OIDCIcon;

View File

@@ -11,7 +11,6 @@ import ModelSettingsVisualEditor from '../pages/Setting/Operation/ModelSettingsV
import GroupRatioSettings from '../pages/Setting/Operation/GroupRatioSettings.js';
import ModelRatioSettings from '../pages/Setting/Operation/ModelRatioSettings.js';
import { API, showError, showSuccess } from '../helpers';
import SettingsChats from '../pages/Setting/Operation/SettingsChats.js';
import { useTranslation } from 'react-i18next';
@@ -58,7 +57,7 @@ const OperationSetting = () => {
DataExportInterval: 5,
DefaultCollapseSidebar: false, // 默认折叠侧边栏
RetryTimes: 0,
Chats: "[]",
Chats: '[]',
DemoSiteEnabled: false,
SelfUseModeEnabled: false,
AutomaticDisableKeywords: '',
@@ -154,14 +153,14 @@ const OperationSetting = () => {
</Card>
{/* 合并模型倍率设置和可视化倍率设置 */}
<Card style={{ marginTop: '10px' }}>
<Tabs type="line">
<Tabs.TabPane tab={t('模型倍率设置')} itemKey="model">
<Tabs type='line'>
<Tabs.TabPane tab={t('模型倍率设置')} itemKey='model'>
<ModelRatioSettings options={inputs} refresh={onRefresh} />
</Tabs.TabPane>
<Tabs.TabPane tab={t('可视化倍率设置')} itemKey="visual">
<Tabs.TabPane tab={t('可视化倍率设置')} itemKey='visual'>
<ModelSettingsVisualEditor options={inputs} refresh={onRefresh} />
</Tabs.TabPane>
<Tabs.TabPane tab={t('未设置倍率模型')} itemKey="unset_models">
<Tabs.TabPane tab={t('未设置倍率模型')} itemKey='unset_models'>
<ModelRatioNotSetEditor options={inputs} refresh={onRefresh} />
</Tabs.TabPane>
</Tabs>

View File

@@ -1,5 +1,14 @@
import React, { useContext, useEffect, useRef, useState } from 'react';
import { Banner, Button, Col, Form, Row, Modal, Space } from '@douyinfe/semi-ui';
import {
Banner,
Button,
Col,
Form,
Row,
Modal,
Space,
Card,
} from '@douyinfe/semi-ui';
import { API, showError, showSuccess, timestamp2string } from '../helpers';
import { marked } from 'marked';
import { useTranslation } from 'react-i18next';
@@ -46,7 +55,7 @@ const OtherSetting = () => {
HomePageContent: false,
About: false,
Footer: false,
CheckUpdate: false
CheckUpdate: false,
});
const handleInputChange = async (value, e) => {
const name = e.target.id;
@@ -151,27 +160,30 @@ const OtherSetting = () => {
const checkUpdate = async () => {
try {
setLoadingInput((loadingInput) => ({ ...loadingInput, CheckUpdate: true }));
setLoadingInput((loadingInput) => ({
...loadingInput,
CheckUpdate: true,
}));
// Use a CORS proxy to avoid direct cross-origin requests to GitHub API
// Option 1: Use a public CORS proxy service
// const proxyUrl = 'https://cors-anywhere.herokuapp.com/';
// const res = await API.get(
// `${proxyUrl}https://api.github.com/repos/Calcium-Ion/new-api/releases/latest`,
// );
// Option 2: Use the JSON proxy approach which often works better with GitHub API
const res = await fetch(
'https://api.github.com/repos/Calcium-Ion/new-api/releases/latest',
{
headers: {
'Accept': 'application/json',
Accept: 'application/json',
'Content-Type': 'application/json',
// Adding User-Agent which is often required by GitHub API
'User-Agent': 'new-api-update-checker'
}
}
).then(response => response.json());
'User-Agent': 'new-api-update-checker',
},
},
).then((response) => response.json());
// Option 3: Use a local proxy endpoint
// Create a cached version of the response to avoid frequent GitHub API calls
// const res = await API.get('/api/status/github-latest-release');
@@ -190,7 +202,10 @@ const OtherSetting = () => {
console.error('Failed to check for updates:', error);
showError('检查更新失败,请稍后再试');
} finally {
setLoadingInput((loadingInput) => ({ ...loadingInput, CheckUpdate: false }));
setLoadingInput((loadingInput) => ({
...loadingInput,
CheckUpdate: false,
}));
}
};
const getOptions = async () => {
@@ -217,7 +232,10 @@ const OtherSetting = () => {
// Function to open GitHub release page
const openGitHubRelease = () => {
window.open(`https://github.com/Calcium-Ion/new-api/releases/tag/${updateData.tag_name}`, '_blank');
window.open(
`https://github.com/Calcium-Ion/new-api/releases/tag/${updateData.tag_name}`,
'_blank',
);
};
const getStartTimeString = () => {
@@ -227,120 +245,149 @@ const OtherSetting = () => {
return (
<Row>
<Col span={24}>
<Col
span={24}
style={{
marginTop: '10px',
display: 'flex',
flexDirection: 'column',
gap: '10px',
}}
>
{/* 版本信息 */}
<Form style={{ marginBottom: 15 }}>
<Form.Section text={t('系统信息')}>
<Row>
<Col span={16}>
<Space>
<Form>
<Card>
<Form.Section text={t('系统信息')}>
<Row>
<Col span={16}>
<Space>
<Text>
{t('当前版本')}
{statusState?.status?.version || t('未知')}
</Text>
<Button
type='primary'
onClick={checkUpdate}
loading={loadingInput['CheckUpdate']}
>
{t('检查更新')}
</Button>
</Space>
</Col>
</Row>
<Row>
<Col span={16}>
<Text>
{t('当前版本')}{statusState?.status?.version || t('未知')}
{t('启动时间')}{getStartTimeString()}
</Text>
<Button type="primary" onClick={checkUpdate} loading={loadingInput['CheckUpdate']}>
{t('检查更新')}
</Button>
</Space>
</Col>
</Row>
<Row>
<Col span={16}>
<Text>{t('启动时间')}{getStartTimeString()}</Text>
</Col>
</Row>
</Form.Section>
</Col>
</Row>
</Form.Section>
</Card>
</Form>
{/* 通用设置 */}
<Form
values={inputs}
getFormApi={(formAPI) => (formAPISettingGeneral.current = formAPI)}
style={{ marginBottom: 15 }}
>
<Form.Section text={t('通用设置')}>
<Form.TextArea
label={t('公告')}
placeholder={t('在此输入新的公告内容,支持 Markdown & HTML 代码')}
field={'Notice'}
onChange={handleInputChange}
style={{ fontFamily: 'JetBrains Mono, Consolas' }}
autosize={{ minRows: 6, maxRows: 12 }}
/>
<Button onClick={submitNotice} loading={loadingInput['Notice']}>
{t('设置公告')}
</Button>
</Form.Section>
<Card>
<Form.Section text={t('通用设置')}>
<Form.TextArea
label={t('公告')}
placeholder={t(
'在此输入新的公告内容,支持 Markdown & HTML 代码',
)}
field={'Notice'}
onChange={handleInputChange}
style={{ fontFamily: 'JetBrains Mono, Consolas' }}
autosize={{ minRows: 6, maxRows: 12 }}
/>
<Button onClick={submitNotice} loading={loadingInput['Notice']}>
{t('设置公告')}
</Button>
</Form.Section>
</Card>
</Form>
{/* 个性化设置 */}
<Form
values={inputs}
getFormApi={(formAPI) => (formAPIPersonalization.current = formAPI)}
style={{ marginBottom: 15 }}
>
<Form.Section text={t('个性化设置')}>
<Form.Input
label={t('系统名称')}
placeholder={t('在此输入系统名称')}
field={'SystemName'}
onChange={handleInputChange}
/>
<Button
onClick={submitSystemName}
loading={loadingInput['SystemName']}
>
{t('设置系统名称')}
</Button>
<Form.Input
label={t('Logo 图片地址')}
placeholder={t('在此输入 Logo 图片地址')}
field={'Logo'}
onChange={handleInputChange}
/>
<Button onClick={submitLogo} loading={loadingInput['Logo']}>
{t('设置 Logo')}
</Button>
<Form.TextArea
label={t('首页内容')}
placeholder={t('在此输入首页内容,支持 Markdown & HTML 代码,设置后首页的状态信息将不再显示。如果输入的是一个链接,则会使用该链接作为 iframe 的 src 属性,这允许你设置任意网页作为首页')}
field={'HomePageContent'}
onChange={handleInputChange}
style={{ fontFamily: 'JetBrains Mono, Consolas' }}
autosize={{ minRows: 6, maxRows: 12 }}
/>
<Button
onClick={() => submitOption('HomePageContent')}
loading={loadingInput['HomePageContent']}
>
{t('设置首页内容')}
</Button>
<Form.TextArea
label={t('关于')}
placeholder={t('在此输入新的关于内容,支持 Markdown & HTML 代码。如果输入的是一个链接,则会使用该链接作为 iframe 的 src 属性,这允许你设置任意网页作为关于页面')}
field={'About'}
onChange={handleInputChange}
style={{ fontFamily: 'JetBrains Mono, Consolas' }}
autosize={{ minRows: 6, maxRows: 12 }}
/>
<Button onClick={submitAbout} loading={loadingInput['About']}>
{t('设置关于')}
</Button>
{/* */}
<Banner
fullMode={false}
type='info'
description={t('移除 One API 的版权标识必须首先获得授权,项目维护需要花费大量精力,如果本项目对你有意义,请主动支持本项目')}
closeIcon={null}
style={{ marginTop: 15 }}
/>
<Form.Input
label={t('页脚')}
placeholder={t('在此输入新的页脚,留空则使用默认页脚,支持 HTML 代码')}
field={'Footer'}
onChange={handleInputChange}
/>
<Button onClick={submitFooter} loading={loadingInput['Footer']}>
{t('设置页脚')}
</Button>
</Form.Section>
<Card>
<Form.Section text={t('个性化设置')}>
<Form.Input
label={t('系统名称')}
placeholder={t('在此输入系统名称')}
field={'SystemName'}
onChange={handleInputChange}
/>
<Button
onClick={submitSystemName}
loading={loadingInput['SystemName']}
>
{t('设置系统名称')}
</Button>
<Form.Input
label={t('Logo 图片地址')}
placeholder={t('在此输入 Logo 图片地址')}
field={'Logo'}
onChange={handleInputChange}
/>
<Button onClick={submitLogo} loading={loadingInput['Logo']}>
{t('设置 Logo')}
</Button>
<Form.TextArea
label={t('首页内容')}
placeholder={t(
'在此输入首页内容,支持 Markdown & HTML 代码,设置后首页的状态信息将不再显示。如果输入的是一个链接,则会使用该链接作为 iframe 的 src 属性,这允许你设置任意网页作为首页',
)}
field={'HomePageContent'}
onChange={handleInputChange}
style={{ fontFamily: 'JetBrains Mono, Consolas' }}
autosize={{ minRows: 6, maxRows: 12 }}
/>
<Button
onClick={() => submitOption('HomePageContent')}
loading={loadingInput['HomePageContent']}
>
{t('设置首页内容')}
</Button>
<Form.TextArea
label={t('关于')}
placeholder={t(
'在此输入新的关于内容,支持 Markdown & HTML 代码。如果输入的是一个链接,则会使用该链接作为 iframe 的 src 属性,这允许你设置任意网页作为关于页面',
)}
field={'About'}
onChange={handleInputChange}
style={{ fontFamily: 'JetBrains Mono, Consolas' }}
autosize={{ minRows: 6, maxRows: 12 }}
/>
<Button onClick={submitAbout} loading={loadingInput['About']}>
{t('设置关于')}
</Button>
{/* */}
<Banner
fullMode={false}
type='info'
description={t(
'移除 One API 的版权标识必须首先获得授权,项目维护需要花费大量精力,如果本项目对你有意义,请主动支持本项目',
)}
closeIcon={null}
style={{ marginTop: 15 }}
/>
<Form.Input
label={t('页脚')}
placeholder={t(
'在此输入新的页脚,留空则使用默认页脚,支持 HTML 代码',
)}
field={'Footer'}
onChange={handleInputChange}
/>
<Button onClick={submitFooter} loading={loadingInput['Footer']}>
{t('设置页脚')}
</Button>
</Form.Section>
</Card>
</Form>
</Col>
<Modal
@@ -348,16 +395,16 @@ const OtherSetting = () => {
visible={showUpdateModal}
onCancel={() => setShowUpdateModal(false)}
footer={[
<Button
key="details"
type="primary"
<Button
key='details'
type='primary'
onClick={() => {
setShowUpdateModal(false);
openGitHubRelease();
}}
>
{t('详情')}
</Button>
</Button>,
]}
>
<div dangerouslySetInnerHTML={{ __html: updateData.content }}></div>
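
checkUpdate above queries the GitHub releases API directly with fetch and compares the returned tag against the running version before opening the release page. A stripped-down sketch of that call (same endpoint as in the diff, minimal headers, error handling and modal wiring omitted):

// Sketch only: fetch the latest release and return its tag_name.
async function fetchLatestReleaseTag() {
  const res = await fetch(
    'https://api.github.com/repos/Calcium-Ion/new-api/releases/latest',
    { headers: { Accept: 'application/json' } },
  );
  const release = await res.json();
  return release.tag_name; // e.g. "v0.x.y"
}

// Usage sketch: treat matching tags as "already up to date".
// const latest = await fetchLatestReleaseTag();
// const upToDate = latest === statusState?.status?.version;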

View File

@@ -13,7 +13,6 @@ import { UserContext } from '../context/User/index.js';
import { StatusContext } from '../context/Status/index.js';
const { Sider, Content, Header, Footer } = Layout;
const PageLayout = () => {
const [userState, userDispatch] = useContext(UserContext);
const [statusState, statusDispatch] = useContext(StatusContext);
@@ -62,85 +61,104 @@ const PageLayout = () => {
if (savedLang) {
i18n.changeLanguage(savedLang);
}
// 默认显示侧边栏
styleDispatch({ type: 'SET_SIDER', payload: true });
}, [i18n]);
// 获取侧边栏折叠状态
const isSidebarCollapsed = localStorage.getItem('default_collapse_sidebar') === 'true';
const isSidebarCollapsed =
localStorage.getItem('default_collapse_sidebar') === 'true';
return (
<Layout style={{
height: '100vh',
display: 'flex',
flexDirection: 'column',
overflow: styleState.isMobile ? 'visible' : 'hidden'
}}>
<Header style={{
padding: 0,
height: 'auto',
lineHeight: 'normal',
position: styleState.isMobile ? 'sticky' : 'fixed',
width: '100%',
top: 0,
zIndex: 100,
boxShadow: '0 1px 6px rgba(0, 0, 0, 0.08)'
}}>
<Layout
style={{
height: '100vh',
display: 'flex',
flexDirection: 'column',
overflow: styleState.isMobile ? 'visible' : 'hidden',
}}
>
<Header
style={{
padding: 0,
height: 'auto',
lineHeight: 'normal',
position: styleState.isMobile ? 'sticky' : 'fixed',
width: '100%',
top: 0,
zIndex: 100,
boxShadow: '0 1px 6px rgba(0, 0, 0, 0.08)',
}}
>
<HeaderBar />
</Header>
<Layout style={{
marginTop: styleState.isMobile ? '0' : '56px',
height: styleState.isMobile ? 'auto' : 'calc(100vh - 56px)',
overflow: styleState.isMobile ? 'visible' : 'auto',
display: 'flex',
flexDirection: 'column'
}}>
<Layout
style={{
marginTop: styleState.isMobile ? '0' : '56px',
height: styleState.isMobile ? 'auto' : 'calc(100vh - 56px)',
overflow: styleState.isMobile ? 'visible' : 'auto',
display: 'flex',
flexDirection: 'column',
}}
>
{styleState.showSider && (
<Sider style={{
position: 'fixed',
left: 0,
top: '56px',
zIndex: 99,
background: 'var(--semi-color-bg-1)',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.15)',
border: 'none',
paddingRight: '0',
height: 'calc(100vh - 56px)',
}}>
<Sider
style={{
position: 'fixed',
left: 0,
top: '56px',
zIndex: 99,
background: 'var(--semi-color-bg-1)',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.15)',
border: 'none',
paddingRight: '0',
height: 'calc(100vh - 56px)',
}}
>
<SiderBar />
</Sider>
)}
<Layout style={{
marginLeft: styleState.isMobile ? '0' : (styleState.showSider ? (styleState.siderCollapsed ? '60px' : '200px') : '0'),
transition: 'margin-left 0.3s ease',
flex: '1 1 auto',
display: 'flex',
flexDirection: 'column'
}}>
<Layout
style={{
marginLeft: styleState.isMobile
? '0'
: styleState.showSider
? styleState.siderCollapsed
? '60px'
: '200px'
: '0',
transition: 'margin-left 0.3s ease',
flex: '1 1 auto',
display: 'flex',
flexDirection: 'column',
}}
>
<Content
style={{
style={{
flex: '1 0 auto',
overflowY: styleState.isMobile ? 'visible' : 'auto',
WebkitOverflowScrolling: 'touch',
padding: styleState.shouldInnerPadding? '24px': '0',
padding: styleState.shouldInnerPadding ? '24px' : '0',
position: 'relative',
marginTop: styleState.isMobile ? '2px' : '0',
}}
>
<App />
</Content>
<Layout.Footer style={{
flex: '0 0 auto',
width: '100%'
}}>
<Layout.Footer
style={{
flex: '0 0 auto',
width: '100%',
}}
>
<FooterBar />
</Layout.Footer>
</Layout>
</Layout>
<ToastContainer />
</Layout>
)
}
);
};
export default PageLayout;
export default PageLayout;
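
The content area's left margin above is a nested ternary over the mobile, sidebar-visible and sidebar-collapsed flags. As a plain helper it reads as follows (a hypothetical refactor, behaviourally the same; the 60px / 200px widths are the values used in the diff):

// Illustrative only: the margin-left rules from the layout above.
function contentMarginLeft({ isMobile, showSider, siderCollapsed }) {
  if (isMobile || !showSider) return '0';
  return siderCollapsed ? '60px' : '200px';
}

console.log(contentMarginLeft({ isMobile: false, showSider: true, siderCollapsed: true }));  // '60px'
console.log(contentMarginLeft({ isMobile: true, showSider: true, siderCollapsed: false }));  // '0'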

View File

@@ -6,11 +6,15 @@ import {
isRoot,
showError,
showInfo,
showSuccess
showSuccess,
} from '../helpers';
import Turnstile from 'react-turnstile';
import { UserContext } from '../context/User';
import { onGitHubOAuthClicked, onOIDCClicked, onLinuxDOOAuthClicked } from './utils';
import {
onGitHubOAuthClicked,
onOIDCClicked,
onLinuxDOOAuthClicked,
} from './utils';
import {
Avatar,
Banner,
@@ -32,13 +36,13 @@ import {
AutoComplete,
Checkbox,
Tabs,
TabPane
TabPane,
} from '@douyinfe/semi-ui';
import {
getQuotaPerUnit,
renderQuota,
renderQuotaWithPrompt,
stringToColor
stringToColor,
} from '../helpers/render';
import TelegramLoginButton from 'react-telegram-login';
import { useTranslation } from 'react-i18next';
@@ -53,8 +57,9 @@ const PersonalSetting = () => {
email_verification_code: '',
email: '',
self_account_deletion_confirmation: '',
original_password: '',
set_new_password: '',
set_new_password_confirmation: ''
set_new_password_confirmation: '',
});
const [status, setStatus] = useState({});
const [showChangePasswordModal, setShowChangePasswordModal] = useState(false);
@@ -77,14 +82,14 @@ const PersonalSetting = () => {
const savedState = localStorage.getItem('modelsExpanded');
return savedState ? JSON.parse(savedState) : false;
});
const MODELS_DISPLAY_COUNT = 10; // 默认显示的模型数量
const MODELS_DISPLAY_COUNT = 10; // 默认显示的模型数量
const [notificationSettings, setNotificationSettings] = useState({
warningType: 'email',
warningThreshold: 100000,
webhookUrl: '',
webhookSecret: '',
notificationEmail: '',
acceptUnsetModelRatioModel: false
acceptUnsetModelRatioModel: false,
});
const [showWebhookDocs, setShowWebhookDocs] = useState(false);
@@ -128,7 +133,8 @@ const PersonalSetting = () => {
webhookUrl: settings.webhook_url || '',
webhookSecret: settings.webhook_secret || '',
notificationEmail: settings.notification_email || '',
acceptUnsetModelRatioModel: settings.accept_unset_model_ratio_model || false
acceptUnsetModelRatioModel:
settings.accept_unset_model_ratio_model || false,
});
}
}, [userState?.user?.setting]);
@@ -222,7 +228,7 @@ const PersonalSetting = () => {
const bindWeChat = async () => {
if (inputs.wechat_verification_code === '') return;
const res = await API.get(
`/api/oauth/wechat/bind?code=${inputs.wechat_verification_code}`
`/api/oauth/wechat/bind?code=${inputs.wechat_verification_code}`,
);
const { success, message } = res.data;
if (success) {
@@ -234,12 +240,25 @@ const PersonalSetting = () => {
};
const changePassword = async () => {
if (inputs.original_password === '') {
showError(t('请输入原密码!'));
return;
}
if (inputs.set_new_password === '') {
showError(t('请输入新密码!'));
return;
}
if (inputs.original_password === inputs.set_new_password) {
showError(t('新密码需要和原密码不一致!'));
return;
}
if (inputs.set_new_password !== inputs.set_new_password_confirmation) {
showError(t('两次输入的密码不一致!'));
return;
}
const res = await API.put(`/api/user/self`, {
password: inputs.set_new_password
original_password: inputs.original_password,
password: inputs.set_new_password,
});
const { success, message } = res.data;
if (success) {
@@ -257,7 +276,7 @@ const PersonalSetting = () => {
return;
}
const res = await API.post(`/api/user/aff_transfer`, {
quota: transferAmount
quota: transferAmount,
});
const { success, message } = res.data;
if (success) {
@@ -281,7 +300,7 @@ const PersonalSetting = () => {
}
setLoading(true);
const res = await API.get(
`/api/verification?email=${inputs.email}&turnstile=${turnstileToken}`
`/api/verification?email=${inputs.email}&turnstile=${turnstileToken}`,
);
const { success, message } = res.data;
if (success) {
@@ -299,7 +318,7 @@ const PersonalSetting = () => {
}
setLoading(true);
const res = await API.get(
`/api/oauth/email/bind?email=${inputs.email}&code=${inputs.email_verification_code}`
`/api/oauth/email/bind?email=${inputs.email}&code=${inputs.email_verification_code}`,
);
const { success, message } = res.data;
if (success) {
@@ -334,9 +353,9 @@ const PersonalSetting = () => {
};
const handleNotificationSettingChange = (type, value) => {
setNotificationSettings(prev => ({
setNotificationSettings((prev) => ({
...prev,
[type]: value.target ? value.target.value : value // 处理 Radio 事件对象
[type]: value.target ? value.target.value : value, // 处理 Radio 事件对象
}));
};
@@ -344,11 +363,14 @@ const PersonalSetting = () => {
try {
const res = await API.put('/api/user/setting', {
notify_type: notificationSettings.warningType,
quota_warning_threshold: parseFloat(notificationSettings.warningThreshold),
quota_warning_threshold: parseFloat(
notificationSettings.warningThreshold,
),
webhook_url: notificationSettings.webhookUrl,
webhook_secret: notificationSettings.webhookSecret,
notification_email: notificationSettings.notificationEmail,
accept_unset_model_ratio_model: notificationSettings.acceptUnsetModelRatioModel
accept_unset_model_ratio_model:
notificationSettings.acceptUnsetModelRatioModel,
});
if (res.data.success) {
@@ -363,7 +385,6 @@ const PersonalSetting = () => {
};
return (
<div>
<Layout>
<Layout.Content>
@@ -377,7 +398,10 @@ const PersonalSetting = () => {
centered={true}
>
<div style={{ marginTop: 20 }}>
<Typography.Text>{t('可用额度')}{renderQuotaWithPrompt(userState?.user?.aff_quota)}</Typography.Text>
<Typography.Text>
{t('可用额度')}
{renderQuotaWithPrompt(userState?.user?.aff_quota)}
</Typography.Text>
<Input
style={{ marginTop: 5 }}
value={userState?.user?.aff_quota}
@@ -386,7 +410,9 @@ const PersonalSetting = () => {
</div>
<div style={{ marginTop: 20 }}>
<Typography.Text>
{t('划转额度')}{renderQuotaWithPrompt(transferAmount)} {t('最低') + renderQuota(getQuotaPerUnit())}
{t('划转额度')}
{renderQuotaWithPrompt(transferAmount)}{' '}
{t('最低') + renderQuota(getQuotaPerUnit())}
</Typography.Text>
<div>
<InputNumber
@@ -405,7 +431,7 @@ const PersonalSetting = () => {
<Card.Meta
avatar={
<Avatar
size="default"
size='default'
color={stringToColor(getUsername())}
style={{ marginRight: 4 }}
>
@@ -416,25 +442,29 @@ const PersonalSetting = () => {
title={<Typography.Text>{getUsername()}</Typography.Text>}
description={
isRoot() ? (
<Tag color="red">{t('管理员')}</Tag>
<Tag color='red'>{t('管理员')}</Tag>
) : (
<Tag color="blue">{t('普通用户')}</Tag>
<Tag color='blue'>{t('普通用户')}</Tag>
)
}
></Card.Meta>
}
headerExtraContent={
<>
<Space vertical align="start">
<Tag color="green">{'ID: ' + userState?.user?.id}</Tag>
<Tag color="blue">{userState?.user?.group}</Tag>
<Space vertical align='start'>
<Tag color='green'>{'ID: ' + userState?.user?.id}</Tag>
<Tag color='blue'>{userState?.user?.group}</Tag>
</Space>
</>
}
footer={
<>
<div style={{ display: 'flex', alignItems: 'center', gap: 8 }}>
<Typography.Title heading={6}>{t('可用模型')}</Typography.Title>
<div
style={{ display: 'flex', alignItems: 'center', gap: 8 }}
>
<Typography.Title heading={6}>
{t('可用模型')}
</Typography.Title>
</div>
<div style={{ marginTop: 10 }}>
{models.length <= MODELS_DISPLAY_COUNT ? (
@@ -442,7 +472,7 @@ const PersonalSetting = () => {
{models.map((model) => (
<Tag
key={model}
color="cyan"
color='cyan'
onClick={() => {
copyText(model);
}}
@@ -458,7 +488,7 @@ const PersonalSetting = () => {
{models.map((model) => (
<Tag
key={model}
color="cyan"
color='cyan'
onClick={() => {
copyText(model);
}}
@@ -467,8 +497,8 @@ const PersonalSetting = () => {
</Tag>
))}
<Tag
color="blue"
type="light"
color='blue'
type='light'
style={{ cursor: 'pointer' }}
onClick={() => setIsModelsExpanded(false)}
>
@@ -478,24 +508,27 @@ const PersonalSetting = () => {
</Collapsible>
{!isModelsExpanded && (
<Space wrap>
{models.slice(0, MODELS_DISPLAY_COUNT).map((model) => (
<Tag
key={model}
color="cyan"
onClick={() => {
copyText(model);
}}
>
{model}
</Tag>
))}
{models
.slice(0, MODELS_DISPLAY_COUNT)
.map((model) => (
<Tag
key={model}
color='cyan'
onClick={() => {
copyText(model);
}}
>
{model}
</Tag>
))}
<Tag
color="blue"
type="light"
color='blue'
type='light'
style={{ cursor: 'pointer' }}
onClick={() => setIsModelsExpanded(true)}
>
{t('更多')} {models.length - MODELS_DISPLAY_COUNT} {t('个模型')}
{t('更多')} {models.length - MODELS_DISPLAY_COUNT}{' '}
{t('个模型')}
</Tag>
</Space>
)}
@@ -503,7 +536,6 @@ const PersonalSetting = () => {
)}
</div>
</>
}
>
<Descriptions row>
@@ -536,9 +568,9 @@ const PersonalSetting = () => {
<div style={{ marginTop: 10 }}>
<Descriptions row>
<Descriptions.Item itemKey={t('待使用收益')}>
<span style={{ color: 'rgba(var(--semi-red-5), 1)' }}>
{renderQuota(userState?.user?.aff_quota)}
</span>
<span style={{ color: 'rgba(var(--semi-red-5), 1)' }}>
{renderQuota(userState?.user?.aff_quota)}
</span>
<Button
type={'secondary'}
onClick={() => setOpenTransfer(true)}
@@ -589,7 +621,9 @@ const PersonalSetting = () => {
</div>
<div style={{ marginTop: 10 }}>
<Typography.Text strong>{t('微信')}</Typography.Text>
<div style={{ display: 'flex', justifyContent: 'space-between' }}>
<div
style={{ display: 'flex', justifyContent: 'space-between' }}
>
<div>
<Input
value={
@@ -664,7 +698,10 @@ const PersonalSetting = () => {
<div>
<Button
onClick={() => {
onOIDCClicked(status.oidc_authorization_endpoint, status.oidc_client_id);
onOIDCClicked(
status.oidc_authorization_endpoint,
status.oidc_client_id,
);
}}
disabled={
(userState.user && userState.user.oidc_id !== '') ||
@@ -697,7 +734,7 @@ const PersonalSetting = () => {
<Button disabled={true}>{t('已绑定')}</Button>
) : (
<TelegramLoginButton
dataAuthUrl="/api/oauth/telegram/bind"
dataAuthUrl='/api/oauth/telegram/bind'
botName={status.telegram_bot_name}
/>
)
@@ -779,59 +816,83 @@ const PersonalSetting = () => {
</p>
</div>
<Input
placeholder="验证码"
name="wechat_verification_code"
placeholder='验证码'
name='wechat_verification_code'
value={inputs.wechat_verification_code}
onChange={(v) =>
handleInputChange('wechat_verification_code', v)
}
/>
<Button color="" fluid size="large" onClick={bindWeChat}>
<Button color='' fluid size='large' onClick={bindWeChat}>
{t('绑定')}
</Button>
</Modal>
</div>
</Card>
<Card style={{ marginTop: 10 }}>
<Tabs type="line" defaultActiveKey="notification">
<TabPane tab={t('通知设置')} itemKey="notification">
<Tabs type='line' defaultActiveKey='notification'>
<TabPane tab={t('通知设置')} itemKey='notification'>
<div style={{ marginTop: 20 }}>
<Typography.Text strong>{t('通知方式')}</Typography.Text>
<div style={{ marginTop: 10 }}>
<RadioGroup
value={notificationSettings.warningType}
onChange={value => handleNotificationSettingChange('warningType', value)}
onChange={(value) =>
handleNotificationSettingChange('warningType', value)
}
>
<Radio value="email">{t('邮件通知')}</Radio>
<Radio value="webhook">{t('Webhook通知')}</Radio>
<Radio value='email'>{t('邮件通知')}</Radio>
<Radio value='webhook'>{t('Webhook通知')}</Radio>
</RadioGroup>
</div>
</div>
{notificationSettings.warningType === 'webhook' && (
<>
<div style={{ marginTop: 20 }}>
<Typography.Text strong>{t('Webhook地址')}</Typography.Text>
<Typography.Text strong>
{t('Webhook地址')}
</Typography.Text>
<div style={{ marginTop: 10 }}>
<Input
value={notificationSettings.webhookUrl}
onChange={val => handleNotificationSettingChange('webhookUrl', val)}
placeholder={t('请输入Webhook地址,例如: https://example.com/webhook')}
onChange={(val) =>
handleNotificationSettingChange('webhookUrl', val)
}
placeholder={t(
'请输入Webhook地址例如: https://example.com/webhook',
)}
/>
<Typography.Text type="secondary" style={{ marginTop: 8, display: 'block' }}>
{t('只支持https系统将以 POST 方式发送通知,请确保地址可以接收 POST 请求')}
<Typography.Text
type='secondary'
style={{ marginTop: 8, display: 'block' }}
>
{t(
'只支持https系统将以 POST 方式发送通知,请确保地址可以接收 POST 请求',
)}
</Typography.Text>
<Typography.Text type="secondary" style={{ marginTop: 8, display: 'block' }}>
<div style={{ cursor: 'pointer' }} onClick={() => setShowWebhookDocs(!showWebhookDocs)}>
{t('Webhook请求结构')} {showWebhookDocs ? '▼' : '▶'}
<Typography.Text
type='secondary'
style={{ marginTop: 8, display: 'block' }}
>
<div
style={{ cursor: 'pointer' }}
onClick={() =>
setShowWebhookDocs(!showWebhookDocs)
}
>
{t('Webhook请求结构')}{' '}
{showWebhookDocs ? '▼' : '▶'}
</div>
<Collapsible isOpen={showWebhookDocs}>
<pre style={{
marginTop: 4,
background: 'var(--semi-color-fill-0)',
padding: 8,
borderRadius: 4
}}>
{`{
<pre
style={{
marginTop: 4,
background: 'var(--semi-color-fill-0)',
padding: 8,
borderRadius: 4,
}}
>
{`{
"type": "quota_exceed", // 通知类型
"title": "标题", // 通知标题
"content": "通知内容", // 通知内容,支持 {{value}} 变量占位符
@@ -847,23 +908,38 @@ const PersonalSetting = () => {
"values": ["$0.99"],
"timestamp": 1739950503
}`}
</pre>
</pre>
</Collapsible>
</Typography.Text>
</div>
</div>
<div style={{ marginTop: 20 }}>
<Typography.Text strong>{t('接口凭证(可选)')}</Typography.Text>
<Typography.Text strong>
{t('接口凭证(可选)')}
</Typography.Text>
<div style={{ marginTop: 10 }}>
<Input
value={notificationSettings.webhookSecret}
onChange={val => handleNotificationSettingChange('webhookSecret', val)}
onChange={(val) =>
handleNotificationSettingChange(
'webhookSecret',
val,
)
}
placeholder={t('请输入密钥')}
/>
<Typography.Text type="secondary" style={{ marginTop: 8, display: 'block' }}>
{t('密钥将以 Bearer 方式添加到请求头中用于验证webhook请求的合法性')}
<Typography.Text
type='secondary'
style={{ marginTop: 8, display: 'block' }}
>
{t(
'密钥将以 Bearer 方式添加到请求头中用于验证webhook请求的合法性',
)}
</Typography.Text>
<Typography.Text type="secondary" style={{ marginTop: 4, display: 'block' }}>
<Typography.Text
type='secondary'
style={{ marginTop: 4, display: 'block' }}
>
{t('Authorization: Bearer your-secret-key')}
</Typography.Text>
</div>
@@ -876,57 +952,94 @@ const PersonalSetting = () => {
<div style={{ marginTop: 10 }}>
<Input
value={notificationSettings.notificationEmail}
onChange={val => handleNotificationSettingChange('notificationEmail', val)}
onChange={(val) =>
handleNotificationSettingChange(
'notificationEmail',
val,
)
}
placeholder={t('留空则使用账号绑定的邮箱')}
/>
<Typography.Text type="secondary" style={{ marginTop: 8, display: 'block' }}>
{t('设置用于接收额度预警的邮箱地址,不填则使用账号绑定的邮箱')}
<Typography.Text
type='secondary'
style={{ marginTop: 8, display: 'block' }}
>
{t(
'设置用于接收额度预警的邮箱地址,不填则使用账号绑定的邮箱',
)}
</Typography.Text>
</div>
</div>
)}
<div style={{ marginTop: 20 }}>
<Typography.Text
strong>{t('额度预警阈值')} {renderQuotaWithPrompt(notificationSettings.warningThreshold)}</Typography.Text>
<Typography.Text strong>
{t('额度预警阈值')}{' '}
{renderQuotaWithPrompt(
notificationSettings.warningThreshold,
)}
</Typography.Text>
<div style={{ marginTop: 10 }}>
<AutoComplete
value={notificationSettings.warningThreshold}
onChange={val => handleNotificationSettingChange('warningThreshold', val)}
onChange={(val) =>
handleNotificationSettingChange(
'warningThreshold',
val,
)
}
style={{ width: 200 }}
placeholder={t('请输入预警额度')}
data={[
{ value: 100000, label: '0.2$' },
{ value: 500000, label: '1$' },
{ value: 1000000, label: '5$' },
{ value: 5000000, label: '10$' }
{ value: 5000000, label: '10$' },
]}
/>
</div>
<Typography.Text type="secondary" style={{ marginTop: 10, display: 'block' }}>
{t('当剩余额度低于此数值时,系统将通过选择的方式发送通知')}
<Typography.Text
type='secondary'
style={{ marginTop: 10, display: 'block' }}
>
{t(
'当剩余额度低于此数值时,系统将通过选择的方式发送通知',
)}
</Typography.Text>
</div>
</TabPane>
<TabPane tab={t('价格设置')} itemKey="price">
<TabPane tab={t('价格设置')} itemKey='price'>
<div style={{ marginTop: 20 }}>
<Typography.Text strong>{t('接受未设置价格模型')}</Typography.Text>
<Typography.Text strong>
{t('接受未设置价格模型')}
</Typography.Text>
<div style={{ marginTop: 10 }}>
<Checkbox
checked={notificationSettings.acceptUnsetModelRatioModel}
onChange={e => handleNotificationSettingChange('acceptUnsetModelRatioModel', e.target.checked)}
checked={
notificationSettings.acceptUnsetModelRatioModel
}
onChange={(e) =>
handleNotificationSettingChange(
'acceptUnsetModelRatioModel',
e.target.checked,
)
}
>
{t('接受未设置价格模型')}
</Checkbox>
<Typography.Text type="secondary" style={{ marginTop: 8, display: 'block' }}>
{t('当模型没有设置价格时仍接受调用,仅当您信任该网站时使用,可能会产生高额费用')}
<Typography.Text
type='secondary'
style={{ marginTop: 8, display: 'block' }}
>
{t(
'当模型没有设置价格时仍接受调用,仅当您信任该网站时使用,可能会产生高额费用',
)}
</Typography.Text>
</div>
</div>
</TabPane>
</Tabs>
<div style={{ marginTop: 20 }}>
<Button type="primary" onClick={saveNotificationSettings}>
<Button type='primary' onClick={saveNotificationSettings}>
{t('保存设置')}
</Button>
</div>
@@ -939,20 +1052,22 @@ const PersonalSetting = () => {
centered={true}
maskClosable={false}
>
<Typography.Title heading={6}>{t('绑定邮箱地址')}</Typography.Title>
<Typography.Title heading={6}>
{t('绑定邮箱地址')}
</Typography.Title>
<div
style={{
marginTop: 20,
display: 'flex',
justifyContent: 'space-between'
justifyContent: 'space-between',
}}
>
<Input
fluid
placeholder="输入邮箱地址"
placeholder='输入邮箱地址'
onChange={(value) => handleInputChange('email', value)}
name="email"
type="email"
name='email'
type='email'
/>
<Button
onClick={sendVerificationCode}
@@ -964,8 +1079,8 @@ const PersonalSetting = () => {
<div style={{ marginTop: 10 }}>
<Input
fluid
placeholder="验证码"
name="email_verification_code"
placeholder='验证码'
name='email_verification_code'
value={inputs.email_verification_code}
onChange={(value) =>
handleInputChange('email_verification_code', value)
@@ -992,20 +1107,20 @@ const PersonalSetting = () => {
>
<div style={{ marginTop: 20 }}>
<Banner
type="danger"
description="您正在删除自己的帐户,将清空所有数据且不可恢复"
type='danger'
description='您正在删除自己的帐户,将清空所有数据且不可恢复'
closeIcon={null}
/>
</div>
<div style={{ marginTop: 20 }}>
<Input
placeholder={`输入你的账户名 ${userState?.user?.username} 以确认删除`}
name="self_account_deletion_confirmation"
name='self_account_deletion_confirmation'
value={inputs.self_account_deletion_confirmation}
onChange={(value) =>
handleInputChange(
'self_account_deletion_confirmation',
value
value,
)
}
/>
@@ -1030,7 +1145,17 @@ const PersonalSetting = () => {
>
<div style={{ marginTop: 20 }}>
<Input
name="set_new_password"
name='original_password'
placeholder={t('原密码')}
type='password'
value={inputs.original_password}
onChange={(value) =>
handleInputChange('original_password', value)
}
/>
<Input
style={{ marginTop: 20 }}
name='set_new_password'
placeholder={t('新密码')}
value={inputs.set_new_password}
onChange={(value) =>
@@ -1039,7 +1164,7 @@ const PersonalSetting = () => {
/>
<Input
style={{ marginTop: 20 }}
name="set_new_password_confirmation"
name='set_new_password_confirmation'
placeholder={t('确认新密码')}
value={inputs.set_new_password_confirmation}
onChange={(value) =>
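
The notification tab above documents the webhook contract: the system POSTs a JSON body with type, title, content (which may contain {{value}} placeholders), values and timestamp, and the optional credential is sent as an Authorization: Bearer header. Below is a minimal receiver sketch, assuming an Express server; the /webhook path, port 3000 and the WEBHOOK_SECRET environment variable are illustrative and not part of this repository.

const express = require('express');

const app = express();
app.use(express.json());

app.post('/webhook', (req, res) => {
  // The optional credential from the settings above arrives as a Bearer token.
  const auth = req.headers['authorization'] || '';
  if (auth !== `Bearer ${process.env.WEBHOOK_SECRET}`) {
    return res.status(401).end();
  }
  const { type, title, content = '', values = [], timestamp } = req.body || {};
  // Fill {{value}} placeholders with the provided values, in order.
  let i = 0;
  const rendered = content.replace(/\{\{value\}\}/g, () => values[i++] ?? '');
  console.log(`[${type}] ${title} (${timestamp}): ${rendered}`);
  res.status(200).end();
});

app.listen(3000);
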
@@ -1,7 +1,6 @@
import React, { useEffect, useState } from 'react';
import { Card, Spin, Tabs } from '@douyinfe/semi-ui';
import { API, showError, showSuccess } from '../helpers';
import SettingsChats from '../pages/Setting/Operation/SettingsChats.js';
import { useTranslation } from 'react-i18next';
@@ -24,9 +23,7 @@ const RateLimitSetting = () => {
if (success) {
let newInputs = {};
data.forEach((item) => {
if (
item.key.endsWith('Enabled')
) {
if (item.key.endsWith('Enabled')) {
newInputs[item.key] = item.value === 'true' ? true : false;
} else {
newInputs[item.key] = item.value;
@@ -10,7 +10,8 @@ import {
import { ITEMS_PER_PAGE } from '../constants';
import { renderQuota } from '../helpers/render';
import {
Button, Divider,
Button,
Divider,
Form,
Modal,
Popconfirm,
@@ -193,15 +194,17 @@ const RedemptionsTable = () => {
};
const loadRedemptions = async (startIdx, pageSize) => {
const res = await API.get(`/api/redemption/?p=${startIdx}&page_size=${pageSize}`);
const res = await API.get(
`/api/redemption/?p=${startIdx}&page_size=${pageSize}`,
);
const { success, message, data } = res.data;
if (success) {
const newPageData = data.items;
setActivePage(data.page);
setTokenCount(data.total);
setRedemptionFormat(newPageData);
const newPageData = data.items;
setActivePage(data.page);
setTokenCount(data.total);
setRedemptionFormat(newPageData);
} else {
showError(message);
showError(message);
}
setLoading(false);
};
@@ -282,19 +285,21 @@ const RedemptionsTable = () => {
const searchRedemptions = async (keyword, page, pageSize) => {
if (searchKeyword === '') {
await loadRedemptions(page, pageSize);
return;
await loadRedemptions(page, pageSize);
return;
}
setSearching(true);
const res = await API.get(`/api/redemption/search?keyword=${keyword}&p=${page}&page_size=${pageSize}`);
const res = await API.get(
`/api/redemption/search?keyword=${keyword}&p=${page}&page_size=${pageSize}`,
);
const { success, message, data } = res.data;
if (success) {
const newPageData = data.items;
setActivePage(data.page);
setTokenCount(data.total);
setRedemptionFormat(newPageData);
const newPageData = data.items;
setActivePage(data.page);
setTokenCount(data.total);
setRedemptionFormat(newPageData);
} else {
showError(message);
showError(message);
}
setSearching(false);
};
@@ -355,9 +360,11 @@ const RedemptionsTable = () => {
visiable={showEdit}
handleClose={closeEdit}
></EditRedemption>
<Form onSubmit={()=> {
searchRedemptions(searchKeyword, activePage, pageSize).then();
}}>
<Form
onSubmit={() => {
searchRedemptions(searchKeyword, activePage, pageSize).then();
}}
>
<Form.Input
label={t('搜索关键字')}
field='keyword'
@@ -369,35 +376,36 @@ const RedemptionsTable = () => {
onChange={handleKeywordChange}
/>
</Form>
<Divider style={{margin:'5px 0 15px 0'}}/>
<Divider style={{ margin: '5px 0 15px 0' }} />
<div>
<Button
theme='light'
type='primary'
style={{ marginRight: 8 }}
onClick={() => {
setEditingRedemption({
id: undefined,
});
setShowEdit(true);
}}
theme='light'
type='primary'
style={{ marginRight: 8 }}
onClick={() => {
setEditingRedemption({
id: undefined,
});
setShowEdit(true);
}}
>
{t('添加兑换码')}
</Button>
<Button
label={t('复制所选兑换码')}
type='warning'
onClick={async () => {
if (selectedKeys.length === 0) {
showError(t('请至少选择一个兑换码!'));
return;
}
let keys = '';
for (let i = 0; i < selectedKeys.length; i++) {
keys += selectedKeys[i].name + ' ' + selectedKeys[i].key + '\n';
}
await copyText(keys);
}}
label={t('复制所选兑换码')}
type='warning'
onClick={async () => {
if (selectedKeys.length === 0) {
showError(t('请至少选择一个兑换码!'));
return;
}
let keys = '';
for (let i = 0; i < selectedKeys.length; i++) {
keys +=
selectedKeys[i].name + ' ' + selectedKeys[i].key + '\n';
}
await copyText(keys);
}}
>
{t('复制所选兑换码到剪贴板')}
</Button>
@@ -417,7 +425,7 @@ const RedemptionsTable = () => {
t('第 {{start}} - {{end}} 条,共 {{total}} 条', {
start: page.currentStart,
end: page.currentEnd,
total: tokenCount
total: tokenCount,
}),
onPageSizeChange: (size) => {
setPageSize(size);
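
The table above drives two endpoints: GET /api/redemption/?p=<page>&page_size=<n> for listing and GET /api/redemption/search?keyword=...&p=...&page_size=... for searching, both returning { success, message, data: { items, page, total } }. A standalone usage sketch with a plain fetch client follows; everything beyond the paths and response shape visible in the diff is illustrative.

async function fetchRedemptions({ keyword = '', page = 0, pageSize = 10 } = {}) {
  // Mirror the component: an empty keyword falls back to the plain listing endpoint.
  const url =
    keyword === ''
      ? `/api/redemption/?p=${page}&page_size=${pageSize}`
      : `/api/redemption/search?keyword=${encodeURIComponent(keyword)}&p=${page}&page_size=${pageSize}`;
  const res = await fetch(url, { headers: { Accept: 'application/json' } });
  const { success, message, data } = await res.json();
  if (!success) throw new Error(message);
  return data; // { items, page, total }
}
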
@@ -1,13 +1,32 @@
import React, { useContext, useEffect, useState } from 'react';
import { Link, useNavigate } from 'react-router-dom';
import { API, getLogo, showError, showInfo, showSuccess, updateAPI } from '../helpers';
import {
API,
getLogo,
showError,
showInfo,
showSuccess,
updateAPI,
} from '../helpers';
import Turnstile from 'react-turnstile';
import { Button, Card, Divider, Form, Icon, Layout, Modal } from '@douyinfe/semi-ui';
import {
Button,
Card,
Divider,
Form,
Icon,
Layout,
Modal,
} from '@douyinfe/semi-ui';
import Title from '@douyinfe/semi-ui/lib/es/typography/title';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
import { IconGithubLogo } from '@douyinfe/semi-icons';
import {onGitHubOAuthClicked, onLinuxDOOAuthClicked, onOIDCClicked} from './utils.js';
import OIDCIcon from "./OIDCIcon.js";
import {
onGitHubOAuthClicked,
onLinuxDOOAuthClicked,
onOIDCClicked,
} from './utils.js';
import OIDCIcon from './OIDCIcon.js';
import LinuxDoIcon from './LinuxDoIcon.js';
import WeChatIcon from './WeChatIcon.js';
import TelegramLoginButton from 'react-telegram-login/src';
@@ -22,7 +41,7 @@ const RegisterForm = () => {
password: '',
password2: '',
email: '',
verification_code: ''
verification_code: '',
});
const { username, password, password2 } = inputs;
const [showEmailVerification, setShowEmailVerification] = useState(false);
@@ -54,7 +73,6 @@ const RegisterForm = () => {
}
});
const onWeChatLoginClicked = () => {
setShowWeChatLoginModal(true);
};
@@ -106,7 +124,7 @@ const RegisterForm = () => {
inputs.aff_code = affCode;
const res = await API.post(
`/api/user/register?turnstile=${turnstileToken}`,
inputs
inputs,
);
const { success, message } = res.data;
if (success) {
@@ -127,7 +145,7 @@ const RegisterForm = () => {
}
setLoading(true);
const res = await API.get(
`/api/verification?email=${inputs.email}&turnstile=${turnstileToken}`
`/api/verification?email=${inputs.email}&turnstile=${turnstileToken}`,
);
const { success, message } = res.data;
if (success) {
@@ -169,7 +187,6 @@ const RegisterForm = () => {
}
};
return (
<div>
<Layout>
@@ -179,7 +196,7 @@ const RegisterForm = () => {
style={{
justifyContent: 'center',
display: 'flex',
marginTop: 120
marginTop: 120,
}}
>
<div style={{ width: 500 }}>
@@ -187,28 +204,28 @@ const RegisterForm = () => {
<Title heading={2} style={{ textAlign: 'center' }}>
{t('新用户注册')}
</Title>
<Form size="large">
<Form size='large'>
<Form.Input
field={'username'}
label={t('用户名')}
placeholder={t('用户名')}
name="username"
name='username'
onChange={(value) => handleChange('username', value)}
/>
<Form.Input
field={'password'}
label={t('密码')}
placeholder={t('输入密码,最短 8 位,最长 20 位')}
name="password"
type="password"
name='password'
type='password'
onChange={(value) => handleChange('password', value)}
/>
<Form.Input
field={'password2'}
label={t('确认密码')}
placeholder={t('确认密码')}
name="password2"
type="password"
name='password2'
type='password'
onChange={(value) => handleChange('password2', value)}
/>
{showEmailVerification ? (
@@ -218,10 +235,13 @@ const RegisterForm = () => {
label={t('邮箱')}
placeholder={t('输入邮箱地址')}
onChange={(value) => handleChange('email', value)}
name="email"
type="email"
name='email'
type='email'
suffix={
<Button onClick={sendVerificationCode} disabled={loading}>
<Button
onClick={sendVerificationCode}
disabled={loading}
>
{t('获取验证码')}
</Button>
}
@@ -230,8 +250,10 @@ const RegisterForm = () => {
field={'verification_code'}
label={t('验证码')}
placeholder={t('输入验证码')}
onChange={(value) => handleChange('verification_code', value)}
name="verification_code"
onChange={(value) =>
handleChange('verification_code', value)
}
name='verification_code'
/>
</>
) : (
@@ -252,14 +274,12 @@ const RegisterForm = () => {
style={{
display: 'flex',
justifyContent: 'space-between',
marginTop: 20
marginTop: 20,
}}
>
<Text>
{t('已有账户?')}
<Link to="/login">
{t('点击登录')}
</Link>
<Link to='/login'>{t('点击登录')}</Link>
</Text>
</div>
{status.github_oauth ||
@@ -290,15 +310,18 @@ const RegisterForm = () => {
<></>
)}
{status.oidc_enabled ? (
<Button
type='primary'
icon={<OIDCIcon />}
onClick={() =>
onOIDCClicked(status.oidc_authorization_endpoint, status.oidc_client_id)
}
/>
<Button
type='primary'
icon={<OIDCIcon />}
onClick={() =>
onOIDCClicked(
status.oidc_authorization_endpoint,
status.oidc_client_id,
)
}
/>
) : (
<></>
<></>
)}
{status.linuxdo_oauth ? (
<Button
@@ -365,7 +388,9 @@ const RegisterForm = () => {
</div>
<div style={{ textAlign: 'center' }}>
<p>
{t('微信扫码关注公众号,输入「验证码」获取验证码(三分钟内有效)')}
{t(
'微信扫码关注公众号,输入「验证码」获取验证码(三分钟内有效)',
)}
</p>
</div>
<Form size='large'>
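
The register form above requests email codes from GET /api/verification?email=...&turnstile=<token> and submits the collected inputs (plus aff_code) to POST /api/user/register?turnstile=<token>. A condensed sketch of the same two calls with a plain fetch client; the helper names are hypothetical and error handling is illustrative.

// Ask the backend to email a verification code (same endpoint as the form's suffix button).
async function requestEmailCode(email, turnstileToken) {
  const res = await fetch(
    `/api/verification?email=${encodeURIComponent(email)}&turnstile=${turnstileToken}`,
  );
  const { success, message } = await res.json();
  if (!success) throw new Error(message);
}

// Submit the registration payload, carrying the affiliate code when present.
async function register(inputs, turnstileToken, affCode = '') {
  const res = await fetch(`/api/user/register?turnstile=${turnstileToken}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ...inputs, aff_code: affCode }),
  });
  const { success, message } = await res.json();
  if (!success) throw new Error(message);
}
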
@@ -15,10 +15,13 @@ import {
import '../index.css';
import {
IconCalendarClock, IconChecklistStroked,
IconComment, IconCommentStroked,
IconCalendarClock,
IconChecklistStroked,
IconComment,
IconCommentStroked,
IconCreditCard,
IconGift, IconHelpCircle,
IconGift,
IconHelpCircle,
IconHistogram,
IconHome,
IconImage,
@@ -26,9 +29,16 @@ import {
IconLayers,
IconPriceTag,
IconSetting,
IconUser
IconUser,
} from '@douyinfe/semi-icons';
import { Avatar, Dropdown, Layout, Nav, Switch, Divider } from '@douyinfe/semi-ui';
import {
Avatar,
Dropdown,
Layout,
Nav,
Switch,
Divider,
} from '@douyinfe/semi-ui';
import { setStatusData } from '../helpers/data.js';
import { stringToColor } from '../helpers/render.js';
import { useSetTheme, useTheme } from '../context/Theme/index.js';
@@ -44,21 +54,23 @@ const navItemStyle = {
// 自定义侧边栏按钮悬停样式
const navItemHoverStyle = {
backgroundColor: 'var(--semi-color-primary-light-default)',
color: 'var(--semi-color-primary)'
color: 'var(--semi-color-primary)',
};
// 自定义侧边栏按钮选中样式
const navItemSelectedStyle = {
backgroundColor: 'var(--semi-color-primary-light-default)',
color: 'var(--semi-color-primary)',
fontWeight: '600'
fontWeight: '600',
};
// 自定义图标样式
const iconStyle = (itemKey, selectedKeys) => {
return {
fontSize: '18px',
color: selectedKeys.includes(itemKey) ? 'var(--semi-color-primary)' : 'var(--semi-color-text-2)',
color: selectedKeys.includes(itemKey)
? 'var(--semi-color-primary)'
: 'var(--semi-color-text-2)',
};
};
@@ -99,8 +111,24 @@ const SiderBar = () => {
// 预先计算所有可能的图标样式
const allItemKeys = useMemo(() => {
const keys = ['home', 'channel', 'token', 'redemption', 'topup', 'user', 'log', 'midjourney',
'setting', 'about', 'chat', 'detail', 'pricing', 'task', 'playground', 'personal'];
const keys = [
'home',
'channel',
'token',
'redemption',
'topup',
'user',
'log',
'midjourney',
'setting',
'about',
'chat',
'detail',
'pricing',
'task',
'playground',
'personal',
];
// 添加聊天项的keys
for (let i = 0; i < chatItems.length; i++) {
keys.push('chat' + i);
@@ -111,7 +139,7 @@ const SiderBar = () => {
// 使用useMemo一次性计算所有图标样式
const iconStyles = useMemo(() => {
const styles = {};
allItemKeys.forEach(key => {
allItemKeys.forEach((key) => {
styles[key] = iconStyle(key, selectedKeys);
});
return styles;
@@ -157,10 +185,8 @@ const SiderBar = () => {
to: '/task',
icon: <IconChecklistStroked />,
className:
localStorage.getItem('enable_task') === 'true'
? ''
: 'tableHiddle',
}
localStorage.getItem('enable_task') === 'true' ? '' : 'tableHiddle',
},
],
[
localStorage.getItem('enable_data_export'),
@@ -241,13 +267,13 @@ const SiderBar = () => {
// Function to update router map with chat routes
const updateRouterMapWithChats = (chats) => {
const newRouterMap = { ...routerMap };
if (Array.isArray(chats) && chats.length > 0) {
for (let i = 0; i < chats.length; i++) {
newRouterMap['chat' + i] = '/chat/' + i;
}
}
setRouterMapState(newRouterMap);
return newRouterMap;
};
@@ -270,13 +296,13 @@ const SiderBar = () => {
chatItems.push(chat);
}
setChatItems(chatItems);
// Update router map with chat routes
updateRouterMapWithChats(chats);
}
} catch (e) {
console.error(e);
showError('聊天数据解析失败')
showError('聊天数据解析失败');
}
}
}, []);
@@ -284,7 +310,9 @@ const SiderBar = () => {
// Update the useEffect for route selection
useEffect(() => {
const currentPath = location.pathname;
let matchingKey = Object.keys(routerMapState).find(key => routerMapState[key] === currentPath);
let matchingKey = Object.keys(routerMapState).find(
(key) => routerMapState[key] === currentPath,
);
// Handle chat routes
if (!matchingKey && currentPath.startsWith('/chat/')) {
@@ -325,8 +353,8 @@ const SiderBar = () => {
return (
<>
<Nav
className="custom-sidebar-nav"
style={{
className='custom-sidebar-nav'
style={{
width: isCollapsed ? '60px' : '200px',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.15)',
borderRight: '1px solid var(--semi-color-border)',
@@ -351,7 +379,9 @@ const SiderBar = () => {
// 确保在收起侧边栏时有选中的项目,避免不必要的计算
if (selectedKeys.length === 0) {
const currentPath = location.pathname;
const matchingKey = Object.keys(routerMapState).find(key => routerMapState[key] === currentPath);
const matchingKey = Object.keys(routerMapState).find(
(key) => routerMapState[key] === currentPath,
);
if (matchingKey) {
setSelectedKeys([matchingKey]);
@@ -382,12 +412,12 @@ const SiderBar = () => {
} else {
styleDispatch({ type: 'SET_INNER_PADDING', payload: true });
}
// 如果点击的是已经展开的子菜单的父项,则收起子菜单
if (openedKeys.includes(key.itemKey)) {
setOpenedKeys(openedKeys.filter(k => k !== key.itemKey));
setOpenedKeys(openedKeys.filter((k) => k !== key.itemKey));
}
setSelectedKeys([key.itemKey]);
}}
openKeys={openedKeys}
@@ -403,7 +433,9 @@ const SiderBar = () => {
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
icon={React.cloneElement(item.icon, {
style: iconStyles[item.itemKey],
})}
>
{item.items.map((subItem) => (
<Nav.Item
@@ -420,7 +452,9 @@ const SiderBar = () => {
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
icon={React.cloneElement(item.icon, {
style: iconStyles[item.itemKey],
})}
/>
);
}
@@ -436,7 +470,9 @@ const SiderBar = () => {
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
icon={React.cloneElement(item.icon, {
style: iconStyles[item.itemKey],
})}
className={item.className}
/>
))}
@@ -453,7 +489,9 @@ const SiderBar = () => {
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
icon={React.cloneElement(item.icon, {
style: iconStyles[item.itemKey],
})}
className={item.className}
/>
))}
@@ -470,7 +508,9 @@ const SiderBar = () => {
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
icon={React.cloneElement(item.icon, {
style: iconStyles[item.itemKey],
})}
className={item.className}
/>
))}
@@ -480,14 +520,12 @@ const SiderBar = () => {
paddingBottom: styleState?.isMobile ? '112px' : '',
}}
collapseButton={true}
collapseText={(collapsed)=>
{
if(collapsed){
return t('展开侧边栏')
}
return t('收起侧边栏')
collapseText={(collapsed) => {
if (collapsed) {
return t('展开侧边栏');
}
}
return t('收起侧边栏');
}}
/>
</Nav>
</>
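
The sidebar above keeps a router map from item keys to paths, extends it with one '/chat/<index>' entry per configured chat, and resolves the current pathname back to a selected key. A condensed sketch of that mapping; the chat-route fallback is an assumption, since the diff only shows the condition of that branch.

function buildRouterMap(baseMap, chats) {
  const map = { ...baseMap };
  if (Array.isArray(chats) && chats.length > 0) {
    chats.forEach((_, i) => {
      map['chat' + i] = '/chat/' + i;
    });
  }
  return map;
}

function resolveSelectedKey(routerMap, pathname) {
  const direct = Object.keys(routerMap).find((key) => routerMap[key] === pathname);
  if (direct) return direct;
  // Assumed fallback for dynamic chat routes such as '/chat/0'.
  if (pathname.startsWith('/chat/')) return 'chat' + pathname.slice('/chat/'.length);
  return null;
}
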
Some files were not shown because too many files have changed in this diff.