Compare commits

...

86 Commits

Author SHA1 Message Date
Calcium-Ion
ecb5b5630c Merge pull request #845 from Sh1n3zZ/gemini-embedding
feat: gemini Embeddings support
2025-03-10 23:46:53 +08:00
Sh1n3zZ
e1b9f164f9 feat: gemini Embeddings support 2025-03-10 23:32:06 +08:00
1808837298@qq.com
69db1f1465 Merge remote-tracking branch 'origin/main' 2025-03-10 21:05:43 +08:00
1808837298@qq.com
94549f9687 refactor: Improve responsive design across multiple setting pages 2025-03-10 21:05:22 +08:00
Calcium-Ion
c7e1bab18a Merge pull request #842 from asjfoajs/dev
Fix: Under Ali's large model, the task ID result for image retrieval …
2025-03-10 20:18:53 +08:00
1808837298@qq.com
627f95b034 refactor: Remove unnecessary transition styles and simplify sidebar state management 2025-03-10 20:14:23 +08:00
1808837298@qq.com
8b99eec440 refactor: Improve sidebar state management and layout responsiveness 2025-03-10 19:48:17 +08:00
1808837298@qq.com
49bfd2b719 feat: Enhance mobile UI responsiveness and layout for ChannelsTable and SiderBar 2025-03-10 19:01:56 +08:00
霍雨佳
434e9d7695 Fix: Under Ali's large model, the task ID result for image retrieval is incorrect.
Reason: The URL is incomplete, missing baseurl.
Solution: Add baseurl. url := fmt.Sprintf("%s/api/v1/tasks/%s", info.BaseUrl, taskID).
2025-03-10 16:22:40 +08:00
1808837298@qq.com
b2938ffe2c refactor: Improve mobile responsiveness and scrolling behavior in UI layout 2025-03-10 15:49:32 +08:00
1808837298@qq.com
d9cf0885f1 refactor: Enhance UI layout and styling with responsive design improvements 2025-03-10 03:25:02 +08:00
1808837298@qq.com
3ed50787b3 style: Enhance LogsTable header tags with improved styling and visual hierarchy 2025-03-10 00:34:24 +08:00
1808837298@qq.com
97d948cdb1 refactor: Make Channel Setting nullable and improve setting handling #836 2025-03-09 23:42:48 +08:00
1808837298@qq.com
5017fabbfa fix: Correct typo in group_ratio variable name in LogsTable 2025-03-09 21:24:19 +08:00
1808837298@qq.com
bd5c261b99 fix: Add optional chaining to prevent potential undefined errors in LogsTable #833 2025-03-09 21:23:33 +08:00
1808837298@qq.com
00c2d6c102 feat: Introduce configurable docs link and remove hardcoded chat links
- Added a new GeneralSetting struct to manage configurable docs link
- Removed hardcoded ChatLink and ChatLink2 variables across multiple files
- Updated frontend components to dynamically render docs link from status
- Simplified chat and link-related logic in various components
- Added a warning modal for quota per unit setting in operation settings
2025-03-09 18:31:16 +08:00
1808837298@qq.com
4a8bb625b8 fix: Refine embedding model detection in channel test 2025-03-09 15:03:07 +08:00
1808837298@qq.com
db01994cd0 refactor: Improve price rendering with clearer token and price calculations 2025-03-08 23:47:02 +08:00
Calcium-Ion
a0ca3effa7 Merge pull request #830 from Calcium-Ion/decimal
feat: Improve decimal precision for quota and payment calculations
2025-03-08 22:01:15 +08:00
1808837298@qq.com
5a10ebd384 refactor: Update topup amount type from int to int64 for improved precision 2025-03-08 21:59:18 +08:00
1808837298@qq.com
68097c132d feat: Improve decimal precision for quota and payment calculations
- Added github.com/shopspring/decimal for precise floating-point calculations
- Refactored quota and payment calculations in multiple files to use decimal arithmetic
- Updated go.mod and go.sum to include decimal library
- Improved precision in topup, relay, and quota service calculations
- Added support for more OpenAI model variants in cache ratio settings
2025-03-08 21:55:50 +08:00
Calcium-Ion
3352bacd35 Merge pull request #828 from Calcium-Ion/ui
feat: Add column visibility settings for Channels and Logs tables
2025-03-08 19:55:28 +08:00
1808837298@qq.com
7fcb14e25f feat: Add column visibility settings for Channels and Logs tables
- Implemented dynamic column visibility for ChannelsTable and LogsTable
- Added localStorage persistence for column preferences
- Introduced column selector modal with select all/reset functionality
- Supported role-based default column visibility
- Added column settings button to table interfaces
2025-03-08 19:53:07 +08:00
1808837298@qq.com
867187ab4d refactor: Simplify chat menu items rendering in SiderBar 2025-03-08 19:06:49 +08:00
1808837298@qq.com
3ad96d3b4e feat: update readme and i18n 2025-03-08 18:13:44 +08:00
Calcium-Ion
d9390ff4c3 Merge pull request #826 from Calcium-Ion/cache
feat: Add prompt cache hit tokens support for DeepSeek channel #406
2025-03-08 16:52:19 +08:00
1808837298@qq.com
8c209e2fb9 fix: Adjust DeepSeek cache ratio to 0.1 2025-03-08 16:51:43 +08:00
1808837298@qq.com
a9bfcb0daf feat: Add prompt cache hit tokens support for DeepSeek channel #406 2025-03-08 16:50:53 +08:00
1808837298@qq.com
bb848b2fe0 refactor: Improve quota calculation precision using floating-point arithmetic 2025-03-08 16:44:08 +08:00
Calcium-Ion
618908f6f8 Merge pull request #821 from Calcium-Ion/cache
chore: Update terminology from "cache ratio" to "cache multiplier" in UI and add placeholder for default create cache ratio
2025-03-08 02:49:21 +08:00
1808837298@qq.com
1f4ebddcfa fix: Update default cache ratio from 0.5 to 1 2025-03-08 02:47:41 +08:00
1808837298@qq.com
6d79d8993e chore: Update terminology from "cache ratio" to "cache multiplier" in UI and add placeholder for default create cache ratio 2025-03-08 02:44:09 +08:00
Calcium-Ion
7c03ad71de Merge pull request #820 from Calcium-Ion/cache
feat: Implement cache token ratio for more precise token pricing
2025-03-08 01:31:44 +08:00
1808837298@qq.com
4f194f4e6a feat: Implement cache token ratio for more precise token pricing 2025-03-08 01:30:50 +08:00
1808837298@qq.com
81137e0533 refactor: Remove redundant user quota retrieval in audio relay 2025-03-07 19:59:00 +08:00
Calcium-Ion
b9b66dda54 Merge pull request #815 from Sh1n3zZ/openrouter-adapter
fix: adapting return format for openrouter think content (#793)
2025-03-07 19:25:20 +08:00
1808837298@qq.com
fd22948ead refactor: Reorganize sidebar navigation and add personal settings route 2025-03-07 17:22:37 +08:00
Sh1n3zZ
894dce7366 fix: possible incomplete return of the think field and incorrect occurrences of the reasoning field 2025-03-06 19:20:29 +08:00
Sh1n3zZ
b95142bbac fix: adapting return format for openrouter think content (#793) 2025-03-06 19:16:26 +08:00
1808837298@qq.com
7f74a9664e feat: Enhance channel status update with success tracking and dynamic notification #812 2025-03-06 17:46:03 +08:00
1808837298@qq.com
a3739f67f7 fix: Handle error in NotifyRootUser and log system errors #812 2025-03-06 17:25:39 +08:00
1808837298@qq.com
b841ce006f refactor: Improve model request rate limit middleware execution 2025-03-06 16:32:11 +08:00
1808837298@qq.com
e3f9ef1894 fix: error NotifyRootUser #812 2025-03-06 15:56:42 +08:00
1808837298@qq.com
558e625a01 fix: Prevent resource leaks by adding body close in stream handlers 2025-03-05 19:51:22 +08:00
1808837298@qq.com
37a83ecc33 refactor: Centralize stream handling and helper functions in relay package 2025-03-05 19:47:41 +08:00
1808837298@qq.com
37bb34b4b0 Update README.md 2025-03-05 16:55:17 +08:00
1808837298@qq.com
8deab221f9 fix: vertex claude 2025-03-05 16:43:40 +08:00
1808837298@qq.com
17e9f1a07d fix: #810 2025-03-05 16:39:42 +08:00
1808837298@qq.com
792754cee3 fix: #810 2025-03-05 16:34:08 +08:00
1808837298@qq.com
98b27a17a6 refactor: Extract operation-related settings into a separate package 2025-03-04 18:52:08 +08:00
1808837298@qq.com
7855f83e2d Update README.md 2025-03-04 18:50:05 +08:00
1808837298@qq.com
cbdf26bf2c feat: Add context-aware goroutine pool for safer concurrent operations 2025-03-04 18:42:34 +08:00
1808837298@qq.com
eb46b71a71 fix: Ignore EOF errors in OpenAI stream scanner 2025-03-04 17:35:41 +08:00
1808837298@qq.com
a42c3b6227 Merge remote-tracking branch 'origin/main' 2025-03-04 17:11:07 +08:00
1808837298@qq.com
b00dd8b405 fix: Handle scanner errors in OpenAI relay stream handler 2025-03-04 17:10:56 +08:00
Calcium-Ion
be228ccd2c Merge pull request #805 from PaperPlaneDeemo/main
Fix: fix typo in README
2025-03-04 16:27:15 +08:00
1808837298@qq.com
b1be64bcf3 fix: vertex claude 2025-03-03 20:06:08 +08:00
1808837298@qq.com
6ecfb81cbc feat: Improve image download and validation in GetImageFromUrl 2025-03-03 16:15:04 +08:00
Nekof
14848ff789 Merge branch 'Calcium-Ion:main' into main 2025-03-03 11:37:40 +08:00
“Deemo”
47d3b515da fix: Typo in README 2025-03-03 11:35:04 +08:00
1808837298@qq.com
760514c3e1 fix: channel test model mapped 2025-03-02 23:53:10 +08:00
1808837298@qq.com
254c25c27a feat: yanjingxia 2025-03-02 23:17:37 +08:00
1808837298@qq.com
8731a32e56 feat: Add model testing modal with search functionality in ChannelsTable
- Implement a new modal for selecting and testing models per channel
- Add search functionality to filter models by keyword
- Replace dropdown with direct button for model testing
- Introduce new state variables for managing model test modal
2025-03-02 19:53:35 +08:00
1808837298@qq.com
7208a65e5d refactor: Add index to Username column in Log model 2025-03-02 17:57:52 +08:00
1808837298@qq.com
4084b18071 refactor: Update rate limit configuration to use dynamic expiration duration 2025-03-02 17:34:39 +08:00
1808837298@qq.com
2ca0d7246d fix: Use channel group in model testing log record 2025-03-02 15:59:39 +08:00
1808837298@qq.com
d042a1bd55 refactor: Improve channel testing and model price handling 2025-03-02 15:47:12 +08:00
1808837298@qq.com
816e831a2e feat: Persist models expanded state in PersonalSetting component 2025-03-02 01:35:50 +08:00
1808837298@qq.com
a3ceae4a86 feat: Enhance update checking and system information display
- Add version and startup time display in OtherSetting component
- Implement robust GitHub release update checking mechanism
- Add error handling for update check process
- Update Modal component for displaying update information
- Add new translations for version and system information
2025-03-02 01:31:27 +08:00
1808837298@qq.com
eb163d9c94 feat: Add self-use mode and demo site mode indicators to HeaderBar 2025-03-02 00:46:54 +08:00
1808837298@qq.com
a592a81bc2 fix: Correct option map key for PreConsumedQuota 2025-03-01 22:37:14 +08:00
1808837298@qq.com
bb300d199e feat: Add translations for self-use mode and demo site mode settings 2025-03-01 21:15:59 +08:00
1808837298@qq.com
7dbb6b017c feat: Add self-use mode for model ratio and price configuration
- Introduce `SelfUseModeEnabled` setting to allow flexible model ratio configuration
- Update error handling to provide more informative messages when model ratios are not set
- Modify pricing and relay logic to support self-use mode
- Add UI toggle for enabling self-use mode in operation settings
- Implement fallback mechanism for model ratios when self-use mode is enabled
2025-03-01 21:13:48 +08:00
1808837298@qq.com
ce1854847b fix: Enhance error message for missing model ratio configuration 2025-03-01 17:02:31 +08:00
1808837298@qq.com
2f9faba40d fix: Improve error handling for model ratio and price validation #800 2025-03-01 15:27:32 +08:00
1808837298@qq.com
a5085014cc fix: Improve model ratio and price management
- Update error message for missing model ratio to be more user-friendly
- Modify ModelRatioNotSetEditor to filter models without price or ratio
- Enhance model data initialization with fallback values
2025-02-28 23:28:47 +08:00
1808837298@qq.com
18d3706ff8 feat: Add new model management features
- Implement `/api/channel/models_enabled` endpoint to retrieve enabled models
- Add `EnabledListModels` handler in controller
- Create new `ModelRatioNotSetEditor` component for managing unset model ratios
- Update router to include new models_enabled route
- Add internationalization support for new model management UI
- Include GPT-4.5 preview model in OpenAI model list
2025-02-28 21:13:30 +08:00
1808837298@qq.com
152950497e fix 2025-02-28 20:28:44 +08:00
1808837298@qq.com
d6fd50e382 feat: add new GPT-4.5 preview model ratios 2025-02-28 19:17:15 +08:00
1808837298@qq.com
cfd3f6c073 feat: Enhance Claude default max tokens configuration
- Replace ThinkingAdapterMaxTokens with a more flexible DefaultMaxTokens map
- Add support for model-specific default max tokens configuration
- Update relay and web interface to use the new configuration approach
- Implement a fallback mechanism for default max tokens
2025-02-28 17:53:08 +08:00
1808837298@qq.com
45c56b5ded feat: Implement model-specific headers configuration for Claude 2025-02-28 16:47:31 +08:00
1808837298@qq.com
d306394f33 fix: Simplify Claude settings value conversion logic 2025-02-27 22:26:21 +08:00
1808837298@qq.com
cdba87a7da fix: Prevent duplicate headers in Claude settings 2025-02-27 22:14:53 +08:00
1808837298@qq.com
ae5b874a6c refactor: Reorganize Claude MaxTokens configuration UI layout 2025-02-27 22:12:14 +08:00
1808837298@qq.com
d0bc8d17d1 feat: Enhance Claude MaxTokens configuration handling
- Update Claude relay to set default MaxTokens dynamically
- Modify web interface to clarify default MaxTokens input purpose
- Improve token configuration logic for thinking adapter models
2025-02-27 22:10:29 +08:00
1808837298@qq.com
4784ca7514 fix: Update Claude thinking adapter token percentage input guidance 2025-02-27 20:59:32 +08:00
105 changed files with 3421 additions and 1290 deletions


@@ -65,10 +65,18 @@
   - Add suffix `-low` to set low reasoning effort
 17. 🔄 Thinking-to-content option `thinking_to_content` in `Channel->Edit->Channel Extra Settings`; default `false`. When `true`, the thinking content `reasoning_content` is converted into `<think>` tags and concatenated onto the returned content.
 18. 🔄 Model rate limiting: set total-request and successful-request limits in `System Settings->Rate Limit Settings`
+19. 💰 Cache billing support; when enabled, cache hits are charged at a configurable ratio:
+    1. Set `Prompt Cache Ratio` in `System Settings -> Operation Settings`
+    2. Set `Prompt Cache Ratio` in channel settings, range 0-1 (e.g., 0.5 means 50% charge on cache hits)
+    3. Supported channels:
+       - [x] OpenAI
+       - [x] Azure
+       - [x] DeepSeek
+       - [ ] Claude
 ## Model Support
 This version additionally supports:
-1. Third-party model **gps** (gpt-4-gizmo-*)
+1. Third-party model **gpts** (gpt-4-gizmo-*)
 2. [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) interface, [Integration Guide](Midjourney.md)
 3. Custom channels with full API URL support
 4. [Suno API](https://github.com/Suno-API/Suno-API) interface, [Integration Guide](Suno.md)
@@ -162,7 +170,7 @@ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtow
 ## Channel Retry
 Channel retry is implemented and configurable in `Settings->Operation Settings->General Settings`. **Cache recommended**.
-First retry uses the same priority, second retry uses the next priority, and so on.
+If retry is enabled, the system automatically uses the next-priority channel for the same request after a failure.
 ### Cache Configuration
 1. `REDIS_CONN_STRING`: Use Redis as cache
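The cache-billing rule described above (a ratio in 0-1 applied to the cache-hit share of prompt tokens) reduces to a simple weighted charge. The sketch below is my own illustration of that arithmetic; the helper name is hypothetical, not a function from this repository:

```go
package main

import "fmt"

// billedPromptTokens charges cached prompt tokens at cacheRatio (0-1) and
// the remaining prompt tokens at the full rate. Hypothetical helper shown
// only to illustrate the billing rule; not the project's implementation.
func billedPromptTokens(promptTokens, cachedTokens int, cacheRatio float64) float64 {
	uncached := float64(promptTokens - cachedTokens)
	return uncached + float64(cachedTokens)*cacheRatio
}

func main() {
	// 1000 prompt tokens, 600 served from cache, ratio 0.5 -> 400 + 300 = 700
	fmt.Println(billedPromptTokens(1000, 600, 0.5))
}
```

With a ratio of 1 the cache is billed like normal input; with 0 cache hits are free.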


@@ -74,10 +74,18 @@
   - 添加后缀 `-thinking` 启用思考模式 (例如: `claude-3-7-sonnet-20250219-thinking`)
 18. 🔄 思考转内容,支持在 `渠道-编辑-渠道额外设置` 中设置 `thinking_to_content` 选项,默认`false`,开启后会将思考内容`reasoning_content`转换为`<think>`标签拼接到内容中返回。
 19. 🔄 模型限流,支持在 `系统设置-速率限制设置` 中设置模型限流,支持设置总请求数限制和成功请求数限制
+20. 💰 缓存计费支持,开启后可以在缓存命中时按照设定的比例计费:
+    1. 在 `系统设置-运营设置` 中设置 `提示缓存倍率` 选项
+    2. 在渠道中设置 `提示缓存倍率`,范围 0-1,例如设置为 0.5 表示缓存命中时按照 50% 计费
+    3. 支持的渠道:
+       - [x] OpenAI
+       - [x] Azure
+       - [x] DeepSeek
+       - [ ] Claude
 ## 模型支持
 此版本额外支持以下模型:
-1. 第三方模型 **gps** (gpt-4-gizmo-*)
+1. 第三方模型 **gpts** (gpt-4-gizmo-*)
 2. [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy)接口,[对接文档](Midjourney.md)
 3. 自定义渠道,支持填入完整调用地址
 4. [Suno API](https://github.com/Suno-API/Suno-API) 接口,[对接文档](Suno.md)
@@ -177,7 +185,7 @@ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtow
 ## 渠道重试
 渠道重试功能已经实现,可以在`设置->运营设置->通用设置`设置重试次数,**建议开启缓存**功能。
-如果开启了重试功能,第一次重试使用同优先级,第二次重试使用下一个优先级,以此类推。
+如果开启了重试功能,重试使用下一个优先级,以此类推。
 ### 缓存设置方法
 1. `REDIS_CONN_STRING`:设置之后将使用 Redis 作为缓存使用。
+   例子:`REDIS_CONN_STRING=redis://default:redispw@localhost:49153`
@@ -220,8 +228,8 @@ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock containrrr/watchtow
 - [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool),用key查询使用额度
 其他基于New API的项目
-- [new-api-horizon](https://github.com/Calcium-Ion/new-api-horizon),New API高性能优化版,并支持Claude格式
-- [VoAPI](https://github.com/VoAPI/VoAPI),基于New API的闭源项目
+- [new-api-horizon](https://github.com/Calcium-Ion/new-api-horizon),New API高性能优化版,专注于高并发优化,并支持Claude格式
+- [VoAPI](https://github.com/VoAPI/VoAPI),基于New API的前端美化版本,闭源免费
 ## 🌟 Star History


@@ -15,8 +15,9 @@ var SystemName = "New API"
 var Footer = ""
 var Logo = ""
 var TopUpLink = ""
-var ChatLink = ""
-var ChatLink2 = ""
+// var ChatLink = ""
+// var ChatLink2 = ""
 var QuotaPerUnit = 500 * 1000.0 // $0.002 / 1K tokens
 var DisplayInCurrencyEnabled = true
 var DisplayTokenStatEnabled = true

common/gopool.go (new file, 24 lines)

@@ -0,0 +1,24 @@
+package common
+
+import (
+	"context"
+	"fmt"
+	"github.com/bytedance/gopkg/util/gopool"
+	"math"
+)
+
+var relayGoPool gopool.Pool
+
+func init() {
+	relayGoPool = gopool.NewPool("gopool.RelayPool", math.MaxInt32, gopool.NewConfig())
+	relayGoPool.SetPanicHandler(func(ctx context.Context, i interface{}) {
+		if stopChan, ok := ctx.Value("stop_chan").(chan bool); ok {
+			SafeSendBool(stopChan, true)
+		}
+		SysError(fmt.Sprintf("panic in gopool.RelayPool: %v", i))
+	})
+}
+
+func RelayCtxGo(ctx context.Context, f func()) {
+	relayGoPool.CtxGo(ctx, f)
+}


@@ -17,6 +17,7 @@ import (
 	"one-api/relay"
 	relaycommon "one-api/relay/common"
 	"one-api/relay/constant"
+	"one-api/relay/helper"
 	"one-api/service"
 	"strconv"
 	"strings"
@@ -72,18 +73,6 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
 		}
 	}
-	modelMapping := *channel.ModelMapping
-	if modelMapping != "" && modelMapping != "{}" {
-		modelMap := make(map[string]string)
-		err := json.Unmarshal([]byte(modelMapping), &modelMap)
-		if err != nil {
-			return err, service.OpenAIErrorWrapperLocal(err, "unmarshal_model_mapping_failed", http.StatusInternalServerError)
-		}
-		if modelMap[testModel] != "" {
-			testModel = modelMap[testModel]
-		}
-	}
 	cache, err := model.GetUserCache(1)
 	if err != nil {
 		return err, nil
@@ -94,10 +83,19 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
 	c.Request.Header.Set("Content-Type", "application/json")
 	c.Set("channel", channel.Type)
 	c.Set("base_url", channel.GetBaseURL())
+	group, _ := model.GetUserGroup(1, false)
+	c.Set("group", group)
 	middleware.SetupContextForSelectedChannel(c, channel, testModel)
-	meta := relaycommon.GenRelayInfo(c)
+	info := relaycommon.GenRelayInfo(c)
+	err = helper.ModelMappedHelper(c, info)
+	if err != nil {
+		return err, nil
+	}
+	testModel = info.UpstreamModelName
 	apiType, _ := constant.ChannelType2APIType(channel.Type)
 	adaptor := relay.GetAdaptor(apiType)
 	if adaptor == nil {
@@ -105,12 +103,11 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
 	}
 	request := buildTestRequest(testModel)
-	meta.UpstreamModelName = testModel
-	common.SysLog(fmt.Sprintf("testing channel %d with model %s , meta %v ", channel.Id, testModel, meta))
+	common.SysLog(fmt.Sprintf("testing channel %d with model %s , info %v ", channel.Id, testModel, info))
-	adaptor.Init(meta)
+	adaptor.Init(info)
-	convertedRequest, err := adaptor.ConvertRequest(c, meta, request)
+	convertedRequest, err := adaptor.ConvertRequest(c, info, request)
 	if err != nil {
 		return err, nil
 	}
@@ -120,7 +117,7 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
 	}
 	requestBody := bytes.NewBuffer(jsonData)
 	c.Request.Body = io.NopCloser(requestBody)
-	resp, err := adaptor.DoRequest(c, meta, requestBody)
+	resp, err := adaptor.DoRequest(c, info, requestBody)
 	if err != nil {
 		return err, nil
 	}
@@ -132,7 +129,7 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
 			return fmt.Errorf("status code %d: %s", httpResp.StatusCode, err.Error.Message), err
 		}
 	}
-	usageA, respErr := adaptor.DoResponse(c, httpResp, meta)
+	usageA, respErr := adaptor.DoResponse(c, httpResp, info)
 	if respErr != nil {
 		return fmt.Errorf("%s", respErr.Error.Message), respErr
 	}
@@ -145,26 +142,28 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
 	if err != nil {
 		return err, nil
 	}
-	modelPrice, usePrice := common.GetModelPrice(testModel, false)
-	modelRatio := common.GetModelRatio(testModel)
-	completionRatio := common.GetCompletionRatio(testModel)
-	ratio := modelRatio
+	info.PromptTokens = usage.PromptTokens
+	priceData, err := helper.ModelPriceHelper(c, info, usage.PromptTokens, int(request.MaxTokens))
+	if err != nil {
+		return err, nil
+	}
 	quota := 0
-	if !usePrice {
-		quota = usage.PromptTokens + int(math.Round(float64(usage.CompletionTokens)*completionRatio))
-		quota = int(math.Round(float64(quota) * ratio))
-		if ratio != 0 && quota <= 0 {
+	if !priceData.UsePrice {
+		quota = usage.PromptTokens + int(math.Round(float64(usage.CompletionTokens)*priceData.CompletionRatio))
+		quota = int(math.Round(float64(quota) * priceData.ModelRatio))
+		if priceData.ModelRatio != 0 && quota <= 0 {
 			quota = 1
 		}
 	} else {
-		quota = int(modelPrice * common.QuotaPerUnit)
+		quota = int(priceData.ModelPrice * common.QuotaPerUnit)
 	}
 	tok := time.Now()
 	milliseconds := tok.Sub(tik).Milliseconds()
 	consumedTime := float64(milliseconds) / 1000.0
-	other := service.GenerateTextOtherInfo(c, meta, modelRatio, 1, completionRatio, modelPrice)
-	model.RecordConsumeLog(c, 1, channel.Id, usage.PromptTokens, usage.CompletionTokens, testModel, "模型测试",
-		quota, "模型测试", 0, quota, int(consumedTime), false, "default", other)
+	other := service.GenerateTextOtherInfo(c, info, priceData.ModelRatio, priceData.GroupRatio, priceData.CompletionRatio,
+		usage.PromptTokensDetails.CachedTokens, priceData.CacheRatio, priceData.ModelPrice)
+	model.RecordConsumeLog(c, 1, channel.Id, usage.PromptTokens, usage.CompletionTokens, info.OriginModelName, "模型测试",
+		quota, "模型测试", 0, quota, int(consumedTime), false, info.Group, other)
 	common.SysLog(fmt.Sprintf("testing channel #%d, response: \n%s", channel.Id, string(respBody)))
 	return nil, nil
 }
@@ -176,10 +175,10 @@ func buildTestRequest(model string) *dto.GeneralOpenAIRequest {
 	}
 	// 先判断是否为 Embedding 模型
-	if strings.Contains(strings.ToLower(model), "embedding") ||
+	if strings.Contains(strings.ToLower(model), "embedding") || // 其他 embedding 模型
 		strings.HasPrefix(model, "m3e") || // m3e 系列模型
-		strings.Contains(model, "bge-") || // bge 系列模型
-		model == "text-embedding-v1" { // 其他 embedding 模型
+		strings.Contains(model, "bge-") {
 		testRequest.Model = model
 		// Embedding 请求
 		testRequest.Input = []string{"hello world"}
 		return testRequest
@@ -187,6 +186,8 @@
 	// 并非Embedding 模型
 	if strings.HasPrefix(model, "o1") || strings.HasPrefix(model, "o3") {
 		testRequest.MaxCompletionTokens = 10
+	} else if strings.Contains(model, "thinking") {
+		testRequest.MaxTokens = 50
 	} else {
 		testRequest.MaxTokens = 10
 	}
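The inline model-mapping block deleted from `testChannel` above moves into `helper.ModelMappedHelper`. Its JSON-mapping semantics, reconstructed directly from the removed lines (the standalone function name here is my own), are:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// mapModel applies a channel's JSON model mapping to the requested model,
// replicating the inline block removed in the diff above (now handled by
// relay/helper.ModelMappedHelper). Standalone sketch for illustration.
func mapModel(modelMapping, requested string) (string, error) {
	if modelMapping == "" || modelMapping == "{}" {
		return requested, nil // no mapping configured
	}
	modelMap := make(map[string]string)
	if err := json.Unmarshal([]byte(modelMapping), &modelMap); err != nil {
		return "", err
	}
	if mapped := modelMap[requested]; mapped != "" {
		return mapped, nil // rewrite to the upstream model name
	}
	return requested, nil
}

func main() {
	m, _ := mapModel(`{"gpt-4":"gpt-4-0613"}`, "gpt-4")
	fmt.Println(m) // gpt-4-0613
}
```

Centralizing this in a helper is what lets the channel test log the original model name (`info.OriginModelName`) while sending the mapped one upstream.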


@@ -7,6 +7,7 @@ import (
 	"one-api/common"
 	"one-api/model"
 	"one-api/setting"
+	"one-api/setting/operation_setting"
 	"strings"

 	"github.com/gin-gonic/gin"
@@ -53,8 +54,7 @@ func GetStatus(c *gin.Context) {
 			"turnstile_check": common.TurnstileCheckEnabled,
 			"turnstile_site_key": common.TurnstileSiteKey,
 			"top_up_link": common.TopUpLink,
-			"chat_link": common.ChatLink,
-			"chat_link2": common.ChatLink2,
+			"docs_link": operation_setting.GetGeneralSetting().DocsLink,
 			"quota_per_unit": common.QuotaPerUnit,
 			"display_in_currency": common.DisplayInCurrencyEnabled,
 			"enable_batch_update": common.BatchUpdateEnabled,
@@ -66,7 +66,8 @@ func GetStatus(c *gin.Context) {
 			"enable_online_topup": setting.PayAddress != "" && setting.EpayId != "" && setting.EpayKey != "",
 			"mj_notify_enabled": setting.MjNotifyEnabled,
 			"chats": setting.Chats,
-			"demo_site_enabled": setting.DemoSiteEnabled,
+			"demo_site_enabled": operation_setting.DemoSiteEnabled,
+			"self_use_mode_enabled": operation_setting.SelfUseModeEnabled,
 		},
 	})
 	return


@@ -216,6 +216,13 @@ func DashboardListModels(c *gin.Context) {
 	})
 }

+func EnabledListModels(c *gin.Context) {
+	c.JSON(200, gin.H{
+		"success": true,
+		"data":    model.GetEnabledModels(),
+	})
+}
+
 func RetrieveModel(c *gin.Context) {
 	modelId := c.Param("model")
 	if aiModel, ok := openAIModelsMap[modelId]; ok {


@@ -2,9 +2,9 @@ package controller

 import (
 	"github.com/gin-gonic/gin"
-	"one-api/common"
 	"one-api/model"
 	"one-api/setting"
+	"one-api/setting/operation_setting"
 )

@@ -40,7 +40,7 @@
 }

 func ResetModelRatio(c *gin.Context) {
-	defaultStr := common.DefaultModelRatio2JSONString()
+	defaultStr := operation_setting.DefaultModelRatio2JSONString()
 	err := model.UpdateOption("ModelRatio", defaultStr)
 	if err != nil {
 		c.JSON(200, gin.H{
@@ -49,7 +49,7 @@
 		})
 		return
 	}
-	err = common.UpdateModelRatioByJSONString(defaultStr)
+	err = operation_setting.UpdateModelRatioByJSONString(defaultStr)
 	if err != nil {
 		c.JSON(200, gin.H{
 			"success": false,


@@ -16,6 +16,7 @@ import (
 	"one-api/relay"
 	"one-api/relay/constant"
 	relayconstant "one-api/relay/constant"
+	"one-api/relay/helper"
 	"one-api/service"
 	"strings"
 )
@@ -41,15 +42,6 @@ func relayHandler(c *gin.Context, relayMode int) *dto.OpenAIErrorWithStatusCode
 	return err
 }

-func wsHandler(c *gin.Context, ws *websocket.Conn, relayMode int) *dto.OpenAIErrorWithStatusCode {
-	var err *dto.OpenAIErrorWithStatusCode
-	switch relayMode {
-	default:
-		err = relay.TextHelper(c)
-	}
-	return err
-}
-
 func Relay(c *gin.Context) {
 	relayMode := constant.Path2RelayMode(c.Request.URL.Path)
 	requestId := c.GetString(common.RequestIdKey)
@@ -110,7 +102,7 @@ func WssRelay(c *gin.Context) {
 	if err != nil {
 		openaiErr := service.OpenAIErrorWrapper(err, "get_channel_failed", http.StatusInternalServerError)
-		service.WssError(c, ws, openaiErr.Error)
+		helper.WssError(c, ws, openaiErr.Error)
 		return
 	}
@@ -152,7 +144,7 @@ func WssRelay(c *gin.Context) {
 		openaiErr.Error.Message = "当前分组上游负载已饱和,请稍后再试"
 	}
 	openaiErr.Error.Message = common.MessageWithRequestId(openaiErr.Error.Message, requestId)
-	service.WssError(c, ws, openaiErr.Error)
+	helper.WssError(c, ws, openaiErr.Error)
 	}
 }


@@ -2,9 +2,6 @@ package controller

 import (
 	"fmt"
-	"github.com/Calcium-Ion/go-epay/epay"
-	"github.com/gin-gonic/gin"
-	"github.com/samber/lo"
 	"log"
 	"net/url"
 	"one-api/common"
@@ -14,16 +11,21 @@ import (
 	"strconv"
 	"sync"
 	"time"
+
+	"github.com/Calcium-Ion/go-epay/epay"
+	"github.com/gin-gonic/gin"
+	"github.com/samber/lo"
+	"github.com/shopspring/decimal"
 )

 type EpayRequest struct {
-	Amount        int    `json:"amount"`
+	Amount        int64  `json:"amount"`
 	PaymentMethod string `json:"payment_method"`
 	TopUpCode     string `json:"top_up_code"`
 }

 type AmountRequest struct {
-	Amount    int    `json:"amount"`
+	Amount    int64  `json:"amount"`
 	TopUpCode string `json:"top_up_code"`
 }
@@ -41,25 +43,35 @@ func GetEpayClient() *epay.Client {
 	return withUrl
 }

-func getPayMoney(amount float64, group string) float64 {
+func getPayMoney(amount int64, group string) float64 {
+	dAmount := decimal.NewFromInt(amount)
 	if !common.DisplayInCurrencyEnabled {
-		amount = amount / common.QuotaPerUnit
+		dQuotaPerUnit := decimal.NewFromFloat(common.QuotaPerUnit)
+		dAmount = dAmount.Div(dQuotaPerUnit)
 	}
-	// 别问为什么用float64,问就是这么点钱没必要
 	topupGroupRatio := common.GetTopupGroupRatio(group)
 	if topupGroupRatio == 0 {
 		topupGroupRatio = 1
 	}
-	payMoney := amount * setting.Price * topupGroupRatio
-	return payMoney
+	dTopupGroupRatio := decimal.NewFromFloat(topupGroupRatio)
+	dPrice := decimal.NewFromFloat(setting.Price)
+	payMoney := dAmount.Mul(dPrice).Mul(dTopupGroupRatio)
+	return payMoney.InexactFloat64()
 }

-func getMinTopup() int {
+func getMinTopup() int64 {
 	minTopup := setting.MinTopUp
 	if !common.DisplayInCurrencyEnabled {
-		minTopup = minTopup * int(common.QuotaPerUnit)
+		dMinTopup := decimal.NewFromInt(int64(minTopup))
+		dQuotaPerUnit := decimal.NewFromFloat(common.QuotaPerUnit)
+		minTopup = int(dMinTopup.Mul(dQuotaPerUnit).IntPart())
 	}
-	return minTopup
+	return int64(minTopup)
 }

 func RequestEpay(c *gin.Context) {
@@ -80,7 +92,7 @@ func RequestEpay(c *gin.Context) {
 		c.JSON(200, gin.H{"message": "error", "data": "获取用户分组失败"})
 		return
 	}
-	payMoney := getPayMoney(float64(req.Amount), group)
+	payMoney := getPayMoney(req.Amount, group)
 	if payMoney < 0.01 {
 		c.JSON(200, gin.H{"message": "error", "data": "充值金额过低"})
 		return
@@ -118,7 +130,9 @@ func RequestEpay(c *gin.Context) {
 	}
 	amount := req.Amount
 	if !common.DisplayInCurrencyEnabled {
-		amount = amount / int(common.QuotaPerUnit)
+		dAmount := decimal.NewFromInt(int64(amount))
+		dQuotaPerUnit := decimal.NewFromFloat(common.QuotaPerUnit)
+		amount = dAmount.Div(dQuotaPerUnit).IntPart()
 	}
 	topUp := &model.TopUp{
 		UserId: id,
@@ -210,13 +224,16 @@ func EpayNotify(c *gin.Context) {
 		}
 		//user, _ := model.GetUserById(topUp.UserId, false)
 		//user.Quota += topUp.Amount * 500000
-		err = model.IncreaseUserQuota(topUp.UserId, topUp.Amount*int(common.QuotaPerUnit), true)
+		dAmount := decimal.NewFromInt(int64(topUp.Amount))
+		dQuotaPerUnit := decimal.NewFromFloat(common.QuotaPerUnit)
+		quotaToAdd := int(dAmount.Mul(dQuotaPerUnit).IntPart())
+		err = model.IncreaseUserQuota(topUp.UserId, quotaToAdd, true)
 		if err != nil {
 			log.Printf("易支付回调更新用户失败: %v", topUp)
 			return
 		}
 		log.Printf("易支付回调更新用户成功 %v", topUp)
-		model.RecordLog(topUp.UserId, model.LogTypeTopup, fmt.Sprintf("使用在线充值成功,充值金额: %v,支付金额:%f", common.LogQuota(topUp.Amount*int(common.QuotaPerUnit)), topUp.Money))
+		model.RecordLog(topUp.UserId, model.LogTypeTopup, fmt.Sprintf("使用在线充值成功,充值金额: %v,支付金额:%f", common.LogQuota(quotaToAdd), topUp.Money))
 		}
 	} else {
 		log.Printf("易支付异常回调: %v", verifyInfo)
@@ -241,7 +258,7 @@ func RequestAmount(c *gin.Context) {
 		c.JSON(200, gin.H{"message": "error", "data": "获取用户分组失败"})
 		return
 	}
-	payMoney := getPayMoney(float64(req.Amount), group)
+	payMoney := getPayMoney(req.Amount, group)
 	if payMoney <= 0.01 {
 		c.JSON(200, gin.H{"message": "error", "data": "充值金额过低"})
 		return

@@ -99,6 +99,7 @@ type Message struct {
 	Name             *string         `json:"name,omitempty"`
 	Prefix           *bool           `json:"prefix,omitempty"`
 	ReasoningContent string          `json:"reasoning_content,omitempty"`
+	Reasoning        string          `json:"reasoning,omitempty"`
 	ToolCalls        json.RawMessage `json:"tool_calls,omitempty"`
 	ToolCallId       string          `json:"tool_call_id,omitempty"`
 	parsedContent    []MediaContent


@@ -64,6 +64,7 @@ type ChatCompletionsStreamResponseChoice struct {
 type ChatCompletionsStreamResponseChoiceDelta struct {
 	Content          *string            `json:"content,omitempty"`
 	ReasoningContent *string            `json:"reasoning_content,omitempty"`
+	Reasoning        *string            `json:"reasoning,omitempty"`
 	Role             string             `json:"role,omitempty"`
 	ToolCalls        []ToolCallResponse `json:"tool_calls,omitempty"`
 }
@@ -80,14 +81,18 @@ func (c *ChatCompletionsStreamResponseChoiceDelta) GetContentString() string {
 }

 func (c *ChatCompletionsStreamResponseChoiceDelta) GetReasoningContent() string {
-	if c.ReasoningContent == nil {
+	if c.ReasoningContent == nil && c.Reasoning == nil {
 		return ""
 	}
-	return *c.ReasoningContent
+	if c.ReasoningContent != nil {
+		return *c.ReasoningContent
+	}
+	return *c.Reasoning
 }

 func (c *ChatCompletionsStreamResponseChoiceDelta) SetReasoningContent(s string) {
 	c.ReasoningContent = &s
+	c.Reasoning = &s
 }

 type ToolCallResponse struct {
@@ -161,6 +166,7 @@ type Usage struct {
 	PromptTokens         int `json:"prompt_tokens"`
 	CompletionTokens     int `json:"completion_tokens"`
 	TotalTokens          int `json:"total_tokens"`
+	PromptCacheHitTokens int `json:"prompt_cache_hit_tokens,omitempty"`
 	PromptTokensDetails    InputTokenDetails  `json:"prompt_tokens_details"`
 	CompletionTokenDetails OutputTokenDetails `json:"completion_tokens_details"`
 }
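The new `Reasoning` field handles providers (per the OpenRouter fix in #815/#793) that stream the field as `reasoning` rather than the `reasoning_content` used by DeepSeek-style APIs; the getter prefers `reasoning_content` and falls back. A minimal standalone re-creation of that fallback, with a trimmed-down struct:

```go
package main

import "fmt"

// delta is a cut-down ChatCompletionsStreamResponseChoiceDelta showing the
// reasoning-field fallback from the diff above.
type delta struct {
	ReasoningContent *string
	Reasoning        *string
}

// GetReasoningContent prefers reasoning_content, then falls back to the
// OpenRouter-style reasoning field.
func (c *delta) GetReasoningContent() string {
	if c.ReasoningContent == nil && c.Reasoning == nil {
		return ""
	}
	if c.ReasoningContent != nil {
		return *c.ReasoningContent
	}
	return *c.Reasoning
}

func main() {
	s := "thinking..."
	d := &delta{Reasoning: &s} // OpenRouter-style chunk
	fmt.Println(d.GetReasoningContent())
}
```

Note that the setter in the diff populates both pointers, so downstream code reading either JSON field sees the same text.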

go.mod (2 changed lines)

@@ -22,12 +22,12 @@ require (
 	github.com/golang-jwt/jwt v3.2.2+incompatible
 	github.com/google/uuid v1.6.0
 	github.com/gorilla/websocket v1.5.0
-	github.com/jinzhu/copier v0.4.0
 	github.com/joho/godotenv v1.5.1
 	github.com/pkg/errors v0.9.1
 	github.com/pkoukk/tiktoken-go v0.1.7
 	github.com/samber/lo v1.39.0
 	github.com/shirou/gopsutil v3.21.11+incompatible
+	github.com/shopspring/decimal v1.4.0
 	golang.org/x/crypto v0.27.0
 	golang.org/x/image v0.23.0
 	golang.org/x/net v0.28.0

go.sum
View File

@@ -117,8 +117,6 @@ github.com/jackc/pgx/v5 v5.7.1 h1:x7SYsPBYDkHDksogeSmZZ5xzThcTgRz++I5E+ePFUcs=
github.com/jackc/pgx/v5 v5.7.1/go.mod h1:e7O26IywZZ+naJtWWos6i6fvWK+29etgITqrqHLfoZA=
github.com/jackc/puddle/v2 v2.2.2 h1:PR8nw+E/1w0GLuRFSmiioY6UooMp6KJv0/61nB7icHo=
github.com/jackc/puddle/v2 v2.2.2/go.mod h1:vriiEXHvEE654aYKXXjOvZM39qJ0q+azkZFrfEOc3H4=
github.com/jinzhu/copier v0.4.0 h1:w3ciUoD19shMCRargcpm0cm91ytaBhDvuRpz1ODO/U8=
github.com/jinzhu/copier v0.4.0/go.mod h1:DfbEm0FYsaqBcKcFuvmOZb218JkPGtvSHsKg8S8hyyg=
github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD/E=
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.4/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
@@ -183,6 +181,8 @@ github.com/samber/lo v1.39.0 h1:4gTz1wUhNYLhFSKl6O+8peW0v2F4BCY034GRpU9WnuA=
github.com/samber/lo v1.39.0/go.mod h1:+m/ZKRl6ClXCE2Lgf3MsQlWfh4bn1bz6CXEOxnEXnEA=
github.com/shirou/gopsutil v3.21.11+incompatible h1:+1+c1VGhc88SSonWP6foOcLhvnKlUeu/erjjvaPEYiI=
github.com/shirou/gopsutil v3.21.11+incompatible/go.mod h1:5b4v6he4MtMOwMlS0TUMTu2PcXUg8+E1lC7eC3UO/RA=
github.com/shopspring/decimal v1.4.0 h1:bxl37RwXBklmTi0C79JfXCEBD1cqqHt0bbgBAGFp81k=
github.com/shopspring/decimal v1.4.0/go.mod h1:gawqmDU56v4yIKSwfBSFip1HdCCXN8/+DMd9qYNcwME=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=

View File

@@ -51,7 +51,7 @@ func checkRedisRateLimit(ctx context.Context, rdb *redis.Client, key string, max
// If the limit has been reached within the time window, reject the request
subTime := nowTime.Sub(oldTime).Seconds()
if int64(subTime) < duration {
rdb.Expire(ctx, key, common.RateLimitKeyExpirationDuration)
rdb.Expire(ctx, key, time.Duration(setting.ModelRequestRateLimitDurationMinutes)*time.Minute)
return false, nil
}
@@ -68,7 +68,7 @@ func recordRedisRequest(ctx context.Context, rdb *redis.Client, key string, maxC
now := time.Now().Format(timeFormat)
rdb.LPush(ctx, key, now)
rdb.LTrim(ctx, key, 0, int64(maxCount-1))
rdb.Expire(ctx, key, common.RateLimitKeyExpirationDuration)
rdb.Expire(ctx, key, time.Duration(setting.ModelRequestRateLimitDurationMinutes)*time.Minute)
}
// Redis rate limit handler
@@ -118,7 +118,7 @@ func redisRateLimitHandler(duration int64, totalMaxCount, successMaxCount int) g
// In-memory rate limit handler
func memoryRateLimitHandler(duration int64, totalMaxCount, successMaxCount int) gin.HandlerFunc {
inMemoryRateLimiter.Init(common.RateLimitKeyExpirationDuration)
inMemoryRateLimiter.Init(time.Duration(setting.ModelRequestRateLimitDurationMinutes) * time.Minute)
return func(c *gin.Context) {
userId := strconv.Itoa(c.GetInt("id"))
@@ -153,20 +153,23 @@ func memoryRateLimitHandler(duration int64, totalMaxCount, successMaxCount int)
// ModelRequestRateLimit is the middleware that rate-limits model requests
func ModelRequestRateLimit() func(c *gin.Context) {
// If rate limiting is not enabled, pass the request through
if !setting.ModelRequestRateLimitEnabled {
return defNext
}
return func(c *gin.Context) {
// Check on every request whether rate limiting is enabled
if !setting.ModelRequestRateLimitEnabled {
c.Next()
return
}
// Compute the rate limit parameters
duration := int64(setting.ModelRequestRateLimitDurationMinutes * 60)
totalMaxCount := setting.ModelRequestRateLimitCount
successMaxCount := setting.ModelRequestRateLimitSuccessCount
// Compute the rate limit parameters
duration := int64(setting.ModelRequestRateLimitDurationMinutes * 60)
totalMaxCount := setting.ModelRequestRateLimitCount
successMaxCount := setting.ModelRequestRateLimitSuccessCount
// Choose the rate limit handler based on the storage type
if common.RedisEnabled {
return redisRateLimitHandler(duration, totalMaxCount, successMaxCount)
} else {
return memoryRateLimitHandler(duration, totalMaxCount, successMaxCount)
// Choose and invoke the rate limit handler based on the storage type
if common.RedisEnabled {
redisRateLimitHandler(duration, totalMaxCount, successMaxCount)(c)
} else {
memoryRateLimitHandler(duration, totalMaxCount, successMaxCount)(c)
}
}
}

View File

@@ -35,7 +35,7 @@ type Channel struct {
AutoBan *int `json:"auto_ban" gorm:"default:1"`
OtherInfo string `json:"other_info"`
Tag *string `json:"tag" gorm:"index"`
Setting string `json:"setting" gorm:"type:text"`
Setting *string `json:"setting" gorm:"type:text"`
}
func (channel *Channel) GetModels() []string {
@@ -290,35 +290,42 @@ func (channel *Channel) Delete() error {
var channelStatusLock sync.Mutex
func UpdateChannelStatusById(id int, status int, reason string) {
func UpdateChannelStatusById(id int, status int, reason string) bool {
if common.MemoryCacheEnabled {
channelStatusLock.Lock()
defer channelStatusLock.Unlock()
channelCache, _ := CacheGetChannel(id)
// If the cached channel exists and is already in the target status, return immediately
if channelCache != nil && channelCache.Status == status {
channelStatusLock.Unlock()
return
return false
}
// If the cached channel does not exist (meaning it has already been disabled) and the new status is not enabled, return immediately
if channelCache == nil && status != common.ChannelStatusEnabled {
channelStatusLock.Unlock()
return
return false
}
CacheUpdateChannelStatus(id, status)
channelStatusLock.Unlock()
}
err := UpdateAbilityStatus(id, status == common.ChannelStatusEnabled)
if err != nil {
common.SysError("failed to update ability status: " + err.Error())
return false
}
channel, err := GetChannelById(id, true)
if err != nil {
// find channel by id error, directly update status
err = DB.Model(&Channel{}).Where("id = ?", id).Update("status", status).Error
if err != nil {
common.SysError("failed to update channel status: " + err.Error())
result := DB.Model(&Channel{}).Where("id = ?", id).Update("status", status)
if result.Error != nil {
common.SysError("failed to update channel status: " + result.Error.Error())
return false
}
if result.RowsAffected == 0 {
return false
}
} else {
if channel.Status == status {
return false
}
// find channel by id success, update status and other info
info := channel.GetOtherInfo()
info["status_reason"] = reason
@@ -328,9 +335,10 @@ func UpdateChannelStatusById(id int, status int, reason string) {
err = channel.Save()
if err != nil {
common.SysError("failed to update channel status: " + err.Error())
return false
}
}
return true
}
func EnableChannelByTag(tag string) error {
@@ -485,8 +493,8 @@ func SearchTags(keyword string, group string, model string, idSort bool) ([]*str
func (channel *Channel) GetSetting() map[string]interface{} {
setting := make(map[string]interface{})
if channel.Setting != "" {
err := json.Unmarshal([]byte(channel.Setting), &setting)
if channel.Setting != nil && *channel.Setting != "" {
err := json.Unmarshal([]byte(*channel.Setting), &setting)
if err != nil {
common.SysError("failed to unmarshal setting: " + err.Error())
}
@@ -500,7 +508,7 @@ func (channel *Channel) SetSetting(setting map[string]interface{}) {
common.SysError("failed to marshal setting: " + err.Error())
return
}
channel.Setting = string(settingBytes)
channel.Setting = common.GetPointer[string](string(settingBytes))
}
func GetChannelsByIds(ids []int) ([]*Channel, error) {

View File

@@ -2,12 +2,13 @@ package model
import (
"fmt"
"github.com/gin-gonic/gin"
"one-api/common"
"os"
"strings"
"time"
"github.com/gin-gonic/gin"
"github.com/bytedance/gopkg/util/gopool"
"gorm.io/gorm"
)
@@ -18,7 +19,7 @@ type Log struct {
CreatedAt int64 `json:"created_at" gorm:"bigint;index:idx_created_at_id,priority:2;index:idx_created_at_type"`
Type int `json:"type" gorm:"index:idx_created_at_type"`
Content string `json:"content"`
Username string `json:"username" gorm:"index:index_username_model_name,priority:2;default:''"`
Username string `json:"username" gorm:"index;index:index_username_model_name,priority:2;default:''"`
TokenName string `json:"token_name" gorm:"index;default:''"`
ModelName string `json:"model_name" gorm:"index;index:index_username_model_name,priority:1;default:''"`
Quota int `json:"quota" gorm:"default:0"`

View File

@@ -4,6 +4,7 @@ import (
"one-api/common"
"one-api/setting"
"one-api/setting/config"
"one-api/setting/operation_setting"
"strconv"
"strings"
"time"
@@ -87,18 +88,19 @@ func InitOptionMap() {
common.OptionMap["QuotaForInviter"] = strconv.Itoa(common.QuotaForInviter)
common.OptionMap["QuotaForInvitee"] = strconv.Itoa(common.QuotaForInvitee)
common.OptionMap["QuotaRemindThreshold"] = strconv.Itoa(common.QuotaRemindThreshold)
common.OptionMap["ShouldPreConsumedQuota"] = strconv.Itoa(common.PreConsumedQuota)
common.OptionMap["PreConsumedQuota"] = strconv.Itoa(common.PreConsumedQuota)
common.OptionMap["ModelRequestRateLimitCount"] = strconv.Itoa(setting.ModelRequestRateLimitCount)
common.OptionMap["ModelRequestRateLimitDurationMinutes"] = strconv.Itoa(setting.ModelRequestRateLimitDurationMinutes)
common.OptionMap["ModelRequestRateLimitSuccessCount"] = strconv.Itoa(setting.ModelRequestRateLimitSuccessCount)
common.OptionMap["ModelRatio"] = common.ModelRatio2JSONString()
common.OptionMap["ModelPrice"] = common.ModelPrice2JSONString()
common.OptionMap["ModelRatio"] = operation_setting.ModelRatio2JSONString()
common.OptionMap["ModelPrice"] = operation_setting.ModelPrice2JSONString()
common.OptionMap["CacheRatio"] = operation_setting.CacheRatio2JSONString()
common.OptionMap["GroupRatio"] = setting.GroupRatio2JSONString()
common.OptionMap["UserUsableGroups"] = setting.UserUsableGroups2JSONString()
common.OptionMap["CompletionRatio"] = common.CompletionRatio2JSONString()
common.OptionMap["CompletionRatio"] = operation_setting.CompletionRatio2JSONString()
common.OptionMap["TopUpLink"] = common.TopUpLink
common.OptionMap["ChatLink"] = common.ChatLink
common.OptionMap["ChatLink2"] = common.ChatLink2
//common.OptionMap["ChatLink"] = common.ChatLink
//common.OptionMap["ChatLink2"] = common.ChatLink2
common.OptionMap["QuotaPerUnit"] = strconv.FormatFloat(common.QuotaPerUnit, 'f', -1, 64)
common.OptionMap["RetryTimes"] = strconv.Itoa(common.RetryTimes)
common.OptionMap["DataExportInterval"] = strconv.Itoa(common.DataExportInterval)
@@ -110,13 +112,14 @@ func InitOptionMap() {
common.OptionMap["MjForwardUrlEnabled"] = strconv.FormatBool(setting.MjForwardUrlEnabled)
common.OptionMap["MjActionCheckSuccessEnabled"] = strconv.FormatBool(setting.MjActionCheckSuccessEnabled)
common.OptionMap["CheckSensitiveEnabled"] = strconv.FormatBool(setting.CheckSensitiveEnabled)
common.OptionMap["DemoSiteEnabled"] = strconv.FormatBool(setting.DemoSiteEnabled)
common.OptionMap["DemoSiteEnabled"] = strconv.FormatBool(operation_setting.DemoSiteEnabled)
common.OptionMap["SelfUseModeEnabled"] = strconv.FormatBool(operation_setting.SelfUseModeEnabled)
common.OptionMap["ModelRequestRateLimitEnabled"] = strconv.FormatBool(setting.ModelRequestRateLimitEnabled)
common.OptionMap["CheckSensitiveOnPromptEnabled"] = strconv.FormatBool(setting.CheckSensitiveOnPromptEnabled)
common.OptionMap["StopOnSensitiveEnabled"] = strconv.FormatBool(setting.StopOnSensitiveEnabled)
common.OptionMap["SensitiveWords"] = setting.SensitiveWordsToString()
common.OptionMap["StreamCacheQueueLength"] = strconv.Itoa(setting.StreamCacheQueueLength)
common.OptionMap["AutomaticDisableKeywords"] = setting.AutomaticDisableKeywordsToString()
common.OptionMap["AutomaticDisableKeywords"] = operation_setting.AutomaticDisableKeywordsToString()
// Automatically add all registered model configurations
modelConfigs := config.GlobalConfig.ExportAllConfigs()
@@ -242,7 +245,9 @@ func updateOptionMap(key string, value string) (err error) {
case "CheckSensitiveEnabled":
setting.CheckSensitiveEnabled = boolValue
case "DemoSiteEnabled":
setting.DemoSiteEnabled = boolValue
operation_setting.DemoSiteEnabled = boolValue
case "SelfUseModeEnabled":
operation_setting.SelfUseModeEnabled = boolValue
case "CheckSensitiveOnPromptEnabled":
setting.CheckSensitiveOnPromptEnabled = boolValue
case "ModelRequestRateLimitEnabled":
@@ -325,7 +330,7 @@ func updateOptionMap(key string, value string) (err error) {
common.QuotaForInvitee, _ = strconv.Atoi(value)
case "QuotaRemindThreshold":
common.QuotaRemindThreshold, _ = strconv.Atoi(value)
case "ShouldPreConsumedQuota":
case "PreConsumedQuota":
common.PreConsumedQuota, _ = strconv.Atoi(value)
case "ModelRequestRateLimitCount":
setting.ModelRequestRateLimitCount, _ = strconv.Atoi(value)
@@ -340,21 +345,23 @@ func updateOptionMap(key string, value string) (err error) {
case "DataExportDefaultTime":
common.DataExportDefaultTime = value
case "ModelRatio":
err = common.UpdateModelRatioByJSONString(value)
err = operation_setting.UpdateModelRatioByJSONString(value)
case "GroupRatio":
err = setting.UpdateGroupRatioByJSONString(value)
case "UserUsableGroups":
err = setting.UpdateUserUsableGroupsByJSONString(value)
case "CompletionRatio":
err = common.UpdateCompletionRatioByJSONString(value)
err = operation_setting.UpdateCompletionRatioByJSONString(value)
case "ModelPrice":
err = common.UpdateModelPriceByJSONString(value)
err = operation_setting.UpdateModelPriceByJSONString(value)
case "CacheRatio":
err = operation_setting.UpdateCacheRatioByJSONString(value)
case "TopUpLink":
common.TopUpLink = value
case "ChatLink":
common.ChatLink = value
case "ChatLink2":
common.ChatLink2 = value
//case "ChatLink":
// common.ChatLink = value
//case "ChatLink2":
// common.ChatLink2 = value
case "ChannelDisableThreshold":
common.ChannelDisableThreshold, _ = strconv.ParseFloat(value, 64)
case "QuotaPerUnit":
@@ -362,7 +369,7 @@ func updateOptionMap(key string, value string) (err error) {
case "SensitiveWords":
setting.SensitiveWordsFromString(value)
case "AutomaticDisableKeywords":
setting.AutomaticDisableKeywordsFromString(value)
operation_setting.AutomaticDisableKeywordsFromString(value)
case "StreamCacheQueueLength":
setting.StreamCacheQueueLength, _ = strconv.Atoi(value)
}

View File

@@ -2,6 +2,7 @@ package model
import (
"one-api/common"
"one-api/setting/operation_setting"
"sync"
"time"
)
@@ -64,13 +65,14 @@ func updatePricing() {
ModelName: model,
EnableGroup: groups,
}
modelPrice, findPrice := common.GetModelPrice(model, false)
modelPrice, findPrice := operation_setting.GetModelPrice(model, false)
if findPrice {
pricing.ModelPrice = modelPrice
pricing.QuotaType = 1
} else {
pricing.ModelRatio = common.GetModelRatio(model)
pricing.CompletionRatio = common.GetCompletionRatio(model)
modelRatio, _ := operation_setting.GetModelRatio(model)
pricing.ModelRatio = modelRatio
pricing.CompletionRatio = operation_setting.GetCompletionRatio(model)
pricing.QuotaType = 0
}
pricingMap = append(pricingMap, pricing)

View File

@@ -3,7 +3,7 @@ package model
type TopUp struct {
Id int `json:"id"`
UserId int `json:"user_id" gorm:"index"`
Amount int `json:"amount"`
Amount int64 `json:"amount"`
Money float64 `json:"money"`
TradeNo string `json:"trade_no"`
CreateTime int64 `json:"create_time"`

View File

@@ -27,7 +27,7 @@ func oaiImage2Ali(request dto.ImageRequest) *AliImageRequest {
}
func updateTask(info *relaycommon.RelayInfo, taskID string, key string) (*AliResponse, error, []byte) {
url := fmt.Sprintf("/api/v1/tasks/%s", taskID)
url := fmt.Sprintf("%s/api/v1/tasks/%s", info.BaseUrl, taskID)
var aliResponse AliResponse

View File

@@ -8,6 +8,7 @@ import (
"net/http"
"one-api/common"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
"strings"
)
@@ -153,7 +154,7 @@ func aliStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWith
}
stopChan <- true
}()
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
lastResponseText := ""
c.Stream(func(w io.Writer) bool {
select {

View File

@@ -39,7 +39,7 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
}
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
model_setting.GetClaudeSettings().WriteHeaders(req)
model_setting.GetClaudeSettings().WriteHeaders(info.OriginModelName, req)
return nil
}

View File

@@ -14,7 +14,7 @@ type AwsClaudeRequest struct {
TopP float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
StopSequences []string `json:"stop_sequences,omitempty"`
Tools []claude.Tool `json:"tools,omitempty"`
Tools any `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
Thinking *claude.Thinking `json:"thinking,omitempty"`
}

View File

@@ -12,6 +12,7 @@ import (
relaymodel "one-api/dto"
"one-api/relay/channel/claude"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"strings"
"time"
@@ -203,13 +204,13 @@ func awsStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
}
})
if info.ShouldIncludeUsage {
response := service.GenerateFinalUsageResponse(id, createdTime, info.UpstreamModelName, usage)
err := service.ObjectData(c, response)
response := helper.GenerateFinalUsageResponse(id, createdTime, info.UpstreamModelName, usage)
err := helper.ObjectData(c, response)
if err != nil {
common.SysError("send final response failed: " + err.Error())
}
}
service.Done(c)
helper.Done(c)
if resp != nil {
err = resp.Body.Close()
if err != nil {

View File

@@ -11,6 +11,7 @@ import (
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
"strings"
"sync"
@@ -138,7 +139,7 @@ func baiduStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWi
}
stopChan <- true
}()
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
c.Stream(func(w io.Writer) bool {
select {
case data := <-dataChan:

View File

@@ -56,7 +56,7 @@ func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *rel
anthropicVersion = "2023-06-01"
}
req.Set("anthropic-version", anthropicVersion)
model_setting.GetClaudeSettings().WriteHeaders(req)
model_setting.GetClaudeSettings().WriteHeaders(info.OriginModelName, req)
return nil
}

View File

@@ -58,7 +58,7 @@ type ClaudeRequest struct {
TopK int `json:"top_k,omitempty"`
//ClaudeMetadata `json:"metadata,omitempty"`
Stream bool `json:"stream,omitempty"`
Tools []Tool `json:"tools,omitempty"`
Tools any `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
Thinking *Thinking `json:"thinking,omitempty"`
}

View File

@@ -1,7 +1,6 @@
package claude
import (
"bufio"
"encoding/json"
"fmt"
"io"
@@ -9,6 +8,7 @@ import (
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/setting/model_setting"
"strings"
@@ -94,11 +94,12 @@ func RequestOpenAI2ClaudeMessage(textRequest dto.GeneralOpenAIRequest) (*ClaudeR
Tools: claudeTools,
}
if claudeRequest.MaxTokens == 0 {
claudeRequest.MaxTokens = uint(model_setting.GetClaudeSettings().GetDefaultMaxTokens(textRequest.Model))
}
if model_setting.GetClaudeSettings().ThinkingAdapterEnabled &&
strings.HasSuffix(textRequest.Model, "-thinking") {
if claudeRequest.MaxTokens == 0 {
claudeRequest.MaxTokens = uint(model_setting.GetClaudeSettings().ThinkingAdapterMaxTokens)
}
// Because BudgetTokens must be greater than 1024
if claudeRequest.MaxTokens < 1280 {
@@ -117,9 +118,6 @@ func RequestOpenAI2ClaudeMessage(textRequest dto.GeneralOpenAIRequest) (*ClaudeR
claudeRequest.Model = strings.TrimSuffix(textRequest.Model, "-thinking")
}
if claudeRequest.MaxTokens == 0 {
claudeRequest.MaxTokens = 4096
}
if textRequest.Stop != nil {
// stop maybe string/array string, convert to array string
switch textRequest.Stop.(type) {
@@ -445,28 +443,18 @@ func ClaudeStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.
usage = &dto.Usage{}
responseText := ""
createdTime := common.GetTimestamp()
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
service.SetEventStreamHeaders(c)
for scanner.Scan() {
data := scanner.Text()
info.SetFirstResponseTime()
if len(data) < 6 || !strings.HasPrefix(data, "data:") {
continue
}
data = strings.TrimPrefix(data, "data:")
data = strings.TrimSpace(data)
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
var claudeResponse ClaudeResponse
err := json.Unmarshal([]byte(data), &claudeResponse)
if err != nil {
common.SysError("error unmarshalling stream response: " + err.Error())
continue
return true
}
response, claudeUsage := StreamResponseClaude2OpenAI(requestMode, &claudeResponse)
if response == nil {
continue
return true
}
if requestMode == RequestModeCompletion {
responseText += claudeResponse.Completion
@@ -483,9 +471,9 @@ func ClaudeStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.
usage.CompletionTokens = claudeUsage.OutputTokens
usage.TotalTokens = claudeUsage.InputTokens + claudeUsage.OutputTokens
} else if claudeResponse.Type == "content_block_start" {
return true
} else {
continue
return true
}
}
//response.Id = responseId
@@ -493,11 +481,12 @@ func ClaudeStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.
response.Created = createdTime
response.Model = info.UpstreamModelName
err = service.ObjectData(c, response)
err = helper.ObjectData(c, response)
if err != nil {
common.LogError(c, "send_stream_response_failed: "+err.Error())
}
}
return true
})
if requestMode == RequestModeCompletion {
usage, _ = service.ResponseText2Usage(responseText, info.UpstreamModelName, info.PromptTokens)
@@ -510,14 +499,14 @@ func ClaudeStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.
}
}
if info.ShouldIncludeUsage {
response := service.GenerateFinalUsageResponse(responseId, createdTime, info.UpstreamModelName, *usage)
err := service.ObjectData(c, response)
response := helper.GenerateFinalUsageResponse(responseId, createdTime, info.UpstreamModelName, *usage)
err := helper.ObjectData(c, response)
if err != nil {
common.SysError("send final response failed: " + err.Error())
}
}
service.Done(c)
resp.Body.Close()
helper.Done(c)
//resp.Body.Close()
return nil, usage
}

View File

@@ -9,6 +9,7 @@ import (
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"strings"
"time"
@@ -28,8 +29,8 @@ func cfStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rela
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
service.SetEventStreamHeaders(c)
id := service.GetResponseID(c)
helper.SetEventStreamHeaders(c)
id := helper.GetResponseID(c)
var responseText string
isFirst := true
@@ -57,7 +58,7 @@ func cfStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rela
}
response.Id = id
response.Model = info.UpstreamModelName
err = service.ObjectData(c, response)
err = helper.ObjectData(c, response)
if isFirst {
isFirst = false
info.FirstResponseTime = time.Now()
@@ -72,13 +73,13 @@ func cfStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rela
}
usage, _ := service.ResponseText2Usage(responseText, info.UpstreamModelName, info.PromptTokens)
if info.ShouldIncludeUsage {
response := service.GenerateFinalUsageResponse(id, info.StartTime.Unix(), info.UpstreamModelName, *usage)
err := service.ObjectData(c, response)
response := helper.GenerateFinalUsageResponse(id, info.StartTime.Unix(), info.UpstreamModelName, *usage)
err := helper.ObjectData(c, response)
if err != nil {
common.LogError(c, "error_rendering_final_usage_response: "+err.Error())
}
}
service.Done(c)
helper.Done(c)
err := resp.Body.Close()
if err != nil {
@@ -109,7 +110,7 @@ func cfHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo)
}
usage, _ := service.ResponseText2Usage(responseText, info.UpstreamModelName, info.PromptTokens)
response.Usage = *usage
response.Id = service.GetResponseID(c)
response.Id = helper.GetResponseID(c)
jsonResponse, err := json.Marshal(response)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil

View File

@@ -10,6 +10,7 @@ import (
"one-api/common"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"strings"
"time"
@@ -103,7 +104,7 @@ func cohereStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.
}
stopChan <- true
}()
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
isFirst := true
c.Stream(func(w io.Writer) bool {
select {

View File

@@ -10,6 +10,7 @@ import (
"one-api/constant"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"strings"
)
@@ -66,7 +67,7 @@ func difyStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
for scanner.Scan() {
data := scanner.Text()
@@ -92,7 +93,7 @@ func difyStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
responseText += openaiResponse.Choices[0].Delta.GetContentString()
}
}
err = service.ObjectData(c, openaiResponse)
err = helper.ObjectData(c, openaiResponse)
if err != nil {
common.SysError(err.Error())
}
@@ -100,7 +101,7 @@ func difyStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
if err := scanner.Err(); err != nil {
common.SysError("error reading stream: " + err.Error())
}
service.Done(c)
helper.Done(c)
err := resp.Body.Close()
if err != nil {
//return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil

View File

@@ -70,6 +70,12 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
return fmt.Sprintf("%s/%s/models/%s:predict", info.BaseUrl, version, info.UpstreamModelName), nil
}
if strings.HasPrefix(info.UpstreamModelName, "text-embedding") ||
strings.HasPrefix(info.UpstreamModelName, "embedding") ||
strings.HasPrefix(info.UpstreamModelName, "gemini-embedding") {
return fmt.Sprintf("%s/%s/models/%s:embedContent", info.BaseUrl, version, info.UpstreamModelName), nil
}
action := "generateContent"
if info.IsStream {
action = "streamGenerateContent?alt=sse"
@@ -99,8 +105,37 @@ func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dt
}
func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error) {
//TODO implement me
return nil, errors.New("not implemented")
if request.Input == nil {
return nil, errors.New("input is required")
}
inputs := request.ParseInput()
if len(inputs) == 0 {
return nil, errors.New("input is empty")
}
// only process the first input
geminiRequest := GeminiEmbeddingRequest{
Content: GeminiChatContent{
Parts: []GeminiPart{
{
Text: inputs[0],
},
},
},
}
// set specific parameters for different models
// https://ai.google.dev/api/embeddings?hl=zh-cn#method:-models.embedcontent
switch info.UpstreamModelName {
case "text-embedding-004":
// all models except embedding-001 support setting `OutputDimensionality`
if request.Dimensions > 0 {
geminiRequest.OutputDimensionality = request.Dimensions
}
}
return geminiRequest, nil
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
@@ -112,6 +147,13 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
return GeminiImageHandler(c, resp, info)
}
// check if the model is an embedding model
if strings.HasPrefix(info.UpstreamModelName, "text-embedding") ||
strings.HasPrefix(info.UpstreamModelName, "embedding") ||
strings.HasPrefix(info.UpstreamModelName, "gemini-embedding") {
return GeminiEmbeddingHandler(c, resp, info)
}
if info.IsStream {
err, usage = GeminiChatStreamHandler(c, resp, info)
} else {

View File

@@ -18,6 +18,10 @@ var ModelList = []string{
"gemini-2.0-flash-thinking-exp",
// imagen models
"imagen-3.0-generate-002",
// embedding models
"gemini-embedding-exp-03-07",
"text-embedding-004",
"embedding-001",
}
var SafetySettingList = []string{

View File

@@ -136,3 +136,19 @@ type GeminiImagePrediction struct {
RaiFilteredReason string `json:"raiFilteredReason,omitempty"`
SafetyAttributes any `json:"safetyAttributes,omitempty"`
}
// Embedding related structs
type GeminiEmbeddingRequest struct {
Content GeminiChatContent `json:"content"`
TaskType string `json:"taskType,omitempty"`
Title string `json:"title,omitempty"`
OutputDimensionality int `json:"outputDimensionality,omitempty"`
}
type GeminiEmbeddingResponse struct {
Embedding ContentEmbedding `json:"embedding"`
}
type ContentEmbedding struct {
Values []float64 `json:"values"`
}

View File

@@ -1,7 +1,6 @@
package gemini
import (
"bufio"
"encoding/json"
"fmt"
"io"
@@ -10,6 +9,7 @@ import (
"one-api/constant"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/service"
"one-api/setting/model_setting"
"strings"
@@ -429,10 +429,10 @@ func responseGeminiChat2OpenAI(response *GeminiChatResponse) *dto.OpenAITextResp
func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.ChatCompletionsStreamResponse, bool) {
choices := make([]dto.ChatCompletionsStreamResponseChoice, 0, len(geminiResponse.Candidates))
is_stop := false
isStop := false
for _, candidate := range geminiResponse.Candidates {
if candidate.FinishReason != nil && *candidate.FinishReason == "STOP" {
is_stop = true
isStop = true
candidate.FinishReason = nil
}
choice := dto.ChatCompletionsStreamResponseChoice{
@@ -482,9 +482,8 @@ func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.C
var response dto.ChatCompletionsStreamResponse
response.Object = "chat.completion.chunk"
response.Model = "gemini"
response.Choices = choices
return &response, is_stop
return &response, isStop
}
func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
@@ -492,27 +491,16 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
id := fmt.Sprintf("chatcmpl-%s", common.GetUUID())
createAt := common.GetTimestamp()
var usage = &dto.Usage{}
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
service.SetEventStreamHeaders(c)
for scanner.Scan() {
data := scanner.Text()
info.SetFirstResponseTime()
data = strings.TrimSpace(data)
if !strings.HasPrefix(data, "data: ") {
continue
}
data = strings.TrimPrefix(data, "data: ")
data = strings.TrimSuffix(data, "\"")
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
var geminiResponse GeminiChatResponse
err := json.Unmarshal([]byte(data), &geminiResponse)
if err != nil {
common.LogError(c, "error unmarshalling stream response: "+err.Error())
continue
return false
}
response, is_stop := streamResponseGeminiChat2OpenAI(&geminiResponse)
response, isStop := streamResponseGeminiChat2OpenAI(&geminiResponse)
response.Id = id
response.Created = createAt
response.Model = info.UpstreamModelName
@@ -521,15 +509,16 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
usage.PromptTokens = geminiResponse.UsageMetadata.PromptTokenCount
usage.CompletionTokens = geminiResponse.UsageMetadata.CandidatesTokenCount
}
err = service.ObjectData(c, response)
err = helper.ObjectData(c, response)
if err != nil {
common.LogError(c, err.Error())
}
if is_stop {
response := service.GenerateStopResponse(id, createAt, info.UpstreamModelName, constant.FinishReasonStop)
service.ObjectData(c, response)
if isStop {
response := helper.GenerateStopResponse(id, createAt, info.UpstreamModelName, constant.FinishReasonStop)
helper.ObjectData(c, response)
}
}
return true
})
var response *dto.ChatCompletionsStreamResponse
@@ -538,14 +527,14 @@ func GeminiChatStreamHandler(c *gin.Context, resp *http.Response, info *relaycom
usage.CompletionTokenDetails.TextTokens = usage.CompletionTokens
if info.ShouldIncludeUsage {
response = service.GenerateFinalUsageResponse(id, createAt, info.UpstreamModelName, *usage)
err := service.ObjectData(c, response)
response = helper.GenerateFinalUsageResponse(id, createAt, info.UpstreamModelName, *usage)
err := helper.ObjectData(c, response)
if err != nil {
common.SysError("send final response failed: " + err.Error())
}
}
service.Done(c)
resp.Body.Close()
helper.Done(c)
//resp.Body.Close()
return nil, usage
}
@@ -591,3 +580,52 @@ func GeminiChatHandler(c *gin.Context, resp *http.Response, info *relaycommon.Re
_, err = c.Writer.Write(jsonResponse)
return nil, &usage
}
func GeminiEmbeddingHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
responseBody, readErr := io.ReadAll(resp.Body)
if readErr != nil {
return nil, service.OpenAIErrorWrapper(readErr, "read_response_body_failed", http.StatusInternalServerError)
}
_ = resp.Body.Close()
var geminiResponse GeminiEmbeddingResponse
if jsonErr := json.Unmarshal(responseBody, &geminiResponse); jsonErr != nil {
return nil, service.OpenAIErrorWrapper(jsonErr, "unmarshal_response_body_failed", http.StatusInternalServerError)
}
// convert to openai format response
openAIResponse := dto.OpenAIEmbeddingResponse{
Object: "list",
Data: []dto.OpenAIEmbeddingResponseItem{
{
Object: "embedding",
Embedding: geminiResponse.Embedding.Values,
Index: 0,
},
},
Model: info.UpstreamModelName,
}
// calculate usage
// https://ai.google.dev/gemini-api/docs/pricing?hl=zh-cn#text-embedding-004
// Google has not yet clarified how embedding models will be billed
// refer to openai billing method to use input tokens billing
// https://platform.openai.com/docs/guides/embeddings#what-are-embeddings
usage = &dto.Usage{
PromptTokens: info.PromptTokens,
CompletionTokens: 0,
TotalTokens: info.PromptTokens,
}
openAIResponse.Usage = *usage.(*dto.Usage)
jsonResponse, jsonErr := json.Marshal(openAIResponse)
if jsonErr != nil {
return nil, service.OpenAIErrorWrapper(jsonErr, "marshal_response_failed", http.StatusInternalServerError)
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, _ = c.Writer.Write(jsonResponse)
return usage, nil
}
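The `GeminiEmbeddingHandler` above wraps Gemini's single-vector response in OpenAI's list-style embedding format. A minimal sketch of that mapping (the struct names here are simplified stand-ins for the real `dto` types, not the project's actual definitions):

```go
package main

import "fmt"

// Simplified stand-in for the Gemini embedding payload (names assumed).
type geminiEmbedding struct {
	Values []float64
}

// Simplified stand-in for dto.OpenAIEmbeddingResponseItem.
type openAIEmbeddingItem struct {
	Object    string
	Embedding []float64
	Index     int
}

// toOpenAIItems mirrors the handler's conversion: one Gemini vector becomes
// a single-element OpenAI "embedding" list entry at index 0.
func toOpenAIItems(e geminiEmbedding) []openAIEmbeddingItem {
	return []openAIEmbeddingItem{{
		Object:    "embedding",
		Embedding: e.Values,
		Index:     0,
	}}
}

func main() {
	items := toOpenAIItems(geminiEmbedding{Values: []float64{0.1, 0.2}})
	fmt.Println(items[0].Object, len(items[0].Embedding), items[0].Index)
}
```

As in the handler, usage falls back to `info.PromptTokens` for billing since Google has not specified embedding billing.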
@@ -11,6 +11,7 @@ var ModelList = []string{
"chatgpt-4o-latest",
"gpt-4o", "gpt-4o-2024-05-13", "gpt-4o-2024-08-06", "gpt-4o-2024-11-20",
"gpt-4o-mini", "gpt-4o-mini-2024-07-18",
"gpt-4.5-preview", "gpt-4.5-preview-2025-02-27",
"o1-preview", "o1-preview-2024-09-12",
"o1-mini", "o1-mini-2024-09-12",
"o3-mini", "o3-mini-2025-01-31",
@@ -1,7 +1,6 @@
package openai
import (
"bufio"
"bytes"
"encoding/json"
"fmt"
@@ -14,11 +13,10 @@ import (
"one-api/dto"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
"one-api/relay/helper"
"one-api/service"
"os"
"strings"
"sync"
"time"
"github.com/bytedance/gopkg/util/gopool"
"github.com/gin-gonic/gin"
@@ -32,7 +30,7 @@ func sendStreamData(c *gin.Context, info *relaycommon.RelayInfo, data string, fo
}
if !forceFormat && !thinkToContent {
return service.StringData(c, data)
return helper.StringData(c, data)
}
var lastStreamResponse dto.ChatCompletionsStreamResponse
@@ -41,44 +39,68 @@ func sendStreamData(c *gin.Context, info *relaycommon.RelayInfo, data string, fo
}
if !thinkToContent {
return service.ObjectData(c, lastStreamResponse)
return helper.ObjectData(c, lastStreamResponse)
}
hasThinkingContent := false
hasContent := false
var thinkingContent strings.Builder
for _, choice := range lastStreamResponse.Choices {
if len(choice.Delta.GetReasoningContent()) > 0 {
hasThinkingContent = true
thinkingContent.WriteString(choice.Delta.GetReasoningContent())
}
if len(choice.Delta.GetContentString()) > 0 {
hasContent = true
}
}
// Handle think to content conversion
if info.IsFirstResponse {
response := lastStreamResponse.Copy()
for i := range response.Choices {
response.Choices[i].Delta.SetContentString("<think>\n")
response.Choices[i].Delta.SetReasoningContent("")
if info.ThinkingContentInfo.IsFirstThinkingContent {
if hasThinkingContent {
response := lastStreamResponse.Copy()
for i := range response.Choices {
// send `think` tag with thinking content
response.Choices[i].Delta.SetContentString("<think>\n" + thinkingContent.String())
response.Choices[i].Delta.ReasoningContent = nil
response.Choices[i].Delta.Reasoning = nil
}
info.ThinkingContentInfo.IsFirstThinkingContent = false
return helper.ObjectData(c, response)
}
service.ObjectData(c, response)
}
if lastStreamResponse.Choices == nil || len(lastStreamResponse.Choices) == 0 {
return service.ObjectData(c, lastStreamResponse)
return helper.ObjectData(c, lastStreamResponse)
}
// Process each choice
for i, choice := range lastStreamResponse.Choices {
// Handle transition from thinking to content
if len(choice.Delta.GetContentString()) > 0 && !info.SendLastReasoningResponse {
if hasContent && !info.ThinkingContentInfo.SendLastThinkingContent {
response := lastStreamResponse.Copy()
for j := range response.Choices {
response.Choices[j].Delta.SetContentString("\n</think>")
response.Choices[j].Delta.SetReasoningContent("")
response.Choices[j].Delta.SetContentString("\n</think>\n")
response.Choices[j].Delta.ReasoningContent = nil
response.Choices[j].Delta.Reasoning = nil
}
info.SendLastReasoningResponse = true
service.ObjectData(c, response)
info.ThinkingContentInfo.SendLastThinkingContent = true
helper.ObjectData(c, response)
}
// Convert reasoning content to regular content
if len(choice.Delta.GetReasoningContent()) > 0 {
lastStreamResponse.Choices[i].Delta.SetContentString(choice.Delta.GetReasoningContent())
lastStreamResponse.Choices[i].Delta.SetReasoningContent("")
lastStreamResponse.Choices[i].Delta.ReasoningContent = nil
lastStreamResponse.Choices[i].Delta.Reasoning = nil
} else if !hasThinkingContent && !hasContent {
// flush thinking content
lastStreamResponse.Choices[i].Delta.ReasoningContent = nil
lastStreamResponse.Choices[i].Delta.Reasoning = nil
}
}
return service.ObjectData(c, lastStreamResponse)
return helper.ObjectData(c, lastStreamResponse)
}
func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
@@ -108,64 +130,22 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
}
toolCount := 0
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
service.SetEventStreamHeaders(c)
streamingTimeout := time.Duration(constant.StreamingTimeout) * time.Second
if strings.HasPrefix(info.UpstreamModelName, "o1") || strings.HasPrefix(info.UpstreamModelName, "o3") {
// double the timeout for o1 models
streamingTimeout *= 2
}
ticker := time.NewTicker(streamingTimeout)
defer ticker.Stop()
stopChan := make(chan bool)
defer close(stopChan)
var (
lastStreamData string
mu sync.Mutex
)
gopool.Go(func() {
for scanner.Scan() {
//info.SetFirstResponseTime()
ticker.Reset(time.Duration(constant.StreamingTimeout) * time.Second)
data := scanner.Text()
if common.DebugEnabled {
println(data)
}
if len(data) < 6 { // ignore blank line or wrong format
continue
}
if data[:5] != "data:" && data[:6] != "[DONE]" {
continue
}
mu.Lock()
data = data[5:]
data = strings.TrimSpace(data)
if !strings.HasPrefix(data, "[DONE]") {
if lastStreamData != "" {
err := sendStreamData(c, info, lastStreamData, forceFormat, thinkToContent)
if err != nil {
common.LogError(c, "streaming error: "+err.Error())
}
info.SetFirstResponseTime()
}
lastStreamData = data
streamItems = append(streamItems, data)
}
mu.Unlock()
}
common.SafeSendBool(stopChan, true)
})
select {
case <-ticker.C:
// timeout handling
common.LogError(c, "streaming timeout")
case <-stopChan:
// normal completion
}
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
if lastStreamData != "" {
err := sendStreamData(c, info, lastStreamData, forceFormat, thinkToContent)
if err != nil {
common.LogError(c, "streaming error: "+err.Error())
}
}
lastStreamData = data
streamItems = append(streamItems, data)
return true
})
shouldSendLastResp := true
var lastStreamResponse dto.ChatCompletionsStreamResponse
@@ -210,7 +190,10 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
//}
for _, choice := range streamResponse.Choices {
responseTextBuilder.WriteString(choice.Delta.GetContentString())
// handle both reasoning_content and reasoning
responseTextBuilder.WriteString(choice.Delta.GetReasoningContent())
if choice.Delta.ToolCalls != nil {
if len(choice.Delta.ToolCalls) > toolCount {
toolCount = len(choice.Delta.ToolCalls)
@@ -231,7 +214,7 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
//}
for _, choice := range streamResponse.Choices {
responseTextBuilder.WriteString(choice.Delta.GetContentString())
responseTextBuilder.WriteString(choice.Delta.GetReasoningContent())
responseTextBuilder.WriteString(choice.Delta.GetReasoningContent()) // This will handle both reasoning_content and reasoning
if choice.Delta.ToolCalls != nil {
if len(choice.Delta.ToolCalls) > toolCount {
toolCount = len(choice.Delta.ToolCalls)
@@ -271,17 +254,23 @@ func OaiStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
if !containStreamUsage {
usage, _ = service.ResponseText2Usage(responseTextBuilder.String(), info.UpstreamModelName, info.PromptTokens)
usage.CompletionTokens += toolCount * 7
} else {
if info.ChannelType == common.ChannelTypeDeepSeek {
if usage.PromptCacheHitTokens != 0 {
usage.PromptTokensDetails.CachedTokens = usage.PromptCacheHitTokens
}
}
}
if info.ShouldIncludeUsage && !containStreamUsage {
response := service.GenerateFinalUsageResponse(responseId, createAt, model, *usage)
response := helper.GenerateFinalUsageResponse(responseId, createAt, model, *usage)
response.SetSystemFingerprint(systemFingerprint)
service.ObjectData(c, response)
helper.ObjectData(c, response)
}
service.Done(c)
helper.Done(c)
resp.Body.Close()
//resp.Body.Close()
return nil, usage
}
@@ -323,7 +312,7 @@ func OpenaiHandler(c *gin.Context, resp *http.Response, promptTokens int, model
if simpleResponse.Usage.TotalTokens == 0 || (simpleResponse.Usage.PromptTokens == 0 && simpleResponse.Usage.CompletionTokens == 0) {
completionTokens := 0
for _, choice := range simpleResponse.Choices {
ctkm, _ := service.CountTextToken(choice.Message.StringContent()+choice.Message.ReasoningContent, model)
ctkm, _ := service.CountTextToken(choice.Message.StringContent()+choice.Message.ReasoningContent+choice.Message.Reasoning, model)
completionTokens += ctkm
}
simpleResponse.Usage = dto.Usage{
@@ -512,7 +501,7 @@ func OpenaiRealtimeHandler(c *gin.Context, info *relaycommon.RelayInfo) (*dto.Op
localUsage.InputTokenDetails.TextTokens += textToken
localUsage.InputTokenDetails.AudioTokens += audioToken
err = service.WssString(c, targetConn, string(message))
err = helper.WssString(c, targetConn, string(message))
if err != nil {
errChan <- fmt.Errorf("error writing to target: %v", err)
return
@@ -618,7 +607,7 @@ func OpenaiRealtimeHandler(c *gin.Context, info *relaycommon.RelayInfo) (*dto.Op
localUsage.OutputTokenDetails.AudioTokens += audioToken
}
err = service.WssString(c, clientConn, string(message))
err = helper.WssString(c, clientConn, string(message))
if err != nil {
errChan <- fmt.Errorf("error writing to client: %v", err)
return

View File

@@ -9,6 +9,7 @@ import (
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
)
@@ -112,7 +113,7 @@ func palmStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWit
dataChan <- string(jsonResponse)
stopChan <- true
}()
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
c.Stream(func(w io.Writer) bool {
select {
case data := <-dataChan:
@@ -14,6 +14,7 @@ import (
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
"strconv"
"strings"
@@ -91,7 +92,7 @@ func tencentStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIError
scanner := bufio.NewScanner(resp.Body)
scanner.Split(bufio.ScanLines)
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
for scanner.Scan() {
data := scanner.Text()
@@ -112,7 +113,7 @@ func tencentStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIError
responseText += response.Choices[0].Delta.GetContentString()
}
err = service.ObjectData(c, response)
err = helper.ObjectData(c, response)
if err != nil {
common.SysError(err.Error())
}
@@ -122,7 +123,7 @@ func tencentStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIError
common.SysError("error reading stream: " + err.Error())
}
service.Done(c)
helper.Done(c)
err := resp.Body.Close()
if err != nil {
@@ -5,7 +5,6 @@ import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"github.com/jinzhu/copier"
"io"
"net/http"
"one-api/dto"
@@ -28,6 +27,7 @@ var claudeModelMap = map[string]string{
"claude-3-opus-20240229": "claude-3-opus@20240229",
"claude-3-haiku-20240307": "claude-3-haiku@20240307",
"claude-3-5-sonnet-20240620": "claude-3-5-sonnet@20240620",
"claude-3-5-sonnet-20241022": "claude-3-5-sonnet-v2@20241022",
"claude-3-7-sonnet-20250219": "claude-3-7-sonnet@20250219",
}
@@ -86,15 +86,16 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
} else {
suffix = "rawPredict"
}
model := info.UpstreamModelName
if v, ok := claudeModelMap[info.UpstreamModelName]; ok {
info.UpstreamModelName = v
model = v
}
return fmt.Sprintf(
"https://%s-aiplatform.googleapis.com/v1/projects/%s/locations/%s/publishers/anthropic/models/%s:%s",
region,
adc.ProjectID,
region,
info.UpstreamModelName,
model,
suffix,
), nil
} else if a.RequestMode == RequestModeLlama {
@@ -127,13 +128,9 @@ func (a *Adaptor) ConvertRequest(c *gin.Context, info *relaycommon.RelayInfo, re
if err != nil {
return nil, err
}
vertexClaudeReq := &VertexAIClaudeRequest{
AnthropicVersion: anthropicVersion,
}
if err = copier.Copy(vertexClaudeReq, claudeReq); err != nil {
return nil, errors.New("failed to copy claude request")
}
vertexClaudeReq := copyRequest(claudeReq, anthropicVersion)
c.Set("request_model", claudeReq.Model)
info.UpstreamModelName = claudeReq.Model
return vertexClaudeReq, nil
} else if a.RequestMode == RequestModeGemini {
geminiRequest, err := gemini.CovertGemini2OpenAI(*request)
@@ -1,17 +1,37 @@
package vertex
import "one-api/relay/channel/claude"
import (
"one-api/relay/channel/claude"
)
type VertexAIClaudeRequest struct {
AnthropicVersion string `json:"anthropic_version"`
Messages []claude.ClaudeMessage `json:"messages"`
System string `json:"system,omitempty"`
MaxTokens int `json:"max_tokens,omitempty"`
System any `json:"system,omitempty"`
MaxTokens uint `json:"max_tokens,omitempty"`
StopSequences []string `json:"stop_sequences,omitempty"`
Stream bool `json:"stream,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
Tools []claude.Tool `json:"tools,omitempty"`
Tools any `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
Thinking *claude.Thinking `json:"thinking,omitempty"`
}
func copyRequest(req *claude.ClaudeRequest, version string) *VertexAIClaudeRequest {
return &VertexAIClaudeRequest{
AnthropicVersion: version,
System: req.System,
Messages: req.Messages,
MaxTokens: req.MaxTokens,
Stream: req.Stream,
Temperature: req.Temperature,
TopP: req.TopP,
TopK: req.TopK,
StopSequences: req.StopSequences,
Tools: req.Tools,
ToolChoice: req.ToolChoice,
Thinking: req.Thinking,
}
}

@@ -14,6 +14,7 @@ import (
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
"strings"
"time"
@@ -132,7 +133,7 @@ func xunfeiStreamHandler(c *gin.Context, textRequest dto.GeneralOpenAIRequest, a
if err != nil {
return service.OpenAIErrorWrapper(err, "make xunfei request err", http.StatusInternalServerError), nil
}
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
var usage dto.Usage
c.Stream(func(w io.Writer) bool {
select {

@@ -10,6 +10,7 @@ import (
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
"strings"
"sync"
@@ -177,7 +178,7 @@ func zhipuStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWi
}
stopChan <- true
}()
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
c.Stream(func(w io.Writer) bool {
select {
case data := <-dataChan:

@@ -10,6 +10,7 @@ import (
"net/http"
"one-api/common"
"one-api/dto"
"one-api/relay/helper"
"one-api/service"
"strings"
"sync"
@@ -197,7 +198,7 @@ func zhipuStreamHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWi
}
stopChan <- true
}()
service.SetEventStreamHeaders(c)
helper.SetEventStreamHeaders(c)
c.Stream(func(w io.Writer) bool {
select {
case data := <-dataChan:
@@ -12,25 +12,30 @@ import (
"github.com/gorilla/websocket"
)
type ThinkingContentInfo struct {
IsFirstThinkingContent bool
SendLastThinkingContent bool
}
type RelayInfo struct {
ChannelType int
ChannelId int
TokenId int
TokenKey string
UserId int
Group string
TokenUnlimited bool
StartTime time.Time
FirstResponseTime time.Time
IsFirstResponse bool
SendLastReasoningResponse bool
ApiType int
IsStream bool
IsPlayground bool
UsePrice bool
RelayMode int
UpstreamModelName string
OriginModelName string
ChannelType int
ChannelId int
TokenId int
TokenKey string
UserId int
Group string
TokenUnlimited bool
StartTime time.Time
FirstResponseTime time.Time
isFirstResponse bool
//SendLastReasoningResponse bool
ApiType int
IsStream bool
IsPlayground bool
UsePrice bool
RelayMode int
UpstreamModelName string
OriginModelName string
//RecodeModelName string
RequestURLPath string
ApiVersion string
@@ -53,6 +58,7 @@ type RelayInfo struct {
UserSetting map[string]interface{}
UserEmail string
UserQuota int
ThinkingContentInfo
}
// channel types that support stream options
@@ -95,7 +101,7 @@ func GenRelayInfo(c *gin.Context) *RelayInfo {
UserQuota: c.GetInt(constant.ContextKeyUserQuota),
UserSetting: c.GetStringMap(constant.ContextKeyUserSetting),
UserEmail: c.GetString(constant.ContextKeyUserEmail),
IsFirstResponse: true,
isFirstResponse: true,
RelayMode: relayconstant.Path2RelayMode(c.Request.URL.Path),
BaseUrl: c.GetString("base_url"),
RequestURLPath: c.Request.URL.String(),
@@ -117,6 +123,10 @@ func GenRelayInfo(c *gin.Context) *RelayInfo {
ApiKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
Organization: c.GetString("channel_organization"),
ChannelSetting: channelSetting,
ThinkingContentInfo: ThinkingContentInfo{
IsFirstThinkingContent: true,
SendLastThinkingContent: false,
},
}
if strings.HasPrefix(c.Request.URL.Path, "/pg") {
info.IsPlayground = true
@@ -147,9 +157,9 @@ func (info *RelayInfo) SetIsStream(isStream bool) {
}
func (info *RelayInfo) SetFirstResponseTime() {
if info.IsFirstResponse {
if info.isFirstResponse {
info.FirstResponseTime = time.Now()
info.IsFirstResponse = false
info.isFirstResponse = false
}
}
@@ -1,4 +1,4 @@
package service
package helper
import (
"encoding/json"
@@ -1,31 +1,47 @@
package helper
import (
"fmt"
"github.com/gin-gonic/gin"
"one-api/common"
relaycommon "one-api/relay/common"
"one-api/setting"
"one-api/setting/operation_setting"
)
type PriceData struct {
ModelPrice float64
ModelRatio float64
CompletionRatio float64
CacheRatio float64
GroupRatio float64
UsePrice bool
ShouldPreConsumedQuota int
}
func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens int, maxTokens int) PriceData {
modelPrice, usePrice := common.GetModelPrice(info.OriginModelName, false)
func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens int, maxTokens int) (PriceData, error) {
modelPrice, usePrice := operation_setting.GetModelPrice(info.OriginModelName, false)
groupRatio := setting.GetGroupRatio(info.Group)
var preConsumedQuota int
var modelRatio float64
var completionRatio float64
var cacheRatio float64
if !usePrice {
preConsumedTokens := common.PreConsumedQuota
if maxTokens != 0 {
preConsumedTokens = promptTokens + maxTokens
}
modelRatio = common.GetModelRatio(info.OriginModelName)
var success bool
modelRatio, success = operation_setting.GetModelRatio(info.OriginModelName)
if !success {
if info.UserId == 1 {
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置请设置或开始自用模式Model %s ratio or price not set, please set or start self-use mode", info.OriginModelName, info.OriginModelName)
} else {
return PriceData{}, fmt.Errorf("模型 %s 倍率或价格未配置, 请联系管理员设置Model %s ratio or price not set, please contact administrator to set", info.OriginModelName, info.OriginModelName)
}
}
completionRatio = operation_setting.GetCompletionRatio(info.OriginModelName)
cacheRatio, _ = operation_setting.GetCacheRatio(info.OriginModelName)
ratio := modelRatio * groupRatio
preConsumedQuota = int(float64(preConsumedTokens) * ratio)
} else {
@@ -34,8 +50,10 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
return PriceData{
ModelPrice: modelPrice,
ModelRatio: modelRatio,
CompletionRatio: completionRatio,
GroupRatio: groupRatio,
UsePrice: usePrice,
CacheRatio: cacheRatio,
ShouldPreConsumedQuota: preConsumedQuota,
}
}, nil
}
@@ -0,0 +1,91 @@
package helper
import (
"bufio"
"context"
"io"
"net/http"
"one-api/common"
"one-api/constant"
relaycommon "one-api/relay/common"
"strings"
"time"
"github.com/gin-gonic/gin"
)
func StreamScannerHandler(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo, dataHandler func(data string) bool) {
if resp == nil {
return
}
defer resp.Body.Close()
streamingTimeout := time.Duration(constant.StreamingTimeout) * time.Second
if strings.HasPrefix(info.UpstreamModelName, "o1") || strings.HasPrefix(info.UpstreamModelName, "o3") {
// double the timeout for thinking models
streamingTimeout *= 2
}
var (
stopChan = make(chan bool, 2)
scanner = bufio.NewScanner(resp.Body)
ticker = time.NewTicker(streamingTimeout)
)
defer func() {
ticker.Stop()
close(stopChan)
}()
scanner.Split(bufio.ScanLines)
SetEventStreamHeaders(c)
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
ctx = context.WithValue(ctx, "stop_chan", stopChan)
common.RelayCtxGo(ctx, func() {
for scanner.Scan() {
ticker.Reset(streamingTimeout)
data := scanner.Text()
if common.DebugEnabled {
println(data)
}
if len(data) < 6 {
continue
}
if data[:5] != "data:" && data[:6] != "[DONE]" {
continue
}
data = data[5:]
data = strings.TrimLeft(data, " ")
data = strings.TrimSuffix(data, "\"")
if !strings.HasPrefix(data, "[DONE]") {
info.SetFirstResponseTime()
success := dataHandler(data)
if !success {
break
}
}
}
if err := scanner.Err(); err != nil {
if err != io.EOF {
common.LogError(c, "scanner error: "+err.Error())
}
}
common.SafeSendBool(stopChan, true)
})
select {
case <-ticker.C:
// timeout handling
common.LogError(c, "streaming timeout")
case <-stopChan:
// normal completion
}
}
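The per-line filter inside `StreamScannerHandler` can be sketched as a small helper: keep only `data:` lines, strip the prefix and padding, and report `[DONE]` separately. This is a simplified sketch (it omits the trailing-quote trim the real handler also applies):

```go
package main

import (
	"fmt"
	"strings"
)

// parseSSELine filters one scanned SSE line the way the handler does:
// short or non-"data:" lines are ignored, the "data:" prefix and leading
// spaces are stripped, and a [DONE] sentinel ends the stream.
func parseSSELine(line string) (payload string, ok bool, done bool) {
	if len(line) < 6 || !strings.HasPrefix(line, "data:") {
		return "", false, false
	}
	payload = strings.TrimLeft(line[5:], " ")
	if strings.HasPrefix(payload, "[DONE]") {
		return "", false, true
	}
	return payload, true, false
}

func main() {
	fmt.Println(parseSSELine(`data: {"id":"1"}`))
	fmt.Println(parseSSELine("data: [DONE]"))
}
```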
@@ -7,7 +7,6 @@ import (
"net/http"
"one-api/common"
"one-api/dto"
"one-api/model"
relaycommon "one-api/relay/common"
relayconstant "one-api/relay/constant"
"one-api/relay/helper"
@@ -75,12 +74,11 @@ func AudioHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
relayInfo.PromptTokens = promptTokens
}
priceData := helper.ModelPriceHelper(c, relayInfo, preConsumedTokens, 0)
userQuota, err := model.GetUserQuota(relayInfo.UserId, false)
priceData, err := helper.ModelPriceHelper(c, relayInfo, preConsumedTokens, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "get_user_quota_failed", http.StatusInternalServerError)
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {
return openaiErr
@@ -86,7 +86,10 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
imageRequest.Model = relayInfo.UpstreamModelName
priceData := helper.ModelPriceHelper(c, relayInfo, 0, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, 0, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
if !priceData.UsePrice {
// modelRatio 16 = modelPrice $0.04
// per 1 modelRatio = $0.04 / 16
@@ -15,6 +15,7 @@ import (
relayconstant "one-api/relay/constant"
"one-api/service"
"one-api/setting"
"one-api/setting/operation_setting"
"strconv"
"strings"
"time"
@@ -157,10 +158,10 @@ func RelaySwapFace(c *gin.Context) *dto.MidjourneyResponse {
return service.MidjourneyErrorWrapper(constant.MjRequestError, "sour_base64_and_target_base64_is_required")
}
modelName := service.CoverActionToModelName(constant.MjActionSwapFace)
modelPrice, success := common.GetModelPrice(modelName, true)
modelPrice, success := operation_setting.GetModelPrice(modelName, true)
// fall back to the default price if none is configured
if !success {
defaultPrice, ok := common.GetDefaultModelRatioMap()[modelName]
defaultPrice, ok := operation_setting.GetDefaultModelRatioMap()[modelName]
if !ok {
modelPrice = 0.1
} else {
@@ -463,10 +464,10 @@ func RelayMidjourneySubmit(c *gin.Context, relayMode int) *dto.MidjourneyRespons
fullRequestURL := fmt.Sprintf("%s%s", baseURL, requestURL)
modelName := service.CoverActionToModelName(midjRequest.Action)
modelPrice, success := common.GetModelPrice(modelName, true)
modelPrice, success := operation_setting.GetModelPrice(modelName, true)
// fall back to the default price if none is configured
if !success {
defaultPrice, ok := common.GetDefaultModelRatioMap()[modelName]
defaultPrice, ok := operation_setting.GetDefaultModelRatioMap()[modelName]
if !ok {
modelPrice = 0.1
} else {
@@ -5,7 +5,6 @@ import (
"encoding/json"
"errors"
"fmt"
"github.com/bytedance/gopkg/util/gopool"
"io"
"math"
"net/http"
@@ -21,6 +20,9 @@ import (
"strings"
"time"
"github.com/bytedance/gopkg/util/gopool"
"github.com/shopspring/decimal"
"github.com/gin-gonic/gin"
)
@@ -106,7 +108,10 @@ func TextHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
c.Set("prompt_tokens", promptTokens)
}
priceData := helper.ModelPriceHelper(c, relayInfo, promptTokens, int(textRequest.MaxTokens))
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptTokens, int(textRequest.MaxTokens))
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
// pre-consume quota
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
@@ -301,34 +306,55 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
CompletionTokens: 0,
TotalTokens: relayInfo.PromptTokens,
}
extraContent += " (可能是请求出错)"
extraContent += "(可能是请求出错)"
}
useTimeSeconds := time.Now().Unix() - relayInfo.StartTime.Unix()
promptTokens := usage.PromptTokens
cacheTokens := usage.PromptTokensDetails.CachedTokens
completionTokens := usage.CompletionTokens
modelName := relayInfo.OriginModelName
tokenName := ctx.GetString("token_name")
completionRatio := common.GetCompletionRatio(modelName)
ratio := priceData.ModelRatio * priceData.GroupRatio
completionRatio := priceData.CompletionRatio
cacheRatio := priceData.CacheRatio
modelRatio := priceData.ModelRatio
groupRatio := priceData.GroupRatio
modelPrice := priceData.ModelPrice
usePrice := priceData.UsePrice
quota := 0
// Convert values to decimal for precise calculation
dPromptTokens := decimal.NewFromInt(int64(promptTokens))
dCacheTokens := decimal.NewFromInt(int64(cacheTokens))
dCompletionTokens := decimal.NewFromInt(int64(completionTokens))
dCompletionRatio := decimal.NewFromFloat(completionRatio)
dCacheRatio := decimal.NewFromFloat(cacheRatio)
dModelRatio := decimal.NewFromFloat(modelRatio)
dGroupRatio := decimal.NewFromFloat(groupRatio)
dModelPrice := decimal.NewFromFloat(modelPrice)
dQuotaPerUnit := decimal.NewFromFloat(common.QuotaPerUnit)
ratio := dModelRatio.Mul(dGroupRatio)
var quotaCalculateDecimal decimal.Decimal
if !priceData.UsePrice {
quota = promptTokens + int(math.Round(float64(completionTokens)*completionRatio))
quota = int(math.Round(float64(quota) * ratio))
if ratio != 0 && quota <= 0 {
quota = 1
nonCachedTokens := dPromptTokens.Sub(dCacheTokens)
cachedTokensWithRatio := dCacheTokens.Mul(dCacheRatio)
promptQuota := nonCachedTokens.Add(cachedTokensWithRatio)
completionQuota := dCompletionTokens.Mul(dCompletionRatio)
quotaCalculateDecimal = promptQuota.Add(completionQuota).Mul(ratio)
if !ratio.IsZero() && quotaCalculateDecimal.LessThanOrEqual(decimal.Zero) {
quotaCalculateDecimal = decimal.NewFromInt(1)
}
} else {
quota = int(modelPrice * common.QuotaPerUnit * groupRatio)
quotaCalculateDecimal = dModelPrice.Mul(dQuotaPerUnit).Mul(dGroupRatio)
}
quota := int(quotaCalculateDecimal.Round(0).IntPart())
totalTokens := promptTokens + completionTokens
var logContent string
if !usePrice {
if !priceData.UsePrice {
logContent = fmt.Sprintf("模型倍率 %.2f,补全倍率 %.2f,分组倍率 %.2f", modelRatio, completionRatio, groupRatio)
} else {
logContent = fmt.Sprintf("模型价格 %.2f,分组倍率 %.2f", modelPrice, groupRatio)
@@ -343,9 +369,6 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
common.LogError(ctx, fmt.Sprintf("total tokens is 0, cannot consume quota, userId %d, channelId %d, "+
"tokenId %d, model %s pre-consumed quota %d", relayInfo.UserId, relayInfo.ChannelId, relayInfo.TokenId, modelName, preConsumedQuota))
} else {
//if sensitiveResp != nil {
// logContent += fmt.Sprintf(",敏感词:%s", strings.Join(sensitiveResp.SensitiveWords, ", "))
//}
quotaDelta := quota - preConsumedQuota
if quotaDelta != 0 {
err := service.PostConsumeQuota(relayInfo, quotaDelta, preConsumedQuota, true)
@@ -369,11 +392,7 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
if extraContent != "" {
logContent += ", " + extraContent
}
other := service.GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, modelPrice)
other := service.GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, cacheTokens, cacheRatio, modelPrice)
model.RecordConsumeLog(ctx, relayInfo.UserId, relayInfo.ChannelId, promptTokens, completionTokens, logModel,
tokenName, quota, logContent, relayInfo.TokenId, userQuota, int(useTimeSeconds), relayInfo.IsStream, relayInfo.Group, other)
//if quota != 0 {
//
//}
}
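The decimal-based branch above bills cached prompt tokens at `cacheRatio`, the remaining prompt tokens at full rate, and completions at `completionRatio`, all scaled by `modelRatio * groupRatio`, with a floor of 1 when the ratio is non-zero. A plain-float sketch of the same formula (the real code uses `shopspring/decimal` to avoid floating-point drift):

```go
package main

import (
	"fmt"
	"math"
)

// computeQuota sketches the ratio-billed quota calculation:
//   quota = ((prompt - cached) + cached*cacheRatio + completion*completionRatio)
//           * modelRatio * groupRatio
// rounded to an integer, with a minimum of 1 when the ratio is non-zero.
func computeQuota(promptTokens, cacheTokens, completionTokens int,
	cacheRatio, completionRatio, modelRatio, groupRatio float64) int {
	nonCached := float64(promptTokens - cacheTokens)
	prompt := nonCached + float64(cacheTokens)*cacheRatio
	completion := float64(completionTokens) * completionRatio
	ratio := modelRatio * groupRatio
	quota := (prompt + completion) * ratio
	if ratio != 0 && quota <= 0 {
		return 1
	}
	return int(math.Round(quota))
}

func main() {
	// 1000 prompt tokens of which 400 are cached at half price,
	// 500 completion tokens at 3x: (600 + 200 + 1500) * 1 = 2300.
	fmt.Println(computeQuota(1000, 400, 500, 0.5, 3, 1, 1)) // → 2300
}
```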
@@ -57,8 +57,10 @@ func EmbeddingHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode)
promptToken := getEmbeddingPromptToken(*embeddingRequest)
relayInfo.PromptTokens = promptToken
priceData := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
// pre-consume quota
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {
@@ -50,8 +50,10 @@ func RerankHelper(c *gin.Context, relayMode int) (openaiErr *dto.OpenAIErrorWith
promptToken := getRerankPromptToken(*rerankRequest)
relayInfo.PromptTokens = promptToken
priceData := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
// pre-consume quota
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {
@@ -16,6 +16,7 @@ import (
relayconstant "one-api/relay/constant"
"one-api/service"
"one-api/setting"
"one-api/setting/operation_setting"
)
/*
@@ -37,9 +38,9 @@ func RelayTaskSubmit(c *gin.Context, relayMode int) (taskErr *dto.TaskError) {
}
modelName := service.CoverTaskActionToModelName(platform, relayInfo.Action)
modelPrice, success := common.GetModelPrice(modelName, true)
modelPrice, success := operation_setting.GetModelPrice(modelName, true)
if !success {
defaultPrice, ok := common.GetDefaultModelRatioMap()[modelName]
defaultPrice, ok := operation_setting.GetDefaultModelRatioMap()[modelName]
if !ok {
modelPrice = 0.1
} else {
@@ -11,6 +11,7 @@ import (
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/setting"
"one-api/setting/operation_setting"
)
func WssHelper(c *gin.Context, ws *websocket.Conn) (openaiErr *dto.OpenAIErrorWithStatusCode) {
@@ -39,7 +40,7 @@ func WssHelper(c *gin.Context, ws *websocket.Conn) (openaiErr *dto.OpenAIErrorWi
}
}
//relayInfo.UpstreamModelName = textRequest.Model
modelPrice, getModelPriceSuccess := common.GetModelPrice(relayInfo.UpstreamModelName, false)
modelPrice, getModelPriceSuccess := operation_setting.GetModelPrice(relayInfo.UpstreamModelName, false)
groupRatio := setting.GetGroupRatio(relayInfo.Group)
var preConsumedQuota int
@@ -65,7 +66,7 @@ func WssHelper(c *gin.Context, ws *websocket.Conn) (openaiErr *dto.OpenAIErrorWi
//if realtimeEvent.Session.MaxResponseOutputTokens != 0 {
// preConsumedTokens = promptTokens + int(realtimeEvent.Session.MaxResponseOutputTokens)
//}
modelRatio = common.GetModelRatio(relayInfo.UpstreamModelName)
modelRatio, _ = operation_setting.GetModelRatio(relayInfo.UpstreamModelName)
ratio = modelRatio * groupRatio
preConsumedQuota = int(float64(preConsumedTokens) * ratio)
} else {

View File

@@ -84,6 +84,7 @@ func SetApiRouter(router *gin.Engine) {
channelRoute.GET("/", controller.GetAllChannels)
channelRoute.GET("/search", controller.SearchChannels)
channelRoute.GET("/models", controller.ChannelListModels)
channelRoute.GET("/models_enabled", controller.EnabledListModels)
channelRoute.GET("/:id", controller.GetChannel)
channelRoute.GET("/test", controller.TestAllChannels)
channelRoute.GET("/test/:id", controller.TestChannel)

View File

@@ -6,23 +6,31 @@ import (
"one-api/common"
"one-api/dto"
"one-api/model"
"one-api/setting"
"one-api/setting/operation_setting"
"strings"
)
func formatNotifyType(channelId int, status int) string {
return fmt.Sprintf("%s_%d_%d", dto.NotifyTypeChannelUpdate, channelId, status)
}
// disable & notify
func DisableChannel(channelId int, channelName string, reason string) {
model.UpdateChannelStatusById(channelId, common.ChannelStatusAutoDisabled, reason)
subject := fmt.Sprintf("通道「%s」#%d已被禁用", channelName, channelId)
content := fmt.Sprintf("通道「%s」#%d已被禁用,原因:%s", channelName, channelId, reason)
NotifyRootUser(subject, content, dto.NotifyTypeChannelUpdate)
success := model.UpdateChannelStatusById(channelId, common.ChannelStatusAutoDisabled, reason)
if success {
subject := fmt.Sprintf("通道「%s」#%d已被禁用", channelName, channelId)
content := fmt.Sprintf("通道「%s」#%d已被禁用,原因:%s", channelName, channelId, reason)
NotifyRootUser(formatNotifyType(channelId, common.ChannelStatusAutoDisabled), subject, content)
}
}
func EnableChannel(channelId int, channelName string) {
model.UpdateChannelStatusById(channelId, common.ChannelStatusEnabled, "")
subject := fmt.Sprintf("通道「%s」#%d已被启用", channelName, channelId)
content := fmt.Sprintf("通道「%s」#%d已被启用", channelName, channelId)
NotifyRootUser(subject, content, dto.NotifyTypeChannelUpdate)
success := model.UpdateChannelStatusById(channelId, common.ChannelStatusEnabled, "")
if success {
subject := fmt.Sprintf("通道「%s」#%d已被启用", channelName, channelId)
content := fmt.Sprintf("通道「%s」#%d已被启用", channelName, channelId)
NotifyRootUser(formatNotifyType(channelId, common.ChannelStatusEnabled), subject, content)
}
}
func ShouldDisableChannel(channelType int, err *dto.OpenAIErrorWithStatusCode) bool {
@@ -67,7 +75,7 @@ func ShouldDisableChannel(channelType int, err *dto.OpenAIErrorWithStatusCode) b
}
lowerMessage := strings.ToLower(err.Error.Message)
search, _ := AcSearch(lowerMessage, setting.AutomaticDisableKeywords, true)
search, _ := AcSearch(lowerMessage, operation_setting.AutomaticDisableKeywords, true)
if search {
return true
}

View File
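The channel hunk above makes two changes: notifications are only sent when the database status update actually succeeds, and each notification carries a key built from type, channel id, and status so repeated state changes can be deduplicated. A self-contained sketch of that control flow — `disableChannel`'s callback signature and the status constant are assumptions for illustration:

```go
package main

import "fmt"

// formatNotifyType mirrors the helper added in the hunk above: it keys
// a notification by type, channel id and status.
func formatNotifyType(notifyType string, channelId int, status int) string {
	return fmt.Sprintf("%s_%d_%d", notifyType, channelId, status)
}

// disableChannel sketches the new flow: notify the root user only when
// the status update succeeded. update/notify are injected here so the
// sketch stays testable; the project calls model/ and NotifyRootUser.
func disableChannel(update func(id, status int) bool, notify func(key, subject string), channelId int, name, reason string) {
	const statusAutoDisabled = 3 // assumed value, not the project's constant
	if update(channelId, statusAutoDisabled) {
		subject := fmt.Sprintf("channel %q #%d disabled: %s", name, channelId, reason)
		notify(formatNotifyType("channel_update", channelId, statusAutoDisabled), subject)
	}
}

func main() {
	disableChannel(
		func(id, status int) bool { return true },
		func(key, subject string) { fmt.Println(key, "->", subject) },
		42, "demo", "quota exhausted",
	)
}
```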

@@ -7,7 +7,9 @@ import (
"fmt"
"image"
"io"
"net/http"
"one-api/common"
"one-api/constant"
"strings"
"golang.org/x/image/webp"
@@ -23,7 +25,7 @@ func DecodeBase64ImageData(base64String string) (image.Config, string, string, e
decodedData, err := base64.StdEncoding.DecodeString(base64String)
if err != nil {
fmt.Println("Error: Failed to decode base64 string")
return image.Config{}, "", "", err
return image.Config{}, "", "", fmt.Errorf("failed to decode base64 string: %s", err.Error())
}
// 创建一个bytes.Buffer用于存储解码后的数据
@@ -61,20 +63,51 @@ func DecodeBase64FileData(base64String string) (string, string, error) {
func GetImageFromUrl(url string) (mimeType string, data string, err error) {
resp, err := DoDownloadRequest(url)
if err != nil {
return "", "", err
}
if !strings.HasPrefix(resp.Header.Get("Content-Type"), "image/") {
return "", "", fmt.Errorf("invalid content type: %s, required image/*", resp.Header.Get("Content-Type"))
return "", "", fmt.Errorf("failed to download image: %w", err)
}
defer resp.Body.Close()
buffer := bytes.NewBuffer(nil)
_, err = buffer.ReadFrom(resp.Body)
if err != nil {
return
// Check HTTP status code
if resp.StatusCode != http.StatusOK {
return "", "", fmt.Errorf("failed to download image: HTTP %d", resp.StatusCode)
}
mimeType = resp.Header.Get("Content-Type")
contentType := resp.Header.Get("Content-Type")
if contentType != "application/octet-stream" && !strings.HasPrefix(contentType, "image/") {
return "", "", fmt.Errorf("invalid content type: %s, required image/*", contentType)
}
maxImageSize := int64(constant.MaxFileDownloadMB * 1024 * 1024)
// Check Content-Length if available
if resp.ContentLength > maxImageSize {
return "", "", fmt.Errorf("image size %d exceeds maximum allowed size of %d bytes", resp.ContentLength, maxImageSize)
}
// Use LimitReader to prevent reading oversized images
limitReader := io.LimitReader(resp.Body, maxImageSize)
buffer := &bytes.Buffer{}
written, err := io.Copy(buffer, limitReader)
if err != nil {
return "", "", fmt.Errorf("failed to read image data: %w", err)
}
if written >= maxImageSize {
return "", "", fmt.Errorf("image size exceeds maximum allowed size of %d bytes", maxImageSize)
}
data = base64.StdEncoding.EncodeToString(buffer.Bytes())
return
mimeType = contentType
// Handle application/octet-stream type
if mimeType == "application/octet-stream" {
_, format, _, err := DecodeBase64ImageData(data)
if err != nil {
return "", "", err
}
mimeType = "image/" + format
}
return mimeType, data, nil
}
func DecodeUrlImageData(imageUrl string) (image.Config, string, error) {
@@ -92,7 +125,7 @@ func DecodeUrlImageData(imageUrl string) (image.Config, string, error) {
mimeType := response.Header.Get("Content-Type")
if !strings.HasPrefix(mimeType, "image/") {
if mimeType != "application/octet-stream" && !strings.HasPrefix(mimeType, "image/") {
return image.Config{}, "", fmt.Errorf("invalid content type: %s, required image/*", mimeType)
}

View File

@@ -1,16 +1,20 @@
package service
import (
"github.com/gin-gonic/gin"
"one-api/dto"
relaycommon "one-api/relay/common"
"github.com/gin-gonic/gin"
)
func GenerateTextOtherInfo(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, modelRatio, groupRatio, completionRatio, modelPrice float64) map[string]interface{} {
func GenerateTextOtherInfo(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, modelRatio, groupRatio, completionRatio float64,
cacheTokens int, cacheRatio float64, modelPrice float64) map[string]interface{} {
other := make(map[string]interface{})
other["model_ratio"] = modelRatio
other["group_ratio"] = groupRatio
other["completion_ratio"] = completionRatio
other["cache_tokens"] = cacheTokens
other["cache_ratio"] = cacheRatio
other["model_price"] = modelPrice
other["frt"] = float64(relayInfo.FirstResponseTime.UnixMilli() - relayInfo.StartTime.UnixMilli())
if relayInfo.ReasoningEffort != "" {
@@ -27,7 +31,7 @@ func GenerateTextOtherInfo(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, m
}
func GenerateWssOtherInfo(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, usage *dto.RealtimeUsage, modelRatio, groupRatio, completionRatio, audioRatio, audioCompletionRatio, modelPrice float64) map[string]interface{} {
info := GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, modelPrice)
info := GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, 0, 0.0, modelPrice)
info["ws"] = true
info["audio_input"] = usage.InputTokenDetails.AudioTokens
info["audio_output"] = usage.OutputTokenDetails.AudioTokens
@@ -39,7 +43,7 @@ func GenerateWssOtherInfo(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, us
}
func GenerateAudioOtherInfo(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, usage *dto.Usage, modelRatio, groupRatio, completionRatio, audioRatio, audioCompletionRatio, modelPrice float64) map[string]interface{} {
info := GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, modelPrice)
info := GenerateTextOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio, 0, 0.0, modelPrice)
info["audio"] = true
info["audio_input"] = usage.PromptTokensDetails.AudioTokens
info["audio_output"] = usage.CompletionTokenDetails.AudioTokens

View File
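The hunk above extends `GenerateTextOtherInfo` with cache token count and cache ratio, and the websocket/audio wrappers pass zero values since those paths have no prompt caching. A minimal sketch of the extended map, with field names taken from the diff (the signature here drops the context/relay parameters for brevity):

```go
package main

import "fmt"

// generateTextOtherInfo sketches the extended signature: cache token
// count and cache ratio now travel alongside the other billing
// metadata recorded on each consume log.
func generateTextOtherInfo(modelRatio, groupRatio, completionRatio float64, cacheTokens int, cacheRatio, modelPrice float64) map[string]interface{} {
	other := make(map[string]interface{})
	other["model_ratio"] = modelRatio
	other["group_ratio"] = groupRatio
	other["completion_ratio"] = completionRatio
	other["cache_tokens"] = cacheTokens
	other["cache_ratio"] = cacheRatio
	other["model_price"] = modelPrice
	return other
}

func main() {
	// Websocket/audio call sites pass 0 and 0.0, as in the diff.
	fmt.Println(generateTextOtherInfo(2.5, 1.0, 4.0, 0, 0.0, -1))
}
```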

@@ -3,8 +3,6 @@ package service
import (
"errors"
"fmt"
"github.com/bytedance/gopkg/util/gopool"
"math"
"one-api/common"
constant2 "one-api/constant"
"one-api/dto"
@@ -12,10 +10,14 @@ import (
relaycommon "one-api/relay/common"
"one-api/relay/helper"
"one-api/setting"
"one-api/setting/operation_setting"
"strings"
"time"
"github.com/bytedance/gopkg/util/gopool"
"github.com/gin-gonic/gin"
"github.com/shopspring/decimal"
)
type TokenDetails struct {
@@ -35,24 +37,41 @@ type QuotaInfo struct {
func calculateAudioQuota(info QuotaInfo) int {
if info.UsePrice {
return int(info.ModelPrice * common.QuotaPerUnit * info.GroupRatio)
modelPrice := decimal.NewFromFloat(info.ModelPrice)
quotaPerUnit := decimal.NewFromFloat(common.QuotaPerUnit)
groupRatio := decimal.NewFromFloat(info.GroupRatio)
quota := modelPrice.Mul(quotaPerUnit).Mul(groupRatio)
return int(quota.IntPart())
}
completionRatio := common.GetCompletionRatio(info.ModelName)
audioRatio := common.GetAudioRatio(info.ModelName)
audioCompletionRatio := common.GetAudioCompletionRatio(info.ModelName)
ratio := info.GroupRatio * info.ModelRatio
completionRatio := decimal.NewFromFloat(operation_setting.GetCompletionRatio(info.ModelName))
audioRatio := decimal.NewFromFloat(operation_setting.GetAudioRatio(info.ModelName))
audioCompletionRatio := decimal.NewFromFloat(operation_setting.GetAudioCompletionRatio(info.ModelName))
quota := info.InputDetails.TextTokens + int(math.Round(float64(info.OutputDetails.TextTokens)*completionRatio))
quota += int(math.Round(float64(info.InputDetails.AudioTokens)*audioRatio)) +
int(math.Round(float64(info.OutputDetails.AudioTokens)*audioRatio*audioCompletionRatio))
groupRatio := decimal.NewFromFloat(info.GroupRatio)
modelRatio := decimal.NewFromFloat(info.ModelRatio)
ratio := groupRatio.Mul(modelRatio)
quota = int(math.Round(float64(quota) * ratio))
if ratio != 0 && quota <= 0 {
quota = 1
inputTextTokens := decimal.NewFromInt(int64(info.InputDetails.TextTokens))
outputTextTokens := decimal.NewFromInt(int64(info.OutputDetails.TextTokens))
inputAudioTokens := decimal.NewFromInt(int64(info.InputDetails.AudioTokens))
outputAudioTokens := decimal.NewFromInt(int64(info.OutputDetails.AudioTokens))
quota := decimal.Zero
quota = quota.Add(inputTextTokens)
quota = quota.Add(outputTextTokens.Mul(completionRatio))
quota = quota.Add(inputAudioTokens.Mul(audioRatio))
quota = quota.Add(outputAudioTokens.Mul(audioRatio).Mul(audioCompletionRatio))
quota = quota.Mul(ratio)
// If ratio is not zero and quota is less than or equal to zero, set quota to 1
if !ratio.IsZero() && quota.LessThanOrEqual(decimal.Zero) {
quota = decimal.NewFromInt(1)
}
return quota
return int(quota.Round(0).IntPart())
}
func PreWssConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, usage *dto.RealtimeUsage) error {
@@ -75,7 +94,7 @@ func PreWssConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, usag
audioInputTokens := usage.InputTokenDetails.AudioTokens
audioOutTokens := usage.OutputTokenDetails.AudioTokens
groupRatio := setting.GetGroupRatio(relayInfo.Group)
modelRatio := common.GetModelRatio(modelName)
modelRatio, _ := operation_setting.GetModelRatio(modelName)
quotaInfo := QuotaInfo{
InputDetails: TokenDetails{
@@ -122,9 +141,9 @@ func PostWssConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, mod
audioOutTokens := usage.OutputTokenDetails.AudioTokens
tokenName := ctx.GetString("token_name")
completionRatio := common.GetCompletionRatio(modelName)
audioRatio := common.GetAudioRatio(relayInfo.OriginModelName)
audioCompletionRatio := common.GetAudioCompletionRatio(modelName)
completionRatio := decimal.NewFromFloat(operation_setting.GetCompletionRatio(modelName))
audioRatio := decimal.NewFromFloat(operation_setting.GetAudioRatio(relayInfo.OriginModelName))
audioCompletionRatio := decimal.NewFromFloat(operation_setting.GetAudioCompletionRatio(modelName))
quotaInfo := QuotaInfo{
InputDetails: TokenDetails{
@@ -146,7 +165,8 @@ func PostWssConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, mod
totalTokens := usage.TotalTokens
var logContent string
if !usePrice {
logContent = fmt.Sprintf("模型倍率 %.2f,补全倍率 %.2f,音频倍率 %.2f,音频补全倍率 %.2f,分组倍率 %.2f", modelRatio, completionRatio, audioRatio, audioCompletionRatio, groupRatio)
logContent = fmt.Sprintf("模型倍率 %.2f,补全倍率 %.2f,音频倍率 %.2f,音频补全倍率 %.2f,分组倍率 %.2f",
modelRatio, completionRatio.InexactFloat64(), audioRatio.InexactFloat64(), audioCompletionRatio.InexactFloat64(), groupRatio)
} else {
logContent = fmt.Sprintf("模型价格 %.2f,分组倍率 %.2f", modelPrice, groupRatio)
}
@@ -168,7 +188,8 @@ func PostWssConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, mod
if extraContent != "" {
logContent += ", " + extraContent
}
other := GenerateWssOtherInfo(ctx, relayInfo, usage, modelRatio, groupRatio, completionRatio, audioRatio, audioCompletionRatio, modelPrice)
other := GenerateWssOtherInfo(ctx, relayInfo, usage, modelRatio, groupRatio,
completionRatio.InexactFloat64(), audioRatio.InexactFloat64(), audioCompletionRatio.InexactFloat64(), modelPrice)
model.RecordConsumeLog(ctx, relayInfo.UserId, relayInfo.ChannelId, usage.InputTokens, usage.OutputTokens, logModel,
tokenName, quota, logContent, relayInfo.TokenId, userQuota, int(useTimeSeconds), relayInfo.IsStream, relayInfo.Group, other)
}
@@ -184,9 +205,9 @@ func PostAudioConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
audioOutTokens := usage.CompletionTokenDetails.AudioTokens
tokenName := ctx.GetString("token_name")
completionRatio := common.GetCompletionRatio(relayInfo.OriginModelName)
audioRatio := common.GetAudioRatio(relayInfo.OriginModelName)
audioCompletionRatio := common.GetAudioCompletionRatio(relayInfo.OriginModelName)
completionRatio := decimal.NewFromFloat(operation_setting.GetCompletionRatio(relayInfo.OriginModelName))
audioRatio := decimal.NewFromFloat(operation_setting.GetAudioRatio(relayInfo.OriginModelName))
audioCompletionRatio := decimal.NewFromFloat(operation_setting.GetAudioCompletionRatio(relayInfo.OriginModelName))
modelRatio := priceData.ModelRatio
groupRatio := priceData.GroupRatio
@@ -213,7 +234,8 @@ func PostAudioConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
totalTokens := usage.TotalTokens
var logContent string
if !usePrice {
logContent = fmt.Sprintf("模型倍率 %.2f,补全倍率 %.2f,音频倍率 %.2f,音频补全倍率 %.2f,分组倍率 %.2f", modelRatio, completionRatio, audioRatio, audioCompletionRatio, groupRatio)
logContent = fmt.Sprintf("模型倍率 %.2f,补全倍率 %.2f,音频倍率 %.2f,音频补全倍率 %.2f,分组倍率 %.2f",
modelRatio, completionRatio.InexactFloat64(), audioRatio.InexactFloat64(), audioCompletionRatio.InexactFloat64(), groupRatio)
} else {
logContent = fmt.Sprintf("模型价格 %.2f,分组倍率 %.2f", modelPrice, groupRatio)
}
@@ -242,7 +264,8 @@ func PostAudioConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
if extraContent != "" {
logContent += ", " + extraContent
}
other := GenerateAudioOtherInfo(ctx, relayInfo, usage, modelRatio, groupRatio, completionRatio, audioRatio, audioCompletionRatio, modelPrice)
other := GenerateAudioOtherInfo(ctx, relayInfo, usage, modelRatio, groupRatio,
completionRatio.InexactFloat64(), audioRatio.InexactFloat64(), audioCompletionRatio.InexactFloat64(), modelPrice)
model.RecordConsumeLog(ctx, relayInfo.UserId, relayInfo.ChannelId, usage.PromptTokens, usage.CompletionTokens, logModel,
tokenName, quota, logContent, relayInfo.TokenId, userQuota, int(useTimeSeconds), relayInfo.IsStream, relayInfo.Group, other)
}

View File
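The quota hunk above replaces float64 arithmetic with `github.com/shopspring/decimal` so token math accumulates exactly before a single final rounding, and keeps the floor-at-one rule (a non-zero ratio must never produce a free request). The formula can be sketched with stdlib `math/big.Rat` standing in for the decimal dependency — same structure, illustrative numbers:

```go
package main

import (
	"fmt"
	"math/big"
)

// audioQuota mirrors the decimal-based formula from the hunk above:
//   quota = (textIn + textOut*completion + audioIn*audio +
//            audioOut*audio*audioCompletion) * groupRatio * modelRatio
// using big.Rat so intermediate products stay exact.
func audioQuota(textIn, textOut, audioIn, audioOut int64, completion, audio, audioCompletion, groupRatio, modelRatio float64) int64 {
	r := func(f float64) *big.Rat { return new(big.Rat).SetFloat64(f) }
	q := new(big.Rat).SetInt64(textIn)
	q.Add(q, new(big.Rat).Mul(new(big.Rat).SetInt64(textOut), r(completion)))
	q.Add(q, new(big.Rat).Mul(new(big.Rat).SetInt64(audioIn), r(audio)))
	q.Add(q, new(big.Rat).Mul(new(big.Rat).Mul(new(big.Rat).SetInt64(audioOut), r(audio)), r(audioCompletion)))
	ratio := new(big.Rat).Mul(r(groupRatio), r(modelRatio))
	q.Mul(q, ratio)
	// Floor-at-one rule from the diff: non-zero ratio, non-positive
	// quota => charge 1.
	if ratio.Sign() != 0 && q.Sign() <= 0 {
		return 1
	}
	// Round half-up, like decimal's Round(0): add 1/2, then truncate.
	q.Add(q, big.NewRat(1, 2))
	return new(big.Int).Quo(q.Num(), q.Denom()).Int64()
}

func main() {
	fmt.Println(audioQuota(1000, 500, 200, 100, 2, 20, 2, 1, 15)) // 150000
}
```

With floats, long chains of `int(math.Round(...))` round at every step; doing the rounding once at the end is the point of the decimal rewrite.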

@@ -10,6 +10,7 @@ import (
"one-api/constant"
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/setting/operation_setting"
"strings"
"unicode/utf8"
@@ -32,7 +33,7 @@ func InitTokenEncoders() {
if err != nil {
common.FatalLog(fmt.Sprintf("failed to get gpt-4o token encoder: %s", err.Error()))
}
for model, _ := range common.GetDefaultModelRatioMap() {
for model, _ := range operation_setting.GetDefaultModelRatioMap() {
if strings.HasPrefix(model, "gpt-3.5") {
tokenEncoderMap[model] = cl100TokenEncoder
} else if strings.HasPrefix(model, "gpt-4") {

View File

@@ -11,7 +11,10 @@ import (
func NotifyRootUser(t string, subject string, content string) {
user := model.GetRootUser().ToBaseUser()
_ = NotifyUser(user.Id, user.Email, user.GetSetting(), dto.NewNotify(t, subject, content, nil))
err := NotifyUser(user.Id, user.Email, user.GetSetting(), dto.NewNotify(t, subject, content, nil))
if err != nil {
common.SysError(fmt.Sprintf("failed to notify root user: %s", err.Error()))
}
}
func NotifyUser(userId int, userEmail string, userSetting map[string]interface{}, data dto.Notify) error {

View File

@@ -13,17 +13,19 @@ import (
// ClaudeSettings 定义Claude模型的配置
type ClaudeSettings struct {
HeadersSettings map[string][]string `json:"headers_settings"`
ThinkingAdapterEnabled bool `json:"thinking_adapter_enabled"`
ThinkingAdapterMaxTokens int `json:"thinking_adapter_max_tokens"`
ThinkingAdapterBudgetTokensPercentage float64 `json:"thinking_adapter_budget_tokens_percentage"`
HeadersSettings map[string]map[string][]string `json:"model_headers_settings"`
DefaultMaxTokens map[string]int `json:"default_max_tokens"`
ThinkingAdapterEnabled bool `json:"thinking_adapter_enabled"`
ThinkingAdapterBudgetTokensPercentage float64 `json:"thinking_adapter_budget_tokens_percentage"`
}
// 默认配置
var defaultClaudeSettings = ClaudeSettings{
HeadersSettings: map[string][]string{},
ThinkingAdapterEnabled: true,
ThinkingAdapterMaxTokens: 8192,
HeadersSettings: map[string]map[string][]string{},
ThinkingAdapterEnabled: true,
DefaultMaxTokens: map[string]int{
"default": 8192,
},
ThinkingAdapterBudgetTokensPercentage: 0.8,
}
@@ -37,13 +39,27 @@ func init() {
// GetClaudeSettings 获取Claude配置
func GetClaudeSettings() *ClaudeSettings {
// check default max tokens must have default key
if _, ok := claudeSettings.DefaultMaxTokens["default"]; !ok {
claudeSettings.DefaultMaxTokens["default"] = 8192
}
return &claudeSettings
}
func (c *ClaudeSettings) WriteHeaders(headers *http.Header) {
for key, values := range c.HeadersSettings {
for _, value := range values {
headers.Add(key, value)
func (c *ClaudeSettings) WriteHeaders(originModel string, httpHeader *http.Header) {
if headers, ok := c.HeadersSettings[originModel]; ok {
for headerKey, headerValues := range headers {
httpHeader.Del(headerKey)
for _, headerValue := range headerValues {
httpHeader.Add(headerKey, headerValue)
}
}
}
}
func (c *ClaudeSettings) GetDefaultMaxTokens(model string) int {
if maxTokens, ok := c.DefaultMaxTokens[model]; ok {
return maxTokens
}
return c.DefaultMaxTokens["default"]
}

View File

@@ -0,0 +1,84 @@
package operation_setting
import (
"encoding/json"
"one-api/common"
"sync"
)
var defaultCacheRatio = map[string]float64{
"gpt-4": 0.5,
"o1": 0.5,
"o1-2024-12-17": 0.5,
"o1-preview-2024-09-12": 0.5,
"o1-preview": 0.5,
"o1-mini-2024-09-12": 0.5,
"o1-mini": 0.5,
"gpt-4o-2024-11-20": 0.5,
"gpt-4o-2024-08-06": 0.5,
"gpt-4o": 0.5,
"gpt-4o-mini-2024-07-18": 0.5,
"gpt-4o-mini": 0.5,
"gpt-4o-realtime-preview": 0.5,
"gpt-4o-mini-realtime-preview": 0.5,
"deepseek-chat": 0.1,
"deepseek-reasoner": 0.1,
"deepseek-coder": 0.1,
}
var defaultCreateCacheRatio = map[string]float64{}
var cacheRatioMap map[string]float64
var cacheRatioMapMutex sync.RWMutex
// GetCacheRatioMap returns the cache ratio map
func GetCacheRatioMap() map[string]float64 {
cacheRatioMapMutex.Lock()
defer cacheRatioMapMutex.Unlock()
if cacheRatioMap == nil {
cacheRatioMap = defaultCacheRatio
}
return cacheRatioMap
}
// CacheRatio2JSONString converts the cache ratio map to a JSON string
func CacheRatio2JSONString() string {
GetCacheRatioMap()
jsonBytes, err := json.Marshal(cacheRatioMap)
if err != nil {
common.SysError("error marshalling cache ratio: " + err.Error())
}
return string(jsonBytes)
}
// UpdateCacheRatioByJSONString updates the cache ratio map from a JSON string
func UpdateCacheRatioByJSONString(jsonStr string) error {
cacheRatioMapMutex.Lock()
defer cacheRatioMapMutex.Unlock()
cacheRatioMap = make(map[string]float64)
return json.Unmarshal([]byte(jsonStr), &cacheRatioMap)
}
// GetCacheRatio returns the cache ratio for a model
func GetCacheRatio(name string) (float64, bool) {
GetCacheRatioMap()
ratio, ok := cacheRatioMap[name]
if !ok {
return 1, false // Default to 1 if not found
}
return ratio, true
}
// DefaultCacheRatio2JSONString converts the default cache ratio map to a JSON string
func DefaultCacheRatio2JSONString() string {
jsonBytes, err := json.Marshal(defaultCacheRatio)
if err != nil {
common.SysError("error marshalling default cache ratio: " + err.Error())
}
return string(jsonBytes)
}
// GetDefaultCacheRatioMap returns the default cache ratio map
func GetDefaultCacheRatioMap() map[string]float64 {
return defaultCacheRatio
}

View File
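The new cache-ratio file above guards a lazily initialized map with a mutex; the getter takes the full write lock (not `RLock`) because first use may assign the map. A condensed sketch of that access pattern with a trimmed default table:

```go
package main

import (
	"fmt"
	"sync"
)

var defaultRatios = map[string]float64{"gpt-4o": 0.5, "deepseek-chat": 0.1}

var (
	ratioMap   map[string]float64
	ratioMutex sync.RWMutex
)

// getRatioMap lazily initializes from the defaults. It takes the write
// lock because it may assign ratioMap — the same choice
// GetCacheRatioMap makes in the file above.
func getRatioMap() map[string]float64 {
	ratioMutex.Lock()
	defer ratioMutex.Unlock()
	if ratioMap == nil {
		ratioMap = defaultRatios
	}
	return ratioMap
}

// getRatio returns (ratio, found); callers treat a miss as "no cache
// discount configured" and fall back to 1.
func getRatio(name string) (float64, bool) {
	m := getRatioMap()
	ratioMutex.RLock()
	defer ratioMutex.RUnlock()
	r, ok := m[name]
	if !ok {
		return 1, false
	}
	return r, true
}

func main() {
	fmt.Println(getRatio("gpt-4o"))        // 0.5 true
	fmt.Println(getRatio("unknown-model")) // 1 false
}
```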

@@ -0,0 +1,21 @@
package operation_setting
import "one-api/setting/config"
type GeneralSetting struct {
DocsLink string `json:"docs_link"`
}
// 默认配置
var generalSetting = GeneralSetting{
DocsLink: "https://docs.newapi.pro",
}
func init() {
// 注册到全局配置管理器
config.GlobalConfig.Register("general_setting", &generalSetting)
}
func GetGeneralSetting() *GeneralSetting {
return &generalSetting
}

View File
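The new `general_setting.go` above registers its settings struct with `config.GlobalConfig.Register` at init time. A hypothetical sketch of that registry pattern — the real `one-api/setting/config` package is not shown in this diff, so the registry below is an assumed minimal shape, not its actual API:

```go
package main

import "fmt"

// configRegistry is a hypothetical sketch of the config.GlobalConfig
// pattern: settings structs register a pointer under a key at init
// time, so a loader can later populate them all in one place.
type configRegistry struct {
	entries map[string]interface{}
}

func (r *configRegistry) Register(key string, setting interface{}) {
	r.entries[key] = setting
}

func (r *configRegistry) Get(key string) (interface{}, bool) {
	s, ok := r.entries[key]
	return s, ok
}

var globalConfig = &configRegistry{entries: map[string]interface{}{}}

type generalSetting struct {
	DocsLink string `json:"docs_link"`
}

var general = generalSetting{DocsLink: "https://docs.newapi.pro"}

func init() {
	globalConfig.Register("general_setting", &general)
}

func main() {
	if s, ok := globalConfig.Get("general_setting"); ok {
		fmt.Println(s.(*generalSetting).DocsLink)
	}
}
```

Registering a pointer (not a copy) is what lets the config loader mutate the package-level struct in place, so `GetGeneralSetting()` keeps returning current values.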

@@ -1,7 +1,8 @@
package common
package operation_setting
import (
"encoding/json"
"one-api/common"
"strings"
"sync"
)
@@ -50,24 +51,26 @@ var defaultModelRatio = map[string]float64{
"gpt-4o-realtime-preview-2024-12-17": 2.5,
"gpt-4o-mini-realtime-preview": 0.3,
"gpt-4o-mini-realtime-preview-2024-12-17": 0.3,
"o1": 7.5,
"o1-2024-12-17": 7.5,
"o1-preview": 7.5,
"o1-preview-2024-09-12": 7.5,
"o1-mini": 0.55,
"o1-mini-2024-09-12": 0.55,
"o3-mini": 0.55,
"o3-mini-2025-01-31": 0.55,
"o3-mini-high": 0.55,
"o3-mini-2025-01-31-high": 0.55,
"o3-mini-low": 0.55,
"o3-mini-2025-01-31-low": 0.55,
"o3-mini-medium": 0.55,
"o3-mini-2025-01-31-medium": 0.55,
"gpt-4o-mini": 0.075,
"gpt-4o-mini-2024-07-18": 0.075,
"gpt-4-turbo": 5, // $0.01 / 1K tokens
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
"o1": 7.5,
"o1-2024-12-17": 7.5,
"o1-preview": 7.5,
"o1-preview-2024-09-12": 7.5,
"o1-mini": 0.55,
"o1-mini-2024-09-12": 0.55,
"o3-mini": 0.55,
"o3-mini-2025-01-31": 0.55,
"o3-mini-high": 0.55,
"o3-mini-2025-01-31-high": 0.55,
"o3-mini-low": 0.55,
"o3-mini-2025-01-31-low": 0.55,
"o3-mini-medium": 0.55,
"o3-mini-2025-01-31-medium": 0.55,
"gpt-4o-mini": 0.075,
"gpt-4o-mini-2024-07-18": 0.075,
"gpt-4-turbo": 5, // $0.01 / 1K tokens
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
"gpt-4.5-preview": 37.5,
"gpt-4.5-preview-2025-02-27": 37.5,
//"gpt-3.5-turbo-0301": 0.75, //deprecated
"gpt-3.5-turbo": 0.25,
"gpt-3.5-turbo-0613": 0.75,
@@ -259,7 +262,7 @@ func ModelPrice2JSONString() string {
GetModelPriceMap()
jsonBytes, err := json.Marshal(modelPriceMap)
if err != nil {
SysError("error marshalling model price: " + err.Error())
common.SysError("error marshalling model price: " + err.Error())
}
return string(jsonBytes)
}
@@ -283,7 +286,7 @@ func GetModelPrice(name string, printErr bool) (float64, bool) {
price, ok := modelPriceMap[name]
if !ok {
if printErr {
SysError("model price not found: " + name)
common.SysError("model price not found: " + name)
}
return -1, false
}
@@ -303,7 +306,7 @@ func ModelRatio2JSONString() string {
GetModelRatioMap()
jsonBytes, err := json.Marshal(modelRatioMap)
if err != nil {
SysError("error marshalling model ratio: " + err.Error())
common.SysError("error marshalling model ratio: " + err.Error())
}
return string(jsonBytes)
}
@@ -315,23 +318,22 @@ func UpdateModelRatioByJSONString(jsonStr string) error {
return json.Unmarshal([]byte(jsonStr), &modelRatioMap)
}
func GetModelRatio(name string) float64 {
func GetModelRatio(name string) (float64, bool) {
GetModelRatioMap()
if strings.HasPrefix(name, "gpt-4-gizmo") {
name = "gpt-4-gizmo-*"
}
ratio, ok := modelRatioMap[name]
if !ok {
SysError("model ratio not found: " + name)
return 30
return 37.5, SelfUseModeEnabled
}
return ratio
return ratio, true
}
func DefaultModelRatio2JSONString() string {
jsonBytes, err := json.Marshal(defaultModelRatio)
if err != nil {
SysError("error marshalling model ratio: " + err.Error())
common.SysError("error marshalling model ratio: " + err.Error())
}
return string(jsonBytes)
}
@@ -353,7 +355,7 @@ func CompletionRatio2JSONString() string {
GetCompletionRatioMap()
jsonBytes, err := json.Marshal(CompletionRatio)
if err != nil {
SysError("error marshalling completion ratio: " + err.Error())
common.SysError("error marshalling completion ratio: " + err.Error())
}
return string(jsonBytes)
}
@@ -387,6 +389,9 @@ func GetCompletionRatio(name string) float64 {
}
return 4
}
if strings.HasPrefix(name, "gpt-4.5") {
return 2
}
if strings.HasPrefix(name, "gpt-4-turbo") || strings.HasSuffix(name, "preview") {
return 3
}

View File
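The ratio hunk above changes `GetModelRatio` to return `(ratio, found)` instead of logging and returning a hard-coded 30; unknown models now fall back to 37.5, with the found flag tied to self-use mode. A sketch of the two-value lookup including the gizmo wildcard rewrite — the ratio values here are illustrative, not the project's defaults:

```go
package main

import (
	"fmt"
	"strings"
)

var selfUseModeEnabled = false

var modelRatios = map[string]float64{
	"gpt-4o":        1.25,
	"gpt-4-gizmo-*": 15,
}

// getModelRatio mirrors the new two-value signature: a found flag
// replaces the old log-and-return-30 behavior, and gpt-4-gizmo models
// collapse onto a wildcard entry before lookup.
func getModelRatio(name string) (float64, bool) {
	if strings.HasPrefix(name, "gpt-4-gizmo") {
		name = "gpt-4-gizmo-*"
	}
	ratio, ok := modelRatios[name]
	if !ok {
		// Unknown model: default ratio; treated as "found" only when
		// running in self-use mode.
		return 37.5, selfUseModeEnabled
	}
	return ratio, true
}

func main() {
	fmt.Println(getModelRatio("gpt-4-gizmo-g-abc123")) // 15 true
	fmt.Println(getModelRatio("totally-unknown"))      // 37.5 false
}
```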

@@ -1,8 +1,9 @@
package setting
package operation_setting
import "strings"
var DemoSiteEnabled = false
var SelfUseModeEnabled = false
var AutomaticDisableKeywords = []string{
"Your credit balance is too low",

View File

@@ -30,6 +30,7 @@ import { useTranslation } from 'react-i18next';
import { StatusContext } from './context/Status';
import { setStatusData } from './helpers/data.js';
import { API, showError } from './helpers';
import PersonalSetting from './components/PersonalSetting.js';
const Home = lazy(() => import('./pages/Home'));
const Detail = lazy(() => import('./pages/Detail'));
@@ -177,6 +178,16 @@ function App() {
</PrivateRoute>
}
/>
<Route
path='/personal'
element={
<PrivateRoute>
<Suspense fallback={<Loading></Loading>}>
<PersonalSetting />
</Suspense>
</PrivateRoute>
}
/>
<Route
path='/topup'
element={

View File

@@ -15,7 +15,7 @@ import {
getQuotaPerUnit,
renderGroup,
renderNumberWithPoint,
renderQuota, renderQuotaWithPrompt
renderQuota, renderQuotaWithPrompt, stringToColor
} from '../helpers/render';
import {
Button, Divider,
@@ -29,10 +29,12 @@ import {
Table,
Tag,
Tooltip,
Typography
Typography,
Checkbox,
Layout
} from '@douyinfe/semi-ui';
import EditChannel from '../pages/Channel/EditChannel';
import { IconList, IconTreeTriangleDown } from '@douyinfe/semi-icons';
import { IconList, IconTreeTriangleDown, IconClose, IconFilter, IconPlus, IconRefresh, IconSetting } from '@douyinfe/semi-icons';
import { loadChannelModels } from './utils.js';
import EditTagModal from '../pages/Channel/EditTagModal.js';
import TextNumberInput from './custom/TextNumberInput.js';
@@ -141,21 +143,105 @@ const ChannelsTable = () => {
}
};
const columns = [
// {
// title: '',
// dataIndex: 'checkbox',
// className: 'checkbox',
// },
// Define column keys for selection
const COLUMN_KEYS = {
ID: 'id',
NAME: 'name',
GROUP: 'group',
TYPE: 'type',
STATUS: 'status',
RESPONSE_TIME: 'response_time',
BALANCE: 'balance',
PRIORITY: 'priority',
WEIGHT: 'weight',
OPERATE: 'operate'
};
// State for column visibility
const [visibleColumns, setVisibleColumns] = useState({});
const [showColumnSelector, setShowColumnSelector] = useState(false);
// Load saved column preferences from localStorage
useEffect(() => {
const savedColumns = localStorage.getItem('channels-table-columns');
if (savedColumns) {
try {
const parsed = JSON.parse(savedColumns);
// Make sure all columns are accounted for
const defaults = getDefaultColumnVisibility();
const merged = { ...defaults, ...parsed };
setVisibleColumns(merged);
} catch (e) {
console.error('Failed to parse saved column preferences', e);
initDefaultColumns();
}
} else {
initDefaultColumns();
}
}, []);
// Update table when column visibility changes
useEffect(() => {
if (Object.keys(visibleColumns).length > 0) {
// Save to localStorage
localStorage.setItem('channels-table-columns', JSON.stringify(visibleColumns));
}
}, [visibleColumns]);
// Get default column visibility
const getDefaultColumnVisibility = () => {
return {
[COLUMN_KEYS.ID]: true,
[COLUMN_KEYS.NAME]: true,
[COLUMN_KEYS.GROUP]: true,
[COLUMN_KEYS.TYPE]: true,
[COLUMN_KEYS.STATUS]: true,
[COLUMN_KEYS.RESPONSE_TIME]: true,
[COLUMN_KEYS.BALANCE]: true,
[COLUMN_KEYS.PRIORITY]: true,
[COLUMN_KEYS.WEIGHT]: true,
[COLUMN_KEYS.OPERATE]: true
};
};
// Initialize default column visibility
const initDefaultColumns = () => {
const defaults = getDefaultColumnVisibility();
setVisibleColumns(defaults);
};
// Handle column visibility change
const handleColumnVisibilityChange = (columnKey, checked) => {
const updatedColumns = { ...visibleColumns, [columnKey]: checked };
setVisibleColumns(updatedColumns);
};
// Handle "Select All" checkbox
const handleSelectAll = (checked) => {
const allKeys = Object.keys(COLUMN_KEYS).map(key => COLUMN_KEYS[key]);
const updatedColumns = {};
allKeys.forEach(key => {
updatedColumns[key] = checked;
});
setVisibleColumns(updatedColumns);
};
// Define all columns with keys
const allColumns = [
{
key: COLUMN_KEYS.ID,
title: t('ID'),
dataIndex: 'id'
},
{
key: COLUMN_KEYS.NAME,
title: t('名称'),
dataIndex: 'name'
},
{
key: COLUMN_KEYS.GROUP,
title: t('分组'),
dataIndex: 'group',
render: (text, record, index) => {
@@ -177,6 +263,7 @@ const ChannelsTable = () => {
}
},
{
key: COLUMN_KEYS.TYPE,
title: t('类型'),
dataIndex: 'type',
render: (text, record, index) => {
@@ -188,6 +275,7 @@ const ChannelsTable = () => {
}
},
{
key: COLUMN_KEYS.STATUS,
title: t('状态'),
dataIndex: 'status',
render: (text, record, index) => {
@@ -211,6 +299,7 @@ const ChannelsTable = () => {
}
},
{
key: COLUMN_KEYS.RESPONSE_TIME,
title: t('响应时间'),
dataIndex: 'response_time',
render: (text, record, index) => {
@@ -218,6 +307,7 @@ const ChannelsTable = () => {
}
},
{
key: COLUMN_KEYS.BALANCE,
title: t('已用/剩余'),
dataIndex: 'expired_time',
render: (text, record, index) => {
@@ -255,6 +345,7 @@ const ChannelsTable = () => {
}
},
{
key: COLUMN_KEYS.PRIORITY,
title: t('优先级'),
dataIndex: 'priority',
render: (text, record, index) => {
@@ -304,6 +395,7 @@ const ChannelsTable = () => {
}
},
{
key: COLUMN_KEYS.WEIGHT,
title: t('权重'),
dataIndex: 'weight',
render: (text, record, index) => {
@@ -353,6 +445,7 @@ const ChannelsTable = () => {
}
},
{
key: COLUMN_KEYS.OPERATE,
title: '',
dataIndex: 'operate',
render: (text, record, index) => {
@@ -378,17 +471,15 @@ const ChannelsTable = () => {
>
{t('测试')}
</Button>
<Dropdown
trigger="click"
position="bottomRight"
menu={modelMenuItems} // 使用即时生成的菜单项
>
<Button
style={{ padding: '8px 4px' }}
type="primary"
icon={<IconTreeTriangleDown />}
></Button>
</Dropdown>
<Button
style={{ padding: '8px 4px' }}
type="primary"
icon={<IconTreeTriangleDown />}
onClick={() => {
setCurrentTestChannel(record);
setShowModelTestModal(true);
}}
></Button>
</SplitButtonGroup>
<Popconfirm
title={t('确定是否要删除此渠道?')}
@@ -495,6 +586,72 @@ const ChannelsTable = () => {
}
];
// Filter columns based on visibility settings
const getVisibleColumns = () => {
return allColumns.filter(column => visibleColumns[column.key]);
};
// Column selector modal
const renderColumnSelector = () => {
return (
<Modal
title={t('列设置')}
visible={showColumnSelector}
onCancel={() => setShowColumnSelector(false)}
footer={
<>
<Button onClick={() => initDefaultColumns()}>{t('重置')}</Button>
<Button onClick={() => setShowColumnSelector(false)}>{t('取消')}</Button>
<Button type="primary" onClick={() => setShowColumnSelector(false)}>{t('确定')}</Button>
</>
}
style={{ width: isMobile() ? '90%' : 500 }}
bodyStyle={{ padding: '24px' }}
>
<div style={{ marginBottom: 20 }}>
<Checkbox
checked={Object.values(visibleColumns).every(v => v === true)}
indeterminate={Object.values(visibleColumns).some(v => v === true) && !Object.values(visibleColumns).every(v => v === true)}
onChange={e => handleSelectAll(e.target.checked)}
>
{t('全选')}
</Checkbox>
</div>
<div style={{
display: 'flex',
flexWrap: 'wrap',
maxHeight: '400px',
overflowY: 'auto',
border: '1px solid var(--semi-color-border)',
borderRadius: '6px',
padding: '16px'
}}>
{allColumns.map(column => {
// Skip columns without title
if (!column.title) {
return null;
}
return (
<div key={column.key} style={{
width: isMobile() ? '100%' : '50%',
marginBottom: 16,
paddingRight: 8
}}>
<Checkbox
checked={!!visibleColumns[column.key]}
onChange={e => handleColumnVisibilityChange(column.key, e.target.checked)}
>
{column.title}
</Checkbox>
</div>
);
})}
</div>
</Modal>
);
};
const [channels, setChannels] = useState([]);
const [loading, setLoading] = useState(true);
const [activePage, setActivePage] = useState(1);
@@ -522,6 +679,9 @@ const ChannelsTable = () => {
const [enableTagMode, setEnableTagMode] = useState(false);
const [showBatchSetTag, setShowBatchSetTag] = useState(false);
const [batchSetTagValue, setBatchSetTagValue] = useState('');
const [showModelTestModal, setShowModelTestModal] = useState(false);
const [currentTestChannel, setCurrentTestChannel] = useState(null);
const [modelSearchKeyword, setModelSearchKeyword] = useState('');
const removeRecord = (record) => {
@@ -1031,6 +1191,7 @@ const ChannelsTable = () => {
return (
<>
{renderColumnSelector()}
<EditTagModal
visible={showEditTag}
tag={editingTag}
@@ -1096,87 +1257,137 @@ const ChannelsTable = () => {
<Divider style={{ marginBottom: 15 }} />
<div
style={{
display: isMobile() ? '' : 'flex',
display: 'flex',
flexDirection: isMobile() ? 'column' : 'row',
marginTop: isMobile() ? 0 : -45,
zIndex: 999,
pointerEvents: 'none'
}}
>
<Space
style={{ pointerEvents: 'auto', marginTop: isMobile() ? 0 : 45 }}
style={{
pointerEvents: 'auto',
marginTop: isMobile() ? 0 : 45,
marginBottom: isMobile() ? 16 : 0,
display: 'flex',
flexWrap: isMobile() ? 'wrap' : 'nowrap',
gap: '8px'
}}
>
<Typography.Text strong>{t('使用ID排序')}</Typography.Text>
<Switch
checked={idSort}
label={t('使用ID排序')}
uncheckedText={t('关')}
aria-label={t('是否用ID排序')}
onChange={(v) => {
localStorage.setItem('id-sort', v + '');
setIdSort(v);
loadChannels(0, pageSize, v, enableTagMode)
.then()
.catch((reason) => {
showError(reason);
<div style={{
display: 'flex',
alignItems: 'center',
marginRight: 16,
flexWrap: 'nowrap'
}}>
<Typography.Text strong style={{ marginRight: 8 }}>{t('使用ID排序')}</Typography.Text>
<Switch
checked={idSort}
label={t('使用ID排序')}
uncheckedText={t('关')}
aria-label={t('是否用ID排序')}
onChange={(v) => {
localStorage.setItem('id-sort', v + '');
setIdSort(v);
loadChannels(0, pageSize, v, enableTagMode)
.then()
.catch((reason) => {
showError(reason);
});
}}
></Switch>
</div>
<div style={{
display: 'flex',
flexWrap: 'wrap',
gap: '8px'
}}>
<Button
theme="light"
type="primary"
icon={<IconPlus />}
onClick={() => {
setEditingChannel({
id: undefined
});
}}
></Switch>
<Button
theme="light"
type="primary"
style={{ marginRight: 8 }}
onClick={() => {
setEditingChannel({
id: undefined
});
setShowEdit(true);
}}
>
{t('添加渠道')}
</Button>
<Popconfirm
title={t('确定?')}
okType={'warning'}
onConfirm={testAllChannels}
position={'top'}
>
<Button theme="light" type="warning" style={{ marginRight: 8 }}>
{t('测试所有通道')}
setShowEdit(true);
}}
>
{t('添加渠道')}
</Button>
</Popconfirm>
<Popconfirm
title={t('确定?')}
okType={'secondary'}
onConfirm={updateAllChannelsBalance}
>
<Button theme="light" type="secondary" style={{ marginRight: 8 }}>
{t('更新所有已启用通道余额')}
<Button
theme="light"
type="primary"
icon={<IconRefresh />}
onClick={refresh}
>
{t('刷新')}
</Button>
</Popconfirm>
<Popconfirm
title={t('确定是否要删除禁用通道?')}
content={t('此修改将不可逆')}
okType={'danger'}
onConfirm={deleteAllDisabledChannels}
>
<Button theme="light" type="danger" style={{ marginRight: 8 }}>
{t('删除禁用通道')}
</Button>
</Popconfirm>
<Button
theme="light"
type="primary"
style={{ marginRight: 8 }}
onClick={refresh}
>
{t('刷新')}
</Button>
<Dropdown
trigger="click"
render={
<Dropdown.Menu>
<Dropdown.Item>
<Popconfirm
title={t('确定?')}
okType={'warning'}
onConfirm={testAllChannels}
position={'top'}
>
<Button theme="light" type="warning" style={{ width: '100%' }}>
{t('测试所有通道')}
</Button>
</Popconfirm>
</Dropdown.Item>
<Dropdown.Item>
<Popconfirm
title={t('确定?')}
okType={'secondary'}
onConfirm={updateAllChannelsBalance}
>
<Button theme="light" type="secondary" style={{ width: '100%' }}>
{t('更新所有已启用通道余额')}
</Button>
</Popconfirm>
</Dropdown.Item>
<Dropdown.Item>
<Popconfirm
title={t('确定是否要删除禁用通道?')}
content={t('此修改将不可逆')}
okType={'danger'}
onConfirm={deleteAllDisabledChannels}
>
<Button theme="light" type="danger" style={{ width: '100%' }}>
{t('删除禁用通道')}
</Button>
</Popconfirm>
</Dropdown.Item>
</Dropdown.Menu>
}
>
<Button theme="light" type="tertiary" icon={<IconSetting />}>
{t('批量操作')}
</Button>
</Dropdown>
</div>
</Space>
</div>
<div style={{ marginTop: 20 }}>
<Space>
<Typography.Text strong>{t('开启批量操作')}</Typography.Text>
<div style={{
marginTop: 20,
display: 'flex',
flexDirection: isMobile() ? 'column' : 'row',
alignItems: isMobile() ? 'flex-start' : 'center',
gap: isMobile() ? '8px' : '16px'
}}>
<div style={{
display: 'flex',
alignItems: 'center',
marginBottom: isMobile() ? 8 : 0
}}>
<Typography.Text strong style={{ marginRight: 8 }}>{t('开启批量操作')}</Typography.Text>
<Switch
label={t('开启批量操作')}
uncheckedText={t('关')}
@@ -1184,20 +1395,25 @@ const ChannelsTable = () => {
onChange={(v) => {
setEnableBatchDelete(v);
}}
></Switch>
/>
</div>
<div style={{
display: 'flex',
flexWrap: 'wrap',
gap: '8px'
}}>
<Popconfirm
title={t('确定是否要删除所选通道?')}
content={t('此修改将不可逆')}
okType={'danger'}
onConfirm={batchDeleteChannels}
disabled={!enableBatchDelete}
position={'top'}
>
<Button
disabled={!enableBatchDelete}
theme="light"
type="danger"
style={{ marginRight: 8 }}
>
{t('删除所选通道')}
</Button>
@@ -1207,17 +1423,27 @@ const ChannelsTable = () => {
content={t('进行该操作时,可能导致渠道访问错误,请仅在数据库出现问题时使用')}
okType={'warning'}
onConfirm={fixChannelsAbilities}
position={'top'}
>
<Button theme="light" type="secondary" style={{ marginRight: 8 }}>
<Button theme="light" type="secondary">
{t('修复数据库一致性')}
</Button>
</Popconfirm>
</Space>
</div>
</div>
<div style={{ marginTop: 20 }}>
<Space>
<Typography.Text strong>{t('标签聚合模式')}</Typography.Text>
<div style={{
marginTop: 20,
display: 'flex',
flexDirection: isMobile() ? 'column' : 'row',
alignItems: isMobile() ? 'flex-start' : 'center',
gap: isMobile() ? '8px' : '16px'
}}>
<div style={{
display: 'flex',
alignItems: 'center',
marginBottom: isMobile() ? 8 : 0
}}>
<Typography.Text strong style={{ marginRight: 8 }}>{t('标签聚合模式')}</Typography.Text>
<Switch
checked={enableTagMode}
label={t('标签聚合模式')}
@@ -1228,24 +1454,36 @@ const ChannelsTable = () => {
loadChannels(0, pageSize, idSort, v);
}}
/>
</div>
<div style={{
display: 'flex',
flexWrap: 'wrap',
gap: '8px'
}}>
<Button
disabled={!enableBatchDelete}
theme="light"
type="primary"
style={{ marginRight: 8 }}
onClick={() => setShowBatchSetTag(true)}
>
{t('批量设置标签')}
</Button>
</Space>
<Button
theme="light"
type="tertiary"
icon={<IconSetting />}
onClick={() => setShowColumnSelector(true)}
>
{t('列设置')}
</Button>
</div>
</div>
<Table
className={'channel-table'}
style={{ marginTop: 15 }}
columns={columns}
loading={loading}
columns={getVisibleColumns()}
dataSource={pageData}
pagination={{
currentPage: activePage,
@@ -1259,7 +1497,7 @@ const ChannelsTable = () => {
},
onPageChange: handlePageChange
}}
loading={loading}
expandAllRows={false}
onRow={handleRow}
rowSelection={
enableBatchDelete
@@ -1279,6 +1517,7 @@ const ChannelsTable = () => {
onCancel={() => setShowBatchSetTag(false)}
maskClosable={false}
centered={true}
style={{ width: isMobile() ? '90%' : 500 }}
>
<div style={{ marginBottom: 20 }}>
<Typography.Text>{t('请输入要设置的标签名称')}</Typography.Text>
@@ -1287,7 +1526,84 @@ const ChannelsTable = () => {
placeholder={t('请输入标签名称')}
value={batchSetTagValue}
onChange={(v) => setBatchSetTagValue(v)}
size="large"
/>
<div style={{ marginTop: 16 }}>
<Typography.Text type="secondary">
{t('已选择 ${count} 个渠道').replace('${count}', selectedChannels.length)}
</Typography.Text>
</div>
</Modal>
{/* Model test modal */}
<Modal
title={t('选择模型进行测试')}
visible={showModelTestModal && currentTestChannel !== null}
onCancel={() => {
setShowModelTestModal(false);
setModelSearchKeyword('');
}}
footer={null}
maskClosable={true}
centered={true}
>
<div style={{ maxHeight: '500px', overflowY: 'auto', padding: '10px' }}>
{currentTestChannel && (
<div>
<Typography.Title heading={6} style={{ marginBottom: '16px' }}>
{t('渠道')}: {currentTestChannel.name}
</Typography.Title>
{/* Search box */}
<Input
placeholder={t('搜索模型...')}
value={modelSearchKeyword}
onChange={(v) => setModelSearchKeyword(v)}
style={{ marginBottom: '16px' }}
prefix={<IconFilter />}
showClear
/>
<div style={{
display: 'grid',
gridTemplateColumns: 'repeat(auto-fill, minmax(180px, 1fr))',
gap: '10px'
}}>
{currentTestChannel.models.split(',')
.filter(model => model.toLowerCase().includes(modelSearchKeyword.toLowerCase()))
.map((model, index) => {
return (
<Button
key={index}
theme="light"
type="tertiary"
style={{
height: 'auto',
padding: '8px 12px',
textAlign: 'center',
}}
onClick={() => {
testChannel(currentTestChannel, model);
}}
>
{model}
</Button>
);
})}
</div>
{/* Show search result count */}
{modelSearchKeyword && (
<Typography.Text type="secondary" style={{ marginTop: '16px', display: 'block' }}>
{t('找到')} {currentTestChannel.models.split(',').filter(model =>
model.toLowerCase().includes(modelSearchKeyword.toLowerCase())
).length} {t('个模型')}
</Typography.Text>
)}
</div>
)}
</div>
</Modal>
</>
);
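The model test modal above filters a channel's comma-separated `models` string with a case-insensitive keyword match. A minimal standalone sketch of that filter — the `filterModels` helper name and the `trim` step are illustrative additions, the component inlines this logic:

```javascript
// Case-insensitive filter over a channel's comma-separated model list,
// mirroring the search in the model test modal.
function filterModels(models, keyword) {
  const kw = (keyword || '').toLowerCase();
  return models
    .split(',')
    .map((m) => m.trim())
    .filter((m) => m.toLowerCase().includes(kw));
}

// An empty keyword matches every model, which is why the full grid
// renders before the user types anything.
console.log(filterModels('gpt-4o, gemini-pro, gpt-3.5-turbo', 'gpt'));
// -> [ 'gpt-4o', 'gpt-3.5-turbo' ]
```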

View File

@@ -1,12 +1,14 @@
import React, { useEffect, useState } from 'react';
import React, { useEffect, useState, useContext } from 'react';
import { useTranslation } from 'react-i18next';
import { getFooterHTML, getSystemName } from '../helpers';
import { Layout, Tooltip } from '@douyinfe/semi-ui';
import { StyleContext } from '../context/Style/index.js';
const FooterBar = () => {
const { t } = useTranslation();
const systemName = getSystemName();
const [footer, setFooter] = useState(getFooterHTML());
const [styleState] = useContext(StyleContext);
let remainCheckTimes = 5;
const loadFooter = () => {
@@ -57,7 +59,10 @@ const FooterBar = () => {
}, []);
return (
<div style={{ textAlign: 'center' }}>
<div style={{
textAlign: 'center',
paddingBottom: styleState?.isMobile ? '112px' : '5px',
}}>
{footer ? (
<div
className='custom-footer'

View File

@@ -19,17 +19,89 @@ import {
IconNoteMoneyStroked,
IconPriceTag,
IconUser,
IconLanguage
IconLanguage,
IconInfoCircle,
IconCreditCard,
IconTerminal
} from '@douyinfe/semi-icons';
import { Avatar, Button, Dropdown, Layout, Nav, Switch } from '@douyinfe/semi-ui';
import { Avatar, Button, Dropdown, Layout, Nav, Switch, Tag } from '@douyinfe/semi-ui';
import { stringToColor } from '../helpers/render';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
import { StyleContext } from '../context/Style/index.js';
import { StatusContext } from '../context/Status/index.js';
// Custom header bar styles
const headerStyle = {
boxShadow: '0 2px 10px rgba(0, 0, 0, 0.1)',
borderBottom: '1px solid var(--semi-color-border)',
background: 'var(--semi-color-bg-0)',
transition: 'all 0.3s ease',
width: '100%'
};
// Custom header bar item styles
const headerItemStyle = {
borderRadius: '4px',
margin: '0 4px',
transition: 'all 0.3s ease'
};
// Custom header bar item hover styles
const headerItemHoverStyle = {
backgroundColor: 'var(--semi-color-primary-light-default)',
color: 'var(--semi-color-primary)'
};
// Custom header bar logo styles
const logoStyle = {
display: 'flex',
alignItems: 'center',
gap: '10px',
padding: '0 10px',
height: '100%'
};
// Custom header bar system-name styles
const systemNameStyle = {
fontWeight: 'bold',
fontSize: '18px',
background: 'linear-gradient(45deg, var(--semi-color-primary), var(--semi-color-secondary))',
WebkitBackgroundClip: 'text',
WebkitTextFillColor: 'transparent',
padding: '0 5px'
};
// Custom header bar icon styles
const headerIconStyle = {
fontSize: '18px',
transition: 'all 0.3s ease'
};
// Custom avatar styles
const avatarStyle = {
margin: '4px',
cursor: 'pointer',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)',
transition: 'all 0.3s ease'
};
// Custom dropdown menu styles
const dropdownStyle = {
borderRadius: '8px',
boxShadow: '0 4px 12px rgba(0, 0, 0, 0.15)',
overflow: 'hidden'
};
// Custom theme toggle switch styles
const switchStyle = {
margin: '0 8px'
};
const HeaderBar = () => {
const { t, i18n } = useTranslation();
const [userState, userDispatch] = useContext(UserContext);
const [styleState, styleDispatch] = useContext(StyleContext);
const [statusState, statusDispatch] = useContext(StatusContext);
let navigate = useNavigate();
const [currentLang, setCurrentLang] = useState(i18n.language);
@@ -40,26 +112,43 @@ const HeaderBar = () => {
const isNewYear =
(currentDate.getMonth() === 0 && currentDate.getDate() === 1);
// Check if self-use mode is enabled
const isSelfUseMode = statusState?.status?.self_use_mode_enabled || false;
const docsLink = statusState?.status?.docs_link || '';
const isDemoSiteMode = statusState?.status?.demo_site_enabled || false;
let buttons = [
{
text: t('首页'),
itemKey: 'home',
to: '/',
icon: <IconHome style={headerIconStyle} />,
},
{
text: t('控制台'),
itemKey: 'detail',
to: '/',
icon: <IconTerminal style={headerIconStyle} />,
},
{
text: t('定价'),
itemKey: 'pricing',
to: '/pricing',
icon: <IconPriceTag style={headerIconStyle} />,
},
// Only include the docs button if docsLink exists
...(docsLink ? [{
text: t('文档'),
itemKey: 'docs',
isExternal: true,
externalLink: docsLink,
icon: <IconHelpCircle style={headerIconStyle} />,
}] : []),
{
text: t('关于'),
itemKey: 'about',
to: '/about',
icon: <IconInfoCircle style={headerIconStyle} />,
},
];
@@ -129,6 +218,9 @@ const HeaderBar = () => {
<Nav
className={'topnav'}
mode={'horizontal'}
style={headerStyle}
itemStyle={headerItemStyle}
hoverStyle={headerItemHoverStyle}
renderWrapper={({ itemElement, isSubNav, isInSubNav, props }) => {
const routerMap = {
about: '/about',
@@ -151,13 +243,25 @@ const HeaderBar = () => {
}
}
}}>
<Link
className="header-bar-text"
style={{ textDecoration: 'none' }}
to={routerMap[props.itemKey]}
>
{itemElement}
</Link>
{props.isExternal ? (
<a
className="header-bar-text"
style={{ textDecoration: 'none' }}
href={props.externalLink}
target="_blank"
rel="noopener noreferrer"
>
{itemElement}
</a>
) : (
<Link
className="header-bar-text"
style={{ textDecoration: 'none' }}
to={routerMap[props.itemKey]}
>
{itemElement}
</Link>
)}
</div>
);
}}
@@ -166,7 +270,7 @@ const HeaderBar = () => {
onSelect={(key) => {}}
header={styleState.isMobile?{
logo: (
<>
<div style={{ display: 'flex', alignItems: 'center', position: 'relative' }}>
{
!styleState.showSider ?
<Button icon={<IconMenu />} theme="light" aria-label={t('展开侧边栏')} onClick={
@@ -176,13 +280,54 @@ const HeaderBar = () => {
() => styleDispatch({ type: 'SET_SIDER', payload: false })
} />
}
</>
{(isSelfUseMode || isDemoSiteMode) && (
<Tag
color={isSelfUseMode ? 'purple' : 'blue'}
style={{
position: 'absolute',
top: '-8px',
right: '-15px',
fontSize: '0.7rem',
padding: '0 4px',
height: 'auto',
lineHeight: '1.2',
zIndex: 1,
pointerEvents: 'none'
}}
>
{isSelfUseMode ? t('自用模式') : t('演示站点')}
</Tag>
)}
</div>
),
}:{
logo: (
<img src={logo} alt='logo' />
<div style={logoStyle}>
<img src={logo} alt='logo' style={{ height: '28px' }} />
</div>
),
text: (
<div style={{ position: 'relative', display: 'inline-block' }}>
<span style={systemNameStyle}>{systemName}</span>
{(isSelfUseMode || isDemoSiteMode) && (
<Tag
color={isSelfUseMode ? 'purple' : 'blue'}
style={{
position: 'absolute',
top: '-10px',
right: '-25px',
fontSize: '0.7rem',
padding: '0 4px',
whiteSpace: 'nowrap',
zIndex: 1,
boxShadow: '0 0 3px rgba(255, 255, 255, 0.7)'
}}
>
{isSelfUseMode ? t('自用模式') : t('演示站点')}
</Tag>
)}
</div>
),
text: systemName,
}}
items={buttons}
footer={
@@ -192,7 +337,7 @@ const HeaderBar = () => {
<Dropdown
position='bottomRight'
render={
<Dropdown.Menu>
<Dropdown.Menu style={dropdownStyle}>
<Dropdown.Item onClick={handleNewYearClick}>
Happy New Year!!!
</Dropdown.Item>
@@ -209,6 +354,7 @@ const HeaderBar = () => {
size={styleState.isMobile?'default':'large'}
checked={theme === 'dark'}
uncheckedText='🌙'
style={switchStyle}
onChange={(checked) => {
setTheme(checked);
}}
@@ -217,7 +363,7 @@ const HeaderBar = () => {
<Dropdown
position='bottomRight'
render={
<Dropdown.Menu>
<Dropdown.Menu style={dropdownStyle}>
<Dropdown.Item
onClick={() => handleLanguageChange('zh')}
type={currentLang === 'zh' ? 'primary' : 'tertiary'}
@@ -235,7 +381,7 @@ const HeaderBar = () => {
>
<Nav.Item
itemKey={'language'}
icon={<IconLanguage />}
icon={<IconLanguage style={headerIconStyle} />}
/>
</Dropdown>
{userState.user ? (
@@ -243,7 +389,7 @@ const HeaderBar = () => {
<Dropdown
position='bottomRight'
render={
<Dropdown.Menu>
<Dropdown.Menu style={dropdownStyle}>
<Dropdown.Item onClick={logout}>{t('退出')}</Dropdown.Item>
</Dropdown.Menu>
}
@@ -251,11 +397,11 @@ const HeaderBar = () => {
<Avatar
size='small'
color={stringToColor(userState.user.username)}
style={{ margin: 4 }}
style={avatarStyle}
>
{userState.user.username[0]}
</Avatar>
{styleState.isMobile?null:<Text>{userState.user.username}</Text>}
{styleState.isMobile?null:<Text style={{ marginLeft: '4px', fontWeight: '500' }}>{userState.user.username}</Text>}
</Dropdown>
</>
) : (
@@ -263,14 +409,15 @@ const HeaderBar = () => {
<Nav.Item
itemKey={'login'}
text={!styleState.isMobile?t('登录'):null}
icon={<IconUser />}
icon={<IconUser style={headerIconStyle} />}
/>
{
!styleState.isMobile && (
// Hide register option in self-use mode
!styleState.isMobile && !isSelfUseMode && (
<Nav.Item
itemKey={'register'}
text={t('注册')}
icon={<IconKey />}
icon={<IconKey style={headerIconStyle} />}
/>
)
}
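HeaderBar builds its nav entries with a conditional array spread, so the docs item only exists when a docs link is configured. A hedged sketch of that pattern — the item shape is simplified here; the real items also carry icons and translated text:

```javascript
// Conditional nav-item pattern from HeaderBar: spreading an empty
// array when docsLink is falsy makes the entry disappear entirely,
// rather than rendering a disabled placeholder.
function buildNavItems(docsLink) {
  return [
    { itemKey: 'home' },
    { itemKey: 'pricing' },
    ...(docsLink
      ? [{ itemKey: 'docs', isExternal: true, externalLink: docsLink }]
      : []),
    { itemKey: 'about' },
  ];
}
```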

View File

@@ -21,7 +21,8 @@ import {
Spin,
Table,
Tag,
Tooltip
Tooltip,
Checkbox
} from '@douyinfe/semi-ui';
import { ITEMS_PER_PAGE } from '../constants';
import {
@@ -34,7 +35,7 @@ import {
import Paragraph from '@douyinfe/semi-ui/lib/es/typography/paragraph';
import { getLogOther } from '../helpers/other.js';
import { StyleContext } from '../context/Style/index.js';
import { IconInherit, IconRefresh } from '@douyinfe/semi-icons';
import { IconInherit, IconRefresh, IconSetting } from '@douyinfe/semi-icons';
const { Header } = Layout;
@@ -215,12 +216,104 @@ const LogsTable = () => {
}
const columns = [
// Define column keys for selection
const COLUMN_KEYS = {
TIME: 'time',
CHANNEL: 'channel',
USERNAME: 'username',
TOKEN: 'token',
GROUP: 'group',
TYPE: 'type',
MODEL: 'model',
USE_TIME: 'use_time',
PROMPT: 'prompt',
COMPLETION: 'completion',
COST: 'cost',
RETRY: 'retry',
DETAILS: 'details'
};
// State for column visibility
const [visibleColumns, setVisibleColumns] = useState({});
const [showColumnSelector, setShowColumnSelector] = useState(false);
// Load saved column preferences from localStorage
useEffect(() => {
const savedColumns = localStorage.getItem('logs-table-columns');
if (savedColumns) {
try {
const parsed = JSON.parse(savedColumns);
// Make sure all columns are accounted for
const defaults = getDefaultColumnVisibility();
const merged = { ...defaults, ...parsed };
setVisibleColumns(merged);
} catch (e) {
console.error('Failed to parse saved column preferences', e);
initDefaultColumns();
}
} else {
initDefaultColumns();
}
}, []);
// Get default column visibility based on user role
const getDefaultColumnVisibility = () => {
return {
[COLUMN_KEYS.TIME]: true,
[COLUMN_KEYS.CHANNEL]: isAdminUser,
[COLUMN_KEYS.USERNAME]: isAdminUser,
[COLUMN_KEYS.TOKEN]: true,
[COLUMN_KEYS.GROUP]: true,
[COLUMN_KEYS.TYPE]: true,
[COLUMN_KEYS.MODEL]: true,
[COLUMN_KEYS.USE_TIME]: true,
[COLUMN_KEYS.PROMPT]: true,
[COLUMN_KEYS.COMPLETION]: true,
[COLUMN_KEYS.COST]: true,
[COLUMN_KEYS.RETRY]: isAdminUser,
[COLUMN_KEYS.DETAILS]: true
};
};
// Initialize default column visibility
const initDefaultColumns = () => {
const defaults = getDefaultColumnVisibility();
setVisibleColumns(defaults);
localStorage.setItem('logs-table-columns', JSON.stringify(defaults));
};
// Handle column visibility change
const handleColumnVisibilityChange = (columnKey, checked) => {
const updatedColumns = { ...visibleColumns, [columnKey]: checked };
setVisibleColumns(updatedColumns);
};
// Handle "Select All" checkbox
const handleSelectAll = (checked) => {
const allKeys = Object.keys(COLUMN_KEYS).map(key => COLUMN_KEYS[key]);
const updatedColumns = {};
allKeys.forEach(key => {
// For admin-only columns, only enable them if user is admin
if ((key === COLUMN_KEYS.CHANNEL || key === COLUMN_KEYS.USERNAME || key === COLUMN_KEYS.RETRY) && !isAdminUser) {
updatedColumns[key] = false;
} else {
updatedColumns[key] = checked;
}
});
setVisibleColumns(updatedColumns);
};
// Define all columns
const allColumns = [
{
key: COLUMN_KEYS.TIME,
title: t('时间'),
dataIndex: 'timestamp2string',
},
{
key: COLUMN_KEYS.CHANNEL,
title: t('渠道'),
dataIndex: 'channel',
className: isAdmin() ? 'tableShow' : 'tableHiddle',
@@ -249,6 +342,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.USERNAME,
title: t('用户'),
dataIndex: 'username',
className: isAdmin() ? 'tableShow' : 'tableHiddle',
@@ -274,6 +368,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.TOKEN,
title: t('令牌'),
dataIndex: 'token_name',
render: (text, record, index) => {
@@ -297,6 +392,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.GROUP,
title: t('分组'),
dataIndex: 'group',
render: (text, record, index) => {
@@ -333,6 +429,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.TYPE,
title: t('类型'),
dataIndex: 'type',
render: (text, record, index) => {
@@ -340,6 +437,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.MODEL,
title: t('模型'),
dataIndex: 'model_name',
render: (text, record, index) => {
@@ -351,6 +449,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.USE_TIME,
title: t('用时/首字'),
dataIndex: 'use_time',
render: (text, record, index) => {
@@ -360,7 +459,7 @@ const LogsTable = () => {
<>
<Space>
{renderUseTime(text)}
{renderFirstUseTime(other.frt)}
{renderFirstUseTime(other?.frt)}
{renderIsStream(record.is_stream)}
</Space>
</>
@@ -378,6 +477,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.PROMPT,
title: t('提示'),
dataIndex: 'prompt_tokens',
render: (text, record, index) => {
@@ -389,6 +489,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.COMPLETION,
title: t('补全'),
dataIndex: 'completion_tokens',
render: (text, record, index) => {
@@ -401,6 +502,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.COST,
title: t('花费'),
dataIndex: 'quota',
render: (text, record, index) => {
@@ -412,6 +514,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.RETRY,
title: t('重试'),
dataIndex: 'retry',
className: isAdmin() ? 'tableShow' : 'tableHiddle',
@@ -439,6 +542,7 @@ const LogsTable = () => {
},
},
{
key: COLUMN_KEYS.DETAILS,
title: t('详情'),
dataIndex: 'content',
render: (text, record, index) => {
@@ -464,6 +568,8 @@ const LogsTable = () => {
other.model_ratio,
other.model_price,
other.group_ratio,
other.cache_tokens || 0,
other.cache_ratio || 1.0,
);
return (
<Paragraph
@@ -479,6 +585,76 @@ const LogsTable = () => {
},
];
// Update table when column visibility changes
useEffect(() => {
if (Object.keys(visibleColumns).length > 0) {
// Save to localStorage
localStorage.setItem('logs-table-columns', JSON.stringify(visibleColumns));
}
}, [visibleColumns]);
// Filter columns based on visibility settings
const getVisibleColumns = () => {
return allColumns.filter(column => visibleColumns[column.key]);
};
// Column selector modal
const renderColumnSelector = () => {
return (
<Modal
title={t('列设置')}
visible={showColumnSelector}
onCancel={() => setShowColumnSelector(false)}
footer={
<>
<Button onClick={() => initDefaultColumns()}>{t('重置')}</Button>
<Button onClick={() => setShowColumnSelector(false)}>{t('取消')}</Button>
<Button type="primary" onClick={() => setShowColumnSelector(false)}>{t('确定')}</Button>
</>
}
>
<div style={{ marginBottom: 20 }}>
<Checkbox
checked={Object.values(visibleColumns).every(v => v === true)}
indeterminate={Object.values(visibleColumns).some(v => v === true) && !Object.values(visibleColumns).every(v => v === true)}
onChange={e => handleSelectAll(e.target.checked)}
>
{t('全选')}
</Checkbox>
</div>
<div style={{
display: 'flex',
flexWrap: 'wrap',
maxHeight: '400px',
overflowY: 'auto',
border: '1px solid var(--semi-color-border)',
borderRadius: '6px',
padding: '16px'
}}>
{allColumns.map(column => {
// Skip admin-only columns for non-admin users
if (!isAdminUser && (column.key === COLUMN_KEYS.CHANNEL ||
column.key === COLUMN_KEYS.USERNAME ||
column.key === COLUMN_KEYS.RETRY)) {
return null;
}
return (
<div key={column.key} style={{ width: '50%', marginBottom: 16, paddingRight: 8 }}>
<Checkbox
checked={!!visibleColumns[column.key]}
onChange={e => handleColumnVisibilityChange(column.key, e.target.checked)}
>
{column.title}
</Checkbox>
</div>
);
})}
</div>
</Modal>
);
};
const [styleState, styleDispatch] = useContext(StyleContext);
const [logs, setLogs] = useState([]);
const [expandData, setExpandData] = useState({});
@@ -636,6 +812,12 @@ const LogsTable = () => {
value: other.text_output,
});
}
if (other?.cache_tokens > 0) {
expandDataLocal.push({
key: t('缓存 Tokens'),
value: other.cache_tokens,
});
}
expandDataLocal.push({
key: t('日志详情'),
value: logs[i].content,
@@ -655,25 +837,29 @@ const LogsTable = () => {
let content = '';
if (other?.ws || other?.audio) {
content = renderAudioModelPrice(
other.text_input,
other.text_output,
other.model_ratio,
other.model_price,
other.completion_ratio,
other.audio_input,
other.audio_output,
other?.text_input,
other?.text_output,
other?.model_ratio,
other?.model_price,
other?.completion_ratio,
other?.audio_input,
other?.audio_output,
other?.audio_ratio,
other?.audio_completion_ratio,
other.group_ratio,
other?.group_ratio,
other?.cache_tokens || 0,
other?.cache_ratio || 1.0,
);
} else {
content = renderModelPrice(
logs[i].prompt_tokens,
logs[i].completion_tokens,
other.model_ratio,
other.model_price,
other.completion_ratio,
other.group_ratio,
other?.model_ratio,
other?.model_price,
other?.completion_ratio,
other?.group_ratio,
other?.cache_tokens || 0,
other?.cache_ratio || 1.0,
);
}
expandDataLocal.push({
@@ -770,17 +956,34 @@ const LogsTable = () => {
return (
<>
{renderColumnSelector()}
<Layout>
<Header>
<Spin spinning={loadingStat}>
<Space>
<Tag color='green' size='large' style={{ padding: 15 }}>
{t('总消耗额度')}: {renderQuota(stat.quota)}
<Tag color='blue' size='large' style={{
padding: 15,
borderRadius: '8px',
fontWeight: 500,
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)'
}}>
{t('消耗额度')}: {renderQuota(stat.quota)}
</Tag>
<Tag color='blue' size='large' style={{ padding: 15 }}>
<Tag color='pink' size='large' style={{
padding: 15,
borderRadius: '8px',
fontWeight: 500,
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)'
}}>
RPM: {stat.rpm}
</Tag>
<Tag color='purple' size='large' style={{ padding: 15 }}>
<Tag color='white' size='large' style={{
padding: 15,
border: 'none',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.1)',
borderRadius: '8px',
fontWeight: 500,
}}>
TPM: {stat.tpm}
</Tag>
</Space>
@@ -905,10 +1108,19 @@ const LogsTable = () => {
<Select.Option value='3'>{t('管理')}</Select.Option>
<Select.Option value='4'>{t('系统')}</Select.Option>
</Select>
<Button
theme='light'
type='tertiary'
icon={<IconSetting />}
onClick={() => setShowColumnSelector(true)}
style={{ marginLeft: 8 }}
>
{t('列设置')}
</Button>
</div>
<Table
style={{ marginTop: 5 }}
columns={columns}
columns={getVisibleColumns()}
expandedRowRender={expandRowRender}
expandRowByClick={true}
dataSource={logs}
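LogsTable restores column visibility by merging the saved localStorage payload over role-based defaults, so columns added in later releases fall back to their default visibility instead of disappearing. A minimal sketch of that merge — `mergeColumnPrefs` is an illustrative name, the component spreads the objects inline inside its effect:

```javascript
// Saved preferences win over defaults key-by-key; unknown or corrupt
// payloads fall back to the defaults, matching the try/catch in the
// component's localStorage-loading effect.
function mergeColumnPrefs(defaults, savedJson) {
  if (!savedJson) return { ...defaults };
  try {
    return { ...defaults, ...JSON.parse(savedJson) };
  } catch (e) {
    // Corrupt payload: ignore it and use the defaults.
    return { ...defaults };
  }
}

const defaults = { time: true, channel: false, model: true };
// A saved blob from an older release that predates the `model` column:
// `model` keeps its default, `channel` is overridden.
const merged = mergeColumnPrefs(defaults, '{"channel":true}');
```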

View File

@@ -3,7 +3,6 @@ import { Card, Spin, Tabs } from '@douyinfe/semi-ui';
import { API, showError, showSuccess } from '../helpers';
import SettingsChats from '../pages/Setting/Operation/SettingsChats.js';
import { useTranslation } from 'react-i18next';
import SettingGeminiModel from '../pages/Setting/Model/SettingGeminiModel.js';
import SettingClaudeModel from '../pages/Setting/Model/SettingClaudeModel.js';
@@ -13,9 +12,9 @@ const ModelSetting = () => {
let [inputs, setInputs] = useState({
'gemini.safety_settings': '',
'gemini.version_settings': '',
'claude.headers_settings': '',
'claude.model_headers_settings': '',
'claude.thinking_adapter_enabled': true,
'claude.thinking_adapter_max_tokens': 8192,
'claude.default_max_tokens': '',
'claude.thinking_adapter_budget_tokens_percentage': 0.8,
});
@@ -30,7 +29,8 @@ const ModelSetting = () => {
if (
item.key === 'gemini.safety_settings' ||
item.key === 'gemini.version_settings' ||
item.key === 'claude.headers_settings'
item.key === 'claude.model_headers_settings'||
item.key === 'claude.default_max_tokens'
) {
item.value = JSON.stringify(JSON.parse(item.value), null, 2);
}

View File

@@ -7,7 +7,6 @@ import SettingsLog from '../pages/Setting/Operation/SettingsLog.js';
import SettingsDataDashboard from '../pages/Setting/Operation/SettingsDataDashboard.js';
import SettingsMonitoring from '../pages/Setting/Operation/SettingsMonitoring.js';
import SettingsCreditLimit from '../pages/Setting/Operation/SettingsCreditLimit.js';
import SettingsMagnification from '../pages/Setting/Operation/SettingsMagnification.js';
import ModelSettingsVisualEditor from '../pages/Setting/Operation/ModelSettingsVisualEditor.js';
import GroupRatioSettings from '../pages/Setting/Operation/GroupRatioSettings.js';
import ModelRatioSettings from '../pages/Setting/Operation/ModelRatioSettings.js';
@@ -16,6 +15,7 @@ import ModelRatioSettings from '../pages/Setting/Operation/ModelRatioSettings.js
import { API, showError, showSuccess } from '../helpers';
import SettingsChats from '../pages/Setting/Operation/SettingsChats.js';
import { useTranslation } from 'react-i18next';
import ModelRatioNotSetEditor from '../pages/Setting/Operation/ModelRationNotSetEditor.js';
const OperationSetting = () => {
const { t } = useTranslation();
@@ -27,13 +27,14 @@ const OperationSetting = () => {
PreConsumedQuota: 0,
StreamCacheQueueLength: 0,
ModelRatio: '',
CacheRatio: '',
CompletionRatio: '',
ModelPrice: '',
GroupRatio: '',
UserUsableGroups: '',
TopUpLink: '',
ChatLink: '',
ChatLink2: '', // newly added state variable
'general_setting.docs_link': '',
// ChatLink2: '', // newly added state variable
QuotaPerUnit: 0,
AutomaticDisableChannelEnabled: false,
AutomaticEnableChannelEnabled: false,
@@ -59,6 +60,7 @@ const OperationSetting = () => {
RetryTimes: 0,
Chats: "[]",
DemoSiteEnabled: false,
SelfUseModeEnabled: false,
AutomaticDisableKeywords: '',
});
@@ -75,7 +77,8 @@ const OperationSetting = () => {
item.key === 'GroupRatio' ||
item.key === 'UserUsableGroups' ||
item.key === 'CompletionRatio' ||
item.key === 'ModelPrice'
item.key === 'ModelPrice' ||
item.key === 'CacheRatio'
) {
item.value = JSON.stringify(JSON.parse(item.value), null, 2);
}
@@ -158,6 +161,9 @@ const OperationSetting = () => {
<Tabs.TabPane tab={t('可视化倍率设置')} itemKey="visual">
<ModelSettingsVisualEditor options={inputs} refresh={onRefresh} />
</Tabs.TabPane>
<Tabs.TabPane tab={t('未设置倍率模型')} itemKey="unset_models">
<ModelRatioNotSetEditor options={inputs} refresh={onRefresh} />
</Tabs.TabPane>
</Tabs>
</Card>
</Spin>
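The settings pages above normalize ratio options (ModelRatio, CacheRatio, GroupRatio, …) by parsing the stored JSON string and re-serializing it with indentation before showing it in the editor. A sketch of that round trip — `prettifyOption` is an illustrative name for logic the component applies inline:

```javascript
// Parse-then-stringify round trip used to pretty-print option values.
// Throws on invalid JSON, which is why the component only applies it
// to keys known to hold JSON payloads.
function prettifyOption(value) {
  return JSON.stringify(JSON.parse(value), null, 2);
}

const pretty = prettifyOption('{"gpt-4":15}');
// pretty === '{\n  "gpt-4": 15\n}'
```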

View File

@@ -1,8 +1,10 @@
import React, { useEffect, useRef, useState } from 'react';
import { Banner, Button, Col, Form, Row } from '@douyinfe/semi-ui';
import { API, showError, showSuccess } from '../helpers';
import React, { useContext, useEffect, useRef, useState } from 'react';
import { Banner, Button, Col, Form, Row, Modal, Space } from '@douyinfe/semi-ui';
import { API, showError, showSuccess, timestamp2string } from '../helpers';
import { marked } from 'marked';
import { useTranslation } from 'react-i18next';
import { StatusContext } from '../context/Status/index.js';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
const OtherSetting = () => {
const { t } = useTranslation();
@@ -16,6 +18,7 @@ const OtherSetting = () => {
});
let [loading, setLoading] = useState(false);
const [showUpdateModal, setShowUpdateModal] = useState(false);
const [statusState, statusDispatch] = useContext(StatusContext);
const [updateData, setUpdateData] = useState({
tag_name: '',
content: '',
@@ -43,6 +46,7 @@ const OtherSetting = () => {
HomePageContent: false,
About: false,
Footer: false,
CheckUpdate: false
});
const handleInputChange = async (value, e) => {
const name = e.target.id;
@@ -145,23 +149,48 @@ const OtherSetting = () => {
}
};
const openGitHubRelease = () => {
window.location = 'https://github.com/songquanpeng/one-api/releases/latest';
};
const checkUpdate = async () => {
const res = await API.get(
'https://api.github.com/repos/songquanpeng/one-api/releases/latest',
);
const { tag_name, body } = res.data;
if (tag_name === process.env.REACT_APP_VERSION) {
showSuccess(`已是最新版本:${tag_name}`);
} else {
setUpdateData({
tag_name: tag_name,
content: marked.parse(body),
});
setShowUpdateModal(true);
try {
setLoadingInput((loadingInput) => ({ ...loadingInput, CheckUpdate: true }));
// Use a CORS proxy to avoid direct cross-origin requests to GitHub API
// Option 1: Use a public CORS proxy service
// const proxyUrl = 'https://cors-anywhere.herokuapp.com/';
// const res = await API.get(
// `${proxyUrl}https://api.github.com/repos/Calcium-Ion/new-api/releases/latest`,
// );
// Option 2: Use the JSON proxy approach which often works better with GitHub API
const res = await fetch(
'https://api.github.com/repos/Calcium-Ion/new-api/releases/latest',
{
headers: {
'Accept': 'application/json',
'Content-Type': 'application/json',
// Adding User-Agent which is often required by GitHub API
'User-Agent': 'new-api-update-checker'
}
}
).then(response => response.json());
// Option 3: Use a local proxy endpoint
// Create a cached version of the response to avoid frequent GitHub API calls
// const res = await API.get('/api/status/github-latest-release');
const { tag_name, body } = res;
if (tag_name === statusState?.status?.version) {
showSuccess(`已是最新版本:${tag_name}`);
} else {
setUpdateData({
tag_name: tag_name,
content: marked.parse(body),
});
setShowUpdateModal(true);
}
} catch (error) {
console.error('Failed to check for updates:', error);
showError('检查更新失败,请稍后再试');
} finally {
setLoadingInput((loadingInput) => ({ ...loadingInput, CheckUpdate: false }));
}
};
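The commented-out "Option 3" in `checkUpdate` above suggests caching the release lookup to avoid hammering the GitHub API. A framework-agnostic sketch of that TTL-cache idea (the name `fetchWithTtlCache` and the injected `store` parameter are hypothetical, not part of this codebase):

```javascript
// Minimal TTL cache around an async fetcher. `store` is any object with
// getItem/setItem (localStorage-compatible), injected here for testability.
async function fetchWithTtlCache(key, ttlMs, fetcher, store, now = Date.now) {
  const raw = store.getItem(key);
  if (raw !== null) {
    try {
      const { savedAt, value } = JSON.parse(raw);
      if (now() - savedAt < ttlMs) return value; // still fresh, skip the network
    } catch (e) {
      // corrupt cache entry: fall through and refetch
    }
  }
  const value = await fetcher();
  store.setItem(key, JSON.stringify({ savedAt: now(), value }));
  return value;
}
```

Wrapping the release request this way would let repeated "检查更新" clicks within the TTL window reuse the cached response instead of re-hitting the API.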
const getOptions = async () => {
@@ -186,9 +215,41 @@ const OtherSetting = () => {
getOptions();
}, []);
// Function to open GitHub release page
const openGitHubRelease = () => {
window.open(`https://github.com/Calcium-Ion/new-api/releases/tag/${updateData.tag_name}`, '_blank');
};
const getStartTimeString = () => {
const timestamp = statusState?.status?.start_time;
return statusState.status ? timestamp2string(timestamp) : '';
};
return (
<Row>
<Col span={24}>
{/* Version info */}
<Form style={{ marginBottom: 15 }}>
<Form.Section text={t('系统信息')}>
<Row>
<Col span={16}>
<Space>
<Text>
{t('当前版本')}{statusState?.status?.version || t('未知')}
</Text>
<Button type="primary" onClick={checkUpdate} loading={loadingInput['CheckUpdate']}>
{t('检查更新')}
</Button>
</Space>
</Col>
</Row>
<Row>
<Col span={16}>
<Text>{t('启动时间')}{getStartTimeString()}</Text>
</Col>
</Row>
</Form.Section>
</Form>
{/* General settings */}
<Form
values={inputs}
@@ -282,28 +343,25 @@ const OtherSetting = () => {
</Form.Section>
</Form>
</Col>
{/*<Modal*/}
{/* onClose={() => setShowUpdateModal(false)}*/}
{/* onOpen={() => setShowUpdateModal(true)}*/}
{/* open={showUpdateModal}*/}
{/*>*/}
{/* <Modal.Header>新版本:{updateData.tag_name}</Modal.Header>*/}
{/* <Modal.Content>*/}
{/* <Modal.Description>*/}
{/* <div dangerouslySetInnerHTML={{ __html: updateData.content }}></div>*/}
{/* </Modal.Description>*/}
{/* </Modal.Content>*/}
{/* <Modal.Actions>*/}
{/* <Button onClick={() => setShowUpdateModal(false)}>关闭</Button>*/}
{/* <Button*/}
{/* content='详情'*/}
{/* onClick={() => {*/}
{/* setShowUpdateModal(false);*/}
{/* openGitHubRelease();*/}
{/* }}*/}
{/* />*/}
{/* </Modal.Actions>*/}
{/*</Modal>*/}
<Modal
title={t('新版本') + ': ' + updateData.tag_name}
visible={showUpdateModal}
onCancel={() => setShowUpdateModal(false)}
footer={[
<Button
key="details"
type="primary"
onClick={() => {
setShowUpdateModal(false);
openGitHubRelease();
}}
>
{t('详情')}
</Button>
]}
>
<div dangerouslySetInnerHTML={{ __html: updateData.content }}></div>
</Modal>
</Row>
);
};

View File

@@ -62,24 +62,77 @@ const PageLayout = () => {
if (savedLang) {
i18n.changeLanguage(savedLang);
}
// Show the sidebar by default
styleDispatch({ type: 'SET_SIDER', payload: true });
}, [i18n]);
// Read the sidebar collapsed state
const isSidebarCollapsed = localStorage.getItem('default_collapse_sidebar') === 'true';
return (
<Layout style={{ height: '100vh', display: 'flex', flexDirection: 'column' }}>
<Header>
<Layout style={{
height: '100vh',
display: 'flex',
flexDirection: 'column',
overflow: 'hidden'
}}>
<Header style={{
padding: 0,
height: 'auto',
lineHeight: 'normal',
position: 'fixed',
width: '100%',
top: 0,
zIndex: 100,
boxShadow: '0 1px 6px rgba(0, 0, 0, 0.08)'
}}>
<HeaderBar />
</Header>
<Layout style={{ flex: 1, overflow: 'hidden' }}>
<Sider>
{styleState.showSider ? <SiderBar /> : null}
</Sider>
<Layout>
<Layout style={{
marginTop: '56px',
height: 'calc(100vh - 56px)',
overflow: 'auto',
display: 'flex',
flexDirection: 'column'
}}>
{styleState.showSider && (
<Sider style={{
position: 'fixed',
left: 0,
top: '56px',
zIndex: 99,
background: 'var(--semi-color-bg-1)',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.15)',
border: 'none',
paddingRight: '0',
height: 'calc(100vh - 56px)',
}}>
<SiderBar />
</Sider>
)}
<Layout style={{
marginLeft: styleState.isMobile ? '0' : (styleState.showSider ? (styleState.siderCollapsed ? '60px' : '200px') : '0'),
transition: 'margin-left 0.3s ease',
flex: '1 1 auto',
display: 'flex',
flexDirection: 'column'
}}>
<Content
style={{ overflowY: 'auto', padding: styleState.shouldInnerPadding? '24px': '0' }}
style={{
flex: '1 0 auto',
overflowY: 'auto',
WebkitOverflowScrolling: 'touch',
padding: styleState.shouldInnerPadding? '24px': '0',
position: 'relative',
}}
>
<App />
</Content>
<Layout.Footer>
<Layout.Footer style={{
flex: '0 0 auto',
width: '100%'
}}>
<FooterBar />
</Layout.Footer>
</Layout>

View File

@@ -69,7 +69,11 @@ const PersonalSetting = () => {
const [models, setModels] = useState([]);
const [openTransfer, setOpenTransfer] = useState(false);
const [transferAmount, setTransferAmount] = useState(0);
const [isModelsExpanded, setIsModelsExpanded] = useState(false);
const [isModelsExpanded, setIsModelsExpanded] = useState(() => {
// Initialize from localStorage if available
const savedState = localStorage.getItem('modelsExpanded');
return savedState ? JSON.parse(savedState) : false;
});
const MODELS_DISPLAY_COUNT = 10; // Number of models shown by default
const [notificationSettings, setNotificationSettings] = useState({
warningType: 'email',
@@ -124,6 +128,11 @@ const PersonalSetting = () => {
}
}, [userState?.user?.setting]);
// Save models expanded state to localStorage whenever it changes
useEffect(() => {
localStorage.setItem('modelsExpanded', JSON.stringify(isModelsExpanded));
}, [isModelsExpanded]);
const handleInputChange = (name, value) => {
setInputs((inputs) => ({...inputs, [name]: value}));
};
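The `isModelsExpanded` change above pairs a lazy `useState` initializer with a persistence `useEffect`. The same pattern, decoupled from React as plain helpers (names are hypothetical, for illustration only):

```javascript
// Read a JSON value from storage, falling back to a default on miss or parse error.
function loadPersisted(store, key, fallback) {
  const raw = store.getItem(key);
  if (raw === null) return fallback;
  try {
    return JSON.parse(raw);
  } catch (e) {
    return fallback; // corrupt entry: behave as if unset
  }
}

// Write the value back as JSON, mirroring the persistence useEffect in the component.
function savePersisted(store, key, value) {
  store.setItem(key, JSON.stringify(value));
}
```

The try/catch matters: a hand-edited or corrupted localStorage entry would otherwise throw during the initial render.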
@@ -384,7 +393,7 @@ const PersonalSetting = () => {
</div>
</div>
</Modal>
<div style={{marginTop: 20}}>
<div>
<Card
title={
<Card.Meta

View File

@@ -1,16 +1,5 @@
import React, { useEffect, useState } from 'react';
import { Card, Spin, Tabs } from '@douyinfe/semi-ui';
import SettingsGeneral from '../pages/Setting/Operation/SettingsGeneral.js';
import SettingsDrawing from '../pages/Setting/Operation/SettingsDrawing.js';
import SettingsSensitiveWords from '../pages/Setting/Operation/SettingsSensitiveWords.js';
import SettingsLog from '../pages/Setting/Operation/SettingsLog.js';
import SettingsDataDashboard from '../pages/Setting/Operation/SettingsDataDashboard.js';
import SettingsMonitoring from '../pages/Setting/Operation/SettingsMonitoring.js';
import SettingsCreditLimit from '../pages/Setting/Operation/SettingsCreditLimit.js';
import SettingsMagnification from '../pages/Setting/Operation/SettingsMagnification.js';
import ModelSettingsVisualEditor from '../pages/Setting/Operation/ModelSettingsVisualEditor.js';
import GroupRatioSettings from '../pages/Setting/Operation/GroupRatioSettings.js';
import ModelRatioSettings from '../pages/Setting/Operation/ModelRatioSettings.js';
import { API, showError, showSuccess } from '../helpers';

View File

@@ -1,5 +1,5 @@
import React, { useContext, useEffect, useMemo, useState } from 'react';
import { Link, useNavigate } from 'react-router-dom';
import { Link, useNavigate, useLocation } from 'react-router-dom';
import { UserContext } from '../context/User';
import { StatusContext } from '../context/Status';
import { useTranslation } from 'react-i18next';
@@ -28,13 +28,39 @@ import {
IconSetting,
IconUser
} from '@douyinfe/semi-icons';
import { Avatar, Dropdown, Layout, Nav, Switch } from '@douyinfe/semi-ui';
import { Avatar, Dropdown, Layout, Nav, Switch, Divider } from '@douyinfe/semi-ui';
import { setStatusData } from '../helpers/data.js';
import { stringToColor } from '../helpers/render.js';
import { useSetTheme, useTheme } from '../context/Theme/index.js';
import { StyleContext } from '../context/Style/index.js';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
// HeaderBar Buttons
// Custom sidebar nav item style
const navItemStyle = {
borderRadius: '6px',
margin: '4px 8px',
};
// Custom sidebar nav item hover style
const navItemHoverStyle = {
backgroundColor: 'var(--semi-color-primary-light-default)',
color: 'var(--semi-color-primary)'
};
// Custom sidebar nav item selected style
const navItemSelectedStyle = {
backgroundColor: 'var(--semi-color-primary-light-default)',
color: 'var(--semi-color-primary)',
fontWeight: '600'
};
// Custom icon style
const iconStyle = (itemKey, selectedKeys) => {
return {
fontSize: '18px',
color: selectedKeys.includes(itemKey) ? 'var(--semi-color-primary)' : 'var(--semi-color-text-2)',
};
};
const SiderBar = () => {
const { t } = useTranslation();
@@ -46,8 +72,30 @@ const SiderBar = () => {
const [selectedKeys, setSelectedKeys] = useState(['home']);
const [isCollapsed, setIsCollapsed] = useState(defaultIsCollapsed);
const [chatItems, setChatItems] = useState([]);
const [openedKeys, setOpenedKeys] = useState([]);
const theme = useTheme();
const setTheme = useSetTheme();
const location = useLocation();
// Precompute styles for every possible icon key
const allItemKeys = useMemo(() => {
const keys = ['home', 'channel', 'token', 'redemption', 'topup', 'user', 'log', 'midjourney',
'setting', 'about', 'chat', 'detail', 'pricing', 'task', 'playground', 'personal'];
// Add the keys for chat items
for (let i = 0; i < chatItems.length; i++) {
keys.push('chat' + i);
}
return keys;
}, [chatItems]);
// Compute all icon styles in one pass with useMemo
const iconStyles = useMemo(() => {
const styles = {};
allItemKeys.forEach(key => {
styles[key] = iconStyle(key, selectedKeys);
});
return styles;
}, [allItemKeys, selectedKeys]);
const routerMap = {
home: '/',
@@ -65,35 +113,11 @@ const SiderBar = () => {
pricing: '/pricing',
task: '/task',
playground: '/playground',
personal: '/personal',
};
const headerButtons = useMemo(
const workspaceItems = useMemo(
() => [
{
text: 'Playground',
itemKey: 'playground',
to: '/playground',
icon: <IconCommentStroked />,
},
{
text: t('渠道'),
itemKey: 'channel',
to: '/channel',
icon: <IconLayers />,
className: isAdmin() ? '' : 'tableHiddle',
},
{
text: t('聊天'),
itemKey: 'chat',
items: chatItems,
icon: <IconComment />,
},
{
text: t('令牌'),
itemKey: 'token',
to: '/token',
icon: <IconKey />,
},
{
text: t('数据看板'),
itemKey: 'detail',
@@ -105,33 +129,19 @@ const SiderBar = () => {
: 'tableHiddle',
},
{
text: t('兑换码'),
itemKey: 'redemption',
to: '/redemption',
icon: <IconGift />,
className: isAdmin() ? '' : 'tableHiddle',
text: t('API令牌'),
itemKey: 'token',
to: '/token',
icon: <IconKey />,
},
{
text: t('钱包'),
itemKey: 'topup',
to: '/topup',
icon: <IconCreditCard />,
},
{
text: t('用户管理'),
itemKey: 'user',
to: '/user',
icon: <IconUser />,
className: isAdmin() ? '' : 'tableHiddle',
},
{
text: t('日志'),
text: t('使用日志'),
itemKey: 'log',
to: '/log',
icon: <IconHistogram />,
},
{
text: t('绘图'),
text: t('绘图日志'),
itemKey: 'midjourney',
to: '/midjourney',
icon: <IconImage />,
@@ -141,105 +151,211 @@ const SiderBar = () => {
: 'tableHiddle',
},
{
text: t('异步任务'),
text: t('任务日志'),
itemKey: 'task',
to: '/task',
icon: <IconChecklistStroked />,
className:
localStorage.getItem('enable_task') === 'true'
? ''
: 'tableHiddle',
},
{
text: t('设置'),
itemKey: 'setting',
to: '/setting',
icon: <IconSetting />,
},
localStorage.getItem('enable_task') === 'true'
? ''
: 'tableHiddle',
}
],
[
localStorage.getItem('enable_data_export'),
localStorage.getItem('enable_drawing'),
localStorage.getItem('enable_task'),
localStorage.getItem('chat_link'),
chatItems,
isAdmin(),
t,
],
);
const financeItems = useMemo(
() => [
{
text: t('钱包'),
itemKey: 'topup',
to: '/topup',
icon: <IconCreditCard />,
},
{
text: t('个人设置'),
itemKey: 'personal',
to: '/personal',
icon: <IconUser />,
},
],
[t],
);
const adminItems = useMemo(
() => [
{
text: t('渠道'),
itemKey: 'channel',
to: '/channel',
icon: <IconLayers />,
className: isAdmin() ? '' : 'tableHiddle',
},
{
text: t('兑换码'),
itemKey: 'redemption',
to: '/redemption',
icon: <IconGift />,
className: isAdmin() ? '' : 'tableHiddle',
},
{
text: t('用户管理'),
itemKey: 'user',
to: '/user',
icon: <IconUser />,
},
{
text: t('系统设置'),
itemKey: 'setting',
to: '/setting',
icon: <IconSetting />,
},
],
[isAdmin(), t],
);
const chatMenuItems = useMemo(
() => [
{
text: 'Playground',
itemKey: 'playground',
to: '/playground',
icon: <IconCommentStroked />,
},
{
text: t('聊天'),
itemKey: 'chat',
items: chatItems,
icon: <IconComment />,
},
],
[chatItems, t],
);
useEffect(() => {
let localKey = window.location.pathname.split('/')[1];
if (localKey === '') {
localKey = 'home';
const currentPath = location.pathname;
const matchingKey = Object.keys(routerMap).find(key => routerMap[key] === currentPath);
if (matchingKey) {
setSelectedKeys([matchingKey]);
} else if (currentPath.startsWith('/chat/')) {
setSelectedKeys(['chat']);
}
setSelectedKeys([localKey]);
let chatLink = localStorage.getItem('chat_link');
if (!chatLink) {
let chats = localStorage.getItem('chats');
if (chats) {
// console.log(chats);
try {
chats = JSON.parse(chats);
if (Array.isArray(chats)) {
let chatItems = [];
for (let i = 0; i < chats.length; i++) {
let chat = {};
for (let key in chats[i]) {
chat.text = key;
chat.itemKey = 'chat' + i;
chat.to = '/chat/' + i;
}
// setRouterMap({ ...routerMap, chat: '/chat/' + i })
chatItems.push(chat);
}
setChatItems(chatItems);
}
} catch (e) {
console.error(e);
showError('聊天数据解析失败')
}, [location.pathname]);
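The effect above maps the current path back to a nav key by inverting `routerMap`, with a special case for `/chat/*` routes. Extracted as a pure function (a hypothetical refactor, not in the diff):

```javascript
// Return the nav key whose route equals `path`, treating any /chat/* path as 'chat'.
function matchKeyForPath(routerMap, path) {
  const key = Object.keys(routerMap).find((k) => routerMap[k] === path);
  if (key) return key;
  if (path.startsWith('/chat/')) return 'chat';
  return null; // no nav item corresponds to this path
}
```

The same lookup is repeated later inside `onCollapseChange`; a pure helper like this would keep the two call sites from drifting apart.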
useEffect(() => {
let chats = localStorage.getItem('chats');
if (chats) {
// console.log(chats);
try {
chats = JSON.parse(chats);
if (Array.isArray(chats)) {
let chatItems = [];
for (let i = 0; i < chats.length; i++) {
let chat = {};
for (let key in chats[i]) {
chat.text = key;
chat.itemKey = 'chat' + i;
chat.to = '/chat/' + i;
}
// setRouterMap({ ...routerMap, chat: '/chat/' + i })
chatItems.push(chat);
}
setChatItems(chatItems);
}
} catch (e) {
console.error(e);
showError('聊天数据解析失败')
}
}
setIsCollapsed(localStorage.getItem('default_collapse_sidebar') === 'true');
}, []);
setIsCollapsed(styleState.siderCollapsed);
})
// Custom divider style
const dividerStyle = {
margin: '8px 0',
opacity: 0.6,
};
// Custom group label style
const groupLabelStyle = {
padding: '8px 16px',
color: 'var(--semi-color-text-2)',
fontSize: '12px',
fontWeight: 'bold',
textTransform: 'uppercase',
letterSpacing: '0.5px',
};
return (
<>
<Nav
style={{ maxWidth: 220, height: '100%' }}
className="custom-sidebar-nav"
style={{
width: isCollapsed ? '60px' : '200px',
boxShadow: '0 2px 8px rgba(0, 0, 0, 0.15)',
borderRight: '1px solid var(--semi-color-border)',
background: 'var(--semi-color-bg-1)',
borderRadius: styleState.isMobile ? '0' : '0 8px 8px 0',
position: 'relative',
zIndex: 95,
height: '100%',
overflowY: 'auto',
WebkitOverflowScrolling: 'touch', // Improve scrolling on iOS devices
}}
defaultIsCollapsed={
localStorage.getItem('default_collapse_sidebar') === 'true'
}
isCollapsed={isCollapsed}
onCollapseChange={(collapsed) => {
setIsCollapsed(collapsed);
// styleDispatch({ type: 'SET_SIDER', payload: true });
styleDispatch({ type: 'SET_SIDER_COLLAPSED', payload: collapsed });
localStorage.setItem('default_collapse_sidebar', collapsed);
// Make sure an item is selected when the sidebar collapses, to avoid unnecessary recomputation
if (selectedKeys.length === 0) {
const currentPath = location.pathname;
const matchingKey = Object.keys(routerMap).find(key => routerMap[key] === currentPath);
if (matchingKey) {
setSelectedKeys([matchingKey]);
} else if (currentPath.startsWith('/chat/')) {
setSelectedKeys(['chat']);
} else {
setSelectedKeys([]); // no matching route: clear the selection
}
}
}}
selectedKeys={selectedKeys}
itemStyle={navItemStyle}
hoverStyle={navItemHoverStyle}
selectedStyle={navItemSelectedStyle}
renderWrapper={({ itemElement, isSubNav, isInSubNav, props }) => {
let chatLink = localStorage.getItem('chat_link');
if (!chatLink) {
let chats = localStorage.getItem('chats');
if (chats) {
chats = JSON.parse(chats);
if (Array.isArray(chats) && chats.length > 0) {
for (let i = 0; i < chats.length; i++) {
routerMap['chat' + i] = '/chat/' + i;
}
if (chats.length > 1) {
// delete /chat
if (routerMap['chat']) {
delete routerMap['chat'];
}
} else {
// rename /chat to /chat/0
routerMap['chat'] = '/chat/0';
}
}
let chats = localStorage.getItem('chats');
if (chats) {
chats = JSON.parse(chats);
if (Array.isArray(chats) && chats.length > 0) {
for (let i = 0; i < chats.length; i++) {
routerMap['chat' + i] = '/chat/' + i;
}
if (chats.length > 1) {
// delete /chat
if (routerMap['chat']) {
delete routerMap['chat'];
}
} else {
// rename /chat to /chat/0
routerMap['chat'] = '/chat/0';
}
}
}
return (
<Link
style={{ textDecoration: 'none' }}
@@ -249,21 +365,119 @@ const SiderBar = () => {
</Link>
);
}}
items={headerButtons}
onSelect={(key) => {
if (key.itemKey.toString().startsWith('chat')) {
styleDispatch({ type: 'SET_INNER_PADDING', payload: false });
} else {
styleDispatch({ type: 'SET_INNER_PADDING', payload: true });
}
// If the clicked item is the parent of an already-open submenu, collapse that submenu
if (openedKeys.includes(key.itemKey)) {
setOpenedKeys(openedKeys.filter(k => k !== key.itemKey));
}
setSelectedKeys([key.itemKey]);
}}
footer={
<>
</>
}
openKeys={openedKeys}
onOpenChange={(data) => {
setOpenedKeys(data.openKeys);
}}
>
<Nav.Footer collapseButton={true}></Nav.Footer>
{/* Chat Section - Only show if there are chat items */}
{chatMenuItems.map((item) => {
if (item.items && item.items.length > 0) {
return (
<Nav.Sub
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
>
{item.items.map((subItem) => (
<Nav.Item
key={subItem.itemKey}
itemKey={subItem.itemKey}
text={subItem.text}
/>
))}
</Nav.Sub>
);
} else {
return (
<Nav.Item
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
/>
);
}
})}
{/* Divider */}
<Divider style={dividerStyle} />
{/* Workspace Section */}
{!isCollapsed && <Text style={groupLabelStyle}>{t('控制台')}</Text>}
{workspaceItems.map((item) => (
<Nav.Item
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
className={item.className}
/>
))}
{isAdmin() && (
<>
{/* Divider */}
<Divider style={dividerStyle} />
{/* Admin Section */}
{!isCollapsed && <Text style={groupLabelStyle}>{t('管理员')}</Text>}
{adminItems.map((item) => (
<Nav.Item
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
className={item.className}
/>
))}
</>
)}
{/* Divider */}
<Divider style={dividerStyle} />
{/* Finance Management Section */}
{!isCollapsed && <Text style={groupLabelStyle}>{t('个人中心')}</Text>}
{financeItems.map((item) => (
<Nav.Item
key={item.itemKey}
itemKey={item.itemKey}
text={item.text}
icon={React.cloneElement(item.icon, { style: iconStyles[item.itemKey] })}
className={item.className}
/>
))}
<Nav.Footer
style={{
paddingBottom: styleState?.isMobile ? '112px' : '20px',
}}
collapseButton={true}
collapseText={(collapsed)=>
{
if(collapsed){
return t('展开侧边栏')
}
return t('收起侧边栏')
}
}
/>
</Nav>
</>
);

View File

@@ -144,33 +144,8 @@ const TokensTable = () => {
render: (text, record, index) => {
let chats = localStorage.getItem('chats');
let chatsArray = []
let chatLink = localStorage.getItem('chat_link');
let mjLink = localStorage.getItem('chat_link2');
let shouldUseCustom = true;
if (chatLink) {
shouldUseCustom = false;
chatLink += `/#/?settings={"key":"{key}","url":"{address}"}`;
chatsArray.push({
node: 'item',
key: 'default',
name: 'ChatGPT Next Web',
onClick: () => {
onOpenLink('default', chatLink, record);
},
});
}
if (mjLink) {
shouldUseCustom = false;
mjLink += `/#/?settings={"key":"{key}","url":"{address}"}`;
chatsArray.push({
node: 'item',
key: 'mj',
name: 'ChatGPT Next Midjourney',
onClick: () => {
onOpenLink('mj', mjLink, record);
},
});
}
if (shouldUseCustom) {
try {
// console.log(chats);

View File

@@ -82,7 +82,7 @@ export const CHANNEL_OPTIONS = [
{
value: 45,
color: 'blue',
label: '火山方舟豆包'
label: '字节火山方舟豆包、DeepSeek通用'
},
{ value: 25, color: 'green', label: 'Moonshot' },
{ value: 19, color: 'blue', label: '360 智脑' },

View File

@@ -9,8 +9,9 @@ export const StyleContext = React.createContext({
export const StyleProvider = ({ children }) => {
const [state, setState] = useState({
isMobile: false,
isMobile: isMobile(),
showSider: false,
siderCollapsed: false,
shouldInnerPadding: false,
});
@@ -26,6 +27,9 @@ export const StyleProvider = ({ children }) => {
case 'SET_MOBILE':
setState(prev => ({ ...prev, isMobile: action.payload }));
break;
case 'SET_SIDER_COLLAPSED':
setState(prev => ({ ...prev, siderCollapsed: action.payload }));
break;
case 'SET_INNER_PADDING':
setState(prev => ({ ...prev, shouldInnerPadding: action.payload }));
break;
@@ -39,7 +43,13 @@ export const StyleProvider = ({ children }) => {
useEffect(() => {
const updateIsMobile = () => {
dispatch({ type: 'SET_MOBILE', payload: isMobile() });
const mobileDetected = isMobile();
dispatch({ type: 'SET_MOBILE', payload: mobileDetected });
// If on mobile, we might want to auto-hide the sidebar
if (mobileDetected && state.showSider) {
dispatch({ type: 'SET_SIDER', payload: false });
}
};
updateIsMobile();
@@ -51,24 +61,31 @@ export const StyleProvider = ({ children }) => {
dispatch({ type: 'SET_SIDER', payload: false });
dispatch({ type: 'SET_INNER_PADDING', payload: false });
} else {
dispatch({ type: 'SET_SIDER', payload: true });
// Only show sidebar on non-mobile devices by default
dispatch({ type: 'SET_SIDER', payload: !isMobile() });
dispatch({ type: 'SET_INNER_PADDING', payload: true });
}
if (isMobile()) {
dispatch({ type: 'SET_SIDER', payload: false });
}
};
updateShowSider()
updateShowSider();
const updateSiderCollapsed = () => {
const isCollapsed = localStorage.getItem('default_collapse_sidebar') === 'true';
dispatch({ type: 'SET_SIDER_COLLAPSED', payload: isCollapsed });
};
// Optionally, add event listeners to handle window resize
window.addEventListener('resize', updateIsMobile);
updateSiderCollapsed();
// Add event listeners to handle window resize
const handleResize = () => {
updateIsMobile();
};
window.addEventListener('resize', handleResize);
// Cleanup event listener on component unmount
return () => {
window.removeEventListener('resize', updateIsMobile);
window.removeEventListener('resize', handleResize);
};
}, []);

View File

@@ -19,15 +19,20 @@ export function setStatusData(data) {
);
localStorage.setItem('mj_notify_enabled', data.mj_notify_enabled);
if (data.chat_link) {
localStorage.setItem('chat_link', data.chat_link);
// localStorage.setItem('chat_link', data.chat_link);
} else {
localStorage.removeItem('chat_link');
}
if (data.chat_link2) {
localStorage.setItem('chat_link2', data.chat_link2);
// localStorage.setItem('chat_link2', data.chat_link2);
} else {
localStorage.removeItem('chat_link2');
}
if (data.docs_link) {
localStorage.setItem('docs_link', data.docs_link);
} else {
localStorage.removeItem('docs_link');
}
}
export function setUserData(data) {

View File

@@ -298,6 +298,8 @@ export function renderModelPrice(
modelPrice = -1,
completionRatio,
groupRatio,
cacheTokens = 0,
cacheRatio = 1.0,
) {
if (modelPrice !== -1) {
return i18next.t('模型价格:${{price}} * 分组倍率:{{ratio}} = ${{total}}', {
@@ -311,32 +313,56 @@ export function renderModelPrice(
}
let inputRatioPrice = modelRatio * 2.0;
let completionRatioPrice = modelRatio * 2.0 * completionRatio;
let cacheRatioPrice = modelRatio * 2.0 * cacheRatio;
// Calculate effective input tokens (non-cached + cached with ratio applied)
const effectiveInputTokens = (inputTokens - cacheTokens) + (cacheTokens * cacheRatio);
let price =
(inputTokens / 1000000) * inputRatioPrice * groupRatio +
(effectiveInputTokens / 1000000) * inputRatioPrice * groupRatio +
(completionTokens / 1000000) * completionRatioPrice * groupRatio;
return (
<>
<article>
<p>{i18next.t('提示:${{price}} * {{ratio}} = ${{total}} / 1M tokens', {
<p>{i18next.t('提示价格${{price}} = ${{total}} / 1M tokens', {
price: inputRatioPrice,
ratio: groupRatio,
total: inputRatioPrice * groupRatio
total: inputRatioPrice
})}</p>
<p>{i18next.t('补全:${{price}} * {{ratio}} = ${{total}} / 1M tokens', {
price: completionRatioPrice,
ratio: groupRatio,
total: completionRatioPrice * groupRatio
<p>{i18next.t('补全价格${{price}} * {{completionRatio}} = ${{total}} / 1M tokens (补全倍率: {{completionRatio}})', {
price: inputRatioPrice,
total: completionRatioPrice,
completionRatio: completionRatio
})}</p>
{cacheTokens > 0 && (
<p>{i18next.t('缓存价格:${{price}} * {{cacheRatio}} = ${{total}} / 1M tokens (缓存倍率: {{cacheRatio}})', {
price: inputRatioPrice,
total: inputRatioPrice * cacheRatio,
cacheRatio: cacheRatio
})}</p>
)}
<p></p>
<p>
{i18next.t('提示 {{input}} tokens / 1M tokens * ${{price}} + 补全 {{completion}} tokens / 1M tokens * ${{compPrice}} * 分组 {{ratio}} = ${{total}}', {
input: inputTokens,
price: inputRatioPrice,
completion: completionTokens,
compPrice: completionRatioPrice,
ratio: groupRatio,
total: price.toFixed(6)
})}
{cacheTokens > 0 ?
i18next.t('提示 {{nonCacheInput}} tokens / 1M tokens * ${{price}} + 缓存 {{cacheInput}} tokens / 1M tokens * ${{cachePrice}} + 补全 {{completion}} tokens / 1M tokens * ${{compPrice}} * 分组 {{ratio}} = ${{total}}', {
nonCacheInput: inputTokens - cacheTokens,
cacheInput: cacheTokens,
cachePrice: inputRatioPrice * cacheRatio,
price: inputRatioPrice,
completion: completionTokens,
compPrice: completionRatioPrice,
ratio: groupRatio,
total: price.toFixed(6)
}) :
i18next.t('提示 {{input}} tokens / 1M tokens * ${{price}} + 补全 {{completion}} tokens / 1M tokens * ${{compPrice}} * 分组 {{ratio}} = ${{total}}', {
input: inputTokens,
price: inputRatioPrice,
completion: completionTokens,
compPrice: completionRatioPrice,
ratio: groupRatio,
total: price.toFixed(6)
})
}
</p>
<p>{i18next.t('仅供参考,以实际扣费为准')}</p>
</article>
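The cache-aware change above bills cached prompt tokens at `cacheRatio` instead of full price, via the `effectiveInputTokens` blend. The arithmetic, isolated from the JSX (the function name is hypothetical; the formula mirrors the diff):

```javascript
// Price in USD. 1 ratio = $0.002 / 1K tokens, hence modelRatio * 2.0 per 1M tokens.
function computeTextPrice({ inputTokens, completionTokens, cacheTokens = 0,
                            modelRatio, completionRatio, groupRatio, cacheRatio = 1.0 }) {
  const inputRatioPrice = modelRatio * 2.0;
  const completionRatioPrice = inputRatioPrice * completionRatio;
  // Cached tokens are discounted to cacheRatio of the normal input price.
  const effectiveInputTokens = (inputTokens - cacheTokens) + cacheTokens * cacheRatio;
  return (effectiveInputTokens / 1e6) * inputRatioPrice * groupRatio +
         (completionTokens / 1e6) * completionRatioPrice * groupRatio;
}
```

Worked example: with modelRatio 1, completionRatio 2, groupRatio 1, 1M input tokens of which 500K are cached at cacheRatio 0.5, plus 1M completion tokens, the effective input is 750K tokens, giving 0.75 × $2 + 1 × $4 = $5.50.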
@@ -349,6 +375,8 @@ export function renderModelPriceSimple(
modelRatio,
modelPrice = -1,
groupRatio,
cacheTokens = 0,
cacheRatio = 1.0,
) {
if (modelPrice !== -1) {
return i18next.t('价格:${{price}} * 分组:{{ratio}}', {
@@ -356,10 +384,18 @@ export function renderModelPriceSimple(
ratio: groupRatio
});
} else {
return i18next.t('模型: {{ratio}} * 分组: {{groupRatio}}', {
ratio: modelRatio,
groupRatio: groupRatio
});
if (cacheTokens !== 0) {
return i18next.t('模型: {{ratio}} * 分组: {{groupRatio}} * 缓存: {{cacheRatio}}', {
ratio: modelRatio,
groupRatio: groupRatio,
cacheRatio: cacheRatio
});
} else {
return i18next.t('模型: {{ratio}} * 分组: {{groupRatio}}', {
ratio: modelRatio,
groupRatio: groupRatio
});
}
}
}
@@ -374,10 +410,16 @@ export function renderAudioModelPrice(
audioRatio,
audioCompletionRatio,
groupRatio,
cacheTokens = 0,
cacheRatio = 1.0,
) {
// 1 ratio = $0.002 / 1K tokens
if (modelPrice !== -1) {
return '模型价格:$' + modelPrice + ' * 分组倍率:' + groupRatio + ' = $' + modelPrice * groupRatio;
return i18next.t('模型价格:${{price}} * 分组倍率:{{ratio}} = ${{total}}', {
price: modelPrice,
ratio: groupRatio,
total: modelPrice * groupRatio
});
} else {
if (completionRatio === undefined) {
completionRatio = 0;
@@ -388,58 +430,82 @@ export function renderAudioModelPrice(
// The *2 here is because 1 ratio = $0.002; do not remove
let inputRatioPrice = modelRatio * 2.0;
let completionRatioPrice = modelRatio * 2.0 * completionRatio;
let price =
(inputTokens / 1000000) * inputRatioPrice * groupRatio +
(completionTokens / 1000000) * completionRatioPrice * groupRatio +
let cacheRatioPrice = modelRatio * 2.0 * cacheRatio;
// Calculate effective input tokens (non-cached + cached with ratio applied)
const effectiveInputTokens = (inputTokens - cacheTokens) + (cacheTokens * cacheRatio);
let textPrice =
(effectiveInputTokens / 1000000) * inputRatioPrice * groupRatio +
(completionTokens / 1000000) * completionRatioPrice * groupRatio
let audioPrice =
(audioInputTokens / 1000000) * inputRatioPrice * audioRatio * groupRatio +
(audioCompletionTokens / 1000000) * inputRatioPrice * audioRatio * audioCompletionRatio * groupRatio;
let price = textPrice + audioPrice;
return (
<>
<article>
<p>{i18next.t('提示:${{price}} * {{ratio}} = ${{total}} / 1M tokens', {
<p>{i18next.t('提示价格${{price}} = ${{total}} / 1M tokens', {
price: inputRatioPrice,
ratio: groupRatio,
total: inputRatioPrice * groupRatio
total: inputRatioPrice
})}</p>
<p>{i18next.t('补全:${{price}} * {{ratio}} = ${{total}} / 1M tokens', {
price: completionRatioPrice,
ratio: groupRatio,
total: completionRatioPrice * groupRatio
})}</p>
<p>{i18next.t('音频提示:${{price}} * {{ratio}} * {{audioRatio}} = ${{total}} / 1M tokens', {
<p>{i18next.t('补全价格${{price}} * {{completionRatio}} = ${{total}} / 1M tokens (补全倍率: {{completionRatio}})', {
price: inputRatioPrice,
ratio: groupRatio,
audioRatio,
total: inputRatioPrice * audioRatio * groupRatio
total: completionRatioPrice,
completionRatio: completionRatio
})}</p>
<p>{i18next.t('音频补全:${{price}} * {{ratio}} * {{audioRatio}} * {{audioCompRatio}} = ${{total}} / 1M tokens', {
{cacheTokens > 0 && (
<p>{i18next.t('缓存价格:${{price}} * {{cacheRatio}} = ${{total}} / 1M tokens (缓存倍率: {{cacheRatio}})', {
price: inputRatioPrice,
total: inputRatioPrice * cacheRatio,
cacheRatio: cacheRatio
})}</p>
)}
<p>{i18next.t('音频提示价格:${{price}} * {{audioRatio}} = ${{total}} / 1M tokens (音频倍率: {{audioRatio}})', {
price: inputRatioPrice,
ratio: groupRatio,
audioRatio,
audioCompRatio: audioCompletionRatio,
total: inputRatioPrice * audioRatio * audioCompletionRatio * groupRatio
total: inputRatioPrice * audioRatio,
audioRatio: audioRatio
})}</p>
<p>{i18next.t('音频补全价格:${{price}} * {{audioRatio}} * {{audioCompRatio}} = ${{total}} / 1M tokens (音频补全倍率: {{audioCompRatio}})', {
price: inputRatioPrice,
total: inputRatioPrice * audioRatio * audioCompletionRatio,
audioRatio: audioRatio,
audioCompRatio: audioCompletionRatio
})}</p>
<p>
{i18next.t('文字提示 {{input}} tokens / 1M tokens * ${{price}} + 文字补全 {{completion}} tokens / 1M tokens * ${{compPrice}} +', {
input: inputTokens,
price: inputRatioPrice,
completion: completionTokens,
compPrice: completionRatioPrice
})}
{cacheTokens > 0 ?
i18next.t('文字提示 {{nonCacheInput}} tokens / 1M tokens * ${{price}} + 缓存 {{cacheInput}} tokens / 1M tokens * ${{cachePrice}} + 文字补全 {{completion}} tokens / 1M tokens * ${{compPrice}} = ${{total}}', {
nonCacheInput: inputTokens - cacheTokens,
cacheInput: cacheTokens,
cachePrice: inputRatioPrice * cacheRatio,
price: inputRatioPrice,
completion: completionTokens,
compPrice: completionRatioPrice,
total: textPrice.toFixed(6)
}) :
i18next.t('文字提示 {{input}} tokens / 1M tokens * ${{price}} + 文字补全 {{completion}} tokens / 1M tokens * ${{compPrice}} = ${{total}}', {
input: inputTokens,
price: inputRatioPrice,
completion: completionTokens,
compPrice: completionRatioPrice,
total: textPrice.toFixed(6)
})
}
</p>
<p>
{i18next.t('音频提示 {{input}} tokens / 1M tokens * ${{price}} * {{audioRatio}} + 音频补全 {{completion}} tokens / 1M tokens * ${{price}} * {{audioRatio}} * {{audioCompRatio}}', {
{i18next.t('音频提示 {{input}} tokens / 1M tokens * ${{audioInputPrice}} + 音频补全 {{completion}} tokens / 1M tokens * ${{audioCompPrice}} = ${{total}}', {
input: audioInputTokens,
completion: audioCompletionTokens,
price: inputRatioPrice,
audioRatio,
audioCompRatio: audioCompletionRatio
audioInputPrice: audioRatio * inputRatioPrice,
audioCompPrice: audioRatio * audioCompletionRatio * inputRatioPrice,
total: audioPrice.toFixed(6)
})}
</p>
<p>
{i18next.t('(文字 + 音频)* 分组倍率 {{ratio}} = ${{total}}', {
ratio: groupRatio,
total: price.toFixed(6)
{i18next.t('总价:文字价格 {{textPrice}} + 音频价格 {{audioPrice}} = ${{total}}', {
total: price.toFixed(6),
textPrice: textPrice.toFixed(6),
audioPrice: audioPrice.toFixed(6)
})}
</p>
<p>{i18next.t('仅供参考,以实际扣费为准')}</p>
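The audio branch above splits the total into `textPrice + audioPrice`, pricing audio tokens off the text input price scaled by `audioRatio` (and additionally by `audioCompletionRatio` for completions). The audio half, isolated as a hypothetical helper matching the diff's formula:

```javascript
// Audio tokens are priced off the text input price, scaled by audioRatio,
// with completions further scaled by audioCompletionRatio.
function computeAudioPrice({ audioInputTokens, audioCompletionTokens,
                             modelRatio, audioRatio, audioCompletionRatio, groupRatio }) {
  const inputRatioPrice = modelRatio * 2.0; // 1 ratio = $0.002 / 1K tokens
  return (audioInputTokens / 1e6) * inputRatioPrice * audioRatio * groupRatio +
         (audioCompletionTokens / 1e6) * inputRatioPrice * audioRatio * audioCompletionRatio * groupRatio;
}
```

For example, with modelRatio 1, audioRatio 10, audioCompletionRatio 2, groupRatio 1, and 1M audio tokens each way: $2 × 10 + $2 × 10 × 2 = $60.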

View File

@@ -621,6 +621,7 @@
"窗口等待": "window wait",
"失败": "Failed",
"绘图": "Drawing",
"绘图日志": "Drawing log",
"放大": "Upscalers",
"微妙放大": "Upscale (Subtle)",
"创造放大": "Upscale (Creative)",
@@ -1120,7 +1121,7 @@
"知识库 ID": "Knowledge Base ID",
"请输入知识库 ID例如123456": "Please enter knowledge base ID, e.g.: 123456",
"可选值": "Optional value",
"异步任务": "Async task",
"任务日志": "Task log",
"你好": "Hello",
"你好,请问有什么可以帮助您的吗?": "Hello, how may I help you?",
"用户分组": "Your default group",
@@ -1280,5 +1281,66 @@
"频率限制的周期(分钟)": "Rate limit period (minutes)",
"只包括请求成功的次数": "Only include successful request times",
"保存模型速率限制": "Save model rate limit settings",
"速率限制设置": "Rate limit settings"
"速率限制设置": "Rate limit settings",
"获取启用模型失败:": "Failed to get enabled models:",
"获取启用模型失败": "Failed to get enabled models",
"JSON解析错误:": "JSON parsing error:",
"保存失败:": "Save failed:",
"输入模型倍率": "Enter model ratio",
"输入补全倍率": "Enter completion ratio",
"请输入数字": "Please enter a number",
"模型名称已存在": "Model name already exists",
"添加成功": "Added successfully",
"请先选择需要批量设置的模型": "Please select models for batch setting first",
"请输入模型倍率和补全倍率": "Please enter model ratio and completion ratio",
"请输入有效的数字": "Please enter a valid number",
"请输入填充值": "Please enter a value",
"批量设置成功": "Batch setting successful",
"已为 {{count}} 个模型设置{{type}}": "Set {{type}} for {{count}} models",
"固定价格": "Fixed Price",
"模型倍率和补全倍率": "Model Ratio and Completion Ratio",
"批量设置": "Batch Setting",
"搜索模型名称": "Search model name",
"此页面仅显示未设置价格或倍率的模型,设置后将自动从列表中移除": "This page only shows models without price or ratio settings. After setting, they will be automatically removed from the list",
"没有未设置的模型": "No unconfigured models",
"定价模式": "Pricing Mode",
"固定价格(每次)": "Fixed Price (per use)",
"输入每次价格": "Enter per-use price",
"批量设置模型参数": "Batch Set Model Parameters",
"设置类型": "Setting Type",
"模型倍率值": "Model Ratio Value",
"补全倍率值": "Completion Ratio Value",
"请输入模型倍率": "Enter model ratio",
"请输入补全倍率": "Enter completion ratio",
"请输入数值": "Enter a value",
"将为选中的 ": "Will set for selected ",
" 个模型设置相同的值": " models with the same value",
"当前设置类型: ": "Current setting type: ",
"固定价格值": "Fixed Price Value",
"未设置倍率模型": "Models without ratio settings",
"模型倍率和补全倍率同时设置": "Both model ratio and completion ratio are set",
"自用模式": "Self-use mode",
"开启后不限制:必须设置模型倍率": "When enabled, no restrictions apply: model ratios must be set",
"演示站点模式": "Demo site mode",
"当前版本": "Current version",
"Gemini设置": "Gemini settings",
"Gemini安全设置": "Gemini safety settings",
"default为默认设置可单独设置每个分类的安全等级": "\"default\" is the default setting; the safety level of each category can be set separately",
"Gemini版本设置": "Gemini version settings",
"default为默认设置可单独设置每个模型的版本": "\"default\" is the default setting; the version of each model can be set separately",
"Claude设置": "Claude settings",
"Claude请求头覆盖": "Claude request header override",
"示例": "Example",
"缺省 MaxTokens": "Default MaxTokens",
"启用Claude思考适配-thinking后缀": "Enable Claude thinking adaptation (-thinking suffix)",
"Claude思考适配 BudgetTokens = MaxTokens * BudgetTokens 百分比": "Claude thinking adaptation BudgetTokens = MaxTokens * BudgetTokens percentage",
"思考适配 BudgetTokens 百分比": "Thinking adaptation BudgetTokens percentage",
"0.1-1之间的小数": "Decimal between 0.1 and 1",
"模型相关设置": "Model related settings",
"收起侧边栏": "Collapse sidebar",
"展开侧边栏": "Expand sidebar",
"提示缓存倍率": "Prompt cache ratio",
"缓存:${{price}} * {{ratio}} = ${{total}} / 1M tokens (缓存倍率: {{cacheRatio}})": "Cache: ${{price}} * {{ratio}} = ${{total}} / 1M tokens (cache ratio: {{cacheRatio}})",
"提示 {{nonCacheInput}} tokens + 缓存 {{cacheInput}} tokens * {{cacheRatio}} / 1M tokens * ${{price}} + 补全 {{completion}} tokens / 1M tokens * ${{compPrice}} * 分组 {{ratio}} = ${{total}}": "Prompt {{nonCacheInput}} tokens + cache {{cacheInput}} tokens * {{cacheRatio}} / 1M tokens * ${{price}} + completion {{completion}} tokens / 1M tokens * ${{compPrice}} * group {{ratio}} = ${{total}}",
"缓存 Tokens": "Cache Tokens"
}


@@ -1,7 +1,7 @@
body {
margin: 0;
padding-top: 55px;
overflow-y: scroll;
padding-top: 0;
overflow: hidden;
font-family: Lato, 'Helvetica Neue', Arial, Helvetica, 'Microsoft YaHei',
sans-serif;
-webkit-font-smoothing: antialiased;
@@ -15,6 +15,7 @@ body {
#root {
height: 100vh;
flex-direction: column;
overflow: hidden;
}
#root > section > header > section > div > div > div > div.semi-navigation-header-list-outer > div.semi-navigation-list-wrapper > ul > div > a > li > span{
@@ -29,6 +30,15 @@ body {
/*.semi-navigation-sub-wrap .semi-navigation-sub-title, .semi-navigation-item {*/
/* padding: 0 0;*/
/*}*/
.topnav {
padding: 0 8px;
}
.topnav .semi-navigation-item {
margin: 0 1px;
padding: 0 4px;
}
.topnav .semi-navigation-list-wrapper {
max-width: calc(55vw - 20px);
overflow-x: auto;
@@ -72,6 +82,16 @@ body {
.semi-navigation-horizontal .semi-navigation-header {
margin-right: 0;
}
/* Ensure content is scrollable on mobile */
.semi-layout-content {
-webkit-overflow-scrolling: touch !important;
}
/* Hidden on mobile devices */
.hide-on-mobile {
display: none !important;
}
}
.semi-table-tbody > .semi-table-row > .semi-table-row-cell {
@@ -120,15 +140,47 @@ code {
margin-bottom: 0;
}
.semi-navigation-vertical {
/*flex: 0 0 auto;*/
/*display: flex;*/
/*flex-direction: column;*/
/*width: 100%;*/
height: 100%;
/* Custom sidebar item hover effect */
.semi-navigation-item:hover {
transform: translateX(2px);
box-shadow: 0 2px 8px rgba(var(--semi-color-primary-rgb), 0.2);
}
/* Custom sidebar item selected effect */
.semi-navigation-item-selected {
position: relative;
overflow: hidden;
}
.semi-navigation-item-selected::before {
content: '';
position: absolute;
left: 0;
top: 0;
height: 100%;
width: 4px;
background-color: var(--semi-color-primary);
animation: slideIn 0.3s ease;
}
@keyframes slideIn {
from {
transform: translateY(-100%);
}
to {
transform: translateY(0);
}
}
/*.semi-navigation-vertical {*/
/* !*flex: 0 0 auto;*!*/
/* !*display: flex;*!*/
/* !*flex-direction: column;*!*/
/* !*width: 100%;*!*/
/* height: 100%;*/
/* overflow: hidden;*/
/*}*/
.main-content {
padding: 4px;
height: 100%;
@@ -142,8 +194,67 @@ code {
font-size: 1.1em;
}
@media only screen and (max-width: 600px) {
.hide-on-mobile {
display: none !important;
}
/* Top navigation bar styles */
.topnav {
padding: 0 16px;
}
.topnav .semi-navigation-item {
border-radius: 4px;
margin: 0 2px;
transition: all 0.3s ease;
}
.topnav .semi-navigation-item:hover {
background-color: var(--semi-color-primary-light-default);
transform: translateY(-2px);
box-shadow: 0 2px 8px rgba(var(--semi-color-primary-rgb), 0.2);
}
.topnav .semi-navigation-item-selected {
background-color: var(--semi-color-primary-light-default);
color: var(--semi-color-primary);
font-weight: 600;
}
/* Header bar text styles */
.header-bar-text {
color: var(--semi-color-text-0);
font-weight: 500;
transition: all 0.3s ease;
}
.header-bar-text:hover {
color: var(--semi-color-primary);
}
/* Custom scrollbar styles */
.semi-layout-content::-webkit-scrollbar,
.semi-sider::-webkit-scrollbar {
width: 6px;
height: 6px;
}
.semi-layout-content::-webkit-scrollbar-thumb,
.semi-sider::-webkit-scrollbar-thumb {
background: var(--semi-color-tertiary-light-default);
border-radius: 3px;
}
.semi-layout-content::-webkit-scrollbar-thumb:hover,
.semi-sider::-webkit-scrollbar-thumb:hover {
background: var(--semi-color-tertiary);
}
.semi-layout-content::-webkit-scrollbar-track,
.semi-sider::-webkit-scrollbar-track {
background: transparent;
}
/* Custom sidebar shadow */
/*.custom-sidebar-nav {*/
/* box-shadow: 0 1px 6px rgba(0, 0, 0, 0.08) !important;*/
/* -webkit-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.08) !important;*/
/* -moz-box-shadow: 0 1px 6px rgba(0, 0, 0, 0.08) !important;*/
/* min-height: 100%;*/
/*}*/


@@ -10,10 +10,8 @@ const ChatPage = () => {
const comLink = (key) => {
// console.log('chatLink:', chatLink);
if (!serverAddress || !key) return '';
let link = localStorage.getItem('chat_link');
if (link) {
link = `${link}/#/?settings={"key":"sk-${key}","url":"${encodeURIComponent(serverAddress)}"}`;
} else if (id) {
let link = "";
if (id) {
let chats = localStorage.getItem('chats');
if (chats) {
chats = JSON.parse(chats);


@@ -11,17 +11,26 @@ import { useTranslation } from 'react-i18next';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
const CLAUDE_HEADER = {
'anthropic-beta': ['output-128k-2025-02-19', 'token-efficient-tools-2025-02-19'],
'claude-3-7-sonnet-20250219-thinking': {
'anthropic-beta': ['output-128k-2025-02-19', 'token-efficient-tools-2025-02-19'],
}
};
const CLAUDE_DEFAULT_MAX_TOKENS = {
'default': 8192,
"claude-3-haiku-20240307": 4096,
"claude-3-opus-20240229": 4096,
'claude-3-7-sonnet-20250219-thinking': 8192,
}
export default function SettingClaudeModel(props) {
const { t } = useTranslation();
const [loading, setLoading] = useState(false);
const [inputs, setInputs] = useState({
'claude.headers_settings': '',
'claude.model_headers_settings': '',
'claude.thinking_adapter_enabled': true,
'claude.thinking_adapter_max_tokens': 8192,
'claude.default_max_tokens': '',
'claude.thinking_adapter_budget_tokens_percentage': 0.8,
});
const refForm = useRef();
@@ -31,12 +40,8 @@ export default function SettingClaudeModel(props) {
const updateArray = compareObjects(inputs, inputsRow);
if (!updateArray.length) return showWarning(t('你似乎并没有修改什么'));
const requestQueue = updateArray.map((item) => {
let value = '';
if (typeof inputs[item.key] === 'boolean') {
value = String(inputs[item.key]);
} else {
value = inputs[item.key];
}
let value = String(inputs[item.key]);
return API.put('/api/option/', {
key: item.key,
value,
@@ -83,12 +88,12 @@ export default function SettingClaudeModel(props) {
>
<Form.Section text={t('Claude设置')}>
<Row>
<Col span={16}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.TextArea
label={t('Claude请求头覆盖')}
field={'claude.headers_settings'}
field={'claude.model_headers_settings'}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(CLAUDE_HEADER, null, 2)}
extraText={t('示例') + JSON.stringify(CLAUDE_HEADER, null, 2)}
extraText={t('示例') + '\n' + JSON.stringify(CLAUDE_HEADER, null, 2)}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
@@ -98,7 +103,27 @@ export default function SettingClaudeModel(props) {
message: t('不是合法的 JSON 字符串')
}
]}
onChange={(value) => setInputs({ ...inputs, 'claude.headers_settings': value })}
onChange={(value) => setInputs({ ...inputs, 'claude.model_headers_settings': value })}
/>
</Col>
</Row>
<Row>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.TextArea
label={t('缺省 MaxTokens')}
field={'claude.default_max_tokens'}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(CLAUDE_DEFAULT_MAX_TOKENS, null, 2)}
extraText={t('示例') + '\n' + JSON.stringify(CLAUDE_DEFAULT_MAX_TOKENS, null, 2)}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => verifyJSON(value),
message: t('不是合法的 JSON 字符串')
}
]}
onChange={(value) => setInputs({ ...inputs, 'claude.default_max_tokens': value })}
/>
</Col>
</Row>
@@ -120,20 +145,14 @@ export default function SettingClaudeModel(props) {
</Col>
</Row>
<Row>
<Col span={8}>
<Form.InputNumber
label={t('思考适配 MaxTokens')}
field={'claude.thinking_adapter_max_tokens'}
initValue={''}
onChange={(value) => setInputs({ ...inputs, 'claude.thinking_adapter_max_tokens': value })}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.InputNumber
label={t('思考适配 BudgetTokens 百分比')}
field={'claude.thinking_adapter_budget_tokens_percentage'}
initValue={''}
suffix={t('%')}
extraText={t('0.1-1之间的小数')}
min={0.1}
max={1}
onChange={(value) => setInputs({ ...inputs, 'claude.thinking_adapter_budget_tokens_percentage': value })}
/>
</Col>
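The thinking-adapter setting above is documented as BudgetTokens = MaxTokens * BudgetTokens percentage, with the percentage constrained to 0.1-1. A minimal sketch of that arithmetic (the clamp mirrors the input's min/max; flooring to a whole token count is an assumption):

```javascript
// Hypothetical helper mirroring the setting's documented formula:
// BudgetTokens = MaxTokens * percentage, percentage clamped to [0.1, 1].
function thinkingBudgetTokens(maxTokens, percentage) {
  const p = Math.min(Math.max(percentage, 0.1), 1); // same bounds as the input
  return Math.floor(maxTokens * p); // whole token count (assumed)
}
```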


@@ -86,7 +86,7 @@ export default function SettingGeminiModel(props) {
>
<Form.Section text={t('Gemini设置')}>
<Row>
<Col span={16}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.TextArea
label={t('Gemini安全设置')}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(GEMINI_SETTING_EXAMPLE, null, 2)}
@@ -106,7 +106,7 @@ export default function SettingGeminiModel(props) {
</Col>
</Row>
<Row>
<Col span={16}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.TextArea
label={t('Gemini版本设置')}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(GEMINI_VERSION_EXAMPLE, null, 2)}


@@ -86,7 +86,7 @@ export default function GroupRatioSettings(props) {
>
<Form.Section text={t('分组设置')}>
<Row gutter={16}>
<Col span={16}>
<Col xs={24} sm={16}>
<Form.TextArea
label={t('分组倍率')}
placeholder={t('为一个 JSON 文本,键为分组名称,值为倍率')}
@@ -105,7 +105,7 @@ export default function GroupRatioSettings(props) {
</Col>
</Row>
<Row gutter={16}>
<Col span={16}>
<Col xs={24} sm={16}>
<Form.TextArea
label={t('用户可选分组')}
placeholder={t('为一个 JSON 文本,键为分组名称,值为分组描述')}


@@ -15,6 +15,7 @@ export default function ModelRatioSettings(props) {
const [inputs, setInputs] = useState({
ModelPrice: '',
ModelRatio: '',
CacheRatio: '',
CompletionRatio: '',
});
const refForm = useRef();
@@ -101,7 +102,7 @@ export default function ModelRatioSettings(props) {
>
<Form.Section>
<Row gutter={16}>
<Col span={16}>
<Col xs={24} sm={16}>
<Form.TextArea
label={t('模型固定价格')}
extraText={t('一次调用消耗多少刀,优先级大于模型倍率')}
@@ -121,7 +122,7 @@ export default function ModelRatioSettings(props) {
</Col>
</Row>
<Row gutter={16}>
<Col span={16}>
<Col xs={24} sm={16}>
<Form.TextArea
label={t('模型倍率')}
placeholder={t('为一个 JSON 文本,键为模型名称,值为倍率')}
@@ -140,7 +141,26 @@ export default function ModelRatioSettings(props) {
</Col>
</Row>
<Row gutter={16}>
<Col span={16}>
<Col xs={24} sm={16}>
<Form.TextArea
label={t('提示缓存倍率')}
placeholder={t('为一个 JSON 文本,键为模型名称,值为倍率')}
field={'CacheRatio'}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => verifyJSON(value),
message: t('不是合法的 JSON 字符串')
}
]}
onChange={(value) => setInputs({ ...inputs, CacheRatio: value })}
/>
</Col>
</Row>
<Row gutter={16}>
<Col xs={24} sm={16}>
<Form.TextArea
label={t('模型补全倍率(仅对自定义模型有效)')}
extraText={t('仅对自定义模型有效')}


@@ -0,0 +1,549 @@
import React, { useEffect, useState } from 'react';
import { Table, Button, Input, Modal, Form, Space, Typography, Radio, Notification } from '@douyinfe/semi-ui';
import { IconDelete, IconPlus, IconSearch, IconSave, IconBolt } from '@douyinfe/semi-icons';
import { showError, showSuccess } from '../../../helpers';
import { API } from '../../../helpers';
import { useTranslation } from 'react-i18next';
export default function ModelRatioNotSetEditor(props) {
const { t } = useTranslation();
const [models, setModels] = useState([]);
const [visible, setVisible] = useState(false);
const [batchVisible, setBatchVisible] = useState(false);
const [currentModel, setCurrentModel] = useState(null);
const [searchText, setSearchText] = useState('');
const [currentPage, setCurrentPage] = useState(1);
const [pageSize, setPageSize] = useState(10);
const [loading, setLoading] = useState(false);
const [enabledModels, setEnabledModels] = useState([]);
const [selectedRowKeys, setSelectedRowKeys] = useState([]);
const [batchFillType, setBatchFillType] = useState('ratio');
const [batchFillValue, setBatchFillValue] = useState('');
const [batchRatioValue, setBatchRatioValue] = useState('');
const [batchCompletionRatioValue, setBatchCompletionRatioValue] = useState('');
const { Text } = Typography;
// Available page-size options
const pageSizeOptions = [10, 20, 50, 100];
const getAllEnabledModels = async () => {
try {
const res = await API.get('/api/channel/models_enabled');
const { success, message, data } = res.data;
if (success) {
setEnabledModels(data);
} else {
showError(message);
}
} catch (error) {
console.error(t('获取启用模型失败:'), error);
showError(t('获取启用模型失败'));
}
}
useEffect(() => {
// Fetch all enabled models
getAllEnabledModels();
}, []);
useEffect(() => {
try {
const modelPrice = JSON.parse(props.options.ModelPrice || '{}');
const modelRatio = JSON.parse(props.options.ModelRatio || '{}');
const completionRatio = JSON.parse(props.options.CompletionRatio || '{}');
// Find all models with neither a price nor a ratio set
const unsetModels = enabledModels.filter(modelName => {
const hasPrice = modelPrice[modelName] !== undefined;
const hasRatio = modelRatio[modelName] !== undefined;
// Show the model only if it has neither a price nor a ratio
return !hasPrice && !hasRatio;
});
// Build the table row data
const modelData = unsetModels.map(name => ({
name,
price: modelPrice[name] || '',
ratio: modelRatio[name] || '',
completionRatio: completionRatio[name] || ''
}));
setModels(modelData);
// Clear the selection
setSelectedRowKeys([]);
} catch (error) {
console.error(t('JSON解析错误:'), error);
}
}, [props.options, enabledModels]);
// Pagination helper, declared first
const getPagedData = (data, currentPage, pageSize) => {
const start = (currentPage - 1) * pageSize;
const end = start + pageSize;
return data.slice(start, end);
};
// Handle page-size changes
const handlePageSizeChange = (size) => {
setPageSize(size);
// Recompute the current page so rows are not lost
const totalPages = Math.ceil(filteredModels.length / size);
if (currentPage > totalPages) {
setCurrentPage(totalPages || 1);
}
};
// Filter, then paginate, before the return statement
const filteredModels = models.filter(model =>
searchText ? model.name.toLowerCase().includes(searchText.toLowerCase()) : true
);
// Compute the current page from the filtered data
const pagedData = getPagedData(filteredModels, currentPage, pageSize);
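Taken on its own, the filter-then-paginate flow above behaves as follows; this is a self-contained copy of the slice arithmetic in getPagedData, shown for illustration:

```javascript
// Standalone copy of the pagination slice used above (pages are 1-indexed).
const getPagedData = (data, currentPage, pageSize) => {
  const start = (currentPage - 1) * pageSize;
  return data.slice(start, start + pageSize);
};
```

For example, page 2 of a 5-item list with pageSize 2 holds the 3rd and 4th items, and a page past the end yields an empty slice rather than an error.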
const SubmitData = async () => {
setLoading(true);
const output = {
ModelPrice: JSON.parse(props.options.ModelPrice || '{}'),
ModelRatio: JSON.parse(props.options.ModelRatio || '{}'),
CompletionRatio: JSON.parse(props.options.CompletionRatio || '{}')
};
try {
// Data conversion - only process modified models
models.forEach(model => {
// Only update when the user has set a value
if (model.price !== '') {
// A non-empty price wins: parse it as a float and ignore the ratios
output.ModelPrice[model.name] = parseFloat(model.price);
} else {
if (model.ratio !== '') output.ModelRatio[model.name] = parseFloat(model.ratio);
if (model.completionRatio !== '') output.CompletionRatio[model.name] = parseFloat(model.completionRatio);
}
});
// Prepare the API request array
const finalOutput = {
ModelPrice: JSON.stringify(output.ModelPrice, null, 2),
ModelRatio: JSON.stringify(output.ModelRatio, null, 2),
CompletionRatio: JSON.stringify(output.CompletionRatio, null, 2)
};
const requestQueue = Object.entries(finalOutput).map(([key, value]) => {
return API.put('/api/option/', {
key,
value
});
});
// Send the requests in parallel
const results = await Promise.all(requestQueue);
// Validate the results
if (requestQueue.length === 1) {
if (results.includes(undefined)) return;
} else if (requestQueue.length > 1) {
if (results.includes(undefined)) {
return showError(t('部分保存失败,请重试'));
}
}
// Check each response
for (const res of results) {
if (!res.data.success) {
return showError(res.data.message);
}
}
showSuccess(t('保存成功'));
props.refresh();
// Re-fetch the unconfigured models
getAllEnabledModels();
} catch (error) {
console.error(t('保存失败:'), error);
showError(t('保存失败,请重试'));
} finally {
setLoading(false);
}
};
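The conversion loop in SubmitData encodes one rule: a fixed price takes precedence, and ratios are written only when no price is set. A self-contained sketch of that merge rule; the function name and object shape are illustrative, while the component works on the option strings directly:

```javascript
// Hypothetical standalone version of SubmitData's merge rule.
function mergeModelSettings(models, existing) {
  const out = {
    ModelPrice: { ...existing.ModelPrice },
    ModelRatio: { ...existing.ModelRatio },
    CompletionRatio: { ...existing.CompletionRatio },
  };
  for (const m of models) {
    if (m.price !== '') {
      // A non-empty fixed price takes precedence; ratios are ignored.
      out.ModelPrice[m.name] = parseFloat(m.price);
    } else {
      if (m.ratio !== '') out.ModelRatio[m.name] = parseFloat(m.ratio);
      if (m.completionRatio !== '') out.CompletionRatio[m.name] = parseFloat(m.completionRatio);
    }
  }
  return out;
}
```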
const columns = [
{
title: t('模型名称'),
dataIndex: 'name',
key: 'name',
},
{
title: t('模型固定价格'),
dataIndex: 'price',
key: 'price',
render: (text, record) => (
<Input
value={text}
placeholder={t('按量计费')}
onChange={value => updateModel(record.name, 'price', value)}
/>
)
},
{
title: t('模型倍率'),
dataIndex: 'ratio',
key: 'ratio',
render: (text, record) => (
<Input
value={text}
placeholder={record.price !== '' ? t('模型倍率') : t('输入模型倍率')}
disabled={record.price !== ''}
onChange={value => updateModel(record.name, 'ratio', value)}
/>
)
},
{
title: t('补全倍率'),
dataIndex: 'completionRatio',
key: 'completionRatio',
render: (text, record) => (
<Input
value={text}
placeholder={record.price !== '' ? t('补全倍率') : t('输入补全倍率')}
disabled={record.price !== ''}
onChange={value => updateModel(record.name, 'completionRatio', value)}
/>
)
}
];
const updateModel = (name, field, value) => {
if (value !== '' && isNaN(value)) {
showError(t('请输入数字'));
return;
}
setModels(prev =>
prev.map(model =>
model.name === name
? { ...model, [field]: value }
: model
)
);
};
const addModel = (values) => {
// Reject the add if the model name already exists
if (models.some(model => model.name === values.name)) {
showError(t('模型名称已存在'));
return;
}
setModels(prev => [{
name: values.name,
price: values.price || '',
ratio: values.ratio || '',
completionRatio: values.completionRatio || ''
}, ...prev]);
setVisible(false);
showSuccess(t('添加成功'));
};
// Batch-fill feature
const handleBatchFill = () => {
if (selectedRowKeys.length === 0) {
showError(t('请先选择需要批量设置的模型'));
return;
}
if (batchFillType === 'bothRatio') {
if (batchRatioValue === '' || batchCompletionRatioValue === '') {
showError(t('请输入模型倍率和补全倍率'));
return;
}
if (isNaN(batchRatioValue) || isNaN(batchCompletionRatioValue)) {
showError(t('请输入有效的数字'));
return;
}
} else {
if (batchFillValue === '') {
showError(t('请输入填充值'));
return;
}
if (isNaN(batchFillValue)) {
showError(t('请输入有效的数字'));
return;
}
}
// Apply the chosen fill type to every selected model
setModels(prev =>
prev.map(model => {
if (selectedRowKeys.includes(model.name)) {
if (batchFillType === 'price') {
return {
...model,
price: batchFillValue,
ratio: '',
completionRatio: ''
};
} else if (batchFillType === 'ratio') {
return {
...model,
price: '',
ratio: batchFillValue
};
} else if (batchFillType === 'completionRatio') {
return {
...model,
price: '',
completionRatio: batchFillValue
};
} else if (batchFillType === 'bothRatio') {
return {
...model,
price: '',
ratio: batchRatioValue,
completionRatio: batchCompletionRatioValue
};
}
}
return model;
})
);
setBatchVisible(false);
Notification.success({
title: t('批量设置成功'),
content: t('已为 {{count}} 个模型设置{{type}}', {
count: selectedRowKeys.length,
type: batchFillType === 'price' ? t('固定价格') :
batchFillType === 'ratio' ? t('模型倍率') :
batchFillType === 'completionRatio' ? t('补全倍率') : t('模型倍率和补全倍率')
}),
duration: 3,
});
};
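The per-model update inside handleBatchFill reduces to a small pure function: setting a fixed price clears both ratios, and setting any ratio clears the price. A sketch of that rule, extracted for illustration rather than the component's actual shape:

```javascript
// Hypothetical pure version of the batch-fill update applied to one model.
function applyBatchFill(model, type, values) {
  switch (type) {
    case 'price': // a fixed price clears both ratios
      return { ...model, price: values.fill, ratio: '', completionRatio: '' };
    case 'ratio':
      return { ...model, price: '', ratio: values.fill };
    case 'completionRatio':
      return { ...model, price: '', completionRatio: values.fill };
    case 'bothRatio':
      return { ...model, price: '', ratio: values.ratio, completionRatio: values.completionRatio };
    default:
      return model;
  }
}
```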
const handleBatchTypeChange = (value) => {
console.log(t('Changing batch type to:'), value);
setBatchFillType(value);
// Clear stale values when the type changes
if (value !== 'bothRatio') {
setBatchFillValue('');
} else {
setBatchRatioValue('');
setBatchCompletionRatioValue('');
}
};
const rowSelection = {
selectedRowKeys,
onChange: (selectedKeys) => {
setSelectedRowKeys(selectedKeys);
},
};
return (
<>
<Space vertical align="start" style={{ width: '100%' }}>
<Space>
<Button icon={<IconPlus />} onClick={() => setVisible(true)}>
{t('添加模型')}
</Button>
<Button
icon={<IconBolt />}
type="secondary"
onClick={() => setBatchVisible(true)}
disabled={selectedRowKeys.length === 0}
>
{t('批量设置')} ({selectedRowKeys.length})
</Button>
<Button type="primary" icon={<IconSave />} onClick={SubmitData} loading={loading}>
{t('应用更改')}
</Button>
<Input
prefix={<IconSearch />}
placeholder={t('搜索模型名称')}
value={searchText}
onChange={value => {
setSearchText(value)
setCurrentPage(1);
}}
style={{ width: 200 }}
/>
</Space>
<Text>{t('此页面仅显示未设置价格或倍率的模型,设置后将自动从列表中移除')}</Text>
<Table
columns={columns}
dataSource={pagedData}
rowSelection={rowSelection}
rowKey="name"
pagination={{
currentPage: currentPage,
pageSize: pageSize,
total: filteredModels.length,
onPageChange: page => setCurrentPage(page),
onPageSizeChange: handlePageSizeChange,
pageSizeOptions: pageSizeOptions,
formatPageText: (page) =>
t('第 {{start}} - {{end}} 条,共 {{total}} 条', {
start: page.currentStart,
end: page.currentEnd,
total: filteredModels.length
}),
showTotal: true,
showSizeChanger: true
}}
empty={
<div style={{ textAlign: 'center', padding: '20px' }}>
{t('没有未设置的模型')}
</div>
}
/>
</Space>
{/* Add-model modal */}
<Modal
title={t('添加模型')}
visible={visible}
onCancel={() => setVisible(false)}
onOk={() => {
currentModel && addModel(currentModel);
}}
>
<Form>
<Form.Input
field="name"
label={t('模型名称')}
placeholder="strawberry"
required
onChange={value => setCurrentModel(prev => ({ ...prev, name: value }))}
/>
<Form.Switch
field="priceMode"
label={<>{t('定价模式')}{currentModel?.priceMode ? t("固定价格") : t("倍率模式")}</>}
onChange={checked => {
setCurrentModel(prev => ({
...prev,
price: '',
ratio: '',
completionRatio: '',
priceMode: checked
}));
}}
/>
{currentModel?.priceMode ? (
<Form.Input
field="price"
label={t('固定价格(每次)')}
placeholder={t('输入每次价格')}
onChange={value => setCurrentModel(prev => ({ ...prev, price: value }))}
/>
) : (
<>
<Form.Input
field="ratio"
label={t('模型倍率')}
placeholder={t('输入模型倍率')}
onChange={value => setCurrentModel(prev => ({ ...prev, ratio: value }))}
/>
<Form.Input
field="completionRatio"
label={t('补全倍率')}
placeholder={t('输入补全倍率')}
onChange={value => setCurrentModel(prev => ({ ...prev, completionRatio: value }))}
/>
</>
)}
</Form>
</Modal>
{/* Batch-settings modal */}
<Modal
title={t('批量设置模型参数')}
visible={batchVisible}
onCancel={() => setBatchVisible(false)}
onOk={handleBatchFill}
width={500}
>
<Form>
<Form.Section text={t('设置类型')}>
<div style={{ marginBottom: '16px' }}>
<Space>
<Radio
checked={batchFillType === 'price'}
onChange={() => handleBatchTypeChange('price')}
>
{t('固定价格')}
</Radio>
<Radio
checked={batchFillType === 'ratio'}
onChange={() => handleBatchTypeChange('ratio')}
>
{t('模型倍率')}
</Radio>
<Radio
checked={batchFillType === 'completionRatio'}
onChange={() => handleBatchTypeChange('completionRatio')}
>
{t('补全倍率')}
</Radio>
<Radio
checked={batchFillType === 'bothRatio'}
onChange={() => handleBatchTypeChange('bothRatio')}
>
{t('模型倍率和补全倍率同时设置')}
</Radio>
</Space>
</div>
</Form.Section>
{batchFillType === 'bothRatio' ? (
<>
<Form.Input
field="batchRatioValue"
label={t('模型倍率值')}
placeholder={t('请输入模型倍率')}
value={batchRatioValue}
onChange={value => setBatchRatioValue(value)}
/>
<Form.Input
field="batchCompletionRatioValue"
label={t('补全倍率值')}
placeholder={t('请输入补全倍率')}
value={batchCompletionRatioValue}
onChange={value => setBatchCompletionRatioValue(value)}
/>
</>
) : (
<Form.Input
field="batchFillValue"
label={
batchFillType === 'price'
? t('固定价格值')
: batchFillType === 'ratio'
? t('模型倍率值')
: t('补全倍率值')
}
placeholder={t('请输入数值')}
value={batchFillValue}
onChange={value => setBatchFillValue(value)}
/>
)}
<Text type="tertiary">
{t('将为选中的 ')} <Text strong>{selectedRowKeys.length}</Text> {t(' 个模型设置相同的值')}
</Text>
<div style={{ marginTop: '8px' }}>
<Text type="tertiary">
{t('当前设置类型: ')} <Text strong>{
batchFillType === 'price' ? t('固定价格') :
batchFillType === 'ratio' ? t('模型倍率') :
batchFillType === 'completionRatio' ? t('补全倍率') : t('模型倍率和补全倍率')
}</Text>
</Text>
</div>
</Form>
</Modal>
</>
);
}


@@ -76,7 +76,7 @@ export default function SettingsCreditLimit(props) {
>
<Form.Section text={t('额度设置')}>
<Row gutter={16}>
<Col span={6}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.InputNumber
label={t('新用户初始额度')}
field={'QuotaForNewUser'}
@@ -92,7 +92,7 @@ export default function SettingsCreditLimit(props) {
}
/>
</Col>
<Col span={6}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.InputNumber
label={t('请求预扣费额度')}
field={'PreConsumedQuota'}
@@ -109,7 +109,7 @@ export default function SettingsCreditLimit(props) {
}
/>
</Col>
<Col span={6}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.InputNumber
label={t('邀请新用户奖励额度')}
field={'QuotaForInviter'}
@@ -126,7 +126,9 @@ export default function SettingsCreditLimit(props) {
}
/>
</Col>
<Col span={6}>
</Row>
<Row>
<Col xs={24} sm={12} md={8} lg={8} xl={6}>
<Form.InputNumber
label={t('新用户使用邀请码奖励额度')}
field={'QuotaForInvitee'}


@@ -86,7 +86,7 @@ export default function DataDashboard(props) {
>
<Form.Section text={t('数据看板设置')}>
<Row gutter={16}>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'DataExportEnabled'}
label={t('启用数据看板(实验性)')}
@@ -103,7 +103,7 @@ export default function DataDashboard(props) {
</Col>
</Row>
<Row>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.InputNumber
label={t('数据看板更新间隔')}
step={1}
@@ -120,7 +120,7 @@ export default function DataDashboard(props) {
}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Select
label={t('数据看板默认时间粒度')}
optionList={optionsDataExportDefaultTime}


@@ -80,7 +80,7 @@ export default function SettingsDrawing(props) {
>
<Form.Section text={t('绘图设置')}>
<Row gutter={16}>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'DrawingEnabled'}
label={t('启用绘图功能')}
@@ -95,7 +95,7 @@ export default function SettingsDrawing(props) {
}}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'MjNotifyEnabled'}
label={t('允许回调(会泄露服务器 IP 地址)')}
@@ -110,7 +110,7 @@ export default function SettingsDrawing(props) {
}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'MjAccountFilterEnabled'}
label={t('允许 AccountFilter 参数')}
@@ -125,7 +125,7 @@ export default function SettingsDrawing(props) {
}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'MjForwardUrlEnabled'}
label={t('开启之后将上游地址替换为服务器地址')}
@@ -140,7 +140,7 @@ export default function SettingsDrawing(props) {
}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'MjModeClearEnabled'}
label={
@@ -160,7 +160,7 @@ export default function SettingsDrawing(props) {
}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'MjActionCheckSuccessEnabled'}
label={t('检测必须等待绘图成功才能进行放大等操作')}


@@ -1,5 +1,5 @@
import React, { useEffect, useState, useRef } from 'react';
import { Banner, Button, Col, Form, Row, Spin } from '@douyinfe/semi-ui';
import { Banner, Button, Col, Form, Row, Spin, Collapse, Modal } from '@douyinfe/semi-ui';
import {
compareObjects,
API,
@@ -12,16 +12,17 @@ import { useTranslation } from 'react-i18next';
export default function GeneralSettings(props) {
const { t } = useTranslation();
const [loading, setLoading] = useState(false);
const [showQuotaWarning, setShowQuotaWarning] = useState(false);
const [inputs, setInputs] = useState({
TopUpLink: '',
ChatLink: '',
ChatLink2: '',
'general_setting.docs_link': '',
QuotaPerUnit: '',
RetryTimes: '',
DisplayInCurrencyEnabled: false,
DisplayTokenStatEnabled: false,
DefaultCollapseSidebar: false,
DemoSiteEnabled: false,
SelfUseModeEnabled: false,
});
const refForm = useRef();
const [inputsRow, setInputsRow] = useState(inputs);
@@ -91,7 +92,7 @@ export default function GeneralSettings(props) {
>
<Form.Section text={t('通用设置')}>
<Row gutter={16}>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Input
field={'TopUpLink'}
label={t('充值链接')}
@@ -101,27 +102,17 @@ export default function GeneralSettings(props) {
showClear
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Input
field={'ChatLink'}
label={t('默认聊天页面链接')}
field={'general_setting.docs_link'}
label={t('文档地址')}
initValue={''}
placeholder={t('例如 ChatGPT Next Web 的部署地址')}
placeholder={t('例如 https://docs.newapi.pro')}
onChange={onChange}
showClear
/>
</Col>
<Col span={8}>
<Form.Input
field={'ChatLink2'}
label={t('聊天页面 2 链接')}
initValue={''}
placeholder={t('例如 ChatGPT Next Web 的部署地址')}
onChange={onChange}
showClear
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Input
field={'QuotaPerUnit'}
label={t('单位美元额度')}
@@ -129,9 +120,10 @@ export default function GeneralSettings(props) {
placeholder={t('一单位货币能兑换的额度')}
onChange={onChange}
showClear
onClick={() => setShowQuotaWarning(true)}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Input
field={'RetryTimes'}
label={t('失败重试次数')}
@@ -143,7 +135,7 @@ export default function GeneralSettings(props) {
</Col>
</Row>
<Row gutter={16}>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'DisplayInCurrencyEnabled'}
label={t('以货币形式显示额度')}
@@ -158,7 +150,7 @@ export default function GeneralSettings(props) {
}}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'DisplayTokenStatEnabled'}
label={t('额度查询接口返回令牌额度而非用户额度')}
@@ -173,7 +165,7 @@ export default function GeneralSettings(props) {
}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'DefaultCollapseSidebar'}
label={t('默认折叠侧边栏')}
@@ -190,7 +182,7 @@ export default function GeneralSettings(props) {
</Col>
</Row>
<Row>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'DemoSiteEnabled'}
label={t('演示站点模式')}
@@ -205,6 +197,22 @@ export default function GeneralSettings(props) {
}
/>
</Col>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'SelfUseModeEnabled'}
label={t('自用模式')}
extraText={t('开启后不限制:必须设置模型倍率')}
size='default'
checkedText=''
uncheckedText=''
onChange={(value) =>
setInputs({
...inputs,
SelfUseModeEnabled: value
})
}
/>
</Col>
</Row>
<Row>
<Button size='default' onClick={onSubmit}>
@@ -214,6 +222,23 @@ export default function GeneralSettings(props) {
</Form.Section>
</Form>
</Spin>
<Modal
title={t('警告')}
visible={showQuotaWarning}
onOk={() => setShowQuotaWarning(false)}
onCancel={() => setShowQuotaWarning(false)}
closeOnEsc={true}
width={500}
>
<Banner
type='warning'
description={t('此设置用于系统内部计算,默认值 500000 是为了精确到 6 位小数点设计,不推荐修改。')}
bordered
fullMode={false}
closeIcon={null}
/>
</Modal>
</>
);
}
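The recurring change in this file swaps fixed `span={8}` columns for breakpoint props (`xs`/`sm`/`md`/`lg`/`xl`), so each column goes full-width on phones, half-width on small screens, and one-third width from tablets up. A minimal sketch of how such props resolve to a span, assuming Semi-Design-style breakpoints (576/768/992/1200 px); `resolveSpan` is a hypothetical illustration, not a library API (the real grid does this with CSS classes):

```javascript
// Hypothetical helper modeling responsive Col props.
// Breakpoints assumed: xs <576, sm >=576, md >=768, lg >=992, xl >=1200.
function resolveSpan(props, viewportWidth) {
  const breakpoints = [
    ['xl', 1200],
    ['lg', 992],
    ['md', 768],
    ['sm', 576],
    ['xs', 0],
  ];
  // Pick the widest breakpoint the viewport satisfies that has a prop set.
  for (const [name, min] of breakpoints) {
    if (viewportWidth >= min && props[name] !== undefined) return props[name];
  }
  return props.span; // fall back to a fixed span, e.g. the old span={8}
}

const col = { xs: 24, sm: 12, md: 8, lg: 8, xl: 8 };
console.log(resolveSpan(col, 375));  // phone: full width (24 of 24)
console.log(resolveSpan(col, 700));  // small screen: half width (12 of 24)
console.log(resolveSpan(col, 1440)); // desktop: one third (8 of 24)
```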


@@ -100,7 +100,7 @@ export default function SettingsLog(props) {
>
<Form.Section text={t('日志设置')}>
<Row gutter={16}>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Form.Switch
field={'LogConsumeEnabled'}
label={t('启用额度消费日志记录')}
@@ -115,7 +115,7 @@ export default function SettingsLog(props) {
}}
/>
</Col>
<Col span={8}>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
<Spin spinning={loadingCleanHistoryLog}>
<Form.DatePicker
label={t('日志记录时间')}


@@ -1,273 +0,0 @@
import React, { useEffect, useState, useRef } from 'react';
import { Button, Col, Form, Popconfirm, Row, Space, Spin } from '@douyinfe/semi-ui';
import {
compareObjects,
API,
showError,
showSuccess,
showWarning,
verifyJSON,
verifyJSONPromise
} from '../../../helpers';
export default function SettingsMagnification(props) {
const [loading, setLoading] = useState(false);
const [inputs, setInputs] = useState({
ModelPrice: '',
ModelRatio: '',
CompletionRatio: '',
GroupRatio: '',
UserUsableGroups: ''
});
const refForm = useRef();
const [inputsRow, setInputsRow] = useState(inputs);
async function onSubmit() {
try {
console.log('Starting validation...');
await refForm.current.validate().then(() => {
console.log('Validation passed');
const updateArray = compareObjects(inputs, inputsRow);
if (!updateArray.length) return showWarning('你似乎并没有修改什么');
const requestQueue = updateArray.map((item) => {
let value = '';
if (typeof inputs[item.key] === 'boolean') {
value = String(inputs[item.key]);
} else {
value = inputs[item.key];
}
return API.put('/api/option/', {
key: item.key,
value
});
});
setLoading(true);
Promise.all(requestQueue)
.then((res) => {
if (requestQueue.length === 1) {
if (res.includes(undefined)) return;
} else if (requestQueue.length > 1) {
if (res.includes(undefined))
return showError('部分保存失败,请重试');
}
for (let i = 0; i < res.length; i++) {
if (!res[i].data.success) {
return showError(res[i].data.message)
}
}
showSuccess('保存成功');
props.refresh();
})
.catch(error => {
console.error('Unexpected error in Promise.all:', error);
showError('保存失败,请重试');
})
.finally(() => {
setLoading(false);
});
}).catch((error) => {
console.error('Validation failed:', error);
showError('请检查输入');
});
} catch (error) {
showError('请检查输入');
console.error(error);
}
}
async function resetModelRatio() {
try {
let res = await API.post(`/api/option/rest_model_ratio`);
// return {success, message}
if (res.data.success) {
showSuccess(res.data.message);
props.refresh();
} else {
showError(res.data.message);
}
} catch (error) {
showError(error);
}
}
useEffect(() => {
const currentInputs = {};
for (let key in props.options) {
if (Object.keys(inputs).includes(key)) {
currentInputs[key] = props.options[key];
}
}
setInputs(currentInputs);
setInputsRow(structuredClone(currentInputs));
refForm.current.setValues(currentInputs);
}, [props.options]);
return (
<Spin spinning={loading}>
<Form
values={inputs}
getFormApi={(formAPI) => (refForm.current = formAPI)}
style={{ marginBottom: 15 }}
>
<Form.Section text={'倍率设置'}>
<Row gutter={16}>
<Col span={16}>
<Form.TextArea
label={'模型固定价格'}
extraText={'一次调用消耗多少刀,优先级大于模型倍率'}
placeholder={
'为一个 JSON 文本,键为模型名称,值为一次调用消耗多少刀,比如 "gpt-4-gizmo-*": 0.1,一次消耗0.1刀'
}
field={'ModelPrice'}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => {
return verifyJSON(value);
},
message: '不是合法的 JSON 字符串'
}
]}
onChange={(value) =>
setInputs({
...inputs,
ModelPrice: value
})
}
/>
</Col>
</Row>
<Row gutter={16}>
<Col span={16}>
<Form.TextArea
label={'模型倍率'}
extraText={''}
placeholder={'为一个 JSON 文本,键为模型名称,值为倍率'}
field={'ModelRatio'}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => {
return verifyJSON(value);
},
message: '不是合法的 JSON 字符串'
}
]}
onChange={(value) =>
setInputs({
...inputs,
ModelRatio: value
})
}
/>
</Col>
</Row>
<Row gutter={16}>
<Col span={16}>
<Form.TextArea
label={'模型补全倍率(仅对自定义模型有效)'}
extraText={'仅对自定义模型有效'}
placeholder={'为一个 JSON 文本,键为模型名称,值为倍率'}
field={'CompletionRatio'}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => {
return verifyJSON(value);
},
message: '不是合法的 JSON 字符串'
}
]}
onChange={(value) =>
setInputs({
...inputs,
CompletionRatio: value
})
}
/>
</Col>
</Row>
<Row gutter={16}>
<Col span={16}>
<Form.TextArea
label={'分组倍率'}
extraText={''}
placeholder={'为一个 JSON 文本,键为分组名称,值为倍率'}
field={'GroupRatio'}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => {
return verifyJSON(value);
},
message: '不是合法的 JSON 字符串'
}
]}
onChange={(value) =>
setInputs({
...inputs,
GroupRatio: value
})
}
/>
</Col>
</Row>
<Row gutter={16}>
<Col span={16}>
<Form.TextArea
label={'用户可选分组'}
extraText={''}
placeholder={'为一个 JSON 文本,键为分组名称,值为倍率'}
field={'UserUsableGroups'}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => {
return verifyJSON(value);
},
message: '不是合法的 JSON 字符串'
}
]}
onChange={(value) =>
setInputs({
...inputs,
UserUsableGroups: value
})
}
/>
</Col>
</Row>
</Form.Section>
</Form>
<Space>
<Button onClick={onSubmit}>
保存倍率设置
</Button>
<Popconfirm
title='确定重置模型倍率吗?'
content='此修改将不可逆'
okType={'danger'}
position={'top'}
onConfirm={() => {
resetModelRatio();
}}
>
<Button type={'danger'}>
重置模型倍率
</Button>
</Popconfirm>
</Space>
</Spin>
);
}
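The deleted component's `onSubmit` diffs the edited values against a snapshot (`inputsRow`) and issues one `PUT /api/option/` per changed key via `Promise.all`. A minimal sketch of that diff-then-batch pattern; this `compareObjects` is a hypothetical stand-in for the helper imported from `../../../helpers`, whose real signature may differ:

```javascript
// Stand-in for the compareObjects helper: return the keys whose values
// differ between the edited state and the original snapshot.
function compareObjects(current, original) {
  return Object.keys(current)
    .filter((key) => current[key] !== original[key])
    .map((key) => ({ key }));
}

const snapshot = { ModelRatio: '{}', GroupRatio: '{}' };
const edited = { ModelRatio: '{"gpt-4": 15}', GroupRatio: '{}' };

// Only changed keys would be sent; unchanged ones generate no request.
const updateArray = compareObjects(edited, snapshot);
console.log(updateArray); // [ { key: 'ModelRatio' } ]
```

Batching the requests with `Promise.all` means one rejected update aborts the whole save, which is why the component surfaces a "部分保存失败" error and re-renders only after `props.refresh()`.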

Some files were not shown because too many files have changed in this diff.