Compare commits

...

29 Commits

Author SHA1 Message Date
Calcium-Ion
f4cc90c8d6 Merge pull request #893 from wizcas/replace-linux-do-icon
Replace the Linux.do OAuth icon on the login screen
2025-03-31 22:38:41 +08:00
Calcium-Ion
140d3a974b Merge pull request #895 from Feiyuyu0503/main
docs: fix a typo
2025-03-31 22:38:25 +08:00
Calcium-Ion
2ecb742e47 Merge pull request #912 from OrdinarySF/main
fix: fixed bug where target.id was null when clicking 'x' icon
2025-03-31 22:38:08 +08:00
Calcium-Ion
9066cfa8a0 Merge pull request #914 from JoeyLearnsToCode/main
feat: Add Parameters Override
2025-03-31 22:37:26 +08:00
Calcium-Ion
4f437f30e0 Merge pull request #916 from xifan2333/fix/systemSettingsUI
feat: Update option handling in SystemSetting
2025-03-31 22:36:14 +08:00
xifan
3c2a86f94d feat: Update option handling in SystemSetting
- Add backend validation for OIDC & Telegram OAuth config
- ♻️ Refactor frontend option updates with batch processing
2025-03-31 00:46:13 +08:00
JoeyLearnsToCode
1b07282153 feat: Add Parameters Override 2025-03-29 14:39:39 +08:00
Ordinary
af7f886c39 refactor: use handleFieldChange function on change event 2025-03-28 12:44:40 +00:00
Ordinary
9cfa138796 fix: fixed bug where target.id was null when clicking 'x' icon 2025-03-28 12:43:26 +00:00
1808837298@qq.com
a378665b8c feat: Add new cache ratios for o3-mini and gpt-4.5-preview models 2025-03-27 18:47:50 +08:00
1808837298@qq.com
3516aad349 update model ratio 2025-03-27 17:02:09 +08:00
1808837298@qq.com
58525c574b feat: Enhance GetCompletionRatio function 2025-03-27 16:38:29 +08:00
1808837298@qq.com
1df39e5a7f update model ratio 2025-03-27 16:24:30 +08:00
feiyuyu
be6ffd3c60 docs: fix a typo 2025-03-22 21:28:25 +08:00
Wizcas Chen
a9522075c6 replace the linuxdo icon in the login form 2025-03-22 17:16:07 +08:00
Calcium-Ion
983d31bfd3 Merge pull request #886 from seefs001/main
fix: claude function calling type
2025-03-20 23:22:20 +08:00
Seefs
20c043f584 fix: claude function calling type 2025-03-19 22:48:49 +08:00
1808837298@qq.com
73263e02d6 fix: Adjust MaxTokens logic for non-Claude models in test request 2025-03-17 23:44:32 +08:00
1808837298@qq.com
7143b0f160 feat: Add support for cross-region AWS model handling in awsStreamHandler 2025-03-17 23:41:00 +08:00
1808837298@qq.com
dd82618c05 refactor: Improve token quota consumption logic 2025-03-17 17:52:54 +08:00
1808837298@qq.com
19935ee8ac feat: Enhance ConvertClaudeRequest method to set request model and handle vertex-specific request conversion 2025-03-17 17:13:33 +08:00
1808837298@qq.com
6fef5aaf22 feat: Update RerankerInfo structure and modify GenRelayInfoRerank function to accept RerankRequest 2025-03-17 16:44:53 +08:00
Calcium-Ion
b5aa3c129b Merge pull request #872 from neotf/main
feat: support AWS Model CrossRegion
2025-03-17 16:18:11 +08:00
1808837298@qq.com
8c7c39550c refactor: Update ClaudeResponse error handling to use pointer for ClaudeError and improve nil checks in response processing 2025-03-16 23:14:45 +08:00
1808837298@qq.com
962e803d8a Update README 2025-03-16 21:53:00 +08:00
1808837298@qq.com
ff57ced2bb Update README 2025-03-16 21:47:32 +08:00
1808837298@qq.com
2223806c00 Update README 2025-03-16 21:17:08 +08:00
1808837298@qq.com
d1c62a583d feat: support xinference rerank to jina format 2025-03-16 21:06:29 +08:00
neotf
892d014c26 feat: support AWS Model CrossRegion 2025-03-15 01:42:24 +08:00
31 changed files with 1130 additions and 969 deletions

View File

@@ -36,8 +36,8 @@
> This project is open source, a derivative work built on top of [One API](https://github.com/songquanpeng/one-api)
> [!IMPORTANT]
> - Users must comply with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and **applicable laws and regulations**; illegal use is prohibited.
> - This project is intended for personal study only; stability is not guaranteed and no technical support is provided.
> - Users must comply with OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use) and **applicable laws and regulations**; illegal use is prohibited.
> - Per the [Interim Measures for the Management of Generative AI Services](http://www.cac.gov.cn/2023-07/13/c_1690898327029107.htm), do not provide any unregistered generative AI service to the public in China.
## 📚 Documentation
@@ -46,35 +46,32 @@
## ✨ Key Features
New API provides a rich set of features; for details see [Wiki - Feature Overview](https://docs.newapi.pro/wiki/features-introduction)
New API provides a rich set of features; for details see [Feature Overview](https://docs.newapi.pro/wiki/features-introduction)
1. 🎨 Brand-new UI
2. 🌍 Multi-language support
3. 🎨 Support for the [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) API, [integration docs](https://docs.newapi.pro/api/relay/image/midjourney)
4. 💰 Online top-up support (Yipay)
5. 🔍 Query usage quota by key (with [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool))
6. 📑 Pagination with a selectable page size
7. 🔄 Compatible with the original One API database
8. 💵 Per-request pricing for models
9. ⚖️ Weighted random channel selection
10. 📈 Dashboard (console)
11. 🔒 Restrict which models a token may call
12. 🤖 Telegram OAuth login
13. 🎵 [Suno API](https://github.com/Suno-API/Suno-API) support, [API docs](https://docs.newapi.pro/api/suno-music)
14. 🔄 Rerank models (Cohere and Jina), [API docs](https://docs.newapi.pro/api/jinaai-rerank)
15. Support for the OpenAI Realtime API (including Azure channels), [API docs](https://docs.newapi.pro/api/openai-realtime)
16. ⚡ Claude Messages format support, [API docs](https://docs.newapi.pro/api/anthropic-chat)
17. Chat UI via the /chat2link route
18. 🧠 Set reasoning effort via a model-name suffix
3. 💰 Online top-up support (Yipay)
4. 🔍 Query usage quota by key (with [neko-api-key-tool](https://github.com/Calcium-Ion/neko-api-key-tool))
5. 🔄 Compatible with the original One API database
6. 💵 Per-request pricing for models
7. ⚖️ Weighted random channel selection
8. 📈 Dashboard (console)
9. 🔒 Token grouping and model restrictions
10. 🤖 More login methods (LinuxDO, Telegram, OIDC)
11. 🔄 Rerank models (Cohere and Jina), [API docs](https://docs.newapi.pro/api/jinaai-rerank)
12. Support for the OpenAI Realtime API (including Azure channels), [API docs](https://docs.newapi.pro/api/openai-realtime)
13. Claude Messages format support, [API docs](https://docs.newapi.pro/api/anthropic-chat)
14. Chat UI via the /chat2link route
15. 🧠 Set reasoning effort via a model-name suffix
1. OpenAI o-series models
- append `-high` for high reasoning effort (e.g. `o3-mini-high`)
- append `-medium` for medium reasoning effort (e.g. `o3-mini-medium`)
- append `-low` for low reasoning effort (e.g. `o3-mini-low`)
2. Claude thinking models
- append `-thinking` to enable thinking mode (e.g. `claude-3-7-sonnet-20250219-thinking`)
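The suffix convention above amounts to stripping a known reasoning-effort suffix off the model name; a minimal Go sketch (`splitReasoningEffort` is a hypothetical helper name, not the project's actual implementation):

```go
package main

import (
	"fmt"
	"strings"
)

// splitReasoningEffort strips a trailing -high/-medium/-low suffix from a
// model name and returns the base model plus the requested effort level.
// Illustrative only; the real project code may differ.
func splitReasoningEffort(model string) (base, effort string) {
	for _, e := range []string{"high", "medium", "low"} {
		suffix := "-" + e
		if strings.HasSuffix(model, suffix) {
			return strings.TrimSuffix(model, suffix), e
		}
	}
	return model, ""
}

func main() {
	base, effort := splitReasoningEffort("o3-mini-high")
	fmt.Println(base, effort) // o3-mini high
}
```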
19. 🔄 Thinking-to-content conversion
20. 🔄 Model rate limiting
20. 💰 Cache billing support; when enabled, cache hits are billed at a configured ratio:
16. 🔄 Thinking-to-content conversion
17. 🔄 Per-user model rate limiting
18. 💰 Cache billing support; when enabled, cache hits are billed at a configured ratio:
1. Set the `Prompt Cache Ratio` option under `System Settings - Operation Settings`
2. Set the `Prompt Cache Ratio` on a channel, range 0-1; e.g. 0.5 bills cache hits at 50%
3. Supported channels:
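The cache-hit billing rule above can be sketched as follows; the function and parameter names are illustrative, not the project's actual identifiers:

```go
package main

import "fmt"

// cachedPromptQuota bills cached prompt tokens at promptTokens * cacheRatio
// and non-cached tokens at full price, matching the rule described above
// (cacheRatio 0.5 means cache hits are billed at 50%).
func cachedPromptQuota(promptTokens, cachedTokens int, cacheRatio float64) float64 {
	nonCached := promptTokens - cachedTokens
	return float64(nonCached) + float64(cachedTokens)*cacheRatio
}

func main() {
	// 1000 prompt tokens, 600 served from cache, ratio 0.5:
	// 400 full-price + 600*0.5 = 700 billed token-equivalents.
	fmt.Println(cachedPromptQuota(1000, 600, 0.5)) // 700
}
```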
@@ -88,12 +85,12 @@ New API provides a rich set of features; for details see [Wiki - Feature
This version supports multiple models; for details see [API Docs - Relay API](https://docs.newapi.pro/api)
1. Third-party model **gpts** (gpt-4-gizmo-*)
2. [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy) API, [API docs](https://docs.newapi.pro/api/midjourney-proxy-image)
3. Custom channels, supporting a full upstream URL
4. [Suno API](https://github.com/Suno-API/Suno-API), [API docs](https://docs.newapi.pro/api/suno-music)
2. Third-party channel [Midjourney-Proxy(Plus)](https://github.com/novicezk/midjourney-proxy), [API docs](https://docs.newapi.pro/api/midjourney-proxy-image)
3. Third-party channel [Suno API](https://github.com/Suno-API/Suno-API), [API docs](https://docs.newapi.pro/api/suno-music)
4. Custom channels, supporting a full upstream URL
5. Rerank models ([Cohere](https://cohere.ai/) and [Jina](https://jina.ai/)), [API docs](https://docs.newapi.pro/api/jinaai-rerank)
6. Claude Messages format, [API docs](https://docs.newapi.pro/api/anthropic-chat)
7. Dify
7. Dify (currently only chatflow is supported)
## Environment Variables
@@ -168,9 +165,6 @@ docker run --name new-api -d --restart always -p 3000:3000 -e SQL_DSN="root:1234
- [Chat API](https://docs.newapi.pro/api/openai-chat)
- [Image API](https://docs.newapi.pro/api/openai-image)
- [Midjourney API](https://docs.newapi.pro/api/midjourney-proxy-image)
- [Music API](https://docs.newapi.pro/api/relay/music)
- [Suno API](https://docs.newapi.pro/api/suno-music)
- [Rerank API](https://docs.newapi.pro/api/jinaai-rerank)
- [Realtime API](https://docs.newapi.pro/api/openai-realtime)
- [Claude Chat API (messages)](https://docs.newapi.pro/api/anthropic-chat)

View File

@@ -187,7 +187,9 @@ func buildTestRequest(model string) *dto.GeneralOpenAIRequest {
if strings.HasPrefix(model, "o1") || strings.HasPrefix(model, "o3") {
testRequest.MaxCompletionTokens = 10
} else if strings.Contains(model, "thinking") {
testRequest.MaxTokens = 50
if !strings.Contains(model, "claude") {
testRequest.MaxTokens = 50
}
} else {
testRequest.MaxTokens = 10
}

View File

@@ -53,11 +53,12 @@ func UpdateOption(c *gin.Context) {
return
}
case "oidc.enabled":
if option.Value == "true" && system_setting.GetOIDCSettings().Enabled {
if option.Value == "true" && system_setting.GetOIDCSettings().ClientId == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无法启用 OIDC 登录,请先填入 OIDC Client Id 以及 OIDC Client Secret",
})
return
}
case "LinuxDOOAuthEnabled":
if option.Value == "true" && common.LinuxDOClientId == "" {
@@ -89,6 +90,15 @@ func UpdateOption(c *gin.Context) {
"success": false,
"message": "无法启用 Turnstile 校验,请先填入 Turnstile 校验相关配置信息!",
})
return
}
case "TelegramOAuthEnabled":
if option.Value == "true" && common.TelegramBotToken == "" {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "无法启用 Telegram OAuth请先填入 Telegram Bot Token",
})
return
}
case "GroupRatio":
@@ -100,6 +110,7 @@ func UpdateOption(c *gin.Context) {
})
return
}
}
err = model.UpdateOption(option.Key, option.Value)
if err != nil {

View File

@@ -11,7 +11,7 @@
- Type: string; the proxy address (e.g. a socks5 proxy address)
3. thinking_to_content
- Marks whether reasoning content (`reasoning_conetnt`) is converted into a `<think>` tag and appended to the returned content
- Marks whether reasoning content (`reasoning_content`) is converted into a `<think>` tag and appended to the returned content
- Type: boolean; set to true to enable thinking-content conversion
--------------------------------------------------------------
@@ -30,4 +30,4 @@
--------------------------------------------------------------
By adjusting the values in the JSON configuration above, you can flexibly control a channel's extra behavior, such as whether to apply formatting and whether to use a specific network proxy.
By adjusting the values in the JSON configuration above, you can flexibly control a channel's extra behavior, such as whether to apply formatting and whether to use a specific network proxy.

View File

@@ -183,7 +183,7 @@ type ClaudeResponse struct {
Completion string `json:"completion,omitempty"`
StopReason string `json:"stop_reason,omitempty"`
Model string `json:"model,omitempty"`
Error ClaudeError `json:"error,omitempty"`
Error *ClaudeError `json:"error,omitempty"`
Usage *ClaudeUsage `json:"usage,omitempty"`
Index *int `json:"index,omitempty"`
ContentBlock *ClaudeMediaMessage `json:"content_block,omitempty"`

View File

@@ -5,18 +5,29 @@ type RerankRequest struct {
Query string `json:"query"`
Model string `json:"model"`
TopN int `json:"top_n"`
ReturnDocuments bool `json:"return_documents,omitempty"`
ReturnDocuments *bool `json:"return_documents,omitempty"`
MaxChunkPerDoc int `json:"max_chunk_per_doc,omitempty"`
OverLapTokens int `json:"overlap_tokens,omitempty"`
}
type RerankResponseDocument struct {
func (r *RerankRequest) GetReturnDocuments() bool {
if r.ReturnDocuments == nil {
return false
}
return *r.ReturnDocuments
}
type RerankResponseResult struct {
Document any `json:"document,omitempty"`
Index int `json:"index"`
RelevanceScore float64 `json:"relevance_score"`
}
type RerankResponse struct {
Results []RerankResponseDocument `json:"results"`
Usage Usage `json:"usage"`
type RerankDocument struct {
Text any `json:"text"`
}
type RerankResponse struct {
Results []RerankResponseResult `json:"results"`
Usage Usage `json:"usage"`
}

View File

@@ -212,6 +212,7 @@ func SetupContextForSelectedChannel(c *gin.Context, channel *model.Channel, mode
c.Set("channel_name", channel.Name)
c.Set("channel_type", channel.Type)
c.Set("channel_setting", channel.GetSetting())
c.Set("param_override", channel.GetParamOverride())
if nil != channel.OpenAIOrganization && "" != *channel.OpenAIOrganization {
c.Set("channel_organization", *channel.OpenAIOrganization)
}

View File

@@ -36,6 +36,7 @@ type Channel struct {
OtherInfo string `json:"other_info"`
Tag *string `json:"tag" gorm:"index"`
Setting *string `json:"setting" gorm:"type:text"`
ParamOverride *string `json:"param_override" gorm:"type:text"`
}
func (channel *Channel) GetModels() []string {
@@ -511,6 +512,17 @@ func (channel *Channel) SetSetting(setting map[string]interface{}) {
channel.Setting = common.GetPointer[string](string(settingBytes))
}
func (channel *Channel) GetParamOverride() map[string]interface{} {
paramOverride := make(map[string]interface{})
if channel.ParamOverride != nil && *channel.ParamOverride != "" {
err := json.Unmarshal([]byte(*channel.ParamOverride), &paramOverride)
if err != nil {
common.SysError("failed to unmarshal param override: " + err.Error())
}
}
return paramOverride
}
func GetChannelsByIds(ids []int) ([]*Channel, error) {
var channels []*Channel
err := DB.Where("id in (?)", ids).Find(&channels).Error

View File

@@ -21,6 +21,8 @@ type Adaptor struct {
}
func (a *Adaptor) ConvertClaudeRequest(c *gin.Context, info *relaycommon.RelayInfo, request *dto.ClaudeRequest) (any, error) {
c.Set("request_model", request.Model)
c.Set("converted_request", request)
return request, nil
}

View File

@@ -13,4 +13,41 @@ var awsModelIDMap = map[string]string{
"claude-3-7-sonnet-20250219": "anthropic.claude-3-7-sonnet-20250219-v1:0",
}
var awsModelCanCrossRegionMap = map[string]map[string]bool{
"anthropic.claude-3-sonnet-20240229-v1:0": {
"us": true,
"eu": true,
"ap": true,
},
"anthropic.claude-3-opus-20240229-v1:0": {
"us": true,
},
"anthropic.claude-3-haiku-20240307-v1:0": {
"us": true,
"eu": true,
"ap": true,
},
"anthropic.claude-3-5-sonnet-20240620-v1:0": {
"us": true,
"eu": true,
"ap": true,
},
"anthropic.claude-3-5-sonnet-20241022-v2:0": {
"us": true,
"ap": true,
},
"anthropic.claude-3-5-haiku-20241022-v1:0": {
"us": true,
},
"anthropic.claude-3-7-sonnet-20250219-v1:0": {
"us": true,
},
}
var awsRegionCrossModelPrefixMap = map[string]string{
"us": "us",
"eu": "eu",
"ap": "apac",
}
var ChannelName = "aws"

View File

@@ -43,6 +43,28 @@ func wrapErr(err error) *dto.OpenAIErrorWithStatusCode {
}
}
func awsRegionPrefix(awsRegionId string) string {
parts := strings.Split(awsRegionId, "-")
regionPrefix := ""
if len(parts) > 0 {
regionPrefix = parts[0]
}
return regionPrefix
}
func awsModelCanCrossRegion(awsModelId, awsRegionPrefix string) bool {
regionSet, exists := awsModelCanCrossRegionMap[awsModelId]
return exists && regionSet[awsRegionPrefix]
}
func awsModelCrossRegion(awsModelId, awsRegionPrefix string) string {
modelPrefix, find := awsRegionCrossModelPrefixMap[awsRegionPrefix]
if !find {
return awsModelId
}
return modelPrefix + "." + awsModelId
}
func awsModelID(requestModel string) (string, error) {
if awsModelID, ok := awsModelIDMap[requestModel]; ok {
return awsModelID, nil
@@ -62,6 +84,12 @@ func awsHandler(c *gin.Context, info *relaycommon.RelayInfo, requestMode int) (*
return wrapErr(errors.Wrap(err, "awsModelID")), nil
}
awsRegionPrefix := awsRegionPrefix(awsCli.Options().Region)
canCrossRegion := awsModelCanCrossRegion(awsModelId, awsRegionPrefix)
if canCrossRegion {
awsModelId = awsModelCrossRegion(awsModelId, awsRegionPrefix)
}
awsReq := &bedrockruntime.InvokeModelInput{
ModelId: aws.String(awsModelId),
Accept: aws.String("application/json"),
@@ -107,6 +135,12 @@ func awsStreamHandler(c *gin.Context, resp *http.Response, info *relaycommon.Rel
return wrapErr(errors.Wrap(err, "awsModelID")), nil
}
awsRegionPrefix := awsRegionPrefix(awsCli.Options().Region)
canCrossRegion := awsModelCanCrossRegion(awsModelId, awsRegionPrefix)
if canCrossRegion {
awsModelId = awsModelCrossRegion(awsModelId, awsRegionPrefix)
}
awsReq := &bedrockruntime.InvokeModelWithResponseStreamInput{
ModelId: aws.String(awsModelId),
Accept: aws.String("application/json"),
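A condensed sketch of the cross-region model-ID rewrite added in this file (the eligibility check against `awsModelCanCrossRegionMap` is omitted here for brevity; names are simplified from the diff above):

```go
package main

import (
	"fmt"
	"strings"
)

// regionPrefixToModelPrefix mirrors awsRegionCrossModelPrefixMap from the
// diff: the "ap" region family maps to the "apac" model-ID prefix.
var regionPrefixToModelPrefix = map[string]string{"us": "us", "eu": "eu", "ap": "apac"}

// crossRegionModelID prefixes a Bedrock model ID with the cross-region
// inference prefix derived from the AWS region, as the handler does after
// confirming the model supports cross-region invocation.
func crossRegionModelID(modelID, awsRegion string) string {
	parts := strings.Split(awsRegion, "-")
	prefix, ok := regionPrefixToModelPrefix[parts[0]]
	if !ok {
		return modelID
	}
	return prefix + "." + modelID
}

func main() {
	fmt.Println(crossRegionModelID("anthropic.claude-3-5-sonnet-20240620-v1:0", "ap-southeast-1"))
	// apac.anthropic.claude-3-5-sonnet-20240620-v1:0
}
```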

View File

@@ -70,7 +70,9 @@ func RequestOpenAI2ClaudeMessage(textRequest dto.GeneralOpenAIRequest) (*dto.Cla
Description: tool.Function.Description,
}
claudeTool.InputSchema = make(map[string]interface{})
claudeTool.InputSchema["type"] = params["type"].(string)
if params["type"] != nil {
claudeTool.InputSchema["type"] = params["type"].(string)
}
claudeTool.InputSchema["properties"] = params["properties"]
claudeTool.InputSchema["required"] = params["required"]
for s, a := range params {
@@ -485,7 +487,7 @@ func HandleStreamResponseData(c *gin.Context, info *relaycommon.RelayInfo, claud
common.SysError("error unmarshalling stream response: " + err.Error())
return service.OpenAIErrorWrapper(err, "stream_response_error", http.StatusInternalServerError)
}
if claudeResponse.Error.Type != "" {
if claudeResponse.Error != nil && claudeResponse.Error.Type != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Code: "stream_response_error",
@@ -598,7 +600,7 @@ func HandleClaudeResponseData(c *gin.Context, info *relaycommon.RelayInfo, claud
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_claude_response_failed", http.StatusInternalServerError)
}
if claudeResponse.Error.Type != "" {
if claudeResponse.Error != nil && claudeResponse.Error.Type != "" {
return &dto.OpenAIErrorWithStatusCode{
Error: dto.OpenAIError{
Message: claudeResponse.Error.Message,

View File

@@ -40,8 +40,8 @@ type CohereRerankRequest struct {
}
type CohereRerankResponseResult struct {
Results []dto.RerankResponseDocument `json:"results"`
Meta CohereMeta `json:"meta"`
Results []dto.RerankResponseResult `json:"results"`
Meta CohereMeta `json:"meta"`
}
type CohereMeta struct {

View File

@@ -69,7 +69,7 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
if info.RelayMode == constant.RelayModeRerank {
err, usage = common_handler.RerankHandler(c, resp)
err, usage = common_handler.RerankHandler(c, info, resp)
} else if info.RelayMode == constant.RelayModeEmbeddings {
err, usage = openai.OpenaiHandler(c, resp, info)
}

View File

@@ -262,7 +262,7 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
case constant.RelayModeImagesGenerations:
err, usage = OpenaiTTSHandler(c, resp, info)
case constant.RelayModeRerank:
err, usage = common_handler.RerankHandler(c, resp)
err, usage = common_handler.RerankHandler(c, info, resp)
default:
if info.IsStream {
err, usage = OaiStreamHandler(c, resp, info)

View File

@@ -12,6 +12,6 @@ type SFMeta struct {
}
type SFRerankResponse struct {
Results []dto.RerankResponseDocument `json:"results"`
Meta SFMeta `json:"meta"`
Results []dto.RerankResponseResult `json:"results"`
Meta SFMeta `json:"meta"`
}

View File

@@ -39,8 +39,15 @@ type Adaptor struct {
}
func (a *Adaptor) ConvertClaudeRequest(c *gin.Context, info *relaycommon.RelayInfo, request *dto.ClaudeRequest) (any, error) {
return request, nil
if v, ok := claudeModelMap[info.UpstreamModelName]; ok {
c.Set("request_model", v)
} else {
c.Set("request_model", request.Model)
}
vertexClaudeReq := copyRequest(request, anthropicVersion)
return vertexClaudeReq, nil
}
func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error) {
//TODO implement me
return nil, errors.New("not implemented")

View File

@@ -0,0 +1,11 @@
package xinference
type XinRerankResponseDocument struct {
Document string `json:"document,omitempty"`
Index int `json:"index"`
RelevanceScore float64 `json:"relevance_score"`
}
type XinRerankResponse struct {
Results []XinRerankResponseDocument `json:"results"`
}

View File

@@ -33,6 +33,11 @@ const (
RelayFormatClaude = "claude"
)
type RerankerInfo struct {
Documents []any
ReturnDocuments bool
}
type RelayInfo struct {
ChannelType int
ChannelId int
@@ -71,6 +76,7 @@ type RelayInfo struct {
AudioUsage bool
ReasoningEffort string
ChannelSetting map[string]interface{}
ParamOverride map[string]interface{}
UserSetting map[string]interface{}
UserEmail string
UserQuota int
@@ -78,6 +84,7 @@ type RelayInfo struct {
SendResponseCount int
ThinkingContentInfo
ClaudeConvertInfo
*RerankerInfo
}
// Channel types that support stream options
@@ -111,10 +118,21 @@ func GenRelayInfoClaude(c *gin.Context) *RelayInfo {
return info
}
func GenRelayInfoRerank(c *gin.Context, req *dto.RerankRequest) *RelayInfo {
info := GenRelayInfo(c)
info.RelayMode = relayconstant.RelayModeRerank
info.RerankerInfo = &RerankerInfo{
Documents: req.Documents,
ReturnDocuments: req.GetReturnDocuments(),
}
return info
}
func GenRelayInfo(c *gin.Context) *RelayInfo {
channelType := c.GetInt("channel_type")
channelId := c.GetInt("channel_id")
channelSetting := c.GetStringMap("channel_setting")
paramOverride := c.GetStringMap("param_override")
tokenId := c.GetInt("token_id")
tokenKey := c.GetString("token_key")
@@ -152,6 +170,7 @@ func GenRelayInfo(c *gin.Context) *RelayInfo {
ApiKey: strings.TrimPrefix(c.Request.Header.Get("Authorization"), "Bearer "),
Organization: c.GetString("channel_organization"),
ChannelSetting: channelSetting,
ParamOverride: paramOverride,
RelayFormat: RelayFormatOpenAI,
ThinkingContentInfo: ThinkingContentInfo{
IsFirstThinkingContent: true,

View File

@@ -1,15 +1,17 @@
package common_handler
import (
"encoding/json"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/common"
"one-api/dto"
"one-api/relay/channel/xinference"
relaycommon "one-api/relay/common"
"one-api/service"
)
func RerankHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func RerankHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil
@@ -18,18 +20,49 @@ func RerankHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithSta
if err != nil {
return service.OpenAIErrorWrapper(err, "close_response_body_failed", http.StatusInternalServerError), nil
}
if common.DebugEnabled {
println("reranker response body: ", string(responseBody))
}
var jinaResp dto.RerankResponse
err = json.Unmarshal(responseBody, &jinaResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
if info.ChannelType == common.ChannelTypeXinference {
var xinRerankResponse xinference.XinRerankResponse
err = common.DecodeJson(responseBody, &xinRerankResponse)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
jinaRespResults := make([]dto.RerankResponseResult, len(xinRerankResponse.Results))
for i, result := range xinRerankResponse.Results {
respResult := dto.RerankResponseResult{
Index: result.Index,
RelevanceScore: result.RelevanceScore,
}
if info.ReturnDocuments {
var document any
if result.Document == "" {
document = info.Documents[result.Index]
} else {
document = result.Document
}
respResult.Document = document
}
jinaRespResults[i] = respResult
}
jinaResp = dto.RerankResponse{
Results: jinaRespResults,
Usage: dto.Usage{
PromptTokens: info.PromptTokens,
TotalTokens: info.PromptTokens,
},
}
} else {
err = common.DecodeJson(responseBody, &jinaResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "unmarshal_response_body_failed", http.StatusInternalServerError), nil
}
jinaResp.Usage.PromptTokens = jinaResp.Usage.TotalTokens
}
jsonResponse, err := json.Marshal(jinaResp)
if err != nil {
return service.OpenAIErrorWrapper(err, "marshal_response_body_failed", http.StatusInternalServerError), nil
}
c.Writer.Header().Set("Content-Type", "application/json")
c.Writer.WriteHeader(resp.StatusCode)
_, err = c.Writer.Write(jsonResponse)
c.JSON(http.StatusOK, jinaResp)
return nil, &jinaResp.Usage
}

View File

@@ -20,6 +20,10 @@ type PriceData struct {
ShouldPreConsumedQuota int
}
func (p PriceData) ToSetting() string {
return fmt.Sprintf("ModelPrice: %f, ModelRatio: %f, CompletionRatio: %f, CacheRatio: %f, GroupRatio: %f, UsePrice: %t, CacheCreationRatio: %f, ShouldPreConsumedQuota: %d", p.ModelPrice, p.ModelRatio, p.CompletionRatio, p.CacheRatio, p.GroupRatio, p.UsePrice, p.CacheCreationRatio, p.ShouldPreConsumedQuota)
}
func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens int, maxTokens int) (PriceData, error) {
modelPrice, usePrice := operation_setting.GetModelPrice(info.OriginModelName, false)
groupRatio := setting.GetGroupRatio(info.Group)
@@ -50,7 +54,8 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
} else {
preConsumedQuota = int(modelPrice * common.QuotaPerUnit * groupRatio)
}
return PriceData{
priceData := PriceData{
ModelPrice: modelPrice,
ModelRatio: modelRatio,
CompletionRatio: completionRatio,
@@ -59,5 +64,11 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
CacheRatio: cacheRatio,
CacheCreationRatio: cacheCreationRatio,
ShouldPreConsumedQuota: preConsumedQuota,
}, nil
}
if common.DebugEnabled {
println(fmt.Sprintf("model_price_helper result: %s", priceData.ToSetting()))
}
return priceData, nil
}

View File

@@ -109,7 +109,7 @@ func TextHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
c.Set("prompt_tokens", promptTokens)
}
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptTokens, int(textRequest.MaxTokens))
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptTokens, int(math.Max(float64(textRequest.MaxTokens), float64(textRequest.MaxCompletionTokens))))
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
@@ -168,6 +168,23 @@ func TextHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "json_marshal_failed", http.StatusInternalServerError)
}
// apply param override
if len(relayInfo.ParamOverride) > 0 {
reqMap := make(map[string]interface{})
err = json.Unmarshal(jsonData, &reqMap)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "param_override_unmarshal_failed", http.StatusInternalServerError)
}
for key, value := range relayInfo.ParamOverride {
reqMap[key] = value
}
jsonData, err = json.Marshal(reqMap)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "param_override_marshal_failed", http.StatusInternalServerError)
}
}
if common.DebugEnabled {
println("requestBody: ", string(jsonData))
}
@@ -372,17 +389,18 @@ func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
common.LogError(ctx, fmt.Sprintf("total tokens is 0, cannot consume quota, userId %d, channelId %d, "+
"tokenId %d, model %s pre-consumed quota %d", relayInfo.UserId, relayInfo.ChannelId, relayInfo.TokenId, modelName, preConsumedQuota))
} else {
quotaDelta := quota - preConsumedQuota
if quotaDelta != 0 {
err := service.PostConsumeQuota(relayInfo, quotaDelta, preConsumedQuota, true)
if err != nil {
common.LogError(ctx, "error consuming token remain quota: "+err.Error())
}
}
model.UpdateUserUsedQuotaAndRequestCount(relayInfo.UserId, quota)
model.UpdateChannelUsedQuota(relayInfo.ChannelId, quota)
}
quotaDelta := quota - preConsumedQuota
if quotaDelta != 0 {
err := service.PostConsumeQuota(relayInfo, quotaDelta, preConsumedQuota, true)
if err != nil {
common.LogError(ctx, "error consuming token remain quota: "+err.Error())
}
}
logModel := modelName
if strings.HasPrefix(logModel, "gpt-4-gizmo") {
logModel = "gpt-4-gizmo-*"

View File

@@ -25,7 +25,6 @@ func getRerankPromptToken(rerankRequest dto.RerankRequest) int {
}
func RerankHelper(c *gin.Context, relayMode int) (openaiErr *dto.OpenAIErrorWithStatusCode) {
relayInfo := relaycommon.GenRelayInfo(c)
var rerankRequest *dto.RerankRequest
err := common.UnmarshalBodyReusable(c, &rerankRequest)
@@ -33,6 +32,9 @@ func RerankHelper(c *gin.Context, relayMode int) (openaiErr *dto.OpenAIErrorWith
common.LogError(c, fmt.Sprintf("getAndValidateTextRequest failed: %s", err.Error()))
return service.OpenAIErrorWrapperLocal(err, "invalid_text_request", http.StatusBadRequest)
}
relayInfo := relaycommon.GenRelayInfoRerank(c, rerankRequest)
if rerankRequest.Query == "" {
return service.OpenAIErrorWrapperLocal(fmt.Errorf("query is empty"), "invalid_query", http.StatusBadRequest)
}

View File

@@ -243,20 +243,18 @@ func PostClaudeConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
common.LogError(ctx, fmt.Sprintf("total tokens is 0, cannot consume quota, userId %d, channelId %d, "+
"tokenId %d, model %s pre-consumed quota %d", relayInfo.UserId, relayInfo.ChannelId, relayInfo.TokenId, modelName, preConsumedQuota))
} else {
//if sensitiveResp != nil {
// logContent += fmt.Sprintf(",敏感词:%s", strings.Join(sensitiveResp.SensitiveWords, ", "))
//}
quotaDelta := quota - preConsumedQuota
if quotaDelta != 0 {
err := PostConsumeQuota(relayInfo, quotaDelta, preConsumedQuota, true)
if err != nil {
common.LogError(ctx, "error consuming token remain quota: "+err.Error())
}
}
model.UpdateUserUsedQuotaAndRequestCount(relayInfo.UserId, quota)
model.UpdateChannelUsedQuota(relayInfo.ChannelId, quota)
}
quotaDelta := quota - preConsumedQuota
if quotaDelta != 0 {
err := PostConsumeQuota(relayInfo, quotaDelta, preConsumedQuota, true)
if err != nil {
common.LogError(ctx, "error consuming token remain quota: "+err.Error())
}
}
other := GenerateClaudeOtherInfo(ctx, relayInfo, modelRatio, groupRatio, completionRatio,
cacheTokens, cacheRatio, cacheCreationTokens, cacheCreationRatio, modelPrice)
model.RecordConsumeLog(ctx, relayInfo.UserId, relayInfo.ChannelId, promptTokens, completionTokens, modelName,
@@ -318,17 +316,18 @@ func PostAudioConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo,
common.LogError(ctx, fmt.Sprintf("total tokens is 0, cannot consume quota, userId %d, channelId %d, "+
"tokenId %d, model %s pre-consumed quota %d", relayInfo.UserId, relayInfo.ChannelId, relayInfo.TokenId, relayInfo.OriginModelName, preConsumedQuota))
} else {
quotaDelta := quota - preConsumedQuota
if quotaDelta != 0 {
err := PostConsumeQuota(relayInfo, quotaDelta, preConsumedQuota, true)
if err != nil {
common.LogError(ctx, "error consuming token remain quota: "+err.Error())
}
}
model.UpdateUserUsedQuotaAndRequestCount(relayInfo.UserId, quota)
model.UpdateChannelUsedQuota(relayInfo.ChannelId, quota)
}
quotaDelta := quota - preConsumedQuota
if quotaDelta != 0 {
err := PostConsumeQuota(relayInfo, quotaDelta, preConsumedQuota, true)
if err != nil {
common.LogError(ctx, "error consuming token remain quota: "+err.Error())
}
}
logModel := relayInfo.OriginModelName
if extraContent != "" {
logContent += ", " + extraContent

View File

@@ -14,6 +14,8 @@ var defaultCacheRatio = map[string]float64{
"o1-preview": 0.5,
"o1-mini-2024-09-12": 0.5,
"o1-mini": 0.5,
"o3-mini": 0.5,
"o3-mini-2025-01-31": 0.5,
"gpt-4o-2024-11-20": 0.5,
"gpt-4o-2024-08-06": 0.5,
"gpt-4o": 0.5,
@@ -21,6 +23,8 @@ var defaultCacheRatio = map[string]float64{
"gpt-4o-mini": 0.5,
"gpt-4o-realtime-preview": 0.5,
"gpt-4o-mini-realtime-preview": 0.5,
"gpt-4.5-preview": 0.5,
"gpt-4.5-preview-2025-02-27": 0.5,
"deepseek-chat": 0.25,
"deepseek-reasoner": 0.25,
"deepseek-coder": 0.25,

View File

@@ -375,6 +375,17 @@ func GetCompletionRatio(name string) float64 {
return ratio
}
}
hardCodedRatio, contain := getHardcodedCompletionModelRatio(name)
if contain {
return hardCodedRatio
}
if ratio, ok := CompletionRatio[name]; ok {
return ratio
}
return hardCodedRatio
}
func getHardcodedCompletionModelRatio(name string) (float64, bool) {
lowercaseName := strings.ToLower(name)
if strings.HasPrefix(name, "gpt-4-gizmo") {
name = "gpt-4-gizmo-*"
@@ -385,87 +396,86 @@ func GetCompletionRatio(name string) float64 {
if strings.HasPrefix(name, "gpt-4") && !strings.HasSuffix(name, "-all") && !strings.HasSuffix(name, "-gizmo-*") {
if strings.HasPrefix(name, "gpt-4o") {
if name == "gpt-4o-2024-05-13" {
return 3
return 3, true
}
return 4
return 4, true
}
if strings.HasPrefix(name, "gpt-4.5") {
return 2
// match gpt-4.5-preview
if strings.HasPrefix(name, "gpt-4.5-preview") {
return 2, true
}
if strings.HasPrefix(name, "gpt-4-turbo") || strings.HasSuffix(name, "preview") {
return 3
if strings.HasPrefix(name, "gpt-4-turbo") || strings.HasSuffix(name, "gpt-4-1106") || strings.HasSuffix(name, "gpt-4-1105") {
return 3, true
}
return 2
// gpt-4 models without a special marker default to a ratio of 2
return 2, false
}
if strings.HasPrefix(name, "o1") || strings.HasPrefix(name, "o3") {
return 4
return 4, true
}
if name == "chatgpt-4o-latest" {
return 3
return 3, true
}
if strings.Contains(name, "claude-instant-1") {
return 3
return 3, true
} else if strings.Contains(name, "claude-2") {
return 3
return 3, true
} else if strings.Contains(name, "claude-3") {
return 5
return 5, true
}
if strings.HasPrefix(name, "gpt-3.5") {
if name == "gpt-3.5-turbo" || strings.HasSuffix(name, "0125") {
// https://openai.com/blog/new-embedding-models-and-api-updates
// Updated GPT-3.5 Turbo model and lower pricing
return 3
return 3, true
}
if strings.HasSuffix(name, "1106") {
return 2
return 2, true
}
return 4.0 / 3.0
return 4.0 / 3.0, true
}
if strings.HasPrefix(name, "mistral-") {
return 3
return 3, true
}
if strings.HasPrefix(name, "gemini-") {
return 4
return 4, true
}
if strings.HasPrefix(name, "command") {
switch name {
case "command-r":
return 3
return 3, true
case "command-r-plus":
return 5
return 5, true
case "command-r-08-2024":
return 4
return 4, true
case "command-r-plus-08-2024":
return 4
return 4, true
default:
return 4
return 4, true
}
}
// hint: only apply the 4x ratio to the official models; open-source providers set their own prices, so their completion ratios are not force-aligned
if lowercaseName == "deepseek-chat" || lowercaseName == "deepseek-reasoner" {
return 4
return 4, true
}
if strings.HasPrefix(name, "ERNIE-Speed-") {
return 2
return 2, true
} else if strings.HasPrefix(name, "ERNIE-Lite-") {
return 2
return 2, true
} else if strings.HasPrefix(name, "ERNIE-Character") {
return 2
return 2, true
} else if strings.HasPrefix(name, "ERNIE-Functions") {
return 2
return 2, true
}
switch name {
case "llama2-70b-4096":
return 0.8 / 0.64
return 0.8 / 0.64, true
case "llama3-8b-8192":
return 2
return 2, true
case "llama3-70b-8192":
return 0.79 / 0.59
return 0.79 / 0.59, true
}
if ratio, ok := CompletionRatio[name]; ok {
return ratio
}
return 1
return 1, false
}
func GetAudioRatio(name string) float64 {
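The hunk above splits the hardcoded model ratios out of `GetCompletionRatio` into a helper that returns `(ratio, found)`, so operator-configured overrides in `CompletionRatio` only apply when no exact hardcoded match exists. A minimal, self-contained sketch of that lookup order (the map contents here are hypothetical, not the project's real defaults):

```go
package main

import (
	"fmt"
	"strings"
)

// CompletionRatio stands in for the operator-configurable override map;
// "my-custom-model" is an invented entry for illustration.
var CompletionRatio = map[string]float64{"my-custom-model": 2.5}

// getHardcodedCompletionModelRatio mirrors the refactored helper: the
// boolean reports whether the name matched a hardcoded rule exactly,
// while a false return still carries a best-effort default ratio.
func getHardcodedCompletionModelRatio(name string) (float64, bool) {
	if strings.HasPrefix(name, "gpt-4o") {
		return 4, true
	}
	return 1, false // default ratio, not an exact match
}

// GetCompletionRatio resolves in order: hardcoded exact match, then the
// configurable map, then the hardcoded default.
func GetCompletionRatio(name string) float64 {
	hardCoded, ok := getHardcodedCompletionModelRatio(name)
	if ok {
		return hardCoded
	}
	if ratio, found := CompletionRatio[name]; found {
		return ratio
	}
	return hardCoded
}

func main() {
	fmt.Println(GetCompletionRatio("gpt-4o-mini"))     // hardcoded match → 4
	fmt.Println(GetCompletionRatio("my-custom-model")) // map override → 2.5
	fmt.Println(GetCompletionRatio("unknown-model"))   // fallback default → 1
}
```

The `(value, ok)` pair is the idiomatic Go way to distinguish "matched with ratio 2" from "defaulted to 2", which the single-return version could not express.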


@@ -6,17 +6,27 @@ const LinuxDoIcon = (props) => {
return (
<svg
className='icon'
viewBox='0 0 24 24'
viewBox='0 0 16 16'
version='1.1'
xmlns='http://www.w3.org/2000/svg'
width='1em'
height='1em'
{...props}
>
<path
d='M19.7,17.6c-0.1-0.2-0.2-0.4-0.2-0.6c0-0.4-0.2-0.7-0.5-1c-0.1-0.1-0.3-0.2-0.4-0.2c0.6-1.8-0.3-3.6-1.3-4.9c0,0,0,0,0,0c-0.8-1.2-2-2.1-1.9-3.7c0-1.9,0.2-5.4-3.3-5.1C8.5,2.3,9.5,6,9.4,7.3c0,1.1-0.5,2.2-1.3,3.1c-0.2,0.2-0.4,0.5-0.5,0.7c-1,1.2-1.5,2.8-1.5,4.3c-0.2,0.2-0.4,0.4-0.5,0.6c-0.1,0.1-0.2,0.2-0.2,0.3c-0.1,0.1-0.3,0.2-0.5,0.3c-0.4,0.1-0.7,0.3-0.9,0.7c-0.1,0.3-0.2,0.7-0.1,1.1c0.1,0.2,0.1,0.4,0,0.7c-0.2,0.4-0.2,0.9,0,1.4c0.3,0.4,0.8,0.5,1.5,0.6c0.5,0,1.1,0.2,1.6,0.4l0,0c0.5,0.3,1.1,0.5,1.7,0.5c0.3,0,0.7-0.1,1-0.2c0.3-0.2,0.5-0.4,0.6-0.7c0.4,0,1-0.2,1.7-0.2c0.6,0,1.2,0.2,2,0.1c0,0.1,0,0.2,0.1,0.3c0.2,0.5,0.7,0.9,1.3,1c0.1,0,0.1,0,0.2,0c0.8-0.1,1.6-0.5,2.1-1.1l0,0c0.4-0.4,0.9-0.7,1.4-0.9c0.6-0.3,1-0.5,1.1-1C20.3,18.6,20.1,18.2,19.7,17.6z M12.8,4.8c0.6,0.1,1.1,0.6,1,1.2c0,0.3-0.1,0.6-0.3,0.9c0,0,0,0-0.1,0c-0.2-0.1-0.3-0.1-0.4-0.2c0.1-0.1,0.1-0.3,0.2-0.5c0-0.4-0.2-0.7-0.4-0.7c-0.3,0-0.5,0.3-0.5,0.7c0,0,0,0.1,0,0.1c-0.1-0.1-0.3-0.1-0.4-0.2c0,0,0-0.1,0-0.1C11.8,5.5,12.2,4.9,12.8,4.8z M12.5,6.8c0.1,0.1,0.3,0.2,0.4,0.2c0.1,0,0.3,0.1,0.4,0.2c0.2,0.1,0.4,0.2,0.4,0.5c0,0.3-0.3,0.6-0.9,0.8c-0.2,0.1-0.3,0.1-0.4,0.2c-0.3,0.2-0.6,0.3-1,0.3c-0.3,0-0.6-0.2-0.8-0.4c-0.1-0.1-0.2-0.2-0.4-0.3C10.1,8.2,9.9,8,9.8,7.7c0-0.1,0.1-0.2,0.2-0.3c0.3-0.2,0.4-0.3,0.5-0.4l0.1-0.1c0.2-0.3,0.6-0.5,1-0.5C11.9,6.5,12.2,6.6,12.5,6.8z M10.4,5c0.4,0,0.7,0.4,0.8,1.1c0,0.1,0,0.1,0,0.2c-0.1,0-0.3,0.1-0.4,0.2c0,0,0-0.1,0-0.2c0-0.3-0.2-0.6-0.4-0.5c-0.2,0-0.3,0.3-0.3,0.6c0,0.2,0.1,0.3,0.2,0.4l0,0c0,0-0.1,0.1-0.2,0.1C9.9,6.7,9.7,6.4,9.7,6.1C9.7,5.5,10,5,10.4,5z M9.4,21.1c-0.7,0.3-1.6,0.2-2.2-0.2c-0.6-0.3-1.1-0.4-1.8-0.4c-0.5-0.1-1-0.1-1.1-0.3c-0.1-0.2-0.1-0.5,0.1-1c0.1-0.3,0.1-0.6,0-0.9c-0.1-0.3-0.1-0.5,0-0.8C4.5,17.2,4.7,17.1,5,17c0.3-0.1,0.5-0.2,0.7-0.4c0.1-0.1,0.2-0.2,0.3-0.4c0.3-0.4,0.5-0.6,0.8-0.6c0.6,0.1,1.1,1,1.5,1.9c0.2,0.3,0.4,0.7,0.7,1c0.4,0.5,0.9,1.2,0.9,1.6C9.9,20.6,9.7,20.9,9.4,21.1z 
M14.3,18.9c0,0.1,0,0.1-0.1,0.2c-1.2,0.9-2.8,1-4.1,0.3c-0.2-0.3-0.4-0.6-0.6-0.9c0.9-0.1,0.7-1.3-1.2-2.5c-2-1.3-0.6-3.7,0.1-4.8c0.1-0.1,0.1,0-0.3,0.8c-0.3,0.6-0.9,2.1-0.1,3.2c0-0.8,0.2-1.6,0.5-2.4c0.7-1.3,1.2-2.8,1.5-4.3c0.1,0.1,0.1,0.1,0.2,0.1c0.1,0.1,0.2,0.2,0.3,0.2c0.2,0.3,0.6,0.4,0.9,0.4c0,0,0.1,0,0.1,0c0.4,0,0.8-0.1,1.1-0.4c0.1-0.1,0.2-0.2,0.4-0.2c0.3-0.1,0.6-0.3,0.9-0.6c0.4,1.3,0.8,2.5,1.4,3.6c0.4,0.8,0.7,1.6,0.9,2.5c0.3,0,0.7,0.1,1,0.3c0.8,0.4,1.1,0.7,1,1.2c-0.1,0-0.1,0-0.2,0c0-0.3-0.2-0.6-0.9-0.9c-0.7-0.3-1.3-0.3-1.5,0.4c-0.1,0-0.2,0.1-0.3,0.1c-0.8,0.4-0.8,1.5-0.9,2.6C14.5,18.2,14.4,18.5,14.3,18.9z M18.9,19.5c-0.6,0.2-1.1,0.6-1.5,1.1c-0.4,0.6-1.1,1-1.9,0.9c-0.4,0-0.8-0.3-0.9-0.7c-0.1-0.6-0.1-1.2,0.2-1.8c0.1-0.4,0.2-0.7,0.3-1.1c0.1-1.2,0.1-1.9,0.6-2.2h0c0,0.5,0.3,0.8,0.7,1c0.5,0,1-0.1,1.4-0.5c0.1,0,0.1,0,0.2,0c0.3,0,0.5,0,0.7,0.2c0.2,0.2,0.3,0.5,0.3,0.7c0,0.3,0.2,0.6,0.3,0.9c0.5,0.5,0.5,0.8,0.5,0.9C19.7,19.1,19.3,19.3,18.9,19.5z M9.9,7.5c-0.1,0-0.1,0-0.1,0.1c0,0,0,0.1,0.1,0.1c0,0,0,0,0,0c0.1,0,0.1,0.1,0.1,0.1c0.3,0.4,0.8,0.6,1.4,0.7c0.5-0.1,1-0.2,1.5-0.6c0.2-0.1,0.4-0.2,0.6-0.3c0.1,0,0.1-0.1,0.1-0.1c0-0.1,0-0.1-0.1-0.1l0,0c-0.2,0.1-0.5,0.2-0.7,0.3c-0.4,0.3-0.9,0.5-1.4,0.5c-0.5,0-0.9-0.3-1.2-0.6C10.1,7.6,10,7.5,9.9,7.5z'
fill='currentColor'
/>
<g id='linuxdo_icon' data-name='linuxdo_icon'>
<path
d='m7.44,0s.09,0,.13,0c.09,0,.19,0,.28,0,.14,0,.29,0,.43,0,.09,0,.18,0,.27,0q.12,0,.25,0t.26.08c.15.03.29.06.44.08,1.97.38,3.78,1.47,4.95,3.11.04.06.09.12.13.18.67.96,1.15,2.11,1.3,3.28q0,.19.09.26c0,.15,0,.29,0,.44,0,.04,0,.09,0,.13,0,.09,0,.19,0,.28,0,.14,0,.29,0,.43,0,.09,0,.18,0,.27,0,.08,0,.17,0,.25q0,.19-.08.26c-.03.15-.06.29-.08.44-.38,1.97-1.47,3.78-3.11,4.95-.06.04-.12.09-.18.13-.96.67-2.11,1.15-3.28,1.3q-.19,0-.26.09c-.15,0-.29,0-.44,0-.04,0-.09,0-.13,0-.09,0-.19,0-.28,0-.14,0-.29,0-.43,0-.09,0-.18,0-.27,0-.08,0-.17,0-.25,0q-.19,0-.26-.08c-.15-.03-.29-.06-.44-.08-1.97-.38-3.78-1.47-4.95-3.11q-.07-.09-.13-.18c-.67-.96-1.15-2.11-1.3-3.28q0-.19-.09-.26c0-.15,0-.29,0-.44,0-.04,0-.09,0-.13,0-.09,0-.19,0-.28,0-.14,0-.29,0-.43,0-.09,0-.18,0-.27,0-.08,0-.17,0-.25q0-.19.08-.26c.03-.15.06-.29.08-.44.38-1.97,1.47-3.78,3.11-4.95.06-.04.12-.09.18-.13C4.42.73,5.57.26,6.74.1,7,.07,7.15,0,7.44,0Z'
fill='#EFEFEF'
/>
<path
d='m1.27,11.33h13.45c-.94,1.89-2.51,3.21-4.51,3.88-1.99.59-3.96.37-5.8-.57-1.25-.7-2.67-1.9-3.14-3.3Z'
fill='#FEB005'
/>
<path
d='m12.54,1.99c.87.7,1.82,1.59,2.18,2.68H1.27c.87-1.74,2.33-3.13,4.2-3.78,2.44-.79,5-.47,7.07,1.1Z'
fill='#1D1D1F'
/>
</g>
</svg>
);
}
@@ -24,4 +34,4 @@ const LinuxDoIcon = (props) => {
return <Icon svg={<CustomIcon />} />;
};
export default LinuxDoIcon;

File diff suppressed because it is too large


@@ -1275,6 +1275,7 @@
"代理站地址": "Base URL",
"对于官方渠道new-api已经内置地址除非是第三方代理站点或者Azure的特殊接入地址否则不需要填写": "For official channels, the new-api has a built-in address. Unless it is a third-party proxy site or a special Azure access address, there is no need to fill it in",
"渠道额外设置": "Channel extra settings",
"参数覆盖": "Parameters override",
"模型请求速率限制": "Model request rate limit",
"启用用户模型请求速率限制(可能会影响高并发性能)": "Enable user model request rate limit (may affect high concurrency performance)",
"限制周期": "Limit period",


@@ -983,6 +983,23 @@ const EditChannel = (props) => {
</Typography.Text>
</Space>
</>
<>
<div style={{ marginTop: 10 }}>
<Typography.Text strong>
{t('参数覆盖')}
</Typography.Text>
</div>
<TextArea
placeholder={t('此项可选,用于覆盖请求参数。不支持覆盖 stream 参数。为一个 JSON 字符串,例如:') + '\n{\n "temperature": 0\n}'}
name="setting"
onChange={(value) => {
handleInputChange('param_override', value);
}}
autosize
value={inputs.param_override}
autoComplete="new-password"
/>
</>
{inputs.type === 1 && (
<>
<div style={{ marginTop: 10 }}>

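The new `param_override` field feeds a JSON string that is merged over the outgoing request body, with the placeholder noting that `stream` cannot be overridden. A hedged sketch of such a merge in Go (function name and merge rules are assumptions for illustration; the project's actual implementation may differ):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// applyParamOverride merges override keys over the request body, the way a
// channel-level parameter override could be applied. The stream key is
// skipped, matching the UI note that stream cannot be overridden.
func applyParamOverride(body, override []byte) ([]byte, error) {
	var req, ov map[string]any
	if err := json.Unmarshal(body, &req); err != nil {
		return nil, err
	}
	if err := json.Unmarshal(override, &ov); err != nil {
		return nil, err
	}
	for k, v := range ov {
		if k == "stream" {
			continue
		}
		req[k] = v
	}
	return json.Marshal(req)
}

func main() {
	out, err := applyParamOverride(
		[]byte(`{"model":"gpt-4o","temperature":1}`),
		[]byte(`{"temperature":0,"stream":true}`),
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"model":"gpt-4o","temperature":0}
}
```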

@@ -27,9 +27,10 @@ export default function GeneralSettings(props) {
const refForm = useRef();
const [inputsRow, setInputsRow] = useState(inputs);
function onChange(value, e) {
const name = e.target.id;
setInputs((inputs) => ({ ...inputs, [name]: value }));
function handleFieldChange(fieldName) {
return (value) => {
setInputs((inputs) => ({ ...inputs, [fieldName]: value }));
};
}
function onSubmit() {
@@ -98,7 +99,7 @@ export default function GeneralSettings(props) {
label={t('充值链接')}
initValue={''}
placeholder={t('例如发卡网站的购买链接')}
onChange={onChange}
onChange={handleFieldChange('TopUpLink')}
showClear
/>
</Col>
@@ -108,7 +109,7 @@ export default function GeneralSettings(props) {
label={t('文档地址')}
initValue={''}
placeholder={t('例如 https://docs.newapi.pro')}
onChange={onChange}
onChange={handleFieldChange('general_setting.docs_link')}
showClear
/>
</Col>
@@ -118,7 +119,7 @@ export default function GeneralSettings(props) {
label={t('单位美元额度')}
initValue={''}
placeholder={t('一单位货币能兑换的额度')}
onChange={onChange}
onChange={handleFieldChange('QuotaPerUnit')}
showClear
onClick={() => setShowQuotaWarning(true)}
/>
@@ -129,7 +130,7 @@ export default function GeneralSettings(props) {
label={t('失败重试次数')}
initValue={''}
placeholder={t('失败重试次数')}
onChange={onChange}
onChange={handleFieldChange('RetryTimes')}
showClear
/>
</Col>
@@ -142,12 +143,7 @@ export default function GeneralSettings(props) {
size='default'
checkedText=''
uncheckedText=''
onChange={(value) => {
setInputs({
...inputs,
DisplayInCurrencyEnabled: value,
});
}}
onChange={handleFieldChange('DisplayInCurrencyEnabled')}
/>
</Col>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
@@ -157,12 +153,7 @@ export default function GeneralSettings(props) {
size='default'
checkedText=''
uncheckedText=''
onChange={(value) =>
setInputs({
...inputs,
DisplayTokenStatEnabled: value,
})
}
onChange={handleFieldChange('DisplayTokenStatEnabled')}
/>
</Col>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
@@ -172,12 +163,7 @@ export default function GeneralSettings(props) {
size='default'
checkedText=''
uncheckedText=''
onChange={(value) =>
setInputs({
...inputs,
DefaultCollapseSidebar: value,
})
}
onChange={handleFieldChange('DefaultCollapseSidebar')}
/>
</Col>
</Row>
@@ -189,12 +175,7 @@ export default function GeneralSettings(props) {
size='default'
checkedText=''
uncheckedText=''
onChange={(value) =>
setInputs({
...inputs,
DemoSiteEnabled: value
})
}
onChange={handleFieldChange('DemoSiteEnabled')}
/>
</Col>
<Col xs={24} sm={12} md={8} lg={8} xl={8}>
@@ -205,12 +186,7 @@ export default function GeneralSettings(props) {
size='default'
checkedText=''
uncheckedText=''
onChange={(value) =>
setInputs({
...inputs,
SelfUseModeEnabled: value
})
}
onChange={handleFieldChange('SelfUseModeEnabled')}
/>
</Col>
</Row>
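The `GeneralSettings` refactor above fixes the null `target.id` bug by replacing an event-driven handler that read the field name back off the event with a curried `handleFieldChange` that binds the field name at creation time, so clearing a field via the 'x' icon no longer depends on the event carrying an id. The same pattern translated to Go (names are illustrative, not from the project):

```go
package main

import "fmt"

// handleFieldChange binds the field name up front and returns the callback;
// the closure captures field, so the callback never has to recover it from
// whatever event triggered it.
func handleFieldChange(state map[string]any, field string) func(any) {
	return func(value any) {
		state[field] = value
	}
}

func main() {
	inputs := map[string]any{}
	onRetryTimesChange := handleFieldChange(inputs, "RetryTimes")
	onRetryTimesChange("3") // value arrives without any event metadata
	fmt.Println(inputs["RetryTimes"])
}
```

This is also why the hunks further down collapse each inline `setInputs({...inputs, Field: value})` arrow function into a single `handleFieldChange('Field')` call: one factory covers every field.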