Compare commits


26 Commits

All commits authored by 1808837298@qq.com.

- `18d3706ff8` (2025-02-28 21:13:30 +08:00) feat: Add new model management features
  - Implement `/api/channel/models_enabled` endpoint to retrieve enabled models
  - Add `EnabledListModels` handler in controller
  - Create new `ModelRatioNotSetEditor` component for managing unset model ratios
  - Update router to include new models_enabled route
  - Add internationalization support for new model management UI
  - Include GPT-4.5 preview model in OpenAI model list
- `152950497e` (2025-02-28 20:28:44 +08:00) fix
- `d6fd50e382` (2025-02-28 19:17:15 +08:00) feat: add new GPT-4.5 preview model ratios
- `cfd3f6c073` (2025-02-28 17:53:08 +08:00) feat: Enhance Claude default max tokens configuration
  - Replace ThinkingAdapterMaxTokens with a more flexible DefaultMaxTokens map
  - Add support for model-specific default max tokens configuration
  - Update relay and web interface to use the new configuration approach
  - Implement a fallback mechanism for default max tokens
- `45c56b5ded` (2025-02-28 16:47:31 +08:00) feat: Implement model-specific headers configuration for Claude
- `d306394f33` (2025-02-27 22:26:21 +08:00) fix: Simplify Claude settings value conversion logic
- `cdba87a7da` (2025-02-27 22:14:53 +08:00) fix: Prevent duplicate headers in Claude settings
- `ae5b874a6c` (2025-02-27 22:12:14 +08:00) refactor: Reorganize Claude MaxTokens configuration UI layout
- `d0bc8d17d1` (2025-02-27 22:10:29 +08:00) feat: Enhance Claude MaxTokens configuration handling
  - Update Claude relay to set default MaxTokens dynamically
  - Modify web interface to clarify default MaxTokens input purpose
  - Improve token configuration logic for thinking adapter models
- `4784ca7514` (2025-02-27 20:59:32 +08:00) fix: Update Claude thinking adapter token percentage input guidance
- `3a18c0ce9f` (2025-02-27 20:51:10 +08:00) fix: Correct model request configuration in Vertex Claude adaptor
- `929668bead` (2025-02-27 20:49:34 +08:00) feat: Refactor model configuration management with new config system
  - Introduce a new configuration management approach for model-specific settings
  - Update Gemini settings to use the new config system with more flexible management
  - Add support for dynamic configuration updates in option handling
  - Modify Claude and Vertex adaptors to use new configuration methods
  - Enhance web interface to support namespaced configuration keys
- `06a78f9042` (2025-02-27 20:49:21 +08:00) feat: Add Claude model configuration management #791
- `0f1c4c4ebe` (2025-02-27 16:55:02 +08:00) fix: Add pagination support to user search functionality
- `1bcf7a3c39` (2025-02-27 16:49:32 +08:00) chore: Update Azure OpenAI API version and embedding model detection
  - Enhance channel test to detect more embedding models
  - Update Azure OpenAI default API version to 2024-12-01-preview
  - Remove redundant default API version setting in channel edit
  - Add user cache writing in channel test
- `5f0b3f6d6f` (2025-02-27 14:57:00 +08:00) fix: Improve AWS Claude adaptor request conversion error handling #796
- `19a318c943` (2025-02-27 00:01:21 +08:00) init openrouter adaptor
- `13ab0f8e4f` (2025-02-26 23:56:10 +08:00) fix: gemini&claude tool call format #795 #766
- `6d8d40e67b` (2025-02-26 23:40:16 +08:00) fix: claude tool call format #795 #766
- `287caf8e38` (2025-02-26 21:46:06 +08:00) feat: Add Jina reranking support for OpenAI adaptor
- `c802b3b41a` (2025-02-26 19:20:17 +08:00) fix: Update Gemini safety settings to use 'OFF' as default
- `ed4e1c2332` (2025-02-26 19:18:00 +08:00) fix: Update Gemini safety settings category
- `e581ea33c2` (2025-02-26 19:01:45 +08:00) fix: Update Gemini safety settings default value
- `bf80d71ddf` (2025-02-26 18:19:09 +08:00) feat: Add Gemini version settings configuration support (close #568)
- `e19b244e73` (2025-02-26 16:54:43 +08:00) feat: Add Gemini safety settings configuration support (close #703)
- `f451268830` (2025-02-25 22:01:05 +08:00) feat: Update Claude relay temperature setting
49 changed files with 1749 additions and 215 deletions

View File

@@ -50,10 +50,6 @@
# CHANNEL_TEST_FREQUENCY=10
# Generate default token
# GENERATE_DEFAULT_TOKEN=false
# Gemini safety settings
# GEMINI_SAFETY_SETTING=BLOCK_NONE
# Gemini version settings
# GEMINI_MODEL_MAP=gemini-1.0-pro:v1
# Cohere safety settings
# COHERE_SAFETY_SETTING=NONE
# Whether to count image tokens

View File

@@ -94,7 +94,6 @@
- `GET_MEDIA_TOKEN`: whether to count image tokens, default `true`; when disabled, image tokens are no longer computed locally, which may cause billing to differ from upstream. This option overrides the `GET_MEDIA_TOKEN_NOT_STREAM` option.
- `GET_MEDIA_TOKEN_NOT_STREAM`: whether to count image tokens in the non-stream (`stream=false`) case, default `true`.
- `UPDATE_TASK`: whether to update async tasks (Midjourney, Suno), default `true`; when disabled, task progress is no longer updated.
- `GEMINI_MODEL_MAP`: pin Gemini models to specific API versions (v1/v1beta), specified as "model:version" pairs separated by ",", e.g. -e GEMINI_MODEL_MAP="gemini-1.5-pro-latest:v1beta,gemini-1.5-pro-001:v1beta"; when empty, the default (v1beta) is used.
- `COHERE_SAFETY_SETTING`: Cohere model [safety setting](https://docs.cohere.com/docs/safety-modes#overview), one of `NONE`, `CONTEXTUAL`, `STRICT`, default `NONE`.
- `GEMINI_VISION_MAX_IMAGE_NUM`: maximum number of images for Gemini models, default `16`; set to `-1` for no limit.
- `MAX_FILE_DOWNLOAD_MB`: maximum file download size in MB, default `20`.
@@ -103,6 +102,10 @@
- `NOTIFICATION_LIMIT_DURATION_MINUTE`: duration of the notification rate limit in minutes, default `10`.
- `NOTIFY_LIMIT_COUNT`: maximum number of notifications per user within that duration, default `2`.
## Deprecated environment variables
- ~~`GEMINI_MODEL_MAP` (deprecated)~~: configure under `Settings - Model Settings` instead
- ~~`GEMINI_SAFETY_SETTING` (deprecated)~~: configure under `Settings - Model Settings` instead
## Deployment
> [!TIP]

View File

@@ -50,24 +50,26 @@ var defaultModelRatio = map[string]float64{
"gpt-4o-realtime-preview-2024-12-17": 2.5,
"gpt-4o-mini-realtime-preview": 0.3,
"gpt-4o-mini-realtime-preview-2024-12-17": 0.3,
"o1": 7.5,
"o1-2024-12-17": 7.5,
"o1-preview": 7.5,
"o1-preview-2024-09-12": 7.5,
"o1-mini": 0.55,
"o1-mini-2024-09-12": 0.55,
"o3-mini": 0.55,
"o3-mini-2025-01-31": 0.55,
"o3-mini-high": 0.55,
"o3-mini-2025-01-31-high": 0.55,
"o3-mini-low": 0.55,
"o3-mini-2025-01-31-low": 0.55,
"o3-mini-medium": 0.55,
"o3-mini-2025-01-31-medium": 0.55,
"gpt-4o-mini": 0.075,
"gpt-4o-mini-2024-07-18": 0.075,
"gpt-4-turbo": 5, // $0.01 / 1K tokens
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
"o1": 7.5,
"o1-2024-12-17": 7.5,
"o1-preview": 7.5,
"o1-preview-2024-09-12": 7.5,
"o1-mini": 0.55,
"o1-mini-2024-09-12": 0.55,
"o3-mini": 0.55,
"o3-mini-2025-01-31": 0.55,
"o3-mini-high": 0.55,
"o3-mini-2025-01-31-high": 0.55,
"o3-mini-low": 0.55,
"o3-mini-2025-01-31-low": 0.55,
"o3-mini-medium": 0.55,
"o3-mini-2025-01-31-medium": 0.55,
"gpt-4o-mini": 0.075,
"gpt-4o-mini-2024-07-18": 0.075,
"gpt-4-turbo": 5, // $0.01 / 1K tokens
"gpt-4-turbo-2024-04-09": 5, // $0.01 / 1K tokens
"gpt-4.5-preview": 37.5,
"gpt-4.5-preview-2025-02-27": 37.5,
//"gpt-3.5-turbo-0301": 0.75, //deprecated
"gpt-3.5-turbo": 0.25,
"gpt-3.5-turbo-0613": 0.75,
@@ -315,7 +317,7 @@ func UpdateModelRatioByJSONString(jsonStr string) error {
return json.Unmarshal([]byte(jsonStr), &modelRatioMap)
}
func GetModelRatio(name string) float64 {
func GetModelRatio(name string) (float64, bool) {
GetModelRatioMap()
if strings.HasPrefix(name, "gpt-4-gizmo") {
name = "gpt-4-gizmo-*"
@@ -323,9 +325,9 @@ func GetModelRatio(name string) float64 {
ratio, ok := modelRatioMap[name]
if !ok {
SysError("model ratio not found: " + name)
return 30
return 37.5, false
}
return ratio
return ratio, true
}
func DefaultModelRatio2JSONString() string {
@@ -387,6 +389,9 @@ func GetCompletionRatio(name string) float64 {
}
return 4
}
if strings.HasPrefix(name, "gpt-4.5") {
return 2
}
if strings.HasPrefix(name, "gpt-4-turbo") || strings.HasSuffix(name, "preview") {
return 3
}
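The `GetModelRatio` signature change above threads a "was this ratio actually configured?" flag to callers, and `GetCompletionRatio` gains a `gpt-4.5` prefix branch. A minimal standalone sketch of the new lookup pattern (the ratio table here is a small hypothetical stand-in for the real map, and the completion-ratio fallback chain is abbreviated):

```go
package main

import (
	"fmt"
	"strings"
)

// hypothetical stand-in for the real modelRatioMap
var modelRatioMap = map[string]float64{
	"gpt-4.5-preview": 37.5,
	"gpt-3.5-turbo":   0.25,
}

// getModelRatio mirrors the new two-value signature: the bool
// reports whether the ratio was actually configured.
func getModelRatio(name string) (float64, bool) {
	if ratio, ok := modelRatioMap[name]; ok {
		return ratio, true
	}
	return 37.5, false // fallback value, flagged as "not set"
}

// getCompletionRatio mirrors only the added gpt-4.5 prefix branch;
// the real function has many more branches.
func getCompletionRatio(name string) float64 {
	if strings.HasPrefix(name, "gpt-4.5") {
		return 2
	}
	return 1
}

func main() {
	r, ok := getModelRatio("gpt-4.5-preview")
	fmt.Println(r, ok) // 37.5 true
	r, ok = getModelRatio("unknown-model")
	fmt.Println(r, ok) // 37.5 false
	fmt.Println(getCompletionRatio("gpt-4.5-preview")) // 2
}
```

Callers that ignore the flag (like the pricing update further down) can simply discard the second value; callers that care (like the channel test) can reject unconfigured models.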

View File

@@ -5,6 +5,7 @@ import (
"context"
crand "crypto/rand"
"encoding/base64"
"encoding/json"
"fmt"
"github.com/pkg/errors"
"html/template"
@@ -213,6 +214,24 @@ func RandomSleep() {
time.Sleep(time.Duration(rand.Intn(3000)) * time.Millisecond)
}
func GetPointer[T any](v T) *T {
return &v
}
func Any2Type[T any](data any) (T, error) {
var zero T
bytes, err := json.Marshal(data)
if err != nil {
return zero, err
}
var res T
err = json.Unmarshal(bytes, &res)
if err != nil {
return zero, err
}
return res, nil
}
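The new `Any2Type` helper coerces loosely-typed data (e.g. a `map[string]any` decoded from JSON) into a concrete struct by round-tripping through `encoding/json`. A self-contained sketch of how it might be used (the `Settings` struct and its fields are illustrative, not from the source):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Any2Type converts any value into type T via a JSON round-trip.
func Any2Type[T any](data any) (T, error) {
	var zero T
	b, err := json.Marshal(data)
	if err != nil {
		return zero, err
	}
	var res T
	if err := json.Unmarshal(b, &res); err != nil {
		return zero, err
	}
	return res, nil
}

// hypothetical target type for the conversion
type Settings struct {
	Enabled bool   `json:"enabled"`
	Name    string `json:"name"`
}

func main() {
	// e.g. a map decoded from an untyped JSON config blob
	raw := map[string]any{"enabled": true, "name": "claude"}
	s, err := Any2Type[Settings](raw)
	fmt.Println(s, err) // {true claude} <nil>
}
```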
// SaveTmpFile saves data to a temporary file. The filename will be appended with a random string.
func SaveTmpFile(filename string, data io.Reader) (string, error) {
f, err := os.CreateTemp(os.TempDir(), filename)

View File

@@ -1,10 +1,7 @@
package constant
import (
"fmt"
"one-api/common"
"os"
"strings"
)
var StreamingTimeout = common.GetEnvOrDefault("STREAMING_TIMEOUT", 60)
@@ -23,9 +20,9 @@ var UpdateTask = common.GetEnvOrDefaultBool("UPDATE_TASK", true)
var AzureDefaultAPIVersion = common.GetEnvOrDefaultString("AZURE_DEFAULT_API_VERSION", "2024-12-01-preview")
var GeminiModelMap = map[string]string{
"gemini-1.0-pro": "v1",
}
//var GeminiModelMap = map[string]string{
// "gemini-1.0-pro": "v1",
//}
var GeminiVisionMaxImageNum = common.GetEnvOrDefault("GEMINI_VISION_MAX_IMAGE_NUM", 16)
@@ -33,18 +30,18 @@ var NotifyLimitCount = common.GetEnvOrDefault("NOTIFY_LIMIT_COUNT", 2)
var NotificationLimitDurationMinute = common.GetEnvOrDefault("NOTIFICATION_LIMIT_DURATION_MINUTE", 10)
func InitEnv() {
modelVersionMapStr := strings.TrimSpace(os.Getenv("GEMINI_MODEL_MAP"))
if modelVersionMapStr == "" {
return
}
for _, pair := range strings.Split(modelVersionMapStr, ",") {
parts := strings.Split(pair, ":")
if len(parts) == 2 {
GeminiModelMap[parts[0]] = parts[1]
} else {
common.SysError(fmt.Sprintf("invalid model version map: %s", pair))
}
}
//modelVersionMapStr := strings.TrimSpace(os.Getenv("GEMINI_MODEL_MAP"))
//if modelVersionMapStr == "" {
// return
//}
//for _, pair := range strings.Split(modelVersionMapStr, ",") {
// parts := strings.Split(pair, ":")
// if len(parts) == 2 {
// GeminiModelMap[parts[0]] = parts[1]
// } else {
// common.SysError(fmt.Sprintf("invalid model version map: %s", pair))
// }
//}
}
// GenerateDefaultToken: whether to generate an initial token; disabled by default.

View File

@@ -48,7 +48,7 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
if strings.Contains(strings.ToLower(testModel), "embedding") ||
strings.HasPrefix(testModel, "m3e") || // m3e series models
strings.Contains(testModel, "bge-") || // bge series models
testModel == "text-embedding-v1" ||
strings.Contains(testModel, "embed") ||
channel.Type == common.ChannelTypeMokaAI { // other embedding models
requestPath = "/v1/embeddings" // change the request path
}
@@ -84,6 +84,12 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
}
}
cache, err := model.GetUserCache(1)
if err != nil {
return err, nil
}
cache.WriteContext(c)
c.Request.Header.Set("Authorization", "Bearer "+channel.Key)
c.Request.Header.Set("Content-Type", "application/json")
c.Set("channel", channel.Type)
@@ -140,7 +146,10 @@ func testChannel(channel *model.Channel, testModel string) (err error, openAIErr
return err, nil
}
modelPrice, usePrice := common.GetModelPrice(testModel, false)
modelRatio := common.GetModelRatio(testModel)
modelRatio, success := common.GetModelRatio(testModel)
if !success {
return fmt.Errorf("模型 %s 倍率未设置", testModel), nil
}
completionRatio := common.GetCompletionRatio(testModel)
ratio := modelRatio
quota := 0
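The widened check in this hunk replaces the exact `text-embedding-v1` match with a generic `embed` substring test, so more embedding models route to `/v1/embeddings`. A standalone sketch of the detection predicate (simplified; the channel-type branch is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// isEmbeddingModel mirrors the widened detection in testChannel:
// "text-embedding-v1" now matches via the generic "embed" substring.
func isEmbeddingModel(model string) bool {
	return strings.Contains(strings.ToLower(model), "embedding") ||
		strings.HasPrefix(model, "m3e") || // m3e series models
		strings.Contains(model, "bge-") || // bge series models
		strings.Contains(model, "embed") // other embedding models
}

func main() {
	fmt.Println(isEmbeddingModel("text-embedding-v1")) // true
	fmt.Println(isEmbeddingModel("bge-large-zh"))      // true
	fmt.Println(isEmbeddingModel("gpt-4o-mini"))       // false
}
```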

View File

@@ -216,6 +216,13 @@ func DashboardListModels(c *gin.Context) {
})
}
func EnabledListModels(c *gin.Context) {
c.JSON(200, gin.H{
"success": true,
"data": model.GetEnabledModels(),
})
}
func RetrieveModel(c *gin.Context) {
modelId := c.Param("model")
if aiModel, ok := openAIModelsMap[modelId]; ok {

View File

@@ -18,50 +18,52 @@ type FormatJsonSchema struct {
}
type GeneralOpenAIRequest struct {
Model string `json:"model,omitempty"`
Messages []Message `json:"messages,omitempty"`
Prompt any `json:"prompt,omitempty"`
Prefix any `json:"prefix,omitempty"`
Suffix any `json:"suffix,omitempty"`
Stream bool `json:"stream,omitempty"`
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
MaxTokens uint `json:"max_tokens,omitempty"`
MaxCompletionTokens uint `json:"max_completion_tokens,omitempty"`
ReasoningEffort string `json:"reasoning_effort,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
Stop any `json:"stop,omitempty"`
N int `json:"n,omitempty"`
Input any `json:"input,omitempty"`
Instruction string `json:"instruction,omitempty"`
Size string `json:"size,omitempty"`
Functions any `json:"functions,omitempty"`
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
PresencePenalty float64 `json:"presence_penalty,omitempty"`
ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
EncodingFormat any `json:"encoding_format,omitempty"`
Seed float64 `json:"seed,omitempty"`
Tools []ToolCall `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
User string `json:"user,omitempty"`
LogProbs bool `json:"logprobs,omitempty"`
TopLogProbs int `json:"top_logprobs,omitempty"`
Dimensions int `json:"dimensions,omitempty"`
Modalities any `json:"modalities,omitempty"`
Audio any `json:"audio,omitempty"`
ExtraBody any `json:"extra_body,omitempty"`
Model string `json:"model,omitempty"`
Messages []Message `json:"messages,omitempty"`
Prompt any `json:"prompt,omitempty"`
Prefix any `json:"prefix,omitempty"`
Suffix any `json:"suffix,omitempty"`
Stream bool `json:"stream,omitempty"`
StreamOptions *StreamOptions `json:"stream_options,omitempty"`
MaxTokens uint `json:"max_tokens,omitempty"`
MaxCompletionTokens uint `json:"max_completion_tokens,omitempty"`
ReasoningEffort string `json:"reasoning_effort,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
TopP float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
Stop any `json:"stop,omitempty"`
N int `json:"n,omitempty"`
Input any `json:"input,omitempty"`
Instruction string `json:"instruction,omitempty"`
Size string `json:"size,omitempty"`
Functions any `json:"functions,omitempty"`
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
PresencePenalty float64 `json:"presence_penalty,omitempty"`
ResponseFormat *ResponseFormat `json:"response_format,omitempty"`
EncodingFormat any `json:"encoding_format,omitempty"`
Seed float64 `json:"seed,omitempty"`
Tools []ToolCallRequest `json:"tools,omitempty"`
ToolChoice any `json:"tool_choice,omitempty"`
User string `json:"user,omitempty"`
LogProbs bool `json:"logprobs,omitempty"`
TopLogProbs int `json:"top_logprobs,omitempty"`
Dimensions int `json:"dimensions,omitempty"`
Modalities any `json:"modalities,omitempty"`
Audio any `json:"audio,omitempty"`
ExtraBody any `json:"extra_body,omitempty"`
}
type OpenAITools struct {
Type string `json:"type"`
Function OpenAIFunction `json:"function"`
type ToolCallRequest struct {
ID string `json:"id,omitempty"`
Type string `json:"type"`
Function FunctionRequest `json:"function"`
}
type OpenAIFunction struct {
type FunctionRequest struct {
Description string `json:"description,omitempty"`
Name string `json:"name"`
Parameters any `json:"parameters,omitempty"`
Arguments string `json:"arguments,omitempty"`
}
type StreamOptions struct {
@@ -137,11 +139,11 @@ func (m *Message) SetPrefix(prefix bool) {
m.Prefix = &prefix
}
func (m *Message) ParseToolCalls() []ToolCall {
func (m *Message) ParseToolCalls() []ToolCallRequest {
if m.ToolCalls == nil {
return nil
}
var toolCalls []ToolCall
var toolCalls []ToolCallRequest
if err := json.Unmarshal(m.ToolCalls, &toolCalls); err == nil {
return toolCalls
}

View File

@@ -62,10 +62,10 @@ type ChatCompletionsStreamResponseChoice struct {
}
type ChatCompletionsStreamResponseChoiceDelta struct {
Content *string `json:"content,omitempty"`
ReasoningContent *string `json:"reasoning_content,omitempty"`
Role string `json:"role,omitempty"`
ToolCalls []ToolCall `json:"tool_calls,omitempty"`
Content *string `json:"content,omitempty"`
ReasoningContent *string `json:"reasoning_content,omitempty"`
Role string `json:"role,omitempty"`
ToolCalls []ToolCallResponse `json:"tool_calls,omitempty"`
}
func (c *ChatCompletionsStreamResponseChoiceDelta) SetContentString(s string) {
@@ -90,24 +90,24 @@ func (c *ChatCompletionsStreamResponseChoiceDelta) SetReasoningContent(s string)
c.ReasoningContent = &s
}
type ToolCall struct {
type ToolCallResponse struct {
// Index is not nil only in chat completion chunk object
Index *int `json:"index,omitempty"`
ID string `json:"id,omitempty"`
Type any `json:"type"`
Function FunctionCall `json:"function"`
Index *int `json:"index,omitempty"`
ID string `json:"id,omitempty"`
Type any `json:"type"`
Function FunctionResponse `json:"function"`
}
func (c *ToolCall) SetIndex(i int) {
func (c *ToolCallResponse) SetIndex(i int) {
c.Index = &i
}
type FunctionCall struct {
type FunctionResponse struct {
Description string `json:"description,omitempty"`
Name string `json:"name,omitempty"`
// call function with arguments in JSON format
Parameters any `json:"parameters,omitempty"` // request
Arguments string `json:"arguments,omitempty"`
Arguments string `json:"arguments"` // response
}
type ChatCompletionsStreamResponse struct {

View File

@@ -3,6 +3,7 @@ package model
import (
"one-api/common"
"one-api/setting"
"one-api/setting/config"
"strconv"
"strings"
"time"
@@ -23,6 +24,8 @@ func AllOption() ([]*Option, error) {
func InitOptionMap() {
common.OptionMapRWMutex.Lock()
common.OptionMap = make(map[string]string)
// add the original system configuration options
common.OptionMap["FileUploadPermission"] = strconv.Itoa(common.FileUploadPermission)
common.OptionMap["FileDownloadPermission"] = strconv.Itoa(common.FileDownloadPermission)
common.OptionMap["ImageUploadPermission"] = strconv.Itoa(common.ImageUploadPermission)
@@ -110,12 +113,17 @@ func InitOptionMap() {
common.OptionMap["DemoSiteEnabled"] = strconv.FormatBool(setting.DemoSiteEnabled)
common.OptionMap["ModelRequestRateLimitEnabled"] = strconv.FormatBool(setting.ModelRequestRateLimitEnabled)
common.OptionMap["CheckSensitiveOnPromptEnabled"] = strconv.FormatBool(setting.CheckSensitiveOnPromptEnabled)
//common.OptionMap["CheckSensitiveOnCompletionEnabled"] = strconv.FormatBool(constant.CheckSensitiveOnCompletionEnabled)
common.OptionMap["StopOnSensitiveEnabled"] = strconv.FormatBool(setting.StopOnSensitiveEnabled)
common.OptionMap["SensitiveWords"] = setting.SensitiveWordsToString()
common.OptionMap["StreamCacheQueueLength"] = strconv.Itoa(setting.StreamCacheQueueLength)
common.OptionMap["AutomaticDisableKeywords"] = setting.AutomaticDisableKeywordsToString()
// automatically add all registered model configs
modelConfigs := config.GlobalConfig.ExportAllConfigs()
for k, v := range modelConfigs {
common.OptionMap[k] = v
}
common.OptionMapRWMutex.Unlock()
loadOptionsFromDatabase()
}
@@ -158,6 +166,13 @@ func updateOptionMap(key string, value string) (err error) {
common.OptionMapRWMutex.Lock()
defer common.OptionMapRWMutex.Unlock()
common.OptionMap[key] = value
// check whether this is a namespaced model config, handled via the config system
if handleConfigUpdate(key, value) {
return nil // already handled by the config system
}
// handle legacy option keys...
if strings.HasSuffix(key, "Permission") {
intValue, _ := strconv.Atoi(value)
switch key {
@@ -232,9 +247,6 @@ func updateOptionMap(key string, value string) (err error) {
setting.CheckSensitiveOnPromptEnabled = boolValue
case "ModelRequestRateLimitEnabled":
setting.ModelRequestRateLimitEnabled = boolValue
//case "CheckSensitiveOnCompletionEnabled":
// constant.CheckSensitiveOnCompletionEnabled = boolValue
case "StopOnSensitiveEnabled":
setting.StopOnSensitiveEnabled = boolValue
case "SMTPSSLEnabled":
@@ -356,3 +368,28 @@ func updateOptionMap(key string, value string) (err error) {
}
return err
}
// handleConfigUpdate applies a namespaced config update; returns whether the key was handled
func handleConfigUpdate(key, value string) bool {
parts := strings.SplitN(key, ".", 2)
if len(parts) != 2 {
return false // not a namespaced config key
}
configName := parts[0]
configKey := parts[1]
// fetch the config object
cfg := config.GlobalConfig.Get(configName)
if cfg == nil {
return false // config not registered
}
// update the config
configMap := map[string]string{
configKey: value,
}
config.UpdateConfigFromMap(cfg, configMap)
return true // handled
}
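The namespaced-key convention above splits option keys like `claude.default_max_tokens` into a config name and a sub-key on the first `.`; flat legacy keys fall through to the old switch. A minimal sketch of just the key parsing (the example key is hypothetical; the real registry lives in `config.GlobalConfig`):

```go
package main

import (
	"fmt"
	"strings"
)

// splitConfigKey mirrors handleConfigUpdate's key parsing:
// "claude.default_max_tokens" -> ("claude", "default_max_tokens", true)
func splitConfigKey(key string) (name, sub string, ok bool) {
	parts := strings.SplitN(key, ".", 2)
	if len(parts) != 2 {
		return "", "", false // not a namespaced config key
	}
	return parts[0], parts[1], true
}

func main() {
	n, s, ok := splitConfigKey("claude.default_max_tokens")
	fmt.Println(n, s, ok) // claude default_max_tokens true
	_, _, ok = splitConfigKey("SMTPSSLEnabled")
	fmt.Println(ok) // false
}
```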

View File

@@ -69,7 +69,8 @@ func updatePricing() {
pricing.ModelPrice = modelPrice
pricing.QuotaType = 1
} else {
pricing.ModelRatio = common.GetModelRatio(model)
modelRatio, _ := common.GetModelRatio(model)
pricing.ModelRatio = modelRatio
pricing.CompletionRatio = common.GetCompletionRatio(model)
pricing.QuotaType = 0
}

View File

@@ -8,6 +8,7 @@ import (
"one-api/dto"
"one-api/relay/channel/claude"
relaycommon "one-api/relay/common"
"one-api/setting/model_setting"
)
const (
@@ -38,6 +39,7 @@ func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
}
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
model_setting.GetClaudeSettings().WriteHeaders(info.OriginModelName, req)
return nil
}
@@ -49,8 +51,10 @@ func (a *Adaptor) ConvertRequest(c *gin.Context, info *relaycommon.RelayInfo, re
var claudeReq *claude.ClaudeRequest
var err error
claudeReq, err = claude.RequestOpenAI2ClaudeMessage(*request)
c.Set("request_model", request.Model)
if err != nil {
return nil, err
}
c.Set("request_model", claudeReq.Model)
c.Set("converted_request", claudeReq)
return claudeReq, err
}
@@ -64,7 +68,6 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return nil, nil
}

View File

@@ -9,6 +9,7 @@ import (
"one-api/dto"
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/setting/model_setting"
"strings"
)
@@ -55,6 +56,7 @@ func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *rel
anthropicVersion = "2023-06-01"
}
req.Set("anthropic-version", anthropicVersion)
model_setting.GetClaudeSettings().WriteHeaders(info.OriginModelName, req)
return nil
}

View File

@@ -10,6 +10,7 @@ import (
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/setting/model_setting"
"strings"
"github.com/gin-gonic/gin"
@@ -93,10 +94,12 @@ func RequestOpenAI2ClaudeMessage(textRequest dto.GeneralOpenAIRequest) (*ClaudeR
Tools: claudeTools,
}
if strings.HasSuffix(textRequest.Model, "-thinking") {
if claudeRequest.MaxTokens == 0 {
claudeRequest.MaxTokens = 8192
}
if claudeRequest.MaxTokens == 0 {
claudeRequest.MaxTokens = uint(model_setting.GetClaudeSettings().GetDefaultMaxTokens(textRequest.Model))
}
if model_setting.GetClaudeSettings().ThinkingAdapterEnabled &&
strings.HasSuffix(textRequest.Model, "-thinking") {
// because BudgetTokens must be greater than 1024
if claudeRequest.MaxTokens < 1280 {
@@ -106,15 +109,15 @@ func RequestOpenAI2ClaudeMessage(textRequest dto.GeneralOpenAIRequest) (*ClaudeR
// BudgetTokens is a percentage of max_tokens (default 80%)
claudeRequest.Thinking = &Thinking{
Type: "enabled",
BudgetTokens: int(float64(claudeRequest.MaxTokens) * 0.8),
BudgetTokens: int(float64(claudeRequest.MaxTokens) * model_setting.GetClaudeSettings().ThinkingAdapterBudgetTokensPercentage),
}
// TODO: temporary workaround
// https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking#important-considerations-when-using-extended-thinking
claudeRequest.TopP = 0
claudeRequest.Temperature = common.GetPointer[float64](1.0)
claudeRequest.Model = strings.TrimSuffix(textRequest.Model, "-thinking")
}
if claudeRequest.MaxTokens == 0 {
claudeRequest.MaxTokens = 4096
}
if textRequest.Stop != nil {
// stop maybe string/array string, convert to array string
switch textRequest.Stop.(type) {
@@ -293,7 +296,7 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *ClaudeResponse) (*
response.Object = "chat.completion.chunk"
response.Model = claudeResponse.Model
response.Choices = make([]dto.ChatCompletionsStreamResponseChoice, 0)
tools := make([]dto.ToolCall, 0)
tools := make([]dto.ToolCallResponse, 0)
var choice dto.ChatCompletionsStreamResponseChoice
if reqMode == RequestModeCompletion {
choice.Delta.SetContentString(claudeResponse.Completion)
@@ -312,10 +315,10 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *ClaudeResponse) (*
if claudeResponse.ContentBlock != nil {
//choice.Delta.SetContentString(claudeResponse.ContentBlock.Text)
if claudeResponse.ContentBlock.Type == "tool_use" {
tools = append(tools, dto.ToolCall{
tools = append(tools, dto.ToolCallResponse{
ID: claudeResponse.ContentBlock.Id,
Type: "function",
Function: dto.FunctionCall{
Function: dto.FunctionResponse{
Name: claudeResponse.ContentBlock.Name,
Arguments: "",
},
@@ -330,8 +333,8 @@ func StreamResponseClaude2OpenAI(reqMode int, claudeResponse *ClaudeResponse) (*
choice.Delta.SetContentString(claudeResponse.Delta.Text)
switch claudeResponse.Delta.Type {
case "input_json_delta":
tools = append(tools, dto.ToolCall{
Function: dto.FunctionCall{
tools = append(tools, dto.ToolCallResponse{
Function: dto.FunctionResponse{
Arguments: claudeResponse.Delta.PartialJson,
},
})
@@ -379,7 +382,7 @@ func ResponseClaude2OpenAI(reqMode int, claudeResponse *ClaudeResponse) *dto.Ope
if len(claudeResponse.Content) > 0 {
responseText = claudeResponse.Content[0].Text
}
tools := make([]dto.ToolCall, 0)
tools := make([]dto.ToolCallResponse, 0)
thinkingContent := ""
if reqMode == RequestModeCompletion {
@@ -400,10 +403,10 @@ func ResponseClaude2OpenAI(reqMode int, claudeResponse *ClaudeResponse) *dto.Ope
switch message.Type {
case "tool_use":
args, _ := json.Marshal(message.Input)
tools = append(tools, dto.ToolCall{
tools = append(tools, dto.ToolCallResponse{
ID: message.Id,
Type: "function", // compatible with other OpenAI derivative applications
Function: dto.FunctionCall{
Function: dto.FunctionResponse{
Name: message.Name,
Arguments: string(args),
},
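The thinking-adapter change replaces the hardcoded 0.8 budget factor with a configurable percentage, after first applying the model's default MaxTokens and a floor near 1280 (the exact floor-handling body is elided in the hunk; this sketch assumes a bump to 1280, so the percentage and floor values are illustrative):

```go
package main

import "fmt"

// budgetTokens sketches the new computation:
// BudgetTokens = MaxTokens * pct, with MaxTokens assumed to be
// bumped to at least 1280 first (BudgetTokens must exceed 1024).
func budgetTokens(maxTokens uint, pct float64) (uint, int) {
	if maxTokens < 1280 {
		maxTokens = 1280 // assumed floor; elided in the diff
	}
	return maxTokens, int(float64(maxTokens) * pct)
}

func main() {
	mt, bt := budgetTokens(8192, 0.8)
	fmt.Println(mt, bt) // 8192 6553
	mt, bt = budgetTokens(0, 0.8)
	fmt.Println(mt, bt) // 1280 1024
}
```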

View File

@@ -7,11 +7,11 @@ import (
"io"
"net/http"
"one-api/common"
"one-api/constant"
"one-api/dto"
"one-api/relay/channel"
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/setting/model_setting"
"strings"
@@ -64,15 +64,7 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
}
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
// Get the API version for this model from the map; if absent, fall back to info.ApiVersion or the default "v1beta"
version, beta := constant.GeminiModelMap[info.UpstreamModelName]
if !beta {
if info.ApiVersion != "" {
version = info.ApiVersion
} else {
version = "v1beta"
}
}
version := model_setting.GetGeminiVersionSetting(info.UpstreamModelName)
if strings.HasPrefix(info.UpstreamModelName, "imagen") {
return fmt.Sprintf("%s/%s/models/%s:predict", info.BaseUrl, version, info.UpstreamModelName), nil

View File

@@ -20,4 +20,12 @@ var ModelList = []string{
"imagen-3.0-generate-002",
}
var SafetySettingList = []string{
"HARM_CATEGORY_HARASSMENT",
"HARM_CATEGORY_HATE_SPEECH",
"HARM_CATEGORY_SEXUALLY_EXPLICIT",
"HARM_CATEGORY_DANGEROUS_CONTENT",
"HARM_CATEGORY_CIVIC_INTEGRITY",
}
var ChannelName = "google gemini"

View File

@@ -11,6 +11,7 @@ import (
"one-api/dto"
relaycommon "one-api/relay/common"
"one-api/service"
"one-api/setting/model_setting"
"strings"
"unicode/utf8"
@@ -22,28 +23,7 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
geminiRequest := GeminiChatRequest{
Contents: make([]GeminiChatContent, 0, len(textRequest.Messages)),
SafetySettings: []GeminiChatSafetySettings{
{
Category: "HARM_CATEGORY_HARASSMENT",
Threshold: common.GeminiSafetySetting,
},
{
Category: "HARM_CATEGORY_HATE_SPEECH",
Threshold: common.GeminiSafetySetting,
},
{
Category: "HARM_CATEGORY_SEXUALLY_EXPLICIT",
Threshold: common.GeminiSafetySetting,
},
{
Category: "HARM_CATEGORY_DANGEROUS_CONTENT",
Threshold: common.GeminiSafetySetting,
},
{
Category: "HARM_CATEGORY_CIVIC_INTEGRITY",
Threshold: common.GeminiSafetySetting,
},
},
//SafetySettings: []GeminiChatSafetySettings{},
GenerationConfig: GeminiChatGenerationConfig{
Temperature: textRequest.Temperature,
TopP: textRequest.TopP,
@@ -52,9 +32,18 @@ func CovertGemini2OpenAI(textRequest dto.GeneralOpenAIRequest) (*GeminiChatReque
},
}
safetySettings := make([]GeminiChatSafetySettings, 0, len(SafetySettingList))
for _, category := range SafetySettingList {
safetySettings = append(safetySettings, GeminiChatSafetySettings{
Category: category,
Threshold: model_setting.GetGeminiSafetySetting(category),
})
}
geminiRequest.SafetySettings = safetySettings
// openaiContent.FuncToToolCalls()
if textRequest.Tools != nil {
functions := make([]dto.FunctionCall, 0, len(textRequest.Tools))
functions := make([]dto.FunctionRequest, 0, len(textRequest.Tools))
googleSearch := false
codeExecution := false
for _, tool := range textRequest.Tools {
@@ -349,7 +338,7 @@ func unescapeMapOrSlice(data interface{}) interface{} {
return data
}
func getToolCall(item *GeminiPart) *dto.ToolCall {
func getResponseToolCall(item *GeminiPart) *dto.ToolCallResponse {
var argsBytes []byte
var err error
if result, ok := item.FunctionCall.Arguments.(map[string]interface{}); ok {
@@ -361,10 +350,10 @@ func getToolCall(item *GeminiPart) *dto.ToolCall {
if err != nil {
return nil
}
return &dto.ToolCall{
return &dto.ToolCallResponse{
ID: fmt.Sprintf("call_%s", common.GetUUID()),
Type: "function",
Function: dto.FunctionCall{
Function: dto.FunctionResponse{
Arguments: string(argsBytes),
Name: item.FunctionCall.FunctionName,
},
@@ -379,7 +368,7 @@ func responseGeminiChat2OpenAI(response *GeminiChatResponse) *dto.OpenAITextResp
Choices: make([]dto.OpenAITextResponseChoice, 0, len(response.Candidates)),
}
content, _ := json.Marshal("")
is_tool_call := false
isToolCall := false
for _, candidate := range response.Candidates {
choice := dto.OpenAITextResponseChoice{
Index: int(candidate.Index),
@@ -391,12 +380,12 @@ func responseGeminiChat2OpenAI(response *GeminiChatResponse) *dto.OpenAITextResp
}
if len(candidate.Content.Parts) > 0 {
var texts []string
var tool_calls []dto.ToolCall
var toolCalls []dto.ToolCallResponse
for _, part := range candidate.Content.Parts {
if part.FunctionCall != nil {
choice.FinishReason = constant.FinishReasonToolCalls
if call := getToolCall(&part); call != nil {
tool_calls = append(tool_calls, *call)
if call := getResponseToolCall(&part); call != nil {
toolCalls = append(toolCalls, *call)
}
} else {
if part.ExecutableCode != nil {
@@ -411,9 +400,9 @@ func responseGeminiChat2OpenAI(response *GeminiChatResponse) *dto.OpenAITextResp
}
}
}
if len(tool_calls) > 0 {
choice.Message.SetToolCalls(tool_calls)
is_tool_call = true
if len(toolCalls) > 0 {
choice.Message.SetToolCalls(toolCalls)
isToolCall = true
}
choice.Message.SetStringContent(strings.Join(texts, "\n"))
@@ -429,7 +418,7 @@ func responseGeminiChat2OpenAI(response *GeminiChatResponse) *dto.OpenAITextResp
choice.FinishReason = constant.FinishReasonContentFilter
}
}
if is_tool_call {
if isToolCall {
choice.FinishReason = constant.FinishReasonToolCalls
}
@@ -468,7 +457,7 @@ func streamResponseGeminiChat2OpenAI(geminiResponse *GeminiChatResponse) (*dto.C
for _, part := range candidate.Content.Parts {
if part.FunctionCall != nil {
isTools = true
if call := getToolCall(&part); call != nil {
if call := getResponseToolCall(&part); call != nil {
call.SetIndex(len(choice.Delta.ToolCalls))
choice.Delta.ToolCalls = append(choice.Delta.ToolCalls, *call)
}
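The Gemini refactor above replaces five hardcoded safety-setting entries (all sharing one global threshold) with a loop over `SafetySettingList` that looks up a per-category threshold. A standalone sketch of the loop (the lookup function here is a stand-in for `model_setting.GetGeminiSafetySetting`):

```go
package main

import "fmt"

// the category list added in the gemini constants file
var SafetySettingList = []string{
	"HARM_CATEGORY_HARASSMENT",
	"HARM_CATEGORY_HATE_SPEECH",
	"HARM_CATEGORY_SEXUALLY_EXPLICIT",
	"HARM_CATEGORY_DANGEROUS_CONTENT",
	"HARM_CATEGORY_CIVIC_INTEGRITY",
}

type SafetySetting struct {
	Category  string
	Threshold string
}

// buildSafetySettings mirrors the new per-category lookup loop;
// getSetting stands in for model_setting.GetGeminiSafetySetting.
func buildSafetySettings(getSetting func(string) string) []SafetySetting {
	out := make([]SafetySetting, 0, len(SafetySettingList))
	for _, c := range SafetySettingList {
		out = append(out, SafetySetting{Category: c, Threshold: getSetting(c)})
	}
	return out
}

func main() {
	s := buildSafetySettings(func(string) string { return "OFF" })
	fmt.Println(len(s), s[0].Category, s[0].Threshold) // 5 HARM_CATEGORY_HARASSMENT OFF
}
```

This lets each category carry its own threshold instead of one global `GEMINI_SAFETY_SETTING` value.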

View File

@@ -61,7 +61,7 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
if info.RelayMode == constant.RelayModeRerank {
err, usage = jinaRerankHandler(c, resp)
err, usage = JinaRerankHandler(c, resp)
} else if info.RelayMode == constant.RelayModeEmbeddings {
err, usage = jinaEmbeddingHandler(c, resp)
}

View File

@@ -9,7 +9,7 @@ import (
"one-api/service"
)
func jinaRerankHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
func JinaRerankHandler(c *gin.Context, resp *http.Response) (*dto.OpenAIErrorWithStatusCode, *dto.Usage) {
responseBody, err := io.ReadAll(resp.Body)
if err != nil {
return service.OpenAIErrorWrapper(err, "read_response_body_failed", http.StatusInternalServerError), nil

View File

@@ -3,22 +3,22 @@ package ollama
import "one-api/dto"
type OllamaRequest struct {
Model string `json:"model,omitempty"`
Messages []dto.Message `json:"messages,omitempty"`
Stream bool `json:"stream,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
Seed float64 `json:"seed,omitempty"`
Topp float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
Stop any `json:"stop,omitempty"`
MaxTokens uint `json:"max_tokens,omitempty"`
Tools []dto.ToolCall `json:"tools,omitempty"`
ResponseFormat any `json:"response_format,omitempty"`
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
PresencePenalty float64 `json:"presence_penalty,omitempty"`
Suffix any `json:"suffix,omitempty"`
StreamOptions *dto.StreamOptions `json:"stream_options,omitempty"`
Prompt any `json:"prompt,omitempty"`
Model string `json:"model,omitempty"`
Messages []dto.Message `json:"messages,omitempty"`
Stream bool `json:"stream,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
Seed float64 `json:"seed,omitempty"`
Topp float64 `json:"top_p,omitempty"`
TopK int `json:"top_k,omitempty"`
Stop any `json:"stop,omitempty"`
MaxTokens uint `json:"max_tokens,omitempty"`
Tools []dto.ToolCallRequest `json:"tools,omitempty"`
ResponseFormat any `json:"response_format,omitempty"`
FrequencyPenalty float64 `json:"frequency_penalty,omitempty"`
PresencePenalty float64 `json:"presence_penalty,omitempty"`
Suffix any `json:"suffix,omitempty"`
StreamOptions *dto.StreamOptions `json:"stream_options,omitempty"`
Prompt any `json:"prompt,omitempty"`
}
type Options struct {


@@ -14,6 +14,7 @@ import (
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/channel/ai360"
"one-api/relay/channel/jina"
"one-api/relay/channel/lingyiwanwu"
"one-api/relay/channel/minimax"
"one-api/relay/channel/moonshot"
@@ -146,7 +147,7 @@ func (a *Adaptor) ConvertRequest(c *gin.Context, info *relaycommon.RelayInfo, re
}
func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dto.RerankRequest) (any, error) {
return nil, errors.New("not implemented")
return request, nil
}
func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error) {
@@ -228,6 +229,8 @@ func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycom
err, usage = OpenaiSTTHandler(c, resp, info, a.ResponseFormat)
case constant.RelayModeImagesGenerations:
err, usage = OpenaiTTSHandler(c, resp, info)
case constant.RelayModeRerank:
err, usage = jina.JinaRerankHandler(c, resp)
default:
if info.IsStream {
err, usage = OaiStreamHandler(c, resp, info)


@@ -11,6 +11,7 @@ var ModelList = []string{
"chatgpt-4o-latest",
"gpt-4o", "gpt-4o-2024-05-13", "gpt-4o-2024-08-06", "gpt-4o-2024-11-20",
"gpt-4o-mini", "gpt-4o-mini-2024-07-18",
"gpt-4.5-preview", "gpt-4.5-preview-2025-02-27",
"o1-preview", "o1-preview-2024-09-12",
"o1-mini", "o1-mini-2024-09-12",
"o3-mini", "o3-mini-2025-01-31",


@@ -0,0 +1,74 @@
package openrouter
import (
"errors"
"fmt"
"github.com/gin-gonic/gin"
"io"
"net/http"
"one-api/dto"
"one-api/relay/channel"
"one-api/relay/channel/openai"
relaycommon "one-api/relay/common"
)
type Adaptor struct {
}
func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error) {
//TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
//TODO implement me
return nil, errors.New("not implemented")
}
func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
}
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
return fmt.Sprintf("%s/v1/chat/completions", info.BaseUrl), nil
}
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
channel.SetupApiRequestHeader(info, c, req)
req.Set("Authorization", fmt.Sprintf("Bearer %s", info.ApiKey))
req.Set("HTTP-Referer", "https://github.com/Calcium-Ion/new-api")
req.Set("X-Title", "New API")
return nil
}
func (a *Adaptor) ConvertRequest(c *gin.Context, info *relaycommon.RelayInfo, request *dto.GeneralOpenAIRequest) (any, error) {
return request, nil
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dto.RerankRequest) (any, error) {
return nil, errors.New("not implemented")
}
func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error) {
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *dto.OpenAIErrorWithStatusCode) {
if info.IsStream {
err, usage = openai.OaiStreamHandler(c, resp, info)
} else {
err, usage = openai.OpenaiHandler(c, resp, info.PromptTokens, info.UpstreamModelName)
}
return
}
func (a *Adaptor) GetModelList() []string {
return ModelList
}
func (a *Adaptor) GetChannelName() string {
return ChannelName
}


@@ -0,0 +1,5 @@
package openrouter
var ModelList = []string{}
var ChannelName = "openrouter"


@@ -28,6 +28,7 @@ var claudeModelMap = map[string]string{
"claude-3-opus-20240229": "claude-3-opus@20240229",
"claude-3-haiku-20240307": "claude-3-haiku@20240307",
"claude-3-5-sonnet-20240620": "claude-3-5-sonnet@20240620",
"claude-3-7-sonnet-20250219": "claude-3-7-sonnet@20250219",
}
const anthropicVersion = "vertex-2023-10-16"
@@ -132,7 +133,7 @@ func (a *Adaptor) ConvertRequest(c *gin.Context, info *relaycommon.RelayInfo, re
if err = copier.Copy(vertexClaudeReq, claudeReq); err != nil {
return nil, errors.New("failed to copy claude request")
}
c.Set("request_model", request.Model)
c.Set("request_model", claudeReq.Model)
return vertexClaudeReq, nil
} else if a.RequestMode == RequestModeGemini {
geminiRequest, err := gemini.CovertGemini2OpenAI(*request)
@@ -156,7 +157,6 @@ func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.Rela
return nil, errors.New("not implemented")
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}


@@ -30,6 +30,7 @@ const (
APITypeMokaAI
APITypeVolcEngine
APITypeBaiduV2
APITypeOpenRouter
APITypeDummy // this one is only for count, do not add any channel after this
)
@@ -86,6 +87,8 @@ func ChannelType2APIType(channelType int) (int, bool) {
apiType = APITypeVolcEngine
case common.ChannelTypeBaiduV2:
apiType = APITypeBaiduV2
case common.ChannelTypeOpenRouter:
apiType = APITypeOpenRouter
}
if apiType == -1 {
return APITypeOpenAI, false


@@ -1,6 +1,7 @@
package helper
import (
"fmt"
"github.com/gin-gonic/gin"
"one-api/common"
relaycommon "one-api/relay/common"
@@ -15,7 +16,7 @@ type PriceData struct {
ShouldPreConsumedQuota int
}
func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens int, maxTokens int) PriceData {
func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens int, maxTokens int) (PriceData, error) {
modelPrice, usePrice := common.GetModelPrice(info.OriginModelName, false)
groupRatio := setting.GetGroupRatio(info.Group)
var preConsumedQuota int
@@ -25,7 +26,11 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
if maxTokens != 0 {
preConsumedTokens = promptTokens + maxTokens
}
modelRatio = common.GetModelRatio(info.OriginModelName)
var success bool
modelRatio, success = common.GetModelRatio(info.OriginModelName)
if !success {
return PriceData{}, fmt.Errorf("model %s ratio not found", info.OriginModelName)
}
ratio := modelRatio * groupRatio
preConsumedQuota = int(float64(preConsumedTokens) * ratio)
} else {
@@ -37,5 +42,5 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
GroupRatio: groupRatio,
UsePrice: usePrice,
ShouldPreConsumedQuota: preConsumedQuota,
}
}, nil
}


@@ -75,7 +75,10 @@ func AudioHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
relayInfo.PromptTokens = promptTokens
}
priceData := helper.ModelPriceHelper(c, relayInfo, preConsumedTokens, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, preConsumedTokens, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
userQuota, err := model.GetUserQuota(relayInfo.UserId, false)
if err != nil {


@@ -86,7 +86,10 @@ func ImageHelper(c *gin.Context) *dto.OpenAIErrorWithStatusCode {
imageRequest.Model = relayInfo.UpstreamModelName
priceData := helper.ModelPriceHelper(c, relayInfo, 0, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, 0, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
if !priceData.UsePrice {
// modelRatio 16 = modelPrice $0.04
// per 1 modelRatio = $0.04 / 16


@@ -106,8 +106,10 @@ func TextHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode) {
c.Set("prompt_tokens", promptTokens)
}
priceData := helper.ModelPriceHelper(c, relayInfo, promptTokens, int(textRequest.MaxTokens))
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptTokens, int(textRequest.MaxTokens))
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
// pre-consume quota
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {


@@ -18,6 +18,7 @@ import (
"one-api/relay/channel/mokaai"
"one-api/relay/channel/ollama"
"one-api/relay/channel/openai"
"one-api/relay/channel/openrouter"
"one-api/relay/channel/palm"
"one-api/relay/channel/perplexity"
"one-api/relay/channel/siliconflow"
@@ -83,6 +84,8 @@ func GetAdaptor(apiType int) channel.Adaptor {
return &volcengine.Adaptor{}
case constant.APITypeBaiduV2:
return &baidu_v2.Adaptor{}
case constant.APITypeOpenRouter:
return &openrouter.Adaptor{}
}
return nil
}


@@ -57,8 +57,10 @@ func EmbeddingHelper(c *gin.Context) (openaiErr *dto.OpenAIErrorWithStatusCode)
promptToken := getEmbeddingPromptToken(*embeddingRequest)
relayInfo.PromptTokens = promptToken
priceData := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
// pre-consume quota
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {


@@ -50,8 +50,10 @@ func RerankHelper(c *gin.Context, relayMode int) (openaiErr *dto.OpenAIErrorWith
promptToken := getRerankPromptToken(*rerankRequest)
relayInfo.PromptTokens = promptToken
priceData := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
priceData, err := helper.ModelPriceHelper(c, relayInfo, promptToken, 0)
if err != nil {
return service.OpenAIErrorWrapperLocal(err, "model_price_error", http.StatusInternalServerError)
}
// pre-consume quota
preConsumedQuota, userQuota, openaiErr := preConsumeQuota(c, priceData.ShouldPreConsumedQuota, relayInfo)
if openaiErr != nil {


@@ -65,7 +65,7 @@ func WssHelper(c *gin.Context, ws *websocket.Conn) (openaiErr *dto.OpenAIErrorWi
//if realtimeEvent.Session.MaxResponseOutputTokens != 0 {
// preConsumedTokens = promptTokens + int(realtimeEvent.Session.MaxResponseOutputTokens)
//}
modelRatio = common.GetModelRatio(relayInfo.UpstreamModelName)
modelRatio, _ = common.GetModelRatio(relayInfo.UpstreamModelName)
ratio = modelRatio * groupRatio
preConsumedQuota = int(float64(preConsumedTokens) * ratio)
} else {


@@ -84,6 +84,7 @@ func SetApiRouter(router *gin.Engine) {
channelRoute.GET("/", controller.GetAllChannels)
channelRoute.GET("/search", controller.SearchChannels)
channelRoute.GET("/models", controller.ChannelListModels)
channelRoute.GET("/models_enabled", controller.EnabledListModels)
channelRoute.GET("/:id", controller.GetChannel)
channelRoute.GET("/test", controller.TestAllChannels)
channelRoute.GET("/test/:id", controller.TestChannel)


@@ -75,7 +75,7 @@ func PreWssConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, usag
audioInputTokens := usage.InputTokenDetails.AudioTokens
audioOutTokens := usage.OutputTokenDetails.AudioTokens
groupRatio := setting.GetGroupRatio(relayInfo.Group)
modelRatio := common.GetModelRatio(modelName)
modelRatio, _ := common.GetModelRatio(modelName)
quotaInfo := QuotaInfo{
InputDetails: TokenDetails{


@@ -1,7 +1,6 @@
package service
import (
"encoding/json"
"errors"
"fmt"
"image"
@@ -170,12 +169,7 @@ func CountTokenChatRequest(info *relaycommon.RelayInfo, request dto.GeneralOpenA
}
tkm += msgTokens
if request.Tools != nil {
toolsData, _ := json.Marshal(request.Tools)
var openaiTools []dto.OpenAITools
err := json.Unmarshal(toolsData, &openaiTools)
if err != nil {
return 0, errors.New(fmt.Sprintf("count_tools_token_fail: %s", err.Error()))
}
openaiTools := request.Tools
countStr := ""
for _, tool := range openaiTools {
countStr = tool.Function.Name

setting/config/config.go Normal file

@@ -0,0 +1,259 @@
package config
import (
"encoding/json"
"one-api/common"
"reflect"
"strconv"
"strings"
"sync"
)
// ConfigManager manages all configuration modules in one place
type ConfigManager struct {
configs map[string]interface{}
mutex sync.RWMutex
}
var GlobalConfig = NewConfigManager()
func NewConfigManager() *ConfigManager {
return &ConfigManager{
configs: make(map[string]interface{}),
}
}
// Register registers a configuration module
func (cm *ConfigManager) Register(name string, config interface{}) {
cm.mutex.Lock()
defer cm.mutex.Unlock()
cm.configs[name] = config
}
// Get returns the named configuration module
func (cm *ConfigManager) Get(name string) interface{} {
cm.mutex.RLock()
defer cm.mutex.RUnlock()
return cm.configs[name]
}
// LoadFromDB loads configuration from the database
func (cm *ConfigManager) LoadFromDB(options map[string]string) error {
cm.mutex.Lock()
defer cm.mutex.Unlock()
for name, config := range cm.configs {
prefix := name + "."
configMap := make(map[string]string)
// collect all options belonging to this config
for key, value := range options {
if strings.HasPrefix(key, prefix) {
configKey := strings.TrimPrefix(key, prefix)
configMap[configKey] = value
}
}
// update the config if any matching options were found
if len(configMap) > 0 {
if err := updateConfigFromMap(config, configMap); err != nil {
common.SysError("failed to update config " + name + ": " + err.Error())
continue
}
}
}
return nil
}
// SaveToDB persists the configuration to the database
func (cm *ConfigManager) SaveToDB(updateFunc func(key, value string) error) error {
cm.mutex.RLock()
defer cm.mutex.RUnlock()
for name, config := range cm.configs {
configMap, err := configToMap(config)
if err != nil {
return err
}
for key, value := range configMap {
dbKey := name + "." + key
if err := updateFunc(dbKey, value); err != nil {
return err
}
}
}
return nil
}
// helper: convert a config object to a map
func configToMap(config interface{}) (map[string]string, error) {
result := make(map[string]string)
val := reflect.ValueOf(config)
if val.Kind() == reflect.Ptr {
val = val.Elem()
}
if val.Kind() != reflect.Struct {
return nil, nil
}
typ := val.Type()
for i := 0; i < val.NumField(); i++ {
field := val.Field(i)
fieldType := typ.Field(i)
// skip unexported fields
if !fieldType.IsExported() {
continue
}
// use the json tag as the key name
key := fieldType.Tag.Get("json")
if key == "" || key == "-" {
key = fieldType.Name
}
// handle fields by kind
var strValue string
switch field.Kind() {
case reflect.String:
strValue = field.String()
case reflect.Bool:
strValue = strconv.FormatBool(field.Bool())
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
strValue = strconv.FormatInt(field.Int(), 10)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
strValue = strconv.FormatUint(field.Uint(), 10)
case reflect.Float32, reflect.Float64:
strValue = strconv.FormatFloat(field.Float(), 'f', -1, 64)
case reflect.Map, reflect.Slice, reflect.Struct:
// complex types are JSON-serialized
bytes, err := json.Marshal(field.Interface())
if err != nil {
return nil, err
}
strValue = string(bytes)
default:
// skip unsupported kinds
continue
}
result[key] = strValue
}
return result, nil
}
// helper: update a config object from a map
func updateConfigFromMap(config interface{}, configMap map[string]string) error {
val := reflect.ValueOf(config)
if val.Kind() != reflect.Ptr {
return nil
}
val = val.Elem()
if val.Kind() != reflect.Struct {
return nil
}
typ := val.Type()
for i := 0; i < val.NumField(); i++ {
field := val.Field(i)
fieldType := typ.Field(i)
// skip unexported fields
if !fieldType.IsExported() {
continue
}
// use the json tag as the key name
key := fieldType.Tag.Get("json")
if key == "" || key == "-" {
key = fieldType.Name
}
// check whether the map holds a value for this key
strValue, ok := configMap[key]
if !ok {
continue
}
// set the value according to the field kind
if !field.CanSet() {
continue
}
switch field.Kind() {
case reflect.String:
field.SetString(strValue)
case reflect.Bool:
boolValue, err := strconv.ParseBool(strValue)
if err != nil {
continue
}
field.SetBool(boolValue)
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
intValue, err := strconv.ParseInt(strValue, 10, 64)
if err != nil {
continue
}
field.SetInt(intValue)
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
uintValue, err := strconv.ParseUint(strValue, 10, 64)
if err != nil {
continue
}
field.SetUint(uintValue)
case reflect.Float32, reflect.Float64:
floatValue, err := strconv.ParseFloat(strValue, 64)
if err != nil {
continue
}
field.SetFloat(floatValue)
case reflect.Map, reflect.Slice, reflect.Struct:
// complex types are JSON-deserialized
err := json.Unmarshal([]byte(strValue), field.Addr().Interface())
if err != nil {
continue
}
}
}
return nil
}
// ConfigToMap is the exported wrapper for configToMap
func ConfigToMap(config interface{}) (map[string]string, error) {
return configToMap(config)
}
// UpdateConfigFromMap is the exported wrapper for updateConfigFromMap
func UpdateConfigFromMap(config interface{}, configMap map[string]string) error {
return updateConfigFromMap(config, configMap)
}
// ExportAllConfigs exports all registered configs as a flat map
func (cm *ConfigManager) ExportAllConfigs() map[string]string {
cm.mutex.RLock()
defer cm.mutex.RUnlock()
result := make(map[string]string)
for name, cfg := range cm.configs {
configMap, err := ConfigToMap(cfg)
if err != nil {
continue
}
// add entries to the result using the "module.key" format
for key, value := range configMap {
result[name+"."+key] = value
}
}
return result
}
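`LoadFromDB` above first groups the flat `module.key` option rows by module prefix before applying each group to its registered struct. That grouping step can be sketched on its own (illustrative helper name, assuming only the string-prefix behavior shown above):

```go
package main

import (
	"fmt"
	"strings"
)

// splitOptions collects the options belonging to one module,
// mirroring the prefix-trimming loop inside ConfigManager.LoadFromDB.
func splitOptions(options map[string]string, module string) map[string]string {
	prefix := module + "."
	out := make(map[string]string)
	for k, v := range options {
		if strings.HasPrefix(k, prefix) {
			out[strings.TrimPrefix(k, prefix)] = v
		}
	}
	return out
}

func main() {
	opts := map[string]string{
		"claude.thinking_adapter_enabled": "true",
		"gemini.safety_settings":          `{"default":"OFF"}`,
	}
	// Only the claude.* rows survive, with the prefix stripped.
	fmt.Println(splitOptions(opts, "claude"))
}
```

`ExportAllConfigs` is the inverse direction: it re-attaches the `module.` prefix so the frontend can read everything through the flat `/api/option/` list.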


@@ -0,0 +1,65 @@
package model_setting
import (
"net/http"
"one-api/setting/config"
)
//var claudeHeadersSettings = map[string][]string{}
//
//var ClaudeThinkingAdapterEnabled = true
//var ClaudeThinkingAdapterMaxTokens = 8192
//var ClaudeThinkingAdapterBudgetTokensPercentage = 0.8
// ClaudeSettings defines the configuration for Claude models
type ClaudeSettings struct {
HeadersSettings map[string]map[string][]string `json:"model_headers_settings"`
DefaultMaxTokens map[string]int `json:"default_max_tokens"`
ThinkingAdapterEnabled bool `json:"thinking_adapter_enabled"`
ThinkingAdapterBudgetTokensPercentage float64 `json:"thinking_adapter_budget_tokens_percentage"`
}
// default configuration
var defaultClaudeSettings = ClaudeSettings{
HeadersSettings: map[string]map[string][]string{},
ThinkingAdapterEnabled: true,
DefaultMaxTokens: map[string]int{
"default": 8192,
},
ThinkingAdapterBudgetTokensPercentage: 0.8,
}
// global instance
var claudeSettings = defaultClaudeSettings
func init() {
// register with the global config manager
config.GlobalConfig.Register("claude", &claudeSettings)
}
// GetClaudeSettings returns the Claude configuration
func GetClaudeSettings() *ClaudeSettings {
// ensure DefaultMaxTokens always contains the "default" key
if _, ok := claudeSettings.DefaultMaxTokens["default"]; !ok {
claudeSettings.DefaultMaxTokens["default"] = 8192
}
return &claudeSettings
}
func (c *ClaudeSettings) WriteHeaders(originModel string, httpHeader *http.Header) {
if headers, ok := c.HeadersSettings[originModel]; ok {
for headerKey, headerValues := range headers {
httpHeader.Del(headerKey)
for _, headerValue := range headerValues {
httpHeader.Add(headerKey, headerValue)
}
}
}
}
func (c *ClaudeSettings) GetDefaultMaxTokens(model string) int {
if maxTokens, ok := c.DefaultMaxTokens[model]; ok {
return maxTokens
}
return c.DefaultMaxTokens["default"]
}
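`GetDefaultMaxTokens` resolves a per-model entry first and only then falls back to the mandatory `"default"` entry. A standalone sketch of that lookup (hypothetical free function mirroring just the fallback behavior):

```go
package main

import "fmt"

// defaultMaxTokens mirrors the lookup-then-fallback pattern:
// a model-specific entry wins; otherwise the "default" entry applies.
func defaultMaxTokens(m map[string]int, model string) int {
	if v, ok := m[model]; ok {
		return v
	}
	return m["default"]
}

func main() {
	cfg := map[string]int{
		"default":                             8192,
		"claude-3-7-sonnet-20250219-thinking": 16384,
	}
	fmt.Println(defaultMaxTokens(cfg, "claude-3-7-sonnet-20250219-thinking"))
	fmt.Println(defaultMaxTokens(cfg, "claude-3-5-sonnet-20240620"))
}
```

This is why `GetClaudeSettings` repairs a missing `"default"` key before handing out the settings: without it the fallback branch would return zero.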


@@ -0,0 +1,52 @@
package model_setting
import (
"one-api/setting/config"
)
// GeminiSettings defines the configuration for Gemini models
type GeminiSettings struct {
SafetySettings map[string]string `json:"safety_settings"`
VersionSettings map[string]string `json:"version_settings"`
}
// default configuration
var defaultGeminiSettings = GeminiSettings{
SafetySettings: map[string]string{
"default": "OFF",
"HARM_CATEGORY_CIVIC_INTEGRITY": "BLOCK_NONE",
},
VersionSettings: map[string]string{
"default": "v1beta",
"gemini-1.0-pro": "v1",
},
}
// global instance
var geminiSettings = defaultGeminiSettings
func init() {
// register with the global config manager
config.GlobalConfig.Register("gemini", &geminiSettings)
}
// GetGeminiSettings returns the Gemini configuration
func GetGeminiSettings() *GeminiSettings {
return &geminiSettings
}
// GetGeminiSafetySetting returns the safety setting for the given key
func GetGeminiSafetySetting(key string) string {
if value, ok := geminiSettings.SafetySettings[key]; ok {
return value
}
return geminiSettings.SafetySettings["default"]
}
// GetGeminiVersionSetting returns the API version for the given key
func GetGeminiVersionSetting(key string) string {
if value, ok := geminiSettings.VersionSettings[key]; ok {
return value
}
return geminiSettings.VersionSettings["default"]
}


@@ -0,0 +1,83 @@
import React, { useEffect, useState } from 'react';
import { Card, Spin, Tabs } from '@douyinfe/semi-ui';
import { API, showError, showSuccess } from '../helpers';
import { useTranslation } from 'react-i18next';
import SettingGeminiModel from '../pages/Setting/Model/SettingGeminiModel.js';
import SettingClaudeModel from '../pages/Setting/Model/SettingClaudeModel.js';
const ModelSetting = () => {
const { t } = useTranslation();
let [inputs, setInputs] = useState({
'gemini.safety_settings': '',
'gemini.version_settings': '',
'claude.model_headers_settings': '',
'claude.thinking_adapter_enabled': true,
'claude.default_max_tokens': '',
'claude.thinking_adapter_budget_tokens_percentage': 0.8,
});
let [loading, setLoading] = useState(false);
const getOptions = async () => {
const res = await API.get('/api/option/');
const { success, message, data } = res.data;
if (success) {
let newInputs = {};
data.forEach((item) => {
if (
item.key === 'gemini.safety_settings' ||
item.key === 'gemini.version_settings' ||
item.key === 'claude.model_headers_settings'||
item.key === 'claude.default_max_tokens'
) {
item.value = JSON.stringify(JSON.parse(item.value), null, 2);
}
if (
item.key.endsWith('Enabled')
) {
newInputs[item.key] = item.value === 'true' ? true : false;
} else {
newInputs[item.key] = item.value;
}
});
setInputs(newInputs);
} else {
showError(message);
}
};
async function onRefresh() {
try {
setLoading(true);
await getOptions();
// showSuccess('刷新成功');
} catch (error) {
showError('刷新失败');
} finally {
setLoading(false);
}
}
useEffect(() => {
onRefresh();
}, []);
return (
<>
<Spin spinning={loading} size='large'>
{/* Gemini */}
<Card style={{ marginTop: '10px' }}>
<SettingGeminiModel options={inputs} refresh={onRefresh} />
</Card>
{/* Claude */}
<Card style={{ marginTop: '10px' }}>
<SettingClaudeModel options={inputs} refresh={onRefresh} />
</Card>
</Spin>
</>
);
};
export default ModelSetting;


@@ -16,6 +16,7 @@ import ModelRatioSettings from '../pages/Setting/Operation/ModelRatioSettings.js
import { API, showError, showSuccess } from '../helpers';
import SettingsChats from '../pages/Setting/Operation/SettingsChats.js';
import { useTranslation } from 'react-i18next';
import ModelRatioNotSetEditor from '../pages/Setting/Operation/ModelRationNotSetEditor.js';
const OperationSetting = () => {
const { t } = useTranslation();
@@ -158,6 +159,9 @@ const OperationSetting = () => {
<Tabs.TabPane tab={t('可视化倍率设置')} itemKey="visual">
<ModelSettingsVisualEditor options={inputs} refresh={onRefresh} />
</Tabs.TabPane>
<Tabs.TabPane tab={t('未设置倍率模型')} itemKey="unset_models">
<ModelRatioNotSetEditor options={inputs} refresh={onRefresh} />
</Tabs.TabPane>
</Tabs>
</Card>
</Spin>


@@ -376,7 +376,7 @@ const UsersTable = () => {
if (searchKeyword === '') {
await loadUsers(activePage, pageSize);
} else {
await searchUsers(searchKeyword, searchGroup);
await searchUsers(activePage, pageSize, searchKeyword, searchGroup);
}
};


@@ -1074,10 +1074,9 @@
"删除所选通道": "Delete selected channels",
"标签聚合模式": "Enable tag mode",
"没有账户?": "No account? ",
"注意,模型部署名称必须和模型名称保持一致,因为 One API 会把请求体中的 model 参数替换为你的部署名称(模型名称中的点会被剔除)": "Note: The model deployment name must match the model name because One API will replace the model parameter in the request body with your deployment name (dots in the model name will be removed)",
"请输入 AZURE_OPENAI_ENDPOINT例如https://docs-test-001.openai.azure.com": "Please enter AZURE_OPENAI_ENDPOINT, e.g.: https://docs-test-001.openai.azure.com",
"默认 API 版本": "Default API Version",
"请输入默认 API 版本例如2023-06-01-preview,该配置可以被实际的请求查询参数所覆盖": "Please enter default API version, e.g.: 2023-06-01-preview. This configuration can be overridden by actual request query parameters",
"请输入默认 API 版本例如2024-12-01-preview": "Please enter default API version, e.g.: 2024-12-01-preview.",
"请为渠道命名": "Please name the channel",
"请选择可以使用该渠道的分组": "Please select groups that can use this channel",
"请在系统设置页面编辑分组倍率以添加新的分组:": "Please edit Group ratios in system settings to add new groups:",
@@ -1281,5 +1280,42 @@
"频率限制的周期(分钟)": "Rate limit period (minutes)",
"只包括请求成功的次数": "Only include successful request times",
"保存模型速率限制": "Save model rate limit settings",
"速率限制设置": "Rate limit settings"
"速率限制设置": "Rate limit settings",
"获取启用模型失败:": "Failed to get enabled models:",
"获取启用模型失败": "Failed to get enabled models",
"JSON解析错误:": "JSON parsing error:",
"保存失败:": "Save failed:",
"输入模型倍率": "Enter model ratio",
"输入补全倍率": "Enter completion ratio",
"请输入数字": "Please enter a number",
"模型名称已存在": "Model name already exists",
"添加成功": "Added successfully",
"请先选择需要批量设置的模型": "Please select models for batch setting first",
"请输入模型倍率和补全倍率": "Please enter model ratio and completion ratio",
"请输入有效的数字": "Please enter a valid number",
"请输入填充值": "Please enter a value",
"批量设置成功": "Batch setting successful",
"已为 {{count}} 个模型设置{{type}}": "Set {{type}} for {{count}} models",
"固定价格": "Fixed Price",
"模型倍率和补全倍率": "Model Ratio and Completion Ratio",
"批量设置": "Batch Setting",
"搜索模型名称": "Search model name",
"此页面仅显示未设置价格或倍率的模型,设置后将自动从列表中移除": "This page only shows models without price or ratio settings. After setting, they will be automatically removed from the list",
"没有未设置的模型": "No unconfigured models",
"定价模式": "Pricing Mode",
"固定价格(每次)": "Fixed Price (per use)",
"输入每次价格": "Enter per-use price",
"批量设置模型参数": "Batch Set Model Parameters",
"设置类型": "Setting Type",
"模型倍率值": "Model Ratio Value",
"补全倍率值": "Completion Ratio Value",
"请输入模型倍率": "Enter model ratio",
"请输入补全倍率": "Enter completion ratio",
"请输入数值": "Enter a value",
"将为选中的 ": "Will set for selected ",
" 个模型设置相同的值": " models with the same value",
"当前设置类型: ": "Current setting type: ",
"固定价格值": "Fixed Price Value",
"未设置倍率模型": "Models without ratio settings",
"模型倍率和补全倍率同时设置": "Both model ratio and completion ratio are set"
}


@@ -327,9 +327,6 @@ const EditChannel = (props) => {
localInputs.base_url.length - 1
);
}
if (localInputs.type === 3 && localInputs.other === '') {
localInputs.other = '2023-06-01-preview';
}
if (localInputs.type === 18 && localInputs.other === '') {
localInputs.other = 'v2.1';
}
@@ -494,7 +491,7 @@ const EditChannel = (props) => {
<Input
label={t('默认 API 版本')}
name="azure_other"
placeholder={t('请输入默认 API 版本例如2023-06-01-preview,该配置可以被实际的请求查询参数所覆盖')}
placeholder={t('请输入默认 API 版本例如2024-12-01-preview')}
onChange={(value) => {
handleInputChange('other', value);
}}


@@ -0,0 +1,169 @@
import React, { useEffect, useState, useRef } from 'react';
import { Button, Col, Form, Row, Spin } from '@douyinfe/semi-ui';
import {
compareObjects,
API,
showError,
showSuccess,
showWarning, verifyJSON
} from '../../../helpers';
import { useTranslation } from 'react-i18next';
import Text from '@douyinfe/semi-ui/lib/es/typography/text';
const CLAUDE_HEADER = {
'claude-3-7-sonnet-20250219-thinking': {
'anthropic-beta': ['output-128k-2025-02-19', 'token-efficient-tools-2025-02-19'],
}
};
const CLAUDE_DEFAULT_MAX_TOKENS = {
'default': 8192,
'claude-3-7-sonnet-20250219-thinking': 8192,
}
export default function SettingClaudeModel(props) {
const { t } = useTranslation();
const [loading, setLoading] = useState(false);
const [inputs, setInputs] = useState({
'claude.model_headers_settings': '',
'claude.thinking_adapter_enabled': true,
'claude.default_max_tokens': '',
'claude.thinking_adapter_budget_tokens_percentage': 0.8,
});
const refForm = useRef();
const [inputsRow, setInputsRow] = useState(inputs);
function onSubmit() {
const updateArray = compareObjects(inputs, inputsRow);
if (!updateArray.length) return showWarning(t('你似乎并没有修改什么'));
const requestQueue = updateArray.map((item) => {
let value = String(inputs[item.key]);
return API.put('/api/option/', {
key: item.key,
value,
});
});
setLoading(true);
Promise.all(requestQueue)
.then((res) => {
if (requestQueue.length === 1) {
if (res.includes(undefined)) return;
} else if (requestQueue.length > 1) {
if (res.includes(undefined)) return showError(t('部分保存失败,请重试'));
}
showSuccess(t('保存成功'));
props.refresh();
})
.catch(() => {
showError(t('保存失败,请重试'));
})
.finally(() => {
setLoading(false);
});
}
useEffect(() => {
const currentInputs = {};
for (let key in props.options) {
if (Object.keys(inputs).includes(key)) {
currentInputs[key] = props.options[key];
}
}
setInputs(currentInputs);
setInputsRow(structuredClone(currentInputs));
refForm.current.setValues(currentInputs);
}, [props.options]);
return (
<>
<Spin spinning={loading}>
<Form
values={inputs}
getFormApi={(formAPI) => (refForm.current = formAPI)}
style={{ marginBottom: 15 }}
>
<Form.Section text={t('Claude设置')}>
<Row>
<Col span={16}>
<Form.TextArea
label={t('Claude请求头覆盖')}
field={'claude.model_headers_settings'}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(CLAUDE_HEADER, null, 2)}
extraText={t('示例') + '\n' + JSON.stringify(CLAUDE_HEADER, null, 2)}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => verifyJSON(value),
message: t('不是合法的 JSON 字符串')
}
]}
onChange={(value) => setInputs({ ...inputs, 'claude.model_headers_settings': value })}
/>
</Col>
</Row>
<Row>
<Col span={8}>
<Form.TextArea
label={t('缺省 MaxTokens')}
field={'claude.default_max_tokens'}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(CLAUDE_DEFAULT_MAX_TOKENS, null, 2)}
extraText={t('示例') + '\n' + JSON.stringify(CLAUDE_DEFAULT_MAX_TOKENS, null, 2)}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => verifyJSON(value),
message: t('不是合法的 JSON 字符串')
}
]}
onChange={(value) => setInputs({ ...inputs, 'claude.default_max_tokens': value })}
/>
</Col>
</Row>
<Row>
<Col span={16}>
<Form.Switch
label={t('启用Claude思考适配-thinking后缀')}
field={'claude.thinking_adapter_enabled'}
onChange={(value) => setInputs({ ...inputs, 'claude.thinking_adapter_enabled': value })}
/>
</Col>
</Row>
<Row>
<Col span={16}>
{/* show the MaxTokens/BudgetTokens formula along with the actual numbers */}
<Text>
{t('Claude思考适配 BudgetTokens = MaxTokens * BudgetTokens 百分比')}
</Text>
</Col>
</Row>
<Row>
<Col span={8}>
<Form.InputNumber
label={t('思考适配 BudgetTokens 百分比')}
field={'claude.thinking_adapter_budget_tokens_percentage'}
initValue={''}
extraText={t('0.1-1之间的小数')}
min={0.1}
max={1}
onChange={(value) => setInputs({ ...inputs, 'claude.thinking_adapter_budget_tokens_percentage': value })}
/>
</Col>
</Row>
<Row>
<Button size='default' onClick={onSubmit}>
{t('保存')}
</Button>
</Row>
</Form.Section>
</Form>
</Spin>
</>
);
}
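The formula this form displays, `BudgetTokens = MaxTokens * BudgetTokens percentage`, reduces to a one-liner. A sketch (illustrative helper; truncating to an integer token count is an assumption, not confirmed by the diff):

```go
package main

import "fmt"

// thinkingBudgetTokens applies the formula shown in the settings UI,
// truncating the product to a whole token count (assumed behavior).
func thinkingBudgetTokens(maxTokens int, percentage float64) int {
	return int(float64(maxTokens) * percentage)
}

func main() {
	// The shipped defaults: 8192 max tokens at an 0.8 budget percentage.
	fmt.Println(thinkingBudgetTokens(8192, 0.8))
}
```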


@@ -0,0 +1,139 @@
import React, { useEffect, useState, useRef } from 'react';
import { Button, Col, Form, Row, Spin } from '@douyinfe/semi-ui';
import {
compareObjects,
API,
showError,
showSuccess,
showWarning, verifyJSON
} from '../../../helpers';
import { useTranslation } from 'react-i18next';
const GEMINI_SETTING_EXAMPLE = {
'default': 'OFF',
'HARM_CATEGORY_CIVIC_INTEGRITY': 'BLOCK_NONE',
};
const GEMINI_VERSION_EXAMPLE = {
'default': 'v1beta',
};
export default function SettingGeminiModel(props) {
const { t } = useTranslation();
const [loading, setLoading] = useState(false);
const [inputs, setInputs] = useState({
'gemini.safety_settings': '',
'gemini.version_settings': '',
});
const refForm = useRef();
const [inputsRow, setInputsRow] = useState(inputs);
function onSubmit() {
const updateArray = compareObjects(inputs, inputsRow);
if (!updateArray.length) return showWarning(t('你似乎并没有修改什么'));
const requestQueue = updateArray.map((item) => {
let value = '';
if (typeof inputs[item.key] === 'boolean') {
value = String(inputs[item.key]);
} else {
value = inputs[item.key];
}
return API.put('/api/option/', {
key: item.key,
value,
});
});
setLoading(true);
Promise.all(requestQueue)
.then((res) => {
if (requestQueue.length === 1) {
if (res.includes(undefined)) return;
} else if (requestQueue.length > 1) {
if (res.includes(undefined)) return showError(t('部分保存失败,请重试'));
}
showSuccess(t('保存成功'));
props.refresh();
})
.catch(() => {
showError(t('保存失败,请重试'));
})
.finally(() => {
setLoading(false);
});
}
useEffect(() => {
const currentInputs = {};
for (let key in props.options) {
if (Object.keys(inputs).includes(key)) {
currentInputs[key] = props.options[key];
}
}
setInputs(currentInputs);
setInputsRow(structuredClone(currentInputs));
refForm.current.setValues(currentInputs);
}, [props.options]);
return (
<>
<Spin spinning={loading}>
<Form
values={inputs}
getFormApi={(formAPI) => (refForm.current = formAPI)}
style={{ marginBottom: 15 }}
>
<Form.Section text={t('Gemini设置')}>
<Row>
<Col span={16}>
<Form.TextArea
label={t('Gemini安全设置')}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(GEMINI_SETTING_EXAMPLE, null, 2)}
field={'gemini.safety_settings'}
extraText={t('default为默认设置可单独设置每个分类的安全等级')}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => verifyJSON(value),
message: t('不是合法的 JSON 字符串')
}
]}
onChange={(value) => setInputs({ ...inputs, 'gemini.safety_settings': value })}
/>
</Col>
</Row>
<Row>
<Col span={16}>
<Form.TextArea
label={t('Gemini版本设置')}
placeholder={t('为一个 JSON 文本,例如:') + '\n' + JSON.stringify(GEMINI_VERSION_EXAMPLE, null, 2)}
field={'gemini.version_settings'}
extraText={t('default为默认设置可单独设置每个模型的版本')}
autosize={{ minRows: 6, maxRows: 12 }}
trigger='blur'
stopValidateWithError
rules={[
{
validator: (rule, value) => verifyJSON(value),
message: t('不是合法的 JSON 字符串')
}
]}
onChange={(value) => setInputs({ ...inputs, 'gemini.version_settings': value })}
/>
</Col>
</Row>
<Row>
<Button size='default' onClick={onSubmit}>
{t('保存')}
</Button>
</Row>
</Form.Section>
</Form>
</Spin>
</>
);
}
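Both settings panels follow the same save pattern: diff the current inputs against the snapshot taken on load (`inputsRow`), then issue one `PUT /api/option/` per changed key. A simplified stand-in for the diffing step (the real `compareObjects` lives in `helpers` and may differ in detail):

```javascript
// Simplified sketch of the compareObjects helper imported above:
// returns one { key } entry per field whose value changed since load,
// which onSubmit then maps to individual PUT /api/option/ requests.
function compareObjects(current, snapshot) {
  return Object.keys(current)
    .filter((key) => current[key] !== snapshot[key])
    .map((key) => ({ key }));
}
```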


@@ -0,0 +1,550 @@
import React, { useEffect, useState } from 'react';
import { Table, Button, Input, Modal, Form, Space, Typography, Radio, Notification } from '@douyinfe/semi-ui';
import { IconDelete, IconPlus, IconSearch, IconSave, IconBolt } from '@douyinfe/semi-icons';
import { showError, showSuccess } from '../../../helpers';
import { API } from '../../../helpers';
import { useTranslation } from 'react-i18next';
export default function ModelRatioNotSetEditor(props) {
const { t } = useTranslation();
const [models, setModels] = useState([]);
const [visible, setVisible] = useState(false);
const [batchVisible, setBatchVisible] = useState(false);
const [currentModel, setCurrentModel] = useState(null);
const [searchText, setSearchText] = useState('');
const [currentPage, setCurrentPage] = useState(1);
const [pageSize, setPageSize] = useState(10);
const [loading, setLoading] = useState(false);
const [enabledModels, setEnabledModels] = useState([]);
const [selectedRowKeys, setSelectedRowKeys] = useState([]);
const [batchFillType, setBatchFillType] = useState('ratio');
const [batchFillValue, setBatchFillValue] = useState('');
const [batchRatioValue, setBatchRatioValue] = useState('');
const [batchCompletionRatioValue, setBatchCompletionRatioValue] = useState('');
const { Text } = Typography;
// Available page-size options
const pageSizeOptions = [10, 20, 50, 100];
const getAllEnabledModels = async () => {
try {
const res = await API.get('/api/channel/models_enabled');
const { success, message, data } = res.data;
if (success) {
setEnabledModels(data);
} else {
showError(message);
}
} catch (error) {
console.error(t('获取启用模型失败:'), error);
showError(t('获取启用模型失败'));
}
}
useEffect(() => {
// Fetch all enabled models
getAllEnabledModels();
}, []);
useEffect(() => {
try {
const modelPrice = JSON.parse(props.options.ModelPrice || '{}');
const modelRatio = JSON.parse(props.options.ModelRatio || '{}');
const completionRatio = JSON.parse(props.options.CompletionRatio || '{}');
// Find all models with neither a price nor ratios configured
const unsetModels = enabledModels.filter(modelName => {
const hasPrice = modelPrice[modelName] !== undefined;
const hasRatio = modelRatio[modelName] !== undefined;
const hasCompletionRatio = completionRatio[modelName] !== undefined;
// Show the model only if it has neither a fixed price nor a complete ratio pair
return !(hasPrice || (hasRatio && hasCompletionRatio));
});
// Build the table rows
const modelData = unsetModels.map(name => ({
name,
price: '',
ratio: '',
completionRatio: ''
}));
setModels(modelData);
// Clear the selection
setSelectedRowKeys([]);
} catch (error) {
console.error(t('JSON解析错误:'), error);
}
}, [props.options, enabledModels]);
// Pagination helper
const getPagedData = (data, currentPage, pageSize) => {
const start = (currentPage - 1) * pageSize;
const end = start + pageSize;
return data.slice(start, end);
};
// Handle page-size changes
const handlePageSizeChange = (size) => {
setPageSize(size);
// Recompute the current page so rows are not lost
const totalPages = Math.ceil(filteredModels.length / size);
if (currentPage > totalPages) {
setCurrentPage(totalPages || 1);
}
};
// Apply search filtering before the return statement
const filteredModels = models.filter(model =>
searchText ? model.name.toLowerCase().includes(searchText.toLowerCase()) : true
);
// Then paginate the filtered data
const pagedData = getPagedData(filteredModels, currentPage, pageSize);
const SubmitData = async () => {
setLoading(true);
const output = {
ModelPrice: JSON.parse(props.options.ModelPrice || '{}'),
ModelRatio: JSON.parse(props.options.ModelRatio || '{}'),
CompletionRatio: JSON.parse(props.options.CompletionRatio || '{}')
};
try {
// Data conversion - only process models the user actually edited
models.forEach(model => {
// Only update when the user entered a value
if (model.price !== '') {
// A non-empty price wins: parse it as a float and ignore the ratio fields
output.ModelPrice[model.name] = parseFloat(model.price);
} else {
if (model.ratio !== '') output.ModelRatio[model.name] = parseFloat(model.ratio);
if (model.completionRatio !== '') output.CompletionRatio[model.name] = parseFloat(model.completionRatio);
}
});
// Prepare the API request queue
const finalOutput = {
ModelPrice: JSON.stringify(output.ModelPrice, null, 2),
ModelRatio: JSON.stringify(output.ModelRatio, null, 2),
CompletionRatio: JSON.stringify(output.CompletionRatio, null, 2)
};
const requestQueue = Object.entries(finalOutput).map(([key, value]) => {
return API.put('/api/option/', {
key,
value
});
});
// Fire all requests in parallel
const results = await Promise.all(requestQueue);
// Validate the results
if (requestQueue.length === 1) {
if (results.includes(undefined)) return;
} else if (requestQueue.length > 1) {
if (results.includes(undefined)) {
return showError(t('部分保存失败,请重试'));
}
}
// Check each individual response
for (const res of results) {
if (!res.data.success) {
return showError(res.data.message);
}
}
showSuccess(t('保存成功'));
props.refresh();
// Re-fetch to refresh the list of unset models
getAllEnabledModels();
} catch (error) {
console.error(t('保存失败:'), error);
showError(t('保存失败,请重试'));
} finally {
setLoading(false);
}
};
const columns = [
{
title: t('模型名称'),
dataIndex: 'name',
key: 'name',
},
{
title: t('模型固定价格'),
dataIndex: 'price',
key: 'price',
render: (text, record) => (
<Input
value={text}
placeholder={t('按量计费')}
onChange={value => updateModel(record.name, 'price', value)}
/>
)
},
{
title: t('模型倍率'),
dataIndex: 'ratio',
key: 'ratio',
render: (text, record) => (
<Input
value={text}
placeholder={record.price !== '' ? t('模型倍率') : t('输入模型倍率')}
disabled={record.price !== ''}
onChange={value => updateModel(record.name, 'ratio', value)}
/>
)
},
{
title: t('补全倍率'),
dataIndex: 'completionRatio',
key: 'completionRatio',
render: (text, record) => (
<Input
value={text}
placeholder={record.price !== '' ? t('补全倍率') : t('输入补全倍率')}
disabled={record.price !== ''}
onChange={value => updateModel(record.name, 'completionRatio', value)}
/>
)
}
];
const updateModel = (name, field, value) => {
if (value !== '' && isNaN(value)) {
showError(t('请输入数字'));
return;
}
setModels(prev =>
prev.map(model =>
model.name === name
? { ...model, [field]: value }
: model
)
);
};
const addModel = (values) => {
// Reject duplicates by model name
if (models.some(model => model.name === values.name)) {
showError(t('模型名称已存在'));
return;
}
setModels(prev => [{
name: values.name,
price: values.price || '',
ratio: values.ratio || '',
completionRatio: values.completionRatio || ''
}, ...prev]);
setVisible(false);
showSuccess(t('添加成功'));
};
// Batch fill
const handleBatchFill = () => {
if (selectedRowKeys.length === 0) {
showError(t('请先选择需要批量设置的模型'));
return;
}
if (batchFillType === 'bothRatio') {
if (batchRatioValue === '' || batchCompletionRatioValue === '') {
showError(t('请输入模型倍率和补全倍率'));
return;
}
if (isNaN(batchRatioValue) || isNaN(batchCompletionRatioValue)) {
showError(t('请输入有效的数字'));
return;
}
} else {
if (batchFillValue === '') {
showError(t('请输入填充值'));
return;
}
if (isNaN(batchFillValue)) {
showError(t('请输入有效的数字'));
return;
}
}
// Batch-update the selected models according to the chosen type
setModels(prev =>
prev.map(model => {
if (selectedRowKeys.includes(model.name)) {
if (batchFillType === 'price') {
return {
...model,
price: batchFillValue,
ratio: '',
completionRatio: ''
};
} else if (batchFillType === 'ratio') {
return {
...model,
price: '',
ratio: batchFillValue
};
} else if (batchFillType === 'completionRatio') {
return {
...model,
price: '',
completionRatio: batchFillValue
};
} else if (batchFillType === 'bothRatio') {
return {
...model,
price: '',
ratio: batchRatioValue,
completionRatio: batchCompletionRatioValue
};
}
}
return model;
})
);
setBatchVisible(false);
Notification.success({
title: t('批量设置成功'),
content: t('已为 {{count}} 个模型设置{{type}}', {
count: selectedRowKeys.length,
type: batchFillType === 'price' ? t('固定价格') :
batchFillType === 'ratio' ? t('模型倍率') :
batchFillType === 'completionRatio' ? t('补全倍率') : t('模型倍率和补全倍率')
}),
duration: 3,
});
};
const handleBatchTypeChange = (value) => {
console.log(t('Changing batch type to:'), value);
setBatchFillType(value);
// Clear stale values when switching type
if (value !== 'bothRatio') {
setBatchFillValue('');
} else {
setBatchRatioValue('');
setBatchCompletionRatioValue('');
}
};
const rowSelection = {
selectedRowKeys,
onChange: (selectedKeys) => {
setSelectedRowKeys(selectedKeys);
},
};
return (
<>
<Space vertical align="start" style={{ width: '100%' }}>
<Space>
<Button icon={<IconPlus />} onClick={() => setVisible(true)}>
{t('添加模型')}
</Button>
<Button
icon={<IconBolt />}
type="secondary"
onClick={() => setBatchVisible(true)}
disabled={selectedRowKeys.length === 0}
>
{t('批量设置')} ({selectedRowKeys.length})
</Button>
<Button type="primary" icon={<IconSave />} onClick={SubmitData} loading={loading}>
{t('应用更改')}
</Button>
<Input
prefix={<IconSearch />}
placeholder={t('搜索模型名称')}
value={searchText}
onChange={value => {
setSearchText(value)
setCurrentPage(1);
}}
style={{ width: 200 }}
/>
</Space>
<Text>{t('此页面仅显示未设置价格或倍率的模型,设置后将自动从列表中移除')}</Text>
<Table
columns={columns}
dataSource={pagedData}
rowSelection={rowSelection}
rowKey="name"
pagination={{
currentPage: currentPage,
pageSize: pageSize,
total: filteredModels.length,
onPageChange: page => setCurrentPage(page),
onPageSizeChange: handlePageSizeChange,
pageSizeOptions: pageSizeOptions,
formatPageText: (page) =>
t('第 {{start}} - {{end}} 条,共 {{total}} 条', {
start: page.currentStart,
end: page.currentEnd,
total: filteredModels.length
}),
showTotal: true,
showSizeChanger: true
}}
empty={
<div style={{ textAlign: 'center', padding: '20px' }}>
{t('没有未设置的模型')}
</div>
}
/>
</Space>
{/* Add-model modal */}
<Modal
title={t('添加模型')}
visible={visible}
onCancel={() => setVisible(false)}
onOk={() => {
currentModel && addModel(currentModel);
}}
>
<Form>
<Form.Input
field="name"
label={t('模型名称')}
placeholder="strawberry"
required
onChange={value => setCurrentModel(prev => ({ ...prev, name: value }))}
/>
<Form.Switch
field="priceMode"
label={<>{t('定价模式')}: {currentModel?.priceMode ? t('固定价格') : t('倍率模式')}</>}
onChange={checked => {
setCurrentModel(prev => ({
...prev,
price: '',
ratio: '',
completionRatio: '',
priceMode: checked
}));
}}
/>
{currentModel?.priceMode ? (
<Form.Input
field="price"
label={t('固定价格(每次)')}
placeholder={t('输入每次价格')}
onChange={value => setCurrentModel(prev => ({ ...prev, price: value }))}
/>
) : (
<>
<Form.Input
field="ratio"
label={t('模型倍率')}
placeholder={t('输入模型倍率')}
onChange={value => setCurrentModel(prev => ({ ...prev, ratio: value }))}
/>
<Form.Input
field="completionRatio"
label={t('补全倍率')}
placeholder={t('输入补全倍率')}
onChange={value => setCurrentModel(prev => ({ ...prev, completionRatio: value }))}
/>
</>
)}
</Form>
</Modal>
{/* Batch-settings modal */}
<Modal
title={t('批量设置模型参数')}
visible={batchVisible}
onCancel={() => setBatchVisible(false)}
onOk={handleBatchFill}
width={500}
>
<Form>
<Form.Section text={t('设置类型')}>
<div style={{ marginBottom: '16px' }}>
<Space>
<Radio
checked={batchFillType === 'price'}
onChange={() => handleBatchTypeChange('price')}
>
{t('固定价格')}
</Radio>
<Radio
checked={batchFillType === 'ratio'}
onChange={() => handleBatchTypeChange('ratio')}
>
{t('模型倍率')}
</Radio>
<Radio
checked={batchFillType === 'completionRatio'}
onChange={() => handleBatchTypeChange('completionRatio')}
>
{t('补全倍率')}
</Radio>
<Radio
checked={batchFillType === 'bothRatio'}
onChange={() => handleBatchTypeChange('bothRatio')}
>
{t('模型倍率和补全倍率同时设置')}
</Radio>
</Space>
</div>
</Form.Section>
{batchFillType === 'bothRatio' ? (
<>
<Form.Input
field="batchRatioValue"
label={t('模型倍率值')}
placeholder={t('请输入模型倍率')}
value={batchRatioValue}
onChange={value => setBatchRatioValue(value)}
/>
<Form.Input
field="batchCompletionRatioValue"
label={t('补全倍率值')}
placeholder={t('请输入补全倍率')}
value={batchCompletionRatioValue}
onChange={value => setBatchCompletionRatioValue(value)}
/>
</>
) : (
<Form.Input
field="batchFillValue"
label={
batchFillType === 'price'
? t('固定价格值')
: batchFillType === 'ratio'
? t('模型倍率值')
: t('补全倍率值')
}
placeholder={t('请输入数值')}
value={batchFillValue}
onChange={value => setBatchFillValue(value)}
/>
)}
<Text type="tertiary">
{t('将为选中的 ')} <Text strong>{selectedRowKeys.length}</Text> {t(' ')}
</Text>
<div style={{ marginTop: '8px' }}>
<Text type="tertiary">
{t('当前设置类型: ')} <Text strong>{
batchFillType === 'price' ? t('固定价格') :
batchFillType === 'ratio' ? t('模型倍率') :
batchFillType === 'completionRatio' ? t('补全倍率') : t('模型倍率和补全倍率')
}</Text>
</Text>
</div>
</Form>
</Modal>
</>
);
}
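The filter inside the `useEffect` above decides which enabled models this editor shows: a model is considered configured once it has either a fixed price or both a model ratio and a completion ratio. Extracted as a standalone predicate (a sketch, not code from the repo):

```javascript
// Mirrors the unsetModels filter above: a model stays in the editor
// only while it has no fixed price AND lacks a complete ratio pair.
function isUnset(name, modelPrice, modelRatio, completionRatio) {
  const hasPrice = modelPrice[name] !== undefined;
  const hasRatio = modelRatio[name] !== undefined;
  const hasCompletionRatio = completionRatio[name] !== undefined;
  return !(hasPrice || (hasRatio && hasCompletionRatio));
}
```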


@@ -9,6 +9,7 @@ import OtherSetting from '../../components/OtherSetting';
import PersonalSetting from '../../components/PersonalSetting';
import OperationSetting from '../../components/OperationSetting';
import RateLimitSetting from '../../components/RateLimitSetting.js';
import ModelSetting from '../../components/ModelSetting.js';
const Setting = () => {
const { t } = useTranslation();
@@ -34,6 +35,11 @@ const Setting = () => {
content: <RateLimitSetting />,
itemKey: 'ratelimit',
});
panes.push({
tab: t('模型相关设置'),
content: <ModelSetting />,
itemKey: 'models',
});
panes.push({
tab: t('系统设置'),
content: <SystemSetting />,