Compare commits


34 Commits

Author SHA1 Message Date
Seefs
e5cb9ac03a feat: codex channel (#2652)
* feat: codex channel

* feat: codex channel

* feat: codex oauth flow

* feat: codex refresh cred

* feat: codex usage

* fix: codex err message detail

* fix: codex setting ui

* feat: codex refresh cred task

* fix: import err

* fix: codex store must be false

* fix: chat -> responses tool call

* fix: chat -> responses tool call
2026-01-14 22:29:43 +08:00
Seefs
ca11fcbabd Merge pull request #2632 from feitianbubu/pr/add-doubao-video-1.5 2026-01-14 16:33:30 +08:00
Seefs
6169f46cc6 Merge pull request #2627 from seefs001/feature/channel-test-param-override
feat: channel testing supports parameter overriding
2026-01-12 18:49:05 +08:00
Calcium-Ion
d73a0440b9 Merge pull request #2642 from seefs001/fix/gemini-propertyNames
fix: clean propertyNames for gemini function
2026-01-12 18:48:24 +08:00
Seefs
688280b3c3 fix: chat2response setting ui (#2643)
* fix: setting ui

* fix: rm global.chat_completions_to_responses_policy

* fix: rm global.chat_completions_to_responses_policy
2026-01-12 18:48:05 +08:00
Seefs
41da848c56 Merge pull request #2647 from seefs001/feature/status-code-auto-disable
feat: status code auto-disable configuration
2026-01-12 18:47:45 +08:00
Seefs
22b9438588 Merge pull request #2646 from deanxv/fix/gemini-unmarshal 2026-01-12 12:33:01 +08:00
dean
4ed4a7659a fix: support snake_case fields in GeminiChatGenerationConfig 2026-01-12 12:23:24 +08:00
Seefs
138fcd2327 fix: clean propertyNames for gemini function 2026-01-11 23:34:18 +08:00
Seefs
62b796fa6a feat: /v1/chat/completion -> /v1/response (#2629)
* feat: /v1/chat/completion -> /v1/response
2026-01-11 21:38:07 +08:00
feitianbubu
a4f28f0b79 feat: add doubao video 1.5 2026-01-10 22:23:31 +08:00
Seefs
1a5c8f3c35 Merge pull request #2615 from RedwindA/feat/GeminiNativeFetchModels
fix(gemini): fetch model list via native v1beta/models endpoint
2026-01-09 23:54:36 +08:00
Seefs
e1d43a820f Merge pull request #2619 from RedwindA/fix/disableMinimaxFetchModels
fix: remove Minimax from FETCHABLE channels
2026-01-09 21:42:25 +08:00
RedwindA
c975b4cfa8 fix(minimax): add MiniMax-M2 series models to ModelList 2026-01-09 20:46:47 +08:00
RedwindA
b51d1e2f78 fix: remove Minimax from FETCHABLE channels 2026-01-09 20:37:12 +08:00
RedwindA
07e77b3c6f refactor(gemini): update GeminiModelsResponse to use the dto.GeminiModel type 2026-01-09 18:08:11 +08:00
RedwindA
c2464fc877 fix(gemini): fetch model list via native v1beta/models endpoint
Use the native Gemini Models API (/v1beta/models) instead of the OpenAI-compatible
path when listing models for Gemini channels, improving compatibility with
third-party Gemini-format providers that don't implement OpenAI routes.

- Add paginated model listing with timeout and optional proxy support
- Select an enabled key for multi-key Gemini channels
2026-01-09 18:00:40 +08:00
CaIon
93012631d1 docs: update readme 2026-01-07 20:52:27 +08:00
Xyfacai
1ab0c35540 Merge pull request #2590 from xyfacai/fix/max-body-limit
fix: set the default max request body size to 128MB
2026-01-06 21:47:12 +08:00
Xyfacai
022fbcee04 Merge pull request #2588 from PowerfulBart/fix/auto-group-task-logging
fix(task): fix Task Relay not logging usage or billing when the auto group is used
2026-01-06 11:14:49 +08:00
郑伯涛
aed1900364 fix(task): fix Task Relay not logging usage or billing when the auto group is used
Problem:
- When a token in the auto group calls Task endpoints such as /v1/videos, the task is created
  successfully, but no usage log is recorded and no quota is deducted

Root cause:
- After selecting a channel, the Distribute middleware stores the actually selected group in ContextKeyAutoGroup
- But RelayTaskSubmit never read this value from the context to update info.UsingGroup
- So info.UsingGroup stayed "auto" instead of the actually selected group (e.g. "sora2逆")
- When the auto group's ratio is configured as 0, the quota computes to 0
- The logging condition "if quota != 0" is not met, so nothing is logged and nothing is billed

Fix:
- In RelayTaskSubmit, before computing the group ratio, read the actual group from ContextKeyAutoGroup
- Use a safe type assertion to avoid a potential panic

Scope:
- Only affects the Task Relay flow (/v1/videos, /suno, /kling, etc.)
- Does not affect calls made with tokens bound to a concrete group
- Does not affect other relay types (chat/completions etc. already have similar handling)
2026-01-06 00:16:50 +08:00
Seefs
f8938a8f78 Merge pull request #2587 from xiangsx/main 2026-01-05 23:36:16 +08:00
xiangsx
e13459f3a1 feat: add regex pattern to mask API keys in sensitive information 2026-01-05 22:44:11 +08:00
CaIon
ad61c0f89e fix(gin): update request body size check to allow zero limit 2026-01-05 18:55:24 +08:00
Seefs
9addf1b705 Merge pull request #2581 from seefs001/fix/batch-add-key-deduplicate 2026-01-05 18:52:18 +08:00
Seefs
9e61338a6f Merge pull request #2582 from seefs001/fix/tips
fix: add tips for model management and channel testing
2026-01-05 18:47:02 +08:00
Calcium-Ion
d3f33932c0 Merge pull request #2580 from seefs001/fix/aws-proxy-timeout
fix: use the default HTTP client configuration when proxyURL is empty, and apply the relay timeout on the AWS calling side
2026-01-05 18:32:25 +08:00
Seefs
a8f7c0614f fix: batch add key backend deduplication 2026-01-05 18:09:02 +08:00
Seefs
5f37a1e97c fix: use the default HTTP client configuration when proxyURL is empty, and apply the relay timeout on the AWS calling side 2026-01-05 17:56:24 +08:00
Calcium-Ion
177553af37 Merge pull request #2578 from xyfacai/fix/gemini-mimetype
fix: support the image/jpg file type for Gemini
2026-01-04 22:19:16 +08:00
Xyfacai
5ed4583c0c fix: support the image/jpg file type for Gemini 2026-01-04 22:09:03 +08:00
Seefs
1519e97bc6 Merge pull request #2550 from shikaiwei1/patch-2 2026-01-04 18:11:46 +08:00
CaIon
443b05821f feat: add plans directory to .gitignore 2026-01-04 16:20:58 +08:00
John Chen
ab81d6e444 fix: Zhipu and Moonshot channels could not obtain cachePrompt statistics when stream=true.
Root cause:
1. In the OaiStreamHandler streaming handler, applyUsagePostProcessing(info, usage, nil) was called with a nil responseBody, so cached tokens could not be extracted from the response body.
2. The two channels report cached_tokens in different locations:
  - Zhipu: the standard location, usage.prompt_tokens_details.cached_tokens
  - Moonshot: a non-standard location, choices[].usage.cached_tokens

Fix:
1. Pass the body into applyUsagePostProcessing.
2. Split the Zhipu and Moonshot parsing, with a dedicated parsing method for Moonshot.
2025-12-30 17:38:32 +08:00
80 changed files with 4289 additions and 108 deletions

.gitignore

@@ -19,6 +19,7 @@ tiktoken_cache
.gomodcache/
.cache
web/bun.lock
plans
electron/node_modules
electron/dist


@@ -213,9 +213,11 @@ docker run --name new-api -d --restart always \
- 🚦 User-level model rate limiting
**Format Conversion:**
- 🔄 OpenAI ⇄ Claude Messages
- 🔄 OpenAI Gemini Chat
- 🔄 Thinking-to-content functionality
- 🔄 **OpenAI Compatible ⇄ Claude Messages**
- 🔄 **OpenAI Compatible → Google Gemini**
- 🔄 **Google Gemini → OpenAI Compatible** - Text only, function calling not supported yet
- 🚧 **OpenAI Compatible ⇄ OpenAI Responses** - In development
- 🔄 **Thinking-to-content functionality**
**Reasoning Effort Support:**


@@ -212,9 +212,11 @@ docker run --name new-api -d --restart always \
- 🚦 Limitation du débit du modèle pour les utilisateurs
**Conversion de format:**
- 🔄 OpenAI ⇄ Claude Messages
- 🔄 OpenAI Gemini Chat
- 🔄 Fonctionnalité de la pensée au contenu
- 🔄 **OpenAI Compatible ⇄ Claude Messages**
- 🔄 **OpenAI Compatible → Google Gemini**
- 🔄 **Google Gemini → OpenAI Compatible** - Texte uniquement, les appels de fonction ne sont pas encore pris en charge
- 🚧 **OpenAI Compatible ⇄ OpenAI Responses** - En développement
- 🔄 **Fonctionnalité de la pensée au contenu**
**Prise en charge de l'effort de raisonnement:**


@@ -218,9 +218,11 @@ docker run --name new-api -d --restart always \
- 🚦 ユーザーレベルモデルレート制限
**フォーマット変換:**
- 🔄 OpenAI ⇄ Claude Messages
- 🔄 OpenAI Gemini Chat
- 🔄 思考からコンテンツへの機能
- 🔄 **OpenAI Compatible ⇄ Claude Messages**
- 🔄 **OpenAI Compatible → Google Gemini**
- 🔄 **Google Gemini → OpenAI Compatible** - テキストのみ、関数呼び出しはまだサポートされていません
- 🚧 **OpenAI Compatible ⇄ OpenAI Responses** - 開発中
- 🔄 **思考からコンテンツへの機能**
**Reasoning Effort サポート:**


@@ -214,9 +214,11 @@ docker run --name new-api -d --restart always \
- 🚦 用户级别模型限流
**格式转换:**
- 🔄 OpenAI ⇄ Claude Messages
- 🔄 OpenAI Gemini Chat
- 🔄 思考转内容功能
- 🔄 **OpenAI Compatible ⇄ Claude Messages**
- 🔄 **OpenAI Compatible → Google Gemini**
- 🔄 **Google Gemini → OpenAI Compatible** - 仅支持文本,暂不支持函数调用
- 🚧 **OpenAI Compatible ⇄ OpenAI Responses** - 开发中
- 🔄 **思考转内容功能**
**Reasoning Effort 支持:**


@@ -73,6 +73,8 @@ func ChannelType2APIType(channelType int) (int, bool) {
apiType = constant.APITypeMiniMax
case constant.ChannelTypeReplicate:
apiType = constant.APITypeReplicate
case constant.ChannelTypeCodex:
apiType = constant.APITypeCodex
}
if apiType == -1 {
return constant.APITypeOpenAI, false


@@ -40,7 +40,7 @@ func GetRequestBody(c *gin.Context) ([]byte, error) {
}
}
maxMB := constant.MaxRequestBodyMB
if maxMB < 0 {
if maxMB <= 0 {
// no limit
body, err := io.ReadAll(c.Request.Body)
_ = c.Request.Body.Close()


@@ -118,7 +118,7 @@ func initConstantEnv() {
constant.MaxFileDownloadMB = GetEnvOrDefault("MAX_FILE_DOWNLOAD_MB", 64)
constant.StreamScannerMaxBufferMB = GetEnvOrDefault("STREAM_SCANNER_MAX_BUFFER_MB", 64)
// MaxRequestBodyMB is the maximum request body size (after decompression), guarding against oversized requests / zip bombs blowing up memory
constant.MaxRequestBodyMB = GetEnvOrDefault("MAX_REQUEST_BODY_MB", 64)
constant.MaxRequestBodyMB = GetEnvOrDefault("MAX_REQUEST_BODY_MB", 128)
// ForceStreamOption overrides request parameters to force returning usage info
constant.ForceStreamOption = GetEnvOrDefaultBool("FORCE_STREAM_OPTION", true)
constant.CountToken = GetEnvOrDefaultBool("CountToken", true)


@@ -16,6 +16,8 @@ var (
maskURLPattern = regexp.MustCompile(`(http|https)://[^\s/$.?#].[^\s]*`)
maskDomainPattern = regexp.MustCompile(`\b(?:[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.)+[a-zA-Z]{2,}\b`)
maskIPPattern = regexp.MustCompile(`\b(?:\d{1,3}\.){3}\d{1,3}\b`)
// maskApiKeyPattern matches patterns like 'api_key:xxx' or "api_key:xxx" to mask the API key value
maskApiKeyPattern = regexp.MustCompile(`(['"]?)api_key:([^\s'"]+)(['"]?)`)
)
func GetStringIfEmpty(str string, defaultValue string) string {
@@ -235,5 +237,8 @@ func MaskSensitiveInfo(str string) string {
// Mask IP addresses
str = maskIPPattern.ReplaceAllString(str, "***.***.***.***")
// Mask API keys (e.g., "api_key:AIzaSyAAAaUooTUni8AdaOkSRMda30n_Q4vrV70" -> "api_key:***")
str = maskApiKeyPattern.ReplaceAllString(str, "${1}api_key:***${3}")
return str
}


@@ -35,5 +35,6 @@ const (
APITypeSubmodel
APITypeMiniMax
APITypeReplicate
APITypeCodex
APITypeDummy // this one is only for count, do not add any channel after this
)


@@ -54,6 +54,7 @@ const (
ChannelTypeDoubaoVideo = 54
ChannelTypeSora = 55
ChannelTypeReplicate = 56
ChannelTypeCodex = 57
ChannelTypeDummy // this one is only for count, do not add any channel after this
)
@@ -116,6 +117,7 @@ var ChannelBaseURLs = []string{
"https://ark.cn-beijing.volces.com", //54
"https://api.openai.com", //55
"https://api.replicate.com", //56
"https://chatgpt.com", //57
}
var ChannelTypeNames = map[int]string{
@@ -172,6 +174,7 @@ var ChannelTypeNames = map[int]string{
ChannelTypeDoubaoVideo: "DoubaoVideo",
ChannelTypeSora: "Sora",
ChannelTypeReplicate: "Replicate",
ChannelTypeCodex: "Codex",
}
func GetChannelTypeName(channelType int) string {


@@ -193,6 +193,7 @@ func testChannel(channel *model.Channel, testModel string, endpointType string)
}
}
info.IsChannelTest = true
info.InitChannelMeta(c)
err = helper.ModelMappedHelper(c, info, request)
@@ -309,6 +310,27 @@ func testChannel(channel *model.Channel, testModel string, endpointType string)
newAPIError: types.NewError(err, types.ErrorCodeJsonMarshalFailed),
}
}
//jsonData, err = relaycommon.RemoveDisabledFields(jsonData, info.ChannelOtherSettings)
//if err != nil {
// return testResult{
// context: c,
// localErr: err,
// newAPIError: types.NewError(err, types.ErrorCodeConvertRequestFailed),
// }
//}
if len(info.ParamOverride) > 0 {
jsonData, err = relaycommon.ApplyParamOverride(jsonData, info.ParamOverride, relaycommon.BuildParamOverrideContext(info))
if err != nil {
return testResult{
context: c,
localErr: err,
newAPIError: types.NewError(err, types.ErrorCodeChannelParamOverrideInvalid),
}
}
}
requestBody := bytes.NewBuffer(jsonData)
c.Request.Body = io.NopCloser(requestBody)
resp, err := adaptor.DoRequest(c, info, requestBody)


@@ -1,16 +1,19 @@
package controller
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/constant"
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/model"
"github.com/QuantumNous/new-api/relay/channel/gemini"
"github.com/QuantumNous/new-api/relay/channel/ollama"
"github.com/QuantumNous/new-api/service"
@@ -260,11 +263,37 @@ func FetchUpstreamModels(c *gin.Context) {
return
}
// Special handling for Gemini channels
if channel.Type == constant.ChannelTypeGemini {
// Get an available key for the request (multi-key channels prefer keys in the enabled state)
key, _, apiErr := channel.GetNextEnabledKey()
if apiErr != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": fmt.Sprintf("获取渠道密钥失败: %s", apiErr.Error()),
})
return
}
key = strings.TrimSpace(key)
models, err := gemini.FetchGeminiModels(baseURL, key, channel.GetSetting().Proxy)
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": fmt.Sprintf("获取Gemini模型失败: %s", err.Error()),
})
return
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": models,
})
return
}
var url string
switch channel.Type {
case constant.ChannelTypeGemini:
// curl https://example.com/v1beta/models?key=$GEMINI_API_KEY
url = fmt.Sprintf("%s/v1beta/openai/models", baseURL) // Remove key in url since we need to use AuthHeader
case constant.ChannelTypeAli:
url = fmt.Sprintf("%s/compatible-mode/v1/models", baseURL)
case constant.ChannelTypeZhipu_v4:
@@ -577,9 +606,60 @@ func validateChannel(channel *model.Channel, isAdd bool) error {
}
}
// Codex OAuth key validation (optional, only when JSON object is provided)
if channel.Type == constant.ChannelTypeCodex {
trimmedKey := strings.TrimSpace(channel.Key)
if isAdd || trimmedKey != "" {
if !strings.HasPrefix(trimmedKey, "{") {
return fmt.Errorf("Codex key must be a valid JSON object")
}
var keyMap map[string]any
if err := common.Unmarshal([]byte(trimmedKey), &keyMap); err != nil {
return fmt.Errorf("Codex key must be a valid JSON object")
}
if v, ok := keyMap["access_token"]; !ok || v == nil || strings.TrimSpace(fmt.Sprintf("%v", v)) == "" {
return fmt.Errorf("Codex key JSON must include access_token")
}
if v, ok := keyMap["account_id"]; !ok || v == nil || strings.TrimSpace(fmt.Sprintf("%v", v)) == "" {
return fmt.Errorf("Codex key JSON must include account_id")
}
}
}
return nil
}
func RefreshCodexChannelCredential(c *gin.Context) {
channelId, err := strconv.Atoi(c.Param("id"))
if err != nil {
common.ApiError(c, fmt.Errorf("invalid channel id: %w", err))
return
}
ctx, cancel := context.WithTimeout(c.Request.Context(), 10*time.Second)
defer cancel()
oauthKey, ch, err := service.RefreshCodexChannelCredential(ctx, channelId, service.CodexCredentialRefreshOptions{ResetCaches: true})
if err != nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": err.Error()})
return
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "refreshed",
"data": gin.H{
"expires_at": oauthKey.Expired,
"last_refresh": oauthKey.LastRefresh,
"account_id": oauthKey.AccountID,
"email": oauthKey.Email,
"channel_id": ch.Id,
"channel_type": ch.Type,
"channel_name": ch.Name,
},
})
}
type AddChannelRequest struct {
Mode string `json:"mode"`
MultiKeyMode constant.MultiKeyMode `json:"multi_key_mode"`
@@ -970,9 +1050,6 @@ func UpdateChannel(c *gin.Context) {
// single JSON key
newKeys = []string{channel.Key}
}
// merge keys
allKeys := append(existingKeys, newKeys...)
channel.Key = strings.Join(allKeys, "\n")
} else {
// 普通渠道的处理
inputKeys := strings.Split(channel.Key, "\n")
@@ -982,10 +1059,31 @@ func UpdateChannel(c *gin.Context) {
newKeys = append(newKeys, key)
}
}
// merge keys
allKeys := append(existingKeys, newKeys...)
channel.Key = strings.Join(allKeys, "\n")
}
seen := make(map[string]struct{}, len(existingKeys)+len(newKeys))
for _, key := range existingKeys {
normalized := strings.TrimSpace(key)
if normalized == "" {
continue
}
seen[normalized] = struct{}{}
}
dedupedNewKeys := make([]string, 0, len(newKeys))
for _, key := range newKeys {
normalized := strings.TrimSpace(key)
if normalized == "" {
continue
}
if _, ok := seen[normalized]; ok {
continue
}
seen[normalized] = struct{}{}
dedupedNewKeys = append(dedupedNewKeys, normalized)
}
allKeys := append(existingKeys, dedupedNewKeys...)
channel.Key = strings.Join(allKeys, "\n")
}
case "replace":
// replace mode: use the new keys directly (default behavior, no special handling needed)
@@ -1054,6 +1152,23 @@ func FetchModels(c *gin.Context) {
return
}
if req.Type == constant.ChannelTypeGemini {
models, err := gemini.FetchGeminiModels(baseURL, key, "")
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": fmt.Sprintf("获取Gemini模型失败: %s", err.Error()),
})
return
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"data": models,
})
return
}
client := &http.Client{}
url := fmt.Sprintf("%s/v1/models", baseURL)

controller/codex_oauth.go (new file)

@@ -0,0 +1,243 @@
package controller
import (
"context"
"errors"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/constant"
"github.com/QuantumNous/new-api/model"
"github.com/QuantumNous/new-api/relay/channel/codex"
"github.com/QuantumNous/new-api/service"
"github.com/gin-contrib/sessions"
"github.com/gin-gonic/gin"
)
type codexOAuthCompleteRequest struct {
Input string `json:"input"`
}
func codexOAuthSessionKey(channelID int, field string) string {
return fmt.Sprintf("codex_oauth_%s_%d", field, channelID)
}
func parseCodexAuthorizationInput(input string) (code string, state string, err error) {
v := strings.TrimSpace(input)
if v == "" {
return "", "", errors.New("empty input")
}
if strings.Contains(v, "#") {
parts := strings.SplitN(v, "#", 2)
code = strings.TrimSpace(parts[0])
state = strings.TrimSpace(parts[1])
return code, state, nil
}
if strings.Contains(v, "code=") {
u, parseErr := url.Parse(v)
if parseErr == nil {
q := u.Query()
code = strings.TrimSpace(q.Get("code"))
state = strings.TrimSpace(q.Get("state"))
return code, state, nil
}
q, parseErr := url.ParseQuery(v)
if parseErr == nil {
code = strings.TrimSpace(q.Get("code"))
state = strings.TrimSpace(q.Get("state"))
return code, state, nil
}
}
code = v
return code, "", nil
}
func StartCodexOAuth(c *gin.Context) {
startCodexOAuthWithChannelID(c, 0)
}
func StartCodexOAuthForChannel(c *gin.Context) {
channelID, err := strconv.Atoi(c.Param("id"))
if err != nil {
common.ApiError(c, fmt.Errorf("invalid channel id: %w", err))
return
}
startCodexOAuthWithChannelID(c, channelID)
}
func startCodexOAuthWithChannelID(c *gin.Context, channelID int) {
if channelID > 0 {
ch, err := model.GetChannelById(channelID, false)
if err != nil {
common.ApiError(c, err)
return
}
if ch == nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "channel not found"})
return
}
if ch.Type != constant.ChannelTypeCodex {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "channel type is not Codex"})
return
}
}
flow, err := service.CreateCodexOAuthAuthorizationFlow()
if err != nil {
common.ApiError(c, err)
return
}
session := sessions.Default(c)
session.Set(codexOAuthSessionKey(channelID, "state"), flow.State)
session.Set(codexOAuthSessionKey(channelID, "verifier"), flow.Verifier)
session.Set(codexOAuthSessionKey(channelID, "created_at"), time.Now().Unix())
_ = session.Save()
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": gin.H{
"authorize_url": flow.AuthorizeURL,
},
})
}
func CompleteCodexOAuth(c *gin.Context) {
completeCodexOAuthWithChannelID(c, 0)
}
func CompleteCodexOAuthForChannel(c *gin.Context) {
channelID, err := strconv.Atoi(c.Param("id"))
if err != nil {
common.ApiError(c, fmt.Errorf("invalid channel id: %w", err))
return
}
completeCodexOAuthWithChannelID(c, channelID)
}
func completeCodexOAuthWithChannelID(c *gin.Context, channelID int) {
req := codexOAuthCompleteRequest{}
if err := c.ShouldBindJSON(&req); err != nil {
common.ApiError(c, err)
return
}
code, state, err := parseCodexAuthorizationInput(req.Input)
if err != nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": err.Error()})
return
}
if strings.TrimSpace(code) == "" {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "missing authorization code"})
return
}
if strings.TrimSpace(state) == "" {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "missing state in input"})
return
}
if channelID > 0 {
ch, err := model.GetChannelById(channelID, false)
if err != nil {
common.ApiError(c, err)
return
}
if ch == nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "channel not found"})
return
}
if ch.Type != constant.ChannelTypeCodex {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "channel type is not Codex"})
return
}
}
session := sessions.Default(c)
expectedState, _ := session.Get(codexOAuthSessionKey(channelID, "state")).(string)
verifier, _ := session.Get(codexOAuthSessionKey(channelID, "verifier")).(string)
if strings.TrimSpace(expectedState) == "" || strings.TrimSpace(verifier) == "" {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "oauth flow not started or session expired"})
return
}
if state != expectedState {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "state mismatch"})
return
}
ctx, cancel := context.WithTimeout(c.Request.Context(), 15*time.Second)
defer cancel()
tokenRes, err := service.ExchangeCodexAuthorizationCode(ctx, code, verifier)
if err != nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": err.Error()})
return
}
accountID, ok := service.ExtractCodexAccountIDFromJWT(tokenRes.AccessToken)
if !ok {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "failed to extract account_id from access_token"})
return
}
email, _ := service.ExtractEmailFromJWT(tokenRes.AccessToken)
key := codex.OAuthKey{
AccessToken: tokenRes.AccessToken,
RefreshToken: tokenRes.RefreshToken,
AccountID: accountID,
LastRefresh: time.Now().Format(time.RFC3339),
Expired: tokenRes.ExpiresAt.Format(time.RFC3339),
Email: email,
Type: "codex",
}
encoded, err := common.Marshal(key)
if err != nil {
common.ApiError(c, err)
return
}
session.Delete(codexOAuthSessionKey(channelID, "state"))
session.Delete(codexOAuthSessionKey(channelID, "verifier"))
session.Delete(codexOAuthSessionKey(channelID, "created_at"))
_ = session.Save()
if channelID > 0 {
if err := model.DB.Model(&model.Channel{}).Where("id = ?", channelID).Update("key", string(encoded)).Error; err != nil {
common.ApiError(c, err)
return
}
model.InitChannelCache()
service.ResetProxyClientCache()
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "saved",
"data": gin.H{
"channel_id": channelID,
"account_id": accountID,
"email": email,
"expires_at": key.Expired,
"last_refresh": key.LastRefresh,
},
})
return
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "generated",
"data": gin.H{
"key": string(encoded),
"account_id": accountID,
"email": email,
"expires_at": key.Expired,
"last_refresh": key.LastRefresh,
},
})
}

controller/codex_usage.go (new file)

@@ -0,0 +1,124 @@
package controller
import (
"context"
"encoding/json"
"fmt"
"net/http"
"strconv"
"strings"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/constant"
"github.com/QuantumNous/new-api/model"
"github.com/QuantumNous/new-api/relay/channel/codex"
"github.com/QuantumNous/new-api/service"
"github.com/gin-gonic/gin"
)
func GetCodexChannelUsage(c *gin.Context) {
channelId, err := strconv.Atoi(c.Param("id"))
if err != nil {
common.ApiError(c, fmt.Errorf("invalid channel id: %w", err))
return
}
ch, err := model.GetChannelById(channelId, true)
if err != nil {
common.ApiError(c, err)
return
}
if ch == nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "channel not found"})
return
}
if ch.Type != constant.ChannelTypeCodex {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "channel type is not Codex"})
return
}
if ch.ChannelInfo.IsMultiKey {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "multi-key channel is not supported"})
return
}
oauthKey, err := codex.ParseOAuthKey(strings.TrimSpace(ch.Key))
if err != nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": err.Error()})
return
}
accessToken := strings.TrimSpace(oauthKey.AccessToken)
accountID := strings.TrimSpace(oauthKey.AccountID)
if accessToken == "" {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "codex channel: access_token is required"})
return
}
if accountID == "" {
c.JSON(http.StatusOK, gin.H{"success": false, "message": "codex channel: account_id is required"})
return
}
client, err := service.NewProxyHttpClient(ch.GetSetting().Proxy)
if err != nil {
common.ApiError(c, err)
return
}
ctx, cancel := context.WithTimeout(c.Request.Context(), 15*time.Second)
defer cancel()
statusCode, body, err := service.FetchCodexWhamUsage(ctx, client, ch.GetBaseURL(), accessToken, accountID)
if err != nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": err.Error()})
return
}
if (statusCode == http.StatusUnauthorized || statusCode == http.StatusForbidden) && strings.TrimSpace(oauthKey.RefreshToken) != "" {
refreshCtx, refreshCancel := context.WithTimeout(c.Request.Context(), 10*time.Second)
defer refreshCancel()
res, refreshErr := service.RefreshCodexOAuthToken(refreshCtx, oauthKey.RefreshToken)
if refreshErr == nil {
oauthKey.AccessToken = res.AccessToken
oauthKey.RefreshToken = res.RefreshToken
oauthKey.LastRefresh = time.Now().Format(time.RFC3339)
oauthKey.Expired = res.ExpiresAt.Format(time.RFC3339)
if strings.TrimSpace(oauthKey.Type) == "" {
oauthKey.Type = "codex"
}
encoded, encErr := common.Marshal(oauthKey)
if encErr == nil {
_ = model.DB.Model(&model.Channel{}).Where("id = ?", ch.Id).Update("key", string(encoded)).Error
model.InitChannelCache()
service.ResetProxyClientCache()
}
ctx2, cancel2 := context.WithTimeout(c.Request.Context(), 15*time.Second)
defer cancel2()
statusCode, body, err = service.FetchCodexWhamUsage(ctx2, client, ch.GetBaseURL(), oauthKey.AccessToken, accountID)
if err != nil {
c.JSON(http.StatusOK, gin.H{"success": false, "message": err.Error()})
return
}
}
}
var payload any
if json.Unmarshal(body, &payload) != nil {
payload = string(body)
}
ok := statusCode >= 200 && statusCode < 300
resp := gin.H{
"success": ok,
"message": "",
"upstream_status": statusCode,
"data": payload,
}
if !ok {
resp["message"] = fmt.Sprintf("upstream status: %d", statusCode)
}
c.JSON(http.StatusOK, resp)
}


@@ -10,6 +10,7 @@ import (
"github.com/QuantumNous/new-api/model"
"github.com/QuantumNous/new-api/setting"
"github.com/QuantumNous/new-api/setting/console_setting"
"github.com/QuantumNous/new-api/setting/operation_setting"
"github.com/QuantumNous/new-api/setting/ratio_setting"
"github.com/QuantumNous/new-api/setting/system_setting"
@@ -177,6 +178,15 @@ func UpdateOption(c *gin.Context) {
})
return
}
case "AutomaticDisableStatusCodes":
_, err = operation_setting.ParseHTTPStatusCodeRanges(option.Value.(string))
if err != nil {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": err.Error(),
})
return
}
case "console_setting.api_info":
err = console_setting.ValidateConsoleSettings(option.Value.(string), "ApiInfo")
if err != nil {


@@ -348,7 +348,7 @@ func processChannelError(c *gin.Context, channelError types.ChannelError, err *t
// do not use context to get channel info, there may be inconsistent channel info when processing asynchronously
if service.ShouldDisableChannel(channelError.ChannelType, err) && channelError.AutoBan {
gopool.Go(func() {
service.DisableChannel(channelError, err.Error())
service.DisableChannel(channelError, err.ErrorWithStatusCode())
})
}
@@ -378,7 +378,7 @@ func processChannelError(c *gin.Context, channelError types.ChannelError, err *t
adminInfo["multi_key_index"] = common.GetContextKeyInt(c, constant.ContextKeyChannelMultiKeyIndex)
}
other["admin_info"] = adminInfo
model.RecordErrorLog(c, userId, channelId, modelName, tokenName, err.MaskSensitiveError(), tokenId, 0, false, userGroup, other)
model.RecordErrorLog(c, userId, channelId, modelName, tokenName, err.MaskSensitiveErrorWithStatusCode(), tokenId, 0, false, userGroup, other)
}
}


@@ -26,7 +26,8 @@ type GeneralErrorResponse struct {
Msg string `json:"msg"`
Err string `json:"err"`
ErrorMsg string `json:"error_msg"`
Metadata json.RawMessage `json:"metadata,omitempty"`
Metadata json.RawMessage `json:"metadata,omitempty"`
Detail string `json:"detail,omitempty"`
Header struct {
Message string `json:"message"`
} `json:"header"`
@@ -79,6 +80,9 @@ func (e GeneralErrorResponse) ToMessage() string {
if e.ErrorMsg != "" {
return e.ErrorMsg
}
if e.Detail != "" {
return e.Detail
}
if e.Header.Message != "" {
return e.Header.Message
}


@@ -341,6 +341,88 @@ type GeminiChatGenerationConfig struct {
ImageConfig json.RawMessage `json:"imageConfig,omitempty"` // RawMessage to allow flexible image config
}
// UnmarshalJSON allows GeminiChatGenerationConfig to accept both snake_case and camelCase fields.
func (c *GeminiChatGenerationConfig) UnmarshalJSON(data []byte) error {
type Alias GeminiChatGenerationConfig
var aux struct {
Alias
TopPSnake float64 `json:"top_p,omitempty"`
TopKSnake float64 `json:"top_k,omitempty"`
MaxOutputTokensSnake uint `json:"max_output_tokens,omitempty"`
CandidateCountSnake int `json:"candidate_count,omitempty"`
StopSequencesSnake []string `json:"stop_sequences,omitempty"`
ResponseMimeTypeSnake string `json:"response_mime_type,omitempty"`
ResponseSchemaSnake any `json:"response_schema,omitempty"`
ResponseJsonSchemaSnake json.RawMessage `json:"response_json_schema,omitempty"`
PresencePenaltySnake *float32 `json:"presence_penalty,omitempty"`
FrequencyPenaltySnake *float32 `json:"frequency_penalty,omitempty"`
ResponseLogprobsSnake bool `json:"response_logprobs,omitempty"`
MediaResolutionSnake MediaResolution `json:"media_resolution,omitempty"`
ResponseModalitiesSnake []string `json:"response_modalities,omitempty"`
ThinkingConfigSnake *GeminiThinkingConfig `json:"thinking_config,omitempty"`
SpeechConfigSnake json.RawMessage `json:"speech_config,omitempty"`
ImageConfigSnake json.RawMessage `json:"image_config,omitempty"`
}
if err := common.Unmarshal(data, &aux); err != nil {
return err
}
*c = GeminiChatGenerationConfig(aux.Alias)
// Prioritize snake_case if present
if aux.TopPSnake != 0 {
c.TopP = aux.TopPSnake
}
if aux.TopKSnake != 0 {
c.TopK = aux.TopKSnake
}
if aux.MaxOutputTokensSnake != 0 {
c.MaxOutputTokens = aux.MaxOutputTokensSnake
}
if aux.CandidateCountSnake != 0 {
c.CandidateCount = aux.CandidateCountSnake
}
if len(aux.StopSequencesSnake) > 0 {
c.StopSequences = aux.StopSequencesSnake
}
if aux.ResponseMimeTypeSnake != "" {
c.ResponseMimeType = aux.ResponseMimeTypeSnake
}
if aux.ResponseSchemaSnake != nil {
c.ResponseSchema = aux.ResponseSchemaSnake
}
if len(aux.ResponseJsonSchemaSnake) > 0 {
c.ResponseJsonSchema = aux.ResponseJsonSchemaSnake
}
if aux.PresencePenaltySnake != nil {
c.PresencePenalty = aux.PresencePenaltySnake
}
if aux.FrequencyPenaltySnake != nil {
c.FrequencyPenalty = aux.FrequencyPenaltySnake
}
if aux.ResponseLogprobsSnake {
c.ResponseLogprobs = aux.ResponseLogprobsSnake
}
if aux.MediaResolutionSnake != "" {
c.MediaResolution = aux.MediaResolutionSnake
}
if len(aux.ResponseModalitiesSnake) > 0 {
c.ResponseModalities = aux.ResponseModalitiesSnake
}
if aux.ThinkingConfigSnake != nil {
c.ThinkingConfig = aux.ThinkingConfigSnake
}
if len(aux.SpeechConfigSnake) > 0 {
c.SpeechConfig = aux.SpeechConfigSnake
}
if len(aux.ImageConfigSnake) > 0 {
c.ImageConfig = aux.ImageConfigSnake
}
return nil
}
type MediaResolution string
type GeminiChatCandidate struct {


@@ -808,11 +808,11 @@ type OpenAIResponsesRequest struct {
PromptCacheKey json.RawMessage `json:"prompt_cache_key,omitempty"`
PromptCacheRetention json.RawMessage `json:"prompt_cache_retention,omitempty"`
Stream bool `json:"stream,omitempty"`
Temperature float64 `json:"temperature,omitempty"`
Temperature *float64 `json:"temperature,omitempty"`
Text json.RawMessage `json:"text,omitempty"`
ToolChoice json.RawMessage `json:"tool_choice,omitempty"`
Tools json.RawMessage `json:"tools,omitempty"` // only a few parameters need handling, and MCP has too many uncertain parameters, so a map is used
TopP float64 `json:"top_p,omitempty"`
TopP *float64 `json:"top_p,omitempty"`
Truncation string `json:"truncation,omitempty"`
User string `json:"user,omitempty"`
MaxToolCalls uint `json:"max_tool_calls,omitempty"`


@@ -334,13 +334,16 @@ type IncompleteDetails struct {
}
type ResponsesOutput struct {
Type string `json:"type"`
ID string `json:"id"`
Status string `json:"status"`
Role string `json:"role"`
Content []ResponsesOutputContent `json:"content"`
Quality string `json:"quality"`
Size string `json:"size"`
Type string `json:"type"`
ID string `json:"id"`
Status string `json:"status"`
Role string `json:"role"`
Content []ResponsesOutputContent `json:"content"`
Quality string `json:"quality"`
Size string `json:"size"`
CallId string `json:"call_id,omitempty"`
Name string `json:"name,omitempty"`
Arguments string `json:"arguments,omitempty"`
}
type ResponsesOutputContent struct {
@@ -369,6 +372,10 @@ type ResponsesStreamResponse struct {
Response *OpenAIResponsesResponse `json:"response,omitempty"`
Delta string `json:"delta,omitempty"`
Item *ResponsesOutput `json:"item,omitempty"`
// - response.function_call_arguments.delta
// - response.function_call_arguments.done
OutputIndex *int `json:"output_index,omitempty"`
ItemID string `json:"item_id,omitempty"`
}
// GetOpenAIError extracts the OpenAIError structure from the dynamic error type

dto/values.go Normal file

@@ -0,0 +1,55 @@
package dto
import (
"encoding/json"
"strconv"
)
type IntValue int
func (i *IntValue) UnmarshalJSON(b []byte) error {
var n int
if err := json.Unmarshal(b, &n); err == nil {
*i = IntValue(n)
return nil
}
var s string
if err := json.Unmarshal(b, &s); err != nil {
return err
}
v, err := strconv.Atoi(s)
if err != nil {
return err
}
*i = IntValue(v)
return nil
}
func (i IntValue) MarshalJSON() ([]byte, error) {
return json.Marshal(int(i))
}
type BoolValue bool
func (b *BoolValue) UnmarshalJSON(data []byte) error {
var boolean bool
if err := json.Unmarshal(data, &boolean); err == nil {
*b = BoolValue(boolean)
return nil
}
var str string
if err := json.Unmarshal(data, &str); err != nil {
return err
}
if str == "true" {
*b = BoolValue(true)
} else if str == "false" {
*b = BoolValue(false)
} else {
return json.Unmarshal(data, &boolean)
}
return nil
}
func (b BoolValue) MarshalJSON() ([]byte, error) {
return json.Marshal(bool(b))
}


@@ -102,6 +102,9 @@ func main() {
go controller.AutomaticallyTestChannels()
// Codex credential auto-refresh check every 10 minutes, refresh when expires within 1 day
service.StartCodexCredentialAutoRefreshTask()
if common.IsMasterNode && constant.UpdateTask {
gopool.Go(func() {
controller.UpdateMidjourneyTaskBulk()


@@ -143,6 +143,7 @@ func InitOptionMap() {
common.OptionMap["SensitiveWords"] = setting.SensitiveWordsToString()
common.OptionMap["StreamCacheQueueLength"] = strconv.Itoa(setting.StreamCacheQueueLength)
common.OptionMap["AutomaticDisableKeywords"] = operation_setting.AutomaticDisableKeywordsToString()
common.OptionMap["AutomaticDisableStatusCodes"] = operation_setting.AutomaticDisableStatusCodesToString()
common.OptionMap["ExposeRatioEnabled"] = strconv.FormatBool(ratio_setting.IsExposeRatioEnabled())
// Automatically add all registered model configurations
@@ -444,6 +445,8 @@ func updateOptionMap(key string, value string) (err error) {
setting.SensitiveWordsFromString(value)
case "AutomaticDisableKeywords":
operation_setting.AutomaticDisableKeywordsFromString(value)
case "AutomaticDisableStatusCodes":
err = operation_setting.AutomaticDisableStatusCodesFromString(value)
case "StreamCacheQueueLength":
setting.StreamCacheQueueLength, _ = strconv.Atoi(value)
case "PayMethods":


@@ -1,11 +1,13 @@
package aws
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/dto"
@@ -37,6 +39,13 @@ func getAwsErrorStatusCode(err error) int {
return http.StatusInternalServerError
}
func newAwsInvokeContext() (context.Context, context.CancelFunc) {
if common.RelayTimeout <= 0 {
return context.Background(), func() {}
}
return context.WithTimeout(context.Background(), time.Duration(common.RelayTimeout)*time.Second)
}
func newAwsClient(c *gin.Context, info *relaycommon.RelayInfo) (*bedrockruntime.Client, error) {
var (
httpClient *http.Client
@@ -117,6 +126,7 @@ func doAwsClientRequest(c *gin.Context, info *relaycommon.RelayInfo, a *Adaptor,
return nil, types.NewError(errors.Wrap(err, "marshal nova request"), types.ErrorCodeBadResponseBody)
}
awsReq.Body = reqBody
a.AwsReq = awsReq
return nil, nil
} else {
awsClaudeReq, err := formatRequest(requestBody, requestHeader)
@@ -201,7 +211,10 @@ func getAwsModelID(requestModel string) string {
func awsHandler(c *gin.Context, info *relaycommon.RelayInfo, a *Adaptor) (*types.NewAPIError, *dto.Usage) {
awsResp, err := a.AwsClient.InvokeModel(c.Request.Context(), a.AwsReq.(*bedrockruntime.InvokeModelInput))
ctx, cancel := newAwsInvokeContext()
defer cancel()
awsResp, err := a.AwsClient.InvokeModel(ctx, a.AwsReq.(*bedrockruntime.InvokeModelInput))
if err != nil {
statusCode := getAwsErrorStatusCode(err)
return types.NewOpenAIError(errors.Wrap(err, "InvokeModel"), types.ErrorCodeAwsInvokeError, statusCode), nil
@@ -228,7 +241,10 @@ func awsHandler(c *gin.Context, info *relaycommon.RelayInfo, a *Adaptor) (*types
}
func awsStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, a *Adaptor) (*types.NewAPIError, *dto.Usage) {
awsResp, err := a.AwsClient.InvokeModelWithResponseStream(c.Request.Context(), a.AwsReq.(*bedrockruntime.InvokeModelWithResponseStreamInput))
ctx, cancel := newAwsInvokeContext()
defer cancel()
awsResp, err := a.AwsClient.InvokeModelWithResponseStream(ctx, a.AwsReq.(*bedrockruntime.InvokeModelWithResponseStreamInput))
if err != nil {
statusCode := getAwsErrorStatusCode(err)
return types.NewOpenAIError(errors.Wrap(err, "InvokeModelWithResponseStream"), types.ErrorCodeAwsInvokeError, statusCode), nil
@@ -268,7 +284,10 @@ func awsStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, a *Adaptor) (
// Nova model handler
func handleNovaRequest(c *gin.Context, info *relaycommon.RelayInfo, a *Adaptor) (*types.NewAPIError, *dto.Usage) {
awsResp, err := a.AwsClient.InvokeModel(c.Request.Context(), a.AwsReq.(*bedrockruntime.InvokeModelInput))
ctx, cancel := newAwsInvokeContext()
defer cancel()
awsResp, err := a.AwsClient.InvokeModel(ctx, a.AwsReq.(*bedrockruntime.InvokeModelInput))
if err != nil {
statusCode := getAwsErrorStatusCode(err)
return types.NewOpenAIError(errors.Wrap(err, "InvokeModel"), types.ErrorCodeAwsInvokeError, statusCode), nil


@@ -0,0 +1,161 @@
package codex
import (
"encoding/json"
"errors"
"io"
"net/http"
"strings"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/relay/channel"
"github.com/QuantumNous/new-api/relay/channel/openai"
relaycommon "github.com/QuantumNous/new-api/relay/common"
relayconstant "github.com/QuantumNous/new-api/relay/constant"
"github.com/QuantumNous/new-api/types"
"github.com/gin-gonic/gin"
)
type Adaptor struct {
}
func (a *Adaptor) ConvertGeminiRequest(c *gin.Context, info *relaycommon.RelayInfo, request *dto.GeminiChatRequest) (any, error) {
return nil, errors.New("codex channel: endpoint not supported")
}
func (a *Adaptor) ConvertClaudeRequest(*gin.Context, *relaycommon.RelayInfo, *dto.ClaudeRequest) (any, error) {
return nil, errors.New("codex channel: endpoint not supported")
}
func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.AudioRequest) (io.Reader, error) {
return nil, errors.New("codex channel: endpoint not supported")
}
func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
return nil, errors.New("codex channel: endpoint not supported")
}
func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
}
func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayInfo, request *dto.GeneralOpenAIRequest) (any, error) {
return nil, errors.New("codex channel: endpoint not supported")
}
func (a *Adaptor) ConvertRerankRequest(c *gin.Context, relayMode int, request dto.RerankRequest) (any, error) {
return nil, errors.New("codex channel: endpoint not supported")
}
func (a *Adaptor) ConvertEmbeddingRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.EmbeddingRequest) (any, error) {
return nil, errors.New("codex channel: endpoint not supported")
}
func (a *Adaptor) ConvertOpenAIResponsesRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.OpenAIResponsesRequest) (any, error) {
if info != nil && info.ChannelSetting.SystemPrompt != "" {
systemPrompt := info.ChannelSetting.SystemPrompt
if len(request.Instructions) == 0 {
if b, err := common.Marshal(systemPrompt); err == nil {
request.Instructions = b
} else {
return nil, err
}
} else if info.ChannelSetting.SystemPromptOverride {
var existing string
if err := common.Unmarshal(request.Instructions, &existing); err == nil {
existing = strings.TrimSpace(existing)
if existing == "" {
if b, err := common.Marshal(systemPrompt); err == nil {
request.Instructions = b
} else {
return nil, err
}
} else {
if b, err := common.Marshal(systemPrompt + "\n" + existing); err == nil {
request.Instructions = b
} else {
return nil, err
}
}
} else {
if b, err := common.Marshal(systemPrompt); err == nil {
request.Instructions = b
} else {
return nil, err
}
}
}
}
// codex: store must be false
request.Store = json.RawMessage("false")
return request, nil
}
func (a *Adaptor) DoRequest(c *gin.Context, info *relaycommon.RelayInfo, requestBody io.Reader) (any, error) {
return channel.DoApiRequest(a, c, info, requestBody)
}
func (a *Adaptor) DoResponse(c *gin.Context, resp *http.Response, info *relaycommon.RelayInfo) (usage any, err *types.NewAPIError) {
if info.RelayMode != relayconstant.RelayModeResponses {
return nil, types.NewError(errors.New("codex channel: endpoint not supported"), types.ErrorCodeInvalidRequest)
}
if info.IsStream {
return openai.OaiResponsesStreamHandler(c, info, resp)
}
return openai.OaiResponsesHandler(c, info, resp)
}
func (a *Adaptor) GetModelList() []string {
return ModelList
}
func (a *Adaptor) GetChannelName() string {
return ChannelName
}
func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
if info.RelayMode != relayconstant.RelayModeResponses {
return "", errors.New("codex channel: only /v1/responses is supported")
}
return relaycommon.GetFullRequestURL(info.ChannelBaseUrl, "/backend-api/codex/responses", info.ChannelType), nil
}
func (a *Adaptor) SetupRequestHeader(c *gin.Context, req *http.Header, info *relaycommon.RelayInfo) error {
channel.SetupApiRequestHeader(info, c, req)
key := strings.TrimSpace(info.ApiKey)
if !strings.HasPrefix(key, "{") {
return errors.New("codex channel: key must be a JSON object")
}
oauthKey, err := ParseOAuthKey(key)
if err != nil {
return err
}
accessToken := strings.TrimSpace(oauthKey.AccessToken)
accountID := strings.TrimSpace(oauthKey.AccountID)
if accessToken == "" {
return errors.New("codex channel: access_token is required")
}
if accountID == "" {
return errors.New("codex channel: account_id is required")
}
req.Set("Authorization", "Bearer "+accessToken)
req.Set("chatgpt-account-id", accountID)
if req.Get("OpenAI-Beta") == "" {
req.Set("OpenAI-Beta", "responses=experimental")
}
if req.Get("originator") == "" {
req.Set("originator", "codex_cli_rs")
}
return nil
}


@@ -0,0 +1,9 @@
package codex
var ModelList = []string{
"gpt-5", "gpt-5-codex", "gpt-5-codex-mini",
"gpt-5.1", "gpt-5.1-codex", "gpt-5.1-codex-max", "gpt-5.1-codex-mini",
"gpt-5.2", "gpt-5.2-codex",
}
const ChannelName = "codex"


@@ -0,0 +1,30 @@
package codex
import (
"errors"
"github.com/QuantumNous/new-api/common"
)
type OAuthKey struct {
IDToken string `json:"id_token,omitempty"`
AccessToken string `json:"access_token,omitempty"`
RefreshToken string `json:"refresh_token,omitempty"`
AccountID string `json:"account_id,omitempty"`
LastRefresh string `json:"last_refresh,omitempty"`
Email string `json:"email,omitempty"`
Type string `json:"type,omitempty"`
Expired string `json:"expired,omitempty"`
}
func ParseOAuthKey(raw string) (*OAuthKey, error) {
if raw == "" {
return nil, errors.New("codex channel: empty oauth key")
}
var key OAuthKey
if err := common.Unmarshal([]byte(raw), &key); err != nil {
return nil, errors.New("codex channel: invalid oauth key json")
}
return &key, nil
}
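The codex channel stores its credential as a JSON object rather than a bare token, and `SetupRequestHeader` later requires `access_token` and `account_id` to be non-empty. A self-contained sketch of that parse-and-validate flow (the `parseCodexKey` helper is illustrative, not the project's API; field names are taken from `OAuthKey` above):

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"strings"
)

// oauthKey mirrors the two OAuthKey fields the request-header path relies on.
type oauthKey struct {
	AccessToken string `json:"access_token"`
	AccountID   string `json:"account_id"`
}

// parseCodexKey combines ParseOAuthKey with the validation done in
// SetupRequestHeader: reject non-JSON keys and missing required fields.
func parseCodexKey(raw string) (*oauthKey, error) {
	raw = strings.TrimSpace(raw)
	if !strings.HasPrefix(raw, "{") {
		return nil, errors.New("codex channel: key must be a JSON object")
	}
	var k oauthKey
	if err := json.Unmarshal([]byte(raw), &k); err != nil {
		return nil, errors.New("codex channel: invalid oauth key json")
	}
	if strings.TrimSpace(k.AccessToken) == "" {
		return nil, errors.New("codex channel: access_token is required")
	}
	if strings.TrimSpace(k.AccountID) == "" {
		return nil, errors.New("codex channel: account_id is required")
	}
	return &k, nil
}

func main() {
	k, err := parseCodexKey(`{"access_token":"tok","account_id":"acc"}`)
	if err != nil {
		panic(err)
	}
	fmt.Println(k.AccountID) // acc
	if _, err := parseCodexKey("sk-plain-token"); err == nil {
		panic("plain tokens must be rejected")
	}
}
```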


@@ -1,6 +1,7 @@
package gemini
import (
"context"
"encoding/json"
"errors"
"fmt"
@@ -8,6 +9,7 @@ import (
"net/http"
"strconv"
"strings"
"time"
"unicode/utf8"
"github.com/QuantumNous/new-api/common"
@@ -32,6 +34,7 @@ var geminiSupportedMimeTypes = map[string]bool{
"audio/wav": true,
"image/png": true,
"image/jpeg": true,
"image/jpg": true, // non-standard alias, treated the same as image/jpeg
"image/webp": true,
"text/plain": true,
"video/mov": true,
@@ -672,6 +675,7 @@ func cleanFunctionParameters(params interface{}) interface{} {
delete(cleanedMap, "exclusiveMinimum")
delete(cleanedMap, "$schema")
delete(cleanedMap, "additionalProperties")
delete(cleanedMap, "propertyNames")
// Check and clean 'format' for string types
if propType, typeExists := cleanedMap["type"].(string); typeExists && propType == "string" {
@@ -1362,3 +1366,76 @@ func GeminiImageHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.
return usage, nil
}
type GeminiModelsResponse struct {
Models []dto.GeminiModel `json:"models"`
NextPageToken string `json:"nextPageToken"`
}
func FetchGeminiModels(baseURL, apiKey, proxyURL string) ([]string, error) {
client, err := service.GetHttpClientWithProxy(proxyURL)
if err != nil {
return nil, fmt.Errorf("failed to create HTTP client: %v", err)
}
allModels := make([]string, 0)
nextPageToken := ""
maxPages := 100 // Safety limit to prevent infinite loops
for page := 0; page < maxPages; page++ {
url := fmt.Sprintf("%s/v1beta/models", baseURL)
if nextPageToken != "" {
url = fmt.Sprintf("%s?pageToken=%s", url, nextPageToken)
}
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
request, err := http.NewRequestWithContext(ctx, "GET", url, nil)
if err != nil {
cancel()
return nil, fmt.Errorf("failed to create request: %v", err)
}
request.Header.Set("x-goog-api-key", apiKey)
response, err := client.Do(request)
if err != nil {
cancel()
return nil, fmt.Errorf("request failed: %v", err)
}
if response.StatusCode != http.StatusOK {
body, _ := io.ReadAll(response.Body)
response.Body.Close()
cancel()
return nil, fmt.Errorf("server returned error %d: %s", response.StatusCode, string(body))
}
body, err := io.ReadAll(response.Body)
response.Body.Close()
cancel()
if err != nil {
return nil, fmt.Errorf("failed to read response: %v", err)
}
var modelsResponse GeminiModelsResponse
if err = common.Unmarshal(body, &modelsResponse); err != nil {
return nil, fmt.Errorf("failed to parse response: %v", err)
}
for _, model := range modelsResponse.Models {
modelNameValue, ok := model.Name.(string)
if !ok {
continue
}
modelName := strings.TrimPrefix(modelNameValue, "models/")
allModels = append(allModels, modelName)
}
nextPageToken = modelsResponse.NextPageToken
if nextPageToken == "" {
break
}
}
return allModels, nil
}


@@ -14,6 +14,9 @@ var ModelList = []string{
"speech-02-turbo",
"speech-01-hd",
"speech-01-turbo",
"MiniMax-M2.1",
"MiniMax-M2.1-lightning",
"MiniMax-M2",
}
var ChannelName = "minimax"


@@ -0,0 +1,369 @@
package openai
import (
"fmt"
"io"
"net/http"
"strings"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/logger"
relaycommon "github.com/QuantumNous/new-api/relay/common"
"github.com/QuantumNous/new-api/relay/helper"
"github.com/QuantumNous/new-api/service"
"github.com/QuantumNous/new-api/types"
"github.com/gin-gonic/gin"
)
func OaiResponsesToChatHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
if resp == nil || resp.Body == nil {
return nil, types.NewOpenAIError(fmt.Errorf("invalid response"), types.ErrorCodeBadResponse, http.StatusInternalServerError)
}
defer service.CloseResponseBodyGracefully(resp)
var responsesResp dto.OpenAIResponsesResponse
body, err := io.ReadAll(resp.Body)
if err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeReadResponseBodyFailed, http.StatusInternalServerError)
}
if err := common.Unmarshal(body, &responsesResp); err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError)
}
if oaiError := responsesResp.GetOpenAIError(); oaiError != nil && oaiError.Type != "" {
return nil, types.WithOpenAIError(*oaiError, resp.StatusCode)
}
chatId := helper.GetResponseID(c)
chatResp, usage, err := service.ResponsesResponseToChatCompletionsResponse(&responsesResp, chatId)
if err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeBadResponseBody, http.StatusInternalServerError)
}
if usage == nil || usage.TotalTokens == 0 {
text := service.ExtractOutputTextFromResponses(&responsesResp)
usage = service.ResponseText2Usage(c, text, info.UpstreamModelName, info.GetEstimatePromptTokens())
chatResp.Usage = *usage
}
chatBody, err := common.Marshal(chatResp)
if err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeJsonMarshalFailed, http.StatusInternalServerError)
}
service.IOCopyBytesGracefully(c, resp, chatBody)
return usage, nil
}
func OaiResponsesToChatStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Response) (*dto.Usage, *types.NewAPIError) {
if resp == nil || resp.Body == nil {
return nil, types.NewOpenAIError(fmt.Errorf("invalid response"), types.ErrorCodeBadResponse, http.StatusInternalServerError)
}
defer service.CloseResponseBodyGracefully(resp)
responseId := helper.GetResponseID(c)
createAt := time.Now().Unix()
model := info.UpstreamModelName
var (
usage = &dto.Usage{}
outputText strings.Builder
usageText strings.Builder
sentStart bool
sentStop bool
sawToolCall bool
streamErr *types.NewAPIError
)
toolCallIndexByID := make(map[string]int)
toolCallNameByID := make(map[string]string)
toolCallArgsByID := make(map[string]string)
toolCallNameSent := make(map[string]bool)
toolCallCanonicalIDByItemID := make(map[string]string)
sendStartIfNeeded := func() bool {
if sentStart {
return true
}
if err := helper.ObjectData(c, helper.GenerateStartEmptyResponse(responseId, createAt, model, nil)); err != nil {
streamErr = types.NewOpenAIError(err, types.ErrorCodeBadResponse, http.StatusInternalServerError)
return false
}
sentStart = true
return true
}
sendToolCallDelta := func(callID string, name string, argsDelta string) bool {
if callID == "" {
return true
}
if outputText.Len() > 0 {
// Prefer streaming assistant text over tool calls to match non-stream behavior.
return true
}
if !sendStartIfNeeded() {
return false
}
idx, ok := toolCallIndexByID[callID]
if !ok {
idx = len(toolCallIndexByID)
toolCallIndexByID[callID] = idx
}
if name != "" {
toolCallNameByID[callID] = name
}
if toolCallNameByID[callID] != "" {
name = toolCallNameByID[callID]
}
tool := dto.ToolCallResponse{
ID: callID,
Type: "function",
Function: dto.FunctionResponse{
Arguments: argsDelta,
},
}
tool.SetIndex(idx)
if name != "" && !toolCallNameSent[callID] {
tool.Function.Name = name
toolCallNameSent[callID] = true
}
chunk := &dto.ChatCompletionsStreamResponse{
Id: responseId,
Object: "chat.completion.chunk",
Created: createAt,
Model: model,
Choices: []dto.ChatCompletionsStreamResponseChoice{
{
Index: 0,
Delta: dto.ChatCompletionsStreamResponseChoiceDelta{
ToolCalls: []dto.ToolCallResponse{tool},
},
},
},
}
if err := helper.ObjectData(c, chunk); err != nil {
streamErr = types.NewOpenAIError(err, types.ErrorCodeBadResponse, http.StatusInternalServerError)
return false
}
sawToolCall = true
// Include tool call data in the local builder for fallback token estimation.
if tool.Function.Name != "" {
usageText.WriteString(tool.Function.Name)
}
if argsDelta != "" {
usageText.WriteString(argsDelta)
}
return true
}
helper.StreamScannerHandler(c, resp, info, func(data string) bool {
if streamErr != nil {
return false
}
var streamResp dto.ResponsesStreamResponse
if err := common.UnmarshalJsonStr(data, &streamResp); err != nil {
logger.LogError(c, "failed to unmarshal responses stream event: "+err.Error())
return true
}
switch streamResp.Type {
case "response.created":
if streamResp.Response != nil {
if streamResp.Response.Model != "" {
model = streamResp.Response.Model
}
if streamResp.Response.CreatedAt != 0 {
createAt = int64(streamResp.Response.CreatedAt)
}
}
case "response.output_text.delta":
if !sendStartIfNeeded() {
return false
}
if streamResp.Delta != "" {
outputText.WriteString(streamResp.Delta)
usageText.WriteString(streamResp.Delta)
delta := streamResp.Delta
chunk := &dto.ChatCompletionsStreamResponse{
Id: responseId,
Object: "chat.completion.chunk",
Created: createAt,
Model: model,
Choices: []dto.ChatCompletionsStreamResponseChoice{
{
Index: 0,
Delta: dto.ChatCompletionsStreamResponseChoiceDelta{
Content: &delta,
},
},
},
}
if err := helper.ObjectData(c, chunk); err != nil {
streamErr = types.NewOpenAIError(err, types.ErrorCodeBadResponse, http.StatusInternalServerError)
return false
}
}
case "response.output_item.added", "response.output_item.done":
if streamResp.Item == nil {
break
}
if streamResp.Item.Type != "function_call" {
break
}
itemID := strings.TrimSpace(streamResp.Item.ID)
callID := strings.TrimSpace(streamResp.Item.CallId)
if callID == "" {
callID = itemID
}
if itemID != "" && callID != "" {
toolCallCanonicalIDByItemID[itemID] = callID
}
name := strings.TrimSpace(streamResp.Item.Name)
if name != "" {
toolCallNameByID[callID] = name
}
newArgs := streamResp.Item.Arguments
prevArgs := toolCallArgsByID[callID]
argsDelta := ""
if newArgs != "" {
if strings.HasPrefix(newArgs, prevArgs) {
argsDelta = newArgs[len(prevArgs):]
} else {
argsDelta = newArgs
}
toolCallArgsByID[callID] = newArgs
}
if !sendToolCallDelta(callID, name, argsDelta) {
return false
}
case "response.function_call_arguments.delta":
itemID := strings.TrimSpace(streamResp.ItemID)
callID := toolCallCanonicalIDByItemID[itemID]
if callID == "" {
callID = itemID
}
if callID == "" {
break
}
toolCallArgsByID[callID] += streamResp.Delta
if !sendToolCallDelta(callID, "", streamResp.Delta) {
return false
}
case "response.function_call_arguments.done":
case "response.completed":
if streamResp.Response != nil {
if streamResp.Response.Model != "" {
model = streamResp.Response.Model
}
if streamResp.Response.CreatedAt != 0 {
createAt = int64(streamResp.Response.CreatedAt)
}
if streamResp.Response.Usage != nil {
if streamResp.Response.Usage.InputTokens != 0 {
usage.PromptTokens = streamResp.Response.Usage.InputTokens
usage.InputTokens = streamResp.Response.Usage.InputTokens
}
if streamResp.Response.Usage.OutputTokens != 0 {
usage.CompletionTokens = streamResp.Response.Usage.OutputTokens
usage.OutputTokens = streamResp.Response.Usage.OutputTokens
}
if streamResp.Response.Usage.TotalTokens != 0 {
usage.TotalTokens = streamResp.Response.Usage.TotalTokens
} else {
usage.TotalTokens = usage.PromptTokens + usage.CompletionTokens
}
if streamResp.Response.Usage.InputTokensDetails != nil {
usage.PromptTokensDetails.CachedTokens = streamResp.Response.Usage.InputTokensDetails.CachedTokens
usage.PromptTokensDetails.ImageTokens = streamResp.Response.Usage.InputTokensDetails.ImageTokens
usage.PromptTokensDetails.AudioTokens = streamResp.Response.Usage.InputTokensDetails.AudioTokens
}
if streamResp.Response.Usage.CompletionTokenDetails.ReasoningTokens != 0 {
usage.CompletionTokenDetails.ReasoningTokens = streamResp.Response.Usage.CompletionTokenDetails.ReasoningTokens
}
}
}
if !sendStartIfNeeded() {
return false
}
if !sentStop {
finishReason := "stop"
if sawToolCall && outputText.Len() == 0 {
finishReason = "tool_calls"
}
stop := helper.GenerateStopResponse(responseId, createAt, model, finishReason)
if err := helper.ObjectData(c, stop); err != nil {
streamErr = types.NewOpenAIError(err, types.ErrorCodeBadResponse, http.StatusInternalServerError)
return false
}
sentStop = true
}
case "response.error", "response.failed":
if streamResp.Response != nil {
if oaiErr := streamResp.Response.GetOpenAIError(); oaiErr != nil && oaiErr.Type != "" {
streamErr = types.WithOpenAIError(*oaiErr, http.StatusInternalServerError)
return false
}
}
streamErr = types.NewOpenAIError(fmt.Errorf("responses stream error: %s", streamResp.Type), types.ErrorCodeBadResponse, http.StatusInternalServerError)
return false
default:
}
return true
})
if streamErr != nil {
return nil, streamErr
}
if usage.TotalTokens == 0 {
usage = service.ResponseText2Usage(c, usageText.String(), info.UpstreamModelName, info.GetEstimatePromptTokens())
}
if !sentStart {
if err := helper.ObjectData(c, helper.GenerateStartEmptyResponse(responseId, createAt, model, nil)); err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeBadResponse, http.StatusInternalServerError)
}
}
if !sentStop {
finishReason := "stop"
if sawToolCall && outputText.Len() == 0 {
finishReason = "tool_calls"
}
stop := helper.GenerateStopResponse(responseId, createAt, model, finishReason)
if err := helper.ObjectData(c, stop); err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeBadResponse, http.StatusInternalServerError)
}
}
if info.ShouldIncludeUsage && usage != nil {
if err := helper.ObjectData(c, helper.GenerateFinalUsageResponse(responseId, createAt, model, *usage)); err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeBadResponse, http.StatusInternalServerError)
}
}
helper.Done(c)
return usage, nil
}
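One subtle piece of the stream handler above is the argument-delta computation: `response.output_item` events carry the *cumulative* argument string, so only the new suffix should be forwarded as a chat tool-call delta. A minimal sketch of that prefix-diff trick (`argsDelta` is an illustrative name, not a function in the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// argsDelta returns the portion of next that was not already sent as prev.
// When next is not an extension of prev (an upstream resend or rewrite),
// it is forwarded whole, matching the handler's behavior.
func argsDelta(prev, next string) string {
	if next == "" {
		return ""
	}
	if strings.HasPrefix(next, prev) {
		return next[len(prev):]
	}
	return next
}

func main() {
	fmt.Println(argsDelta(`{"city":`, `{"city":"Paris"}`)) // "Paris"}
	fmt.Println(argsDelta("", `{"a":1}`))                  // {"a":1}
}
```

The handler also accumulates these snapshots in `toolCallArgsByID`, so a later `function_call_arguments.delta` event for the same call continues from the right offset.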


@@ -186,7 +186,7 @@ func OaiStreamHandler(c *gin.Context, info *relaycommon.RelayInfo, resp *http.Re
usage.CompletionTokens += toolCount * 7
}
applyUsagePostProcessing(info, usage, nil)
applyUsagePostProcessing(info, usage, common.StringToByteSlice(lastStreamData))
HandleFinalResponse(c, info, lastStreamData, responseId, createAt, model, systemFingerprint, usage, containStreamUsage)
@@ -596,7 +596,8 @@ func applyUsagePostProcessing(info *relaycommon.RelayInfo, usage *dto.Usage, res
if usage.PromptTokensDetails.CachedTokens == 0 && usage.PromptCacheHitTokens != 0 {
usage.PromptTokensDetails.CachedTokens = usage.PromptCacheHitTokens
}
case constant.ChannelTypeZhipu_v4, constant.ChannelTypeMoonshot:
case constant.ChannelTypeZhipu_v4:
// Zhipu reports cached_tokens in the standard location: usage.prompt_tokens_details.cached_tokens
if usage.PromptTokensDetails.CachedTokens == 0 {
if usage.InputTokensDetails != nil && usage.InputTokensDetails.CachedTokens > 0 {
usage.PromptTokensDetails.CachedTokens = usage.InputTokensDetails.CachedTokens
@@ -606,6 +607,19 @@ func applyUsagePostProcessing(info *relaycommon.RelayInfo, usage *dto.Usage, res
usage.PromptTokensDetails.CachedTokens = usage.PromptCacheHitTokens
}
}
case constant.ChannelTypeMoonshot:
// Moonshot reports cached_tokens in a non-standard location: choices[].usage.cached_tokens
if usage.PromptTokensDetails.CachedTokens == 0 {
if usage.InputTokensDetails != nil && usage.InputTokensDetails.CachedTokens > 0 {
usage.PromptTokensDetails.CachedTokens = usage.InputTokensDetails.CachedTokens
} else if cachedTokens, ok := extractMoonshotCachedTokensFromBody(responseBody); ok {
usage.PromptTokensDetails.CachedTokens = cachedTokens
} else if cachedTokens, ok := extractCachedTokensFromBody(responseBody); ok {
usage.PromptTokensDetails.CachedTokens = cachedTokens
} else if usage.PromptCacheHitTokens > 0 {
usage.PromptTokensDetails.CachedTokens = usage.PromptCacheHitTokens
}
}
}
}
@@ -639,3 +653,32 @@ func extractCachedTokensFromBody(body []byte) (int, bool) {
}
return 0, false
}
// extractMoonshotCachedTokensFromBody extracts cached_tokens from Moonshot's non-standard location
// Moonshot's streaming response format: {"choices":[{"usage":{"cached_tokens":111}}]}
func extractMoonshotCachedTokensFromBody(body []byte) (int, bool) {
if len(body) == 0 {
return 0, false
}
var payload struct {
Choices []struct {
Usage struct {
CachedTokens *int `json:"cached_tokens"`
} `json:"usage"`
} `json:"choices"`
}
if err := common.Unmarshal(body, &payload); err != nil {
return 0, false
}
// Iterate over choices looking for cached_tokens
for _, choice := range payload.Choices {
if choice.Usage.CachedTokens != nil && *choice.Usage.CachedTokens > 0 {
return *choice.Usage.CachedTokens, true
}
}
return 0, false
}
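The lookup above can be exercised standalone. This sketch copies the helper into a runnable program with the sample payload from the comment (only the function body is from the diff; the surrounding `main` is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// extractMoonshotCachedTokens mirrors extractMoonshotCachedTokensFromBody:
// Moonshot streams cached_tokens at choices[].usage.cached_tokens rather than
// the standard usage.prompt_tokens_details.cached_tokens.
func extractMoonshotCachedTokens(body []byte) (int, bool) {
	if len(body) == 0 {
		return 0, false
	}
	var payload struct {
		Choices []struct {
			Usage struct {
				CachedTokens *int `json:"cached_tokens"`
			} `json:"usage"`
		} `json:"choices"`
	}
	if err := json.Unmarshal(body, &payload); err != nil {
		return 0, false
	}
	// A pointer distinguishes "absent" from an explicit zero.
	for _, choice := range payload.Choices {
		if choice.Usage.CachedTokens != nil && *choice.Usage.CachedTokens > 0 {
			return *choice.Usage.CachedTokens, true
		}
	}
	return 0, false
}

func main() {
	body := []byte(`{"choices":[{"usage":{"cached_tokens":111}}]}`)
	n, ok := extractMoonshotCachedTokens(body)
	fmt.Println(n, ok) // 111 true
}
```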


@@ -6,6 +6,9 @@ import (
"fmt"
"io"
"net/http"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/constant"
"github.com/QuantumNous/new-api/dto"
@@ -23,18 +26,36 @@ import (
// ============================
type ContentItem struct {
Type string `json:"type"` // "text" or "image_url"
Text string `json:"text,omitempty"` // for text type
ImageURL *ImageURL `json:"image_url,omitempty"` // for image_url type
Type string `json:"type"` // "text", "image_url" or "video"
Text string `json:"text,omitempty"` // for text type
ImageURL *ImageURL `json:"image_url,omitempty"` // for image_url type
Video *VideoReference `json:"video,omitempty"` // for video (sample) type
}
type ImageURL struct {
URL string `json:"url"`
}
type VideoReference struct {
URL string `json:"url"` // Draft video URL
}
type requestPayload struct {
Model string `json:"model"`
Content []ContentItem `json:"content"`
Model string `json:"model"`
Content []ContentItem `json:"content"`
CallbackURL string `json:"callback_url,omitempty"`
ReturnLastFrame *dto.BoolValue `json:"return_last_frame,omitempty"`
ServiceTier string `json:"service_tier,omitempty"`
ExecutionExpiresAfter dto.IntValue `json:"execution_expires_after,omitempty"`
GenerateAudio *dto.BoolValue `json:"generate_audio,omitempty"`
Draft *dto.BoolValue `json:"draft,omitempty"`
Resolution string `json:"resolution,omitempty"`
Ratio string `json:"ratio,omitempty"`
Duration dto.IntValue `json:"duration,omitempty"`
Frames dto.IntValue `json:"frames,omitempty"`
Seed dto.IntValue `json:"seed,omitempty"`
CameraFixed *dto.BoolValue `json:"camera_fixed,omitempty"`
Watermark *dto.BoolValue `json:"watermark,omitempty"`
}
type responsePayload struct {
@@ -53,6 +74,7 @@ type responseTask struct {
Duration int `json:"duration"`
Ratio string `json:"ratio"`
FramesPerSecond int `json:"framespersecond"`
ServiceTier string `json:"service_tier"`
Usage struct {
CompletionTokens int `json:"completion_tokens"`
TotalTokens int `json:"total_tokens"`
@@ -98,16 +120,16 @@ func (a *TaskAdaptor) BuildRequestHeader(c *gin.Context, req *http.Request, info
// BuildRequestBody converts request into Doubao specific format.
func (a *TaskAdaptor) BuildRequestBody(c *gin.Context, info *relaycommon.RelayInfo) (io.Reader, error) {
v, exists := c.Get("task_request")
if !exists {
return nil, fmt.Errorf("request not found in context")
req, err := relaycommon.GetTaskRequest(c)
if err != nil {
return nil, err
}
req := v.(relaycommon.TaskSubmitReq)
body, err := a.convertToRequestPayload(&req)
if err != nil {
return nil, errors.Wrap(err, "convert request payload failed")
}
info.UpstreamModelName = body.Model
data, err := json.Marshal(body)
if err != nil {
return nil, err
@@ -141,7 +163,13 @@ func (a *TaskAdaptor) DoResponse(c *gin.Context, resp *http.Response, info *rela
return
}
c.JSON(http.StatusOK, gin.H{"task_id": dResp.ID})
ov := dto.NewOpenAIVideo()
ov.ID = dResp.ID
ov.TaskID = dResp.ID
ov.CreatedAt = time.Now().Unix()
ov.Model = info.OriginModelName
c.JSON(http.StatusOK, ov)
return dResp.ID, responseBody, nil
}
@@ -204,12 +232,15 @@ func (a *TaskAdaptor) convertToRequestPayload(req *relaycommon.TaskSubmitReq) (*
}
}
// TODO: Add support for additional parameters from metadata
// such as ratio, duration, seed, etc.
// metadata := req.Metadata
// if metadata != nil {
// // Parse and apply metadata parameters
// }
metadata := req.Metadata
metaBytes, err := json.Marshal(metadata)
if err != nil {
return nil, errors.Wrap(err, "marshal metadata failed")
}
err = json.Unmarshal(metaBytes, &r)
if err != nil {
return nil, errors.Wrap(err, "unmarshal metadata failed")
}
return &r, nil
}
@@ -229,7 +260,7 @@ func (a *TaskAdaptor) ParseTaskResult(respBody []byte) (*relaycommon.TaskInfo, e
case "pending", "queued":
taskResult.Status = model.TaskStatusQueued
taskResult.Progress = "10%"
-case "processing":
+case "processing", "running":
taskResult.Status = model.TaskStatusInProgress
taskResult.Progress = "50%"
case "succeeded":
@@ -251,3 +282,30 @@ func (a *TaskAdaptor) ParseTaskResult(respBody []byte) (*relaycommon.TaskInfo, e
return &taskResult, nil
}
func (a *TaskAdaptor) ConvertToOpenAIVideo(originTask *model.Task) ([]byte, error) {
var dResp responseTask
if err := json.Unmarshal(originTask.Data, &dResp); err != nil {
return nil, errors.Wrap(err, "unmarshal doubao task data failed")
}
openAIVideo := dto.NewOpenAIVideo()
openAIVideo.ID = originTask.TaskID
openAIVideo.TaskID = originTask.TaskID
openAIVideo.Status = originTask.Status.ToVideoStatus()
openAIVideo.SetProgressStr(originTask.Progress)
openAIVideo.SetMetadata("url", dResp.Content.VideoURL)
openAIVideo.CreatedAt = originTask.CreatedAt
openAIVideo.CompletedAt = originTask.UpdatedAt
openAIVideo.Model = originTask.Properties.OriginModelName
if dResp.Status == "failed" {
openAIVideo.Error = &dto.OpenAIVideoError{
Message: "task failed",
Code: "failed",
}
}
jsonData, _ := common.Marshal(openAIVideo)
return jsonData, nil
}



@@ -4,6 +4,7 @@ var ModelList = []string{
"doubao-seedance-1-0-pro-250528",
"doubao-seedance-1-0-lite-t2v",
"doubao-seedance-1-0-lite-i2v",
"doubao-seedance-1-5-pro-251215",
}
var ChannelName = "doubao-video"


@@ -0,0 +1,160 @@
package relay
import (
"bytes"
"net/http"
"strings"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/constant"
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/relay/channel"
openaichannel "github.com/QuantumNous/new-api/relay/channel/openai"
relaycommon "github.com/QuantumNous/new-api/relay/common"
relayconstant "github.com/QuantumNous/new-api/relay/constant"
"github.com/QuantumNous/new-api/service"
"github.com/QuantumNous/new-api/types"
"github.com/gin-gonic/gin"
)
func applySystemPromptIfNeeded(c *gin.Context, info *relaycommon.RelayInfo, request *dto.GeneralOpenAIRequest) {
if info == nil || request == nil {
return
}
if info.ChannelSetting.SystemPrompt == "" {
return
}
systemRole := request.GetSystemRoleName()
containSystemPrompt := false
for _, message := range request.Messages {
if message.Role == systemRole {
containSystemPrompt = true
break
}
}
if !containSystemPrompt {
systemMessage := dto.Message{
Role: systemRole,
Content: info.ChannelSetting.SystemPrompt,
}
request.Messages = append([]dto.Message{systemMessage}, request.Messages...)
return
}
if !info.ChannelSetting.SystemPromptOverride {
return
}
common.SetContextKey(c, constant.ContextKeySystemPromptOverride, true)
for i, message := range request.Messages {
if message.Role != systemRole {
continue
}
if message.IsStringContent() {
request.Messages[i].SetStringContent(info.ChannelSetting.SystemPrompt + "\n" + message.StringContent())
return
}
contents := message.ParseContent()
contents = append([]dto.MediaContent{
{
Type: dto.ContentTypeText,
Text: info.ChannelSetting.SystemPrompt,
},
}, contents...)
request.Messages[i].Content = contents
return
}
}
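
applySystemPromptIfNeeded either prepends the channel's system prompt when the request has no system message, or (when override is enabled) prefixes the existing system message with it. A simplified sketch of that prepend-vs-override behavior, reduced to string-content messages (the real dto.Message also handles multi-part content):

```go
package main

import "fmt"

// message is a simplified stand-in for dto.Message (string content only).
type message struct {
	Role    string
	Content string
}

// applySystemPrompt mirrors the logic above: prepend the channel prompt when no
// system message exists; otherwise, when override is enabled, prefix the first
// system message with the channel prompt.
func applySystemPrompt(msgs []message, prompt string, override bool) []message {
	if prompt == "" {
		return msgs
	}
	for i, m := range msgs {
		if m.Role == "system" {
			if override {
				msgs[i].Content = prompt + "\n" + m.Content
			}
			return msgs
		}
	}
	return append([]message{{Role: "system", Content: prompt}}, msgs...)
}

func main() {
	out := applySystemPrompt([]message{{Role: "user", Content: "hi"}}, "be brief", false)
	fmt.Println(len(out), out[0].Role)
}
```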
func chatCompletionsViaResponses(c *gin.Context, info *relaycommon.RelayInfo, adaptor channel.Adaptor, request *dto.GeneralOpenAIRequest) (*dto.Usage, *types.NewAPIError) {
overrideCtx := relaycommon.BuildParamOverrideContext(info)
chatJSON, err := common.Marshal(request)
if err != nil {
return nil, types.NewError(err, types.ErrorCodeConvertRequestFailed, types.ErrOptionWithSkipRetry())
}
chatJSON, err = relaycommon.RemoveDisabledFields(chatJSON, info.ChannelOtherSettings)
if err != nil {
return nil, types.NewError(err, types.ErrorCodeConvertRequestFailed, types.ErrOptionWithSkipRetry())
}
if len(info.ParamOverride) > 0 {
chatJSON, err = relaycommon.ApplyParamOverride(chatJSON, info.ParamOverride, overrideCtx)
if err != nil {
return nil, types.NewError(err, types.ErrorCodeChannelParamOverrideInvalid, types.ErrOptionWithSkipRetry())
}
}
var overriddenChatReq dto.GeneralOpenAIRequest
if err := common.Unmarshal(chatJSON, &overriddenChatReq); err != nil {
return nil, types.NewError(err, types.ErrorCodeChannelParamOverrideInvalid, types.ErrOptionWithSkipRetry())
}
responsesReq, err := service.ChatCompletionsRequestToResponsesRequest(&overriddenChatReq)
if err != nil {
return nil, types.NewErrorWithStatusCode(err, types.ErrorCodeInvalidRequest, http.StatusBadRequest, types.ErrOptionWithSkipRetry())
}
savedRelayMode := info.RelayMode
savedRequestURLPath := info.RequestURLPath
defer func() {
info.RelayMode = savedRelayMode
info.RequestURLPath = savedRequestURLPath
}()
info.RelayMode = relayconstant.RelayModeResponses
info.RequestURLPath = "/v1/responses"
convertedRequest, err := adaptor.ConvertOpenAIResponsesRequest(c, info, *responsesReq)
if err != nil {
return nil, types.NewError(err, types.ErrorCodeConvertRequestFailed, types.ErrOptionWithSkipRetry())
}
jsonData, err := common.Marshal(convertedRequest)
if err != nil {
return nil, types.NewError(err, types.ErrorCodeConvertRequestFailed, types.ErrOptionWithSkipRetry())
}
jsonData, err = relaycommon.RemoveDisabledFields(jsonData, info.ChannelOtherSettings)
if err != nil {
return nil, types.NewError(err, types.ErrorCodeConvertRequestFailed, types.ErrOptionWithSkipRetry())
}
var httpResp *http.Response
resp, err := adaptor.DoRequest(c, info, bytes.NewBuffer(jsonData))
if err != nil {
return nil, types.NewOpenAIError(err, types.ErrorCodeDoRequestFailed, http.StatusInternalServerError)
}
if resp == nil {
return nil, types.NewOpenAIError(nil, types.ErrorCodeBadResponse, http.StatusInternalServerError)
}
statusCodeMappingStr := c.GetString("status_code_mapping")
httpResp = resp.(*http.Response)
info.IsStream = info.IsStream || strings.HasPrefix(httpResp.Header.Get("Content-Type"), "text/event-stream")
if httpResp.StatusCode != http.StatusOK {
newApiErr := service.RelayErrorHandler(c.Request.Context(), httpResp, false)
service.ResetStatusCode(newApiErr, statusCodeMappingStr)
return nil, newApiErr
}
if info.IsStream {
usage, newApiErr := openaichannel.OaiResponsesToChatStreamHandler(c, info, httpResp)
if newApiErr != nil {
service.ResetStatusCode(newApiErr, statusCodeMappingStr)
return nil, newApiErr
}
return usage, nil
}
usage, newApiErr := openaichannel.OaiResponsesToChatHandler(c, info, httpResp)
if newApiErr != nil {
service.ResetStatusCode(newApiErr, statusCodeMappingStr)
return nil, newApiErr
}
return usage, nil
}
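
chatCompletionsViaResponses temporarily rewrites RelayMode and RequestURLPath and restores the saved values with a deferred closure, so the relay info is back in its original state even on early error returns. A minimal sketch of that save/defer-restore pattern, with hypothetical field names:

```go
package main

import "fmt"

// relay is a trimmed stand-in for the relay info whose fields get swapped.
type relay struct{ Mode, Path string }

// withTemporaryMode switches Mode and Path for the duration of fn and
// restores the originals on return, mirroring the defer in the code above.
func withTemporaryMode(r *relay, mode, path string, fn func()) {
	savedMode, savedPath := r.Mode, r.Path
	defer func() {
		r.Mode, r.Path = savedMode, savedPath
	}()
	r.Mode, r.Path = mode, path
	fn()
}

func main() {
	r := &relay{Mode: "chat", Path: "/v1/chat/completions"}
	withTemporaryMode(r, "responses", "/v1/responses", func() { fmt.Println(r.Mode) })
	fmt.Println(r.Mode, r.Path)
}
```

The defer guarantees restoration on every exit path, which matters here because the function has several error returns between the swap and the end.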


@@ -570,18 +570,19 @@ func mergeObjects(jsonStr, path string, value interface{}, keepOrigin bool) (str
// BuildParamOverrideContext provides the context information available to ApplyParamOverride.
// The following fields are currently built in:
-//   - model: prefers the upstream model name (UpstreamModelName), falling back to the original model name (OriginModelName).
-//   - upstream_model: always the upstream model name after channel mapping.
+//   - upstream_model/model: always the upstream model name after channel mapping.
//   - original_model: the model name originally specified in the request.
+//   - request_path: the request path.
+//   - is_channel_test: whether the request is a channel test (same as is_test).
func BuildParamOverrideContext(info *RelayInfo) map[string]interface{} {
-if info == nil || info.ChannelMeta == nil {
+if info == nil {
return nil
}
}
ctx := make(map[string]interface{})
-if info.UpstreamModelName != "" {
-ctx["model"] = info.UpstreamModelName
-ctx["upstream_model"] = info.UpstreamModelName
+if info.ChannelMeta != nil && info.ChannelMeta.UpstreamModelName != "" {
+ctx["model"] = info.ChannelMeta.UpstreamModelName
+ctx["upstream_model"] = info.ChannelMeta.UpstreamModelName
}
if info.OriginModelName != "" {
ctx["original_model"] = info.OriginModelName
@@ -590,8 +591,13 @@ func BuildParamOverrideContext(info *RelayInfo) map[string]interface{} {
}
}
-if len(ctx) == 0 {
-return nil
+if info.RequestURLPath != "" {
+requestPath := info.RequestURLPath
+if requestPath != "" {
+ctx["request_path"] = requestPath
+}
}
+ctx["is_channel_test"] = info.IsChannelTest
return ctx
}
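
A trimmed, self-contained sketch of the context map this function builds (a stand-in struct, not the real RelayInfo, and without the ChannelMeta indirection the new version adds):

```go
package main

import "fmt"

// relayInfo is a trimmed stand-in for RelayInfo with just the fields
// the override context reads.
type relayInfo struct {
	UpstreamModelName string
	OriginModelName   string
	RequestURLPath    string
	IsChannelTest     bool
}

// buildOverrideContext mirrors the shape of BuildParamOverrideContext above:
// model/upstream_model from the mapped name, original_model from the request,
// plus request_path and is_channel_test.
func buildOverrideContext(info *relayInfo) map[string]any {
	if info == nil {
		return nil
	}
	ctx := make(map[string]any)
	if info.UpstreamModelName != "" {
		ctx["model"] = info.UpstreamModelName
		ctx["upstream_model"] = info.UpstreamModelName
	}
	if info.OriginModelName != "" {
		ctx["original_model"] = info.OriginModelName
	}
	if info.RequestURLPath != "" {
		ctx["request_path"] = info.RequestURLPath
	}
	ctx["is_channel_test"] = info.IsChannelTest
	return ctx
}

func main() {
	ctx := buildOverrideContext(&relayInfo{UpstreamModelName: "gpt-x", OriginModelName: "alias", RequestURLPath: "/v1/chat/completions"})
	fmt.Println(ctx["model"], ctx["original_model"], ctx["is_channel_test"])
}
```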


@@ -115,6 +115,7 @@ type RelayInfo struct {
SendResponseCount int
FinalPreConsumedQuota int // final pre-consumed quota
IsClaudeBetaQuery bool // /v1/messages?beta=true
IsChannelTest bool // channel test request
PriceData types.PriceData
@@ -273,6 +274,7 @@ var streamSupportedChannels = map[int]bool{
constant.ChannelTypeZhipu_v4: true,
constant.ChannelTypeAli: true,
constant.ChannelTypeSubmodel: true,
constant.ChannelTypeCodex: true,
}
func GenRelayInfoWs(c *gin.Context, ws *websocket.Conn) *RelayInfo {


@@ -14,6 +14,7 @@ import (
"github.com/QuantumNous/new-api/logger"
"github.com/QuantumNous/new-api/model"
relaycommon "github.com/QuantumNous/new-api/relay/common"
relayconstant "github.com/QuantumNous/new-api/relay/constant"
"github.com/QuantumNous/new-api/relay/helper"
"github.com/QuantumNous/new-api/service"
"github.com/QuantumNous/new-api/setting/model_setting"
@@ -73,9 +74,32 @@ func TextHelper(c *gin.Context, info *relaycommon.RelayInfo) (newAPIError *types
return types.NewError(fmt.Errorf("invalid api type: %d", info.ApiType), types.ErrorCodeInvalidApiType, types.ErrOptionWithSkipRetry())
}
adaptor.Init(info)
passThroughGlobal := model_setting.GetGlobalSettings().PassThroughRequestEnabled
if info.RelayMode == relayconstant.RelayModeChatCompletions &&
!passThroughGlobal &&
!info.ChannelSetting.PassThroughBodyEnabled &&
shouldChatCompletionsViaResponses(info) {
applySystemPromptIfNeeded(c, info, request)
usage, newApiErr := chatCompletionsViaResponses(c, info, adaptor, request)
if newApiErr != nil {
return newApiErr
}
var containAudioTokens = usage.CompletionTokenDetails.AudioTokens > 0 || usage.PromptTokensDetails.AudioTokens > 0
var containsAudioRatios = ratio_setting.ContainsAudioRatio(info.OriginModelName) || ratio_setting.ContainsAudioCompletionRatio(info.OriginModelName)
if containAudioTokens && containsAudioRatios {
service.PostAudioConsumeQuota(c, info, usage, "")
} else {
postConsumeQuota(c, info, usage)
}
return nil
}
var requestBody io.Reader
-if model_setting.GetGlobalSettings().PassThroughRequestEnabled || info.ChannelSetting.PassThroughBodyEnabled {
+if passThroughGlobal || info.ChannelSetting.PassThroughBodyEnabled {
body, err := common.GetRequestBody(c)
if err != nil {
return types.NewErrorWithStatusCode(err, types.ErrorCodeReadRequestBodyFailed, http.StatusBadRequest, types.ErrOptionWithSkipRetry())
@@ -193,6 +217,16 @@ func TextHelper(c *gin.Context, info *relaycommon.RelayInfo) (newAPIError *types
return nil
}
func shouldChatCompletionsViaResponses(info *relaycommon.RelayInfo) bool {
if info == nil {
return false
}
if info.RelayMode != relayconstant.RelayModeChatCompletions {
return false
}
return service.ShouldChatCompletionsUseResponsesGlobal(info.ChannelId, info.OriginModelName)
}
func postConsumeQuota(ctx *gin.Context, relayInfo *relaycommon.RelayInfo, usage *dto.Usage, extraContent ...string) {
if usage == nil {
usage = &dto.Usage{


@@ -11,6 +11,7 @@ import (
"github.com/QuantumNous/new-api/relay/channel/baidu_v2"
"github.com/QuantumNous/new-api/relay/channel/claude"
"github.com/QuantumNous/new-api/relay/channel/cloudflare"
"github.com/QuantumNous/new-api/relay/channel/codex"
"github.com/QuantumNous/new-api/relay/channel/cohere"
"github.com/QuantumNous/new-api/relay/channel/coze"
"github.com/QuantumNous/new-api/relay/channel/deepseek"
@@ -117,6 +118,8 @@ func GetAdaptor(apiType int) channel.Adaptor {
return &minimax.Adaptor{}
case constant.APITypeReplicate:
return &replicate.Adaptor{}
case constant.APITypeCodex:
return &codex.Adaptor{}
}
return nil
}
@@ -148,7 +151,7 @@ func GetTaskAdaptor(platform constant.TaskPlatform) channel.TaskAdaptor {
return &taskvertex.TaskAdaptor{}
case constant.ChannelTypeVidu:
return &taskVidu.TaskAdaptor{}
-case constant.ChannelTypeDoubaoVideo:
+case constant.ChannelTypeDoubaoVideo, constant.ChannelTypeVolcEngine:
return &taskdoubao.TaskAdaptor{}
case constant.ChannelTypeSora, constant.ChannelTypeOpenAI:
return &tasksora.TaskAdaptor{}


@@ -150,6 +150,14 @@ func RelayTaskSubmit(c *gin.Context, info *relaycommon.RelayInfo) (taskErr *dto.
}
}
// Handle the auto group: read the actually selected group from the context.
// When the auto group is used, the Distribute middleware stores the selected group under ContextKeyAutoGroup.
if autoGroup, exists := common.GetContextKey(c, constant.ContextKeyAutoGroup); exists {
if groupStr, ok := autoGroup.(string); ok && groupStr != "" {
info.UsingGroup = groupStr
}
}
// pre-deduct quota
groupRatio := ratio_setting.GetGroupRatio(info.UsingGroup)
var ratio float64


@@ -156,6 +156,12 @@ func SetApiRouter(router *gin.Engine) {
channelRoute.POST("/fix", controller.FixChannelsAbilities)
channelRoute.GET("/fetch_models/:id", controller.FetchUpstreamModels)
channelRoute.POST("/fetch_models", controller.FetchModels)
channelRoute.POST("/codex/oauth/start", controller.StartCodexOAuth)
channelRoute.POST("/codex/oauth/complete", controller.CompleteCodexOAuth)
channelRoute.POST("/:id/codex/oauth/start", controller.StartCodexOAuthForChannel)
channelRoute.POST("/:id/codex/oauth/complete", controller.CompleteCodexOAuthForChannel)
channelRoute.POST("/:id/codex/refresh", controller.RefreshCodexChannelCredential)
channelRoute.GET("/:id/codex/usage", controller.GetCodexChannelUsage)
channelRoute.POST("/ollama/pull", controller.OllamaPullModel)
channelRoute.POST("/ollama/pull/stream", controller.OllamaPullModelStream)
channelRoute.DELETE("/ollama/delete", controller.OllamaDeleteModel)


@@ -57,9 +57,12 @@ func ShouldDisableChannel(channelType int, err *types.NewAPIError) bool {
if types.IsSkipRetryError(err) {
return false
}
-if err.StatusCode == http.StatusUnauthorized {
+if operation_setting.ShouldDisableByStatusCode(err.StatusCode) {
return true
}
+//if err.StatusCode == http.StatusUnauthorized {
+//	return true
+//}
if err.StatusCode == http.StatusForbidden {
switch channelType {
case constant.ChannelTypeGemini:


@@ -0,0 +1,104 @@
package service
import (
"context"
"errors"
"fmt"
"strings"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/constant"
"github.com/QuantumNous/new-api/model"
)
type CodexCredentialRefreshOptions struct {
ResetCaches bool
}
type CodexOAuthKey struct {
IDToken string `json:"id_token,omitempty"`
AccessToken string `json:"access_token,omitempty"`
RefreshToken string `json:"refresh_token,omitempty"`
AccountID string `json:"account_id,omitempty"`
LastRefresh string `json:"last_refresh,omitempty"`
Email string `json:"email,omitempty"`
Type string `json:"type,omitempty"`
Expired string `json:"expired,omitempty"`
}
func parseCodexOAuthKey(raw string) (*CodexOAuthKey, error) {
if strings.TrimSpace(raw) == "" {
return nil, errors.New("codex channel: empty oauth key")
}
var key CodexOAuthKey
if err := common.Unmarshal([]byte(raw), &key); err != nil {
return nil, errors.New("codex channel: invalid oauth key json")
}
return &key, nil
}
func RefreshCodexChannelCredential(ctx context.Context, channelID int, opts CodexCredentialRefreshOptions) (*CodexOAuthKey, *model.Channel, error) {
ch, err := model.GetChannelById(channelID, true)
if err != nil {
return nil, nil, err
}
if ch == nil {
return nil, nil, fmt.Errorf("channel not found")
}
if ch.Type != constant.ChannelTypeCodex {
return nil, nil, fmt.Errorf("channel type is not Codex")
}
oauthKey, err := parseCodexOAuthKey(strings.TrimSpace(ch.Key))
if err != nil {
return nil, nil, err
}
if strings.TrimSpace(oauthKey.RefreshToken) == "" {
return nil, nil, fmt.Errorf("codex channel: refresh_token is required to refresh credential")
}
refreshCtx, cancel := context.WithTimeout(ctx, 10*time.Second)
defer cancel()
res, err := RefreshCodexOAuthToken(refreshCtx, oauthKey.RefreshToken)
if err != nil {
return nil, nil, err
}
oauthKey.AccessToken = res.AccessToken
oauthKey.RefreshToken = res.RefreshToken
oauthKey.LastRefresh = time.Now().Format(time.RFC3339)
oauthKey.Expired = res.ExpiresAt.Format(time.RFC3339)
if strings.TrimSpace(oauthKey.Type) == "" {
oauthKey.Type = "codex"
}
if strings.TrimSpace(oauthKey.AccountID) == "" {
if accountID, ok := ExtractCodexAccountIDFromJWT(oauthKey.AccessToken); ok {
oauthKey.AccountID = accountID
}
}
if strings.TrimSpace(oauthKey.Email) == "" {
if email, ok := ExtractEmailFromJWT(oauthKey.AccessToken); ok {
oauthKey.Email = email
}
}
encoded, err := common.Marshal(oauthKey)
if err != nil {
return nil, nil, err
}
if err := model.DB.Model(&model.Channel{}).Where("id = ?", ch.Id).Update("key", string(encoded)).Error; err != nil {
return nil, nil, err
}
if opts.ResetCaches {
model.InitChannelCache()
ResetProxyClientCache()
}
return oauthKey, ch, nil
}


@@ -0,0 +1,140 @@
package service
import (
"context"
"fmt"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/constant"
"github.com/QuantumNous/new-api/logger"
"github.com/QuantumNous/new-api/model"
"github.com/bytedance/gopkg/util/gopool"
)
const (
codexCredentialRefreshTickInterval = 10 * time.Minute
codexCredentialRefreshThreshold = 24 * time.Hour
codexCredentialRefreshBatchSize = 200
codexCredentialRefreshTimeout = 15 * time.Second
)
var (
codexCredentialRefreshOnce sync.Once
codexCredentialRefreshRunning atomic.Bool
)
func StartCodexCredentialAutoRefreshTask() {
codexCredentialRefreshOnce.Do(func() {
if !common.IsMasterNode {
return
}
gopool.Go(func() {
logger.LogInfo(context.Background(), fmt.Sprintf("codex credential auto-refresh task started: tick=%s threshold=%s", codexCredentialRefreshTickInterval, codexCredentialRefreshThreshold))
ticker := time.NewTicker(codexCredentialRefreshTickInterval)
defer ticker.Stop()
runCodexCredentialAutoRefreshOnce()
for range ticker.C {
runCodexCredentialAutoRefreshOnce()
}
})
})
}
func runCodexCredentialAutoRefreshOnce() {
if !codexCredentialRefreshRunning.CompareAndSwap(false, true) {
return
}
defer codexCredentialRefreshRunning.Store(false)
ctx := context.Background()
now := time.Now()
var refreshed int
var scanned int
offset := 0
for {
var channels []*model.Channel
err := model.DB.
Select("id", "name", "key", "status", "channel_info").
Where("type = ? AND status = 1", constant.ChannelTypeCodex).
Order("id asc").
Limit(codexCredentialRefreshBatchSize).
Offset(offset).
Find(&channels).Error
if err != nil {
logger.LogError(ctx, fmt.Sprintf("codex credential auto-refresh: query channels failed: %v", err))
return
}
if len(channels) == 0 {
break
}
offset += codexCredentialRefreshBatchSize
for _, ch := range channels {
if ch == nil {
continue
}
scanned++
if ch.ChannelInfo.IsMultiKey {
continue
}
rawKey := strings.TrimSpace(ch.Key)
if rawKey == "" {
continue
}
oauthKey, err := parseCodexOAuthKey(rawKey)
if err != nil {
continue
}
refreshToken := strings.TrimSpace(oauthKey.RefreshToken)
if refreshToken == "" {
continue
}
expiredAtRaw := strings.TrimSpace(oauthKey.Expired)
expiredAt, err := time.Parse(time.RFC3339, expiredAtRaw)
if err == nil && !expiredAt.IsZero() && expiredAt.Sub(now) > codexCredentialRefreshThreshold {
continue
}
refreshCtx, cancel := context.WithTimeout(ctx, codexCredentialRefreshTimeout)
newKey, _, err := RefreshCodexChannelCredential(refreshCtx, ch.Id, CodexCredentialRefreshOptions{ResetCaches: false})
cancel()
if err != nil {
logger.LogWarn(ctx, fmt.Sprintf("codex credential auto-refresh: channel_id=%d name=%s refresh failed: %v", ch.Id, ch.Name, err))
continue
}
refreshed++
logger.LogInfo(ctx, fmt.Sprintf("codex credential auto-refresh: channel_id=%d name=%s refreshed, expires_at=%s", ch.Id, ch.Name, newKey.Expired))
}
}
if refreshed > 0 {
func() {
defer func() {
if r := recover(); r != nil {
logger.LogWarn(ctx, fmt.Sprintf("codex credential auto-refresh: InitChannelCache panic: %v", r))
}
}()
model.InitChannelCache()
}()
ResetProxyClientCache()
}
if common.DebugEnabled {
logger.LogDebug(ctx, fmt.Sprintf("codex credential auto-refresh: scanned=%d refreshed=%d", scanned, refreshed))
}
}
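
The task pages through Codex channels with a fixed-size Limit/Offset loop, stopping when a batch comes back empty. A generic sketch of that batch-scan shape, with the database query replaced by a fetch callback:

```go
package main

import "fmt"

// scanInBatches pages through a data source in fixed-size batches via a fetch
// callback, mirroring the Limit/Offset loop in the auto-refresh task above.
// It returns the total number of items visited.
func scanInBatches(batchSize int, fetch func(offset, limit int) []int) (scanned int) {
	offset := 0
	for {
		batch := fetch(offset, batchSize)
		if len(batch) == 0 {
			break
		}
		offset += batchSize
		scanned += len(batch)
	}
	return scanned
}

func main() {
	data := make([]int, 7)
	fetch := func(offset, limit int) []int {
		if offset >= len(data) {
			return nil
		}
		end := offset + limit
		if end > len(data) {
			end = len(data)
		}
		return data[offset:end]
	}
	fmt.Println(scanInBatches(3, fetch))
}
```

One caveat of offset pagination: if rows are inserted or deleted mid-scan, items can be skipped or visited twice; for a periodic best-effort refresh like this, that is usually acceptable.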

service/codex_oauth.go

@@ -0,0 +1,288 @@
package service
import (
"context"
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"strings"
"time"
)
const (
codexOAuthClientID = "app_EMoamEEZ73f0CkXaXp7hrann"
codexOAuthAuthorizeURL = "https://auth.openai.com/oauth/authorize"
codexOAuthTokenURL = "https://auth.openai.com/oauth/token"
codexOAuthRedirectURI = "http://localhost:1455/auth/callback"
codexOAuthScope = "openid profile email offline_access"
codexJWTClaimPath = "https://api.openai.com/auth"
defaultHTTPTimeout = 20 * time.Second
)
type CodexOAuthTokenResult struct {
AccessToken string
RefreshToken string
ExpiresAt time.Time
}
type CodexOAuthAuthorizationFlow struct {
State string
Verifier string
Challenge string
AuthorizeURL string
}
func RefreshCodexOAuthToken(ctx context.Context, refreshToken string) (*CodexOAuthTokenResult, error) {
client := &http.Client{Timeout: defaultHTTPTimeout}
return refreshCodexOAuthToken(ctx, client, codexOAuthTokenURL, codexOAuthClientID, refreshToken)
}
func ExchangeCodexAuthorizationCode(ctx context.Context, code string, verifier string) (*CodexOAuthTokenResult, error) {
client := &http.Client{Timeout: defaultHTTPTimeout}
return exchangeCodexAuthorizationCode(ctx, client, codexOAuthTokenURL, codexOAuthClientID, code, verifier, codexOAuthRedirectURI)
}
func CreateCodexOAuthAuthorizationFlow() (*CodexOAuthAuthorizationFlow, error) {
state, err := createStateHex(16)
if err != nil {
return nil, err
}
verifier, challenge, err := generatePKCEPair()
if err != nil {
return nil, err
}
u, err := buildCodexAuthorizeURL(state, challenge)
if err != nil {
return nil, err
}
return &CodexOAuthAuthorizationFlow{
State: state,
Verifier: verifier,
Challenge: challenge,
AuthorizeURL: u,
}, nil
}
func refreshCodexOAuthToken(
ctx context.Context,
client *http.Client,
tokenURL string,
clientID string,
refreshToken string,
) (*CodexOAuthTokenResult, error) {
rt := strings.TrimSpace(refreshToken)
if rt == "" {
return nil, errors.New("empty refresh_token")
}
form := url.Values{}
form.Set("grant_type", "refresh_token")
form.Set("refresh_token", rt)
form.Set("client_id", clientID)
req, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, strings.NewReader(form.Encode()))
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
resp, err := client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var payload struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int `json:"expires_in"`
}
if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
return nil, err
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return nil, fmt.Errorf("codex oauth refresh failed: status=%d", resp.StatusCode)
}
if strings.TrimSpace(payload.AccessToken) == "" || strings.TrimSpace(payload.RefreshToken) == "" || payload.ExpiresIn <= 0 {
return nil, errors.New("codex oauth refresh response missing fields")
}
return &CodexOAuthTokenResult{
AccessToken: strings.TrimSpace(payload.AccessToken),
RefreshToken: strings.TrimSpace(payload.RefreshToken),
ExpiresAt: time.Now().Add(time.Duration(payload.ExpiresIn) * time.Second),
}, nil
}
func exchangeCodexAuthorizationCode(
ctx context.Context,
client *http.Client,
tokenURL string,
clientID string,
code string,
verifier string,
redirectURI string,
) (*CodexOAuthTokenResult, error) {
c := strings.TrimSpace(code)
v := strings.TrimSpace(verifier)
if c == "" {
return nil, errors.New("empty authorization code")
}
if v == "" {
return nil, errors.New("empty code_verifier")
}
form := url.Values{}
form.Set("grant_type", "authorization_code")
form.Set("client_id", clientID)
form.Set("code", c)
form.Set("code_verifier", v)
form.Set("redirect_uri", redirectURI)
req, err := http.NewRequestWithContext(ctx, http.MethodPost, tokenURL, strings.NewReader(form.Encode()))
if err != nil {
return nil, err
}
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
req.Header.Set("Accept", "application/json")
resp, err := client.Do(req)
if err != nil {
return nil, err
}
defer resp.Body.Close()
var payload struct {
AccessToken string `json:"access_token"`
RefreshToken string `json:"refresh_token"`
ExpiresIn int `json:"expires_in"`
}
if err := json.NewDecoder(resp.Body).Decode(&payload); err != nil {
return nil, err
}
if resp.StatusCode < 200 || resp.StatusCode >= 300 {
return nil, fmt.Errorf("codex oauth code exchange failed: status=%d", resp.StatusCode)
}
if strings.TrimSpace(payload.AccessToken) == "" || strings.TrimSpace(payload.RefreshToken) == "" || payload.ExpiresIn <= 0 {
return nil, errors.New("codex oauth token response missing fields")
}
return &CodexOAuthTokenResult{
AccessToken: strings.TrimSpace(payload.AccessToken),
RefreshToken: strings.TrimSpace(payload.RefreshToken),
ExpiresAt: time.Now().Add(time.Duration(payload.ExpiresIn) * time.Second),
}, nil
}
func buildCodexAuthorizeURL(state string, challenge string) (string, error) {
u, err := url.Parse(codexOAuthAuthorizeURL)
if err != nil {
return "", err
}
q := u.Query()
q.Set("response_type", "code")
q.Set("client_id", codexOAuthClientID)
q.Set("redirect_uri", codexOAuthRedirectURI)
q.Set("scope", codexOAuthScope)
q.Set("code_challenge", challenge)
q.Set("code_challenge_method", "S256")
q.Set("state", state)
q.Set("id_token_add_organizations", "true")
q.Set("codex_cli_simplified_flow", "true")
q.Set("originator", "codex_cli_rs")
u.RawQuery = q.Encode()
return u.String(), nil
}
func createStateHex(nBytes int) (string, error) {
if nBytes <= 0 {
return "", errors.New("invalid state bytes length")
}
b := make([]byte, nBytes)
if _, err := rand.Read(b); err != nil {
return "", err
}
return fmt.Sprintf("%x", b), nil
}
func generatePKCEPair() (verifier string, challenge string, err error) {
b := make([]byte, 32)
if _, err := rand.Read(b); err != nil {
return "", "", err
}
verifier = base64.RawURLEncoding.EncodeToString(b)
sum := sha256.Sum256([]byte(verifier))
challenge = base64.RawURLEncoding.EncodeToString(sum[:])
return verifier, challenge, nil
}
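
generatePKCEPair implements RFC 7636's S256 method: the code_challenge is the unpadded base64url encoding of the SHA-256 of the verifier's ASCII bytes. A sketch checking that derivation against the RFC 7636 Appendix B test vector:

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// s256Challenge derives the RFC 7636 "S256" code_challenge from a verifier:
// BASE64URL-ENCODE(SHA256(ASCII(verifier))), without padding.
func s256Challenge(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}

func main() {
	// Verifier from RFC 7636 Appendix B; the expected challenge is
	// E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM.
	fmt.Println(s256Challenge("dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"))
}
```

RawURLEncoding matters here: the standard padded/URL-unsafe alphabets produce a challenge the authorization server will reject.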
func ExtractCodexAccountIDFromJWT(token string) (string, bool) {
claims, ok := decodeJWTClaims(token)
if !ok {
return "", false
}
raw, ok := claims[codexJWTClaimPath]
if !ok {
return "", false
}
obj, ok := raw.(map[string]any)
if !ok {
return "", false
}
v, ok := obj["chatgpt_account_id"]
if !ok {
return "", false
}
s, ok := v.(string)
if !ok {
return "", false
}
s = strings.TrimSpace(s)
if s == "" {
return "", false
}
return s, true
}
func ExtractEmailFromJWT(token string) (string, bool) {
claims, ok := decodeJWTClaims(token)
if !ok {
return "", false
}
v, ok := claims["email"]
if !ok {
return "", false
}
s, ok := v.(string)
if !ok {
return "", false
}
s = strings.TrimSpace(s)
if s == "" {
return "", false
}
return s, true
}
func decodeJWTClaims(token string) (map[string]any, bool) {
parts := strings.Split(token, ".")
if len(parts) != 3 {
return nil, false
}
payloadRaw, err := base64.RawURLEncoding.DecodeString(parts[1])
if err != nil {
return nil, false
}
var claims map[string]any
if err := json.Unmarshal(payloadRaw, &claims); err != nil {
return nil, false
}
return claims, true
}
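
decodeJWTClaims reads the JWT payload without verifying the signature, which is tolerable here only because the token was just issued by the trusted token endpoint and is used for display metadata (account ID, email), not authorization. A standalone sketch of the same decode:

```go
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"
)

// decodePayload extracts the claims object from a JWT without verifying the
// signature, as decodeJWTClaims above does. Do NOT use this to make
// authorization decisions.
func decodePayload(token string) (map[string]any, bool) {
	parts := strings.Split(token, ".")
	if len(parts) != 3 {
		return nil, false
	}
	raw, err := base64.RawURLEncoding.DecodeString(parts[1])
	if err != nil {
		return nil, false
	}
	var claims map[string]any
	if err := json.Unmarshal(raw, &claims); err != nil {
		return nil, false
	}
	return claims, true
}

func main() {
	// "eyJlIjoxfQ" is the base64url encoding of {"e":1}.
	claims, ok := decodePayload("header.eyJlIjoxfQ.sig")
	fmt.Println(ok, claims["e"])
}
```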


@@ -0,0 +1,56 @@
package service
import (
"context"
"fmt"
"io"
"net/http"
"strings"
)
func FetchCodexWhamUsage(
ctx context.Context,
client *http.Client,
baseURL string,
accessToken string,
accountID string,
) (statusCode int, body []byte, err error) {
if client == nil {
return 0, nil, fmt.Errorf("nil http client")
}
bu := strings.TrimRight(strings.TrimSpace(baseURL), "/")
if bu == "" {
return 0, nil, fmt.Errorf("empty baseURL")
}
at := strings.TrimSpace(accessToken)
aid := strings.TrimSpace(accountID)
if at == "" {
return 0, nil, fmt.Errorf("empty accessToken")
}
if aid == "" {
return 0, nil, fmt.Errorf("empty accountID")
}
req, err := http.NewRequestWithContext(ctx, http.MethodGet, bu+"/backend-api/wham/usage", nil)
if err != nil {
return 0, nil, err
}
req.Header.Set("Authorization", "Bearer "+at)
req.Header.Set("chatgpt-account-id", aid)
req.Header.Set("Accept", "application/json")
if req.Header.Get("originator") == "" {
req.Header.Set("originator", "codex_cli_rs")
}
resp, err := client.Do(req)
if err != nil {
return 0, nil, err
}
defer resp.Body.Close()
body, err = io.ReadAll(resp.Body)
if err != nil {
return resp.StatusCode, nil, err
}
return resp.StatusCode, body, nil
}


@@ -82,6 +82,9 @@ func ResetProxyClientCache() {
// NewProxyHttpClient creates an HTTP client with proxy support
func NewProxyHttpClient(proxyURL string) (*http.Client, error) {
if proxyURL == "" {
if client := GetHttpClient(); client != nil {
return client, nil
}
return http.DefaultClient, nil
}


@@ -0,0 +1,18 @@
package service
import (
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/service/openaicompat"
)
func ChatCompletionsRequestToResponsesRequest(req *dto.GeneralOpenAIRequest) (*dto.OpenAIResponsesRequest, error) {
return openaicompat.ChatCompletionsRequestToResponsesRequest(req)
}
func ResponsesResponseToChatCompletionsResponse(resp *dto.OpenAIResponsesResponse, id string) (*dto.OpenAITextResponse, *dto.Usage, error) {
return openaicompat.ResponsesResponseToChatCompletionsResponse(resp, id)
}
func ExtractOutputTextFromResponses(resp *dto.OpenAIResponsesResponse) string {
return openaicompat.ExtractOutputTextFromResponses(resp)
}


@@ -0,0 +1,14 @@
package service
import (
"github.com/QuantumNous/new-api/service/openaicompat"
"github.com/QuantumNous/new-api/setting/model_setting"
)
func ShouldChatCompletionsUseResponsesPolicy(policy model_setting.ChatCompletionsToResponsesPolicy, channelID int, model string) bool {
return openaicompat.ShouldChatCompletionsUseResponsesPolicy(policy, channelID, model)
}
func ShouldChatCompletionsUseResponsesGlobal(channelID int, model string) bool {
return openaicompat.ShouldChatCompletionsUseResponsesGlobal(channelID, model)
}


@@ -0,0 +1,356 @@
package openaicompat
import (
"encoding/json"
"errors"
"fmt"
"strings"
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/dto"
)
func normalizeChatImageURLToString(v any) any {
switch vv := v.(type) {
case string:
return vv
case map[string]any:
if url := common.Interface2String(vv["url"]); url != "" {
return url
}
return v
case dto.MessageImageUrl:
if vv.Url != "" {
return vv.Url
}
return v
case *dto.MessageImageUrl:
if vv != nil && vv.Url != "" {
return vv.Url
}
return v
default:
return v
}
}
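
A trimmed sketch of the map-handling branch above: a chat image part may arrive as a bare string or as an object with a "url" field, and the Responses input format wants the string form (the real helper also accepts dto.MessageImageUrl values):

```go
package main

import "fmt"

// normalizeImageURL mirrors normalizeChatImageURLToString for the plain-map
// case: unwrap {"url": "..."} to its string, pass strings through, and leave
// anything unrecognized untouched.
func normalizeImageURL(v any) any {
	switch vv := v.(type) {
	case string:
		return vv
	case map[string]any:
		if url, ok := vv["url"].(string); ok && url != "" {
			return url
		}
		return v
	default:
		return v
	}
}

func main() {
	fmt.Println(normalizeImageURL(map[string]any{"url": "https://example.com/a.png"}))
	fmt.Println(normalizeImageURL("https://example.com/b.png"))
}
```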
func ChatCompletionsRequestToResponsesRequest(req *dto.GeneralOpenAIRequest) (*dto.OpenAIResponsesRequest, error) {
if req == nil {
return nil, errors.New("request is nil")
}
if req.Model == "" {
return nil, errors.New("model is required")
}
if req.N > 1 {
return nil, fmt.Errorf("n>1 is not supported in responses compatibility mode")
}
var instructionsParts []string
inputItems := make([]map[string]any, 0, len(req.Messages))
for _, msg := range req.Messages {
role := strings.TrimSpace(msg.Role)
if role == "" {
continue
}
if role == "tool" || role == "function" {
callID := strings.TrimSpace(msg.ToolCallId)
var output any
if msg.Content == nil {
output = ""
} else if msg.IsStringContent() {
output = msg.StringContent()
} else {
if b, err := common.Marshal(msg.Content); err == nil {
output = string(b)
} else {
output = fmt.Sprintf("%v", msg.Content)
}
}
if callID == "" {
inputItems = append(inputItems, map[string]any{
"role": "user",
"content": fmt.Sprintf("[tool_output_missing_call_id] %v", output),
})
continue
}
inputItems = append(inputItems, map[string]any{
"type": "function_call_output",
"call_id": callID,
"output": output,
})
continue
}
// Prefer mapping system/developer messages into `instructions`.
if role == "system" || role == "developer" {
if msg.Content == nil {
continue
}
if msg.IsStringContent() {
if s := strings.TrimSpace(msg.StringContent()); s != "" {
instructionsParts = append(instructionsParts, s)
}
continue
}
parts := msg.ParseContent()
var sb strings.Builder
for _, part := range parts {
if part.Type == dto.ContentTypeText && strings.TrimSpace(part.Text) != "" {
if sb.Len() > 0 {
sb.WriteString("\n")
}
sb.WriteString(part.Text)
}
}
if s := strings.TrimSpace(sb.String()); s != "" {
instructionsParts = append(instructionsParts, s)
}
continue
}
item := map[string]any{
"role": role,
}
if msg.Content == nil {
item["content"] = ""
inputItems = append(inputItems, item)
if role == "assistant" {
for _, tc := range msg.ParseToolCalls() {
if strings.TrimSpace(tc.ID) == "" {
continue
}
if tc.Type != "" && tc.Type != "function" {
continue
}
name := strings.TrimSpace(tc.Function.Name)
if name == "" {
continue
}
inputItems = append(inputItems, map[string]any{
"type": "function_call",
"call_id": tc.ID,
"name": name,
"arguments": tc.Function.Arguments,
})
}
}
continue
}
if msg.IsStringContent() {
item["content"] = msg.StringContent()
inputItems = append(inputItems, item)
if role == "assistant" {
for _, tc := range msg.ParseToolCalls() {
if strings.TrimSpace(tc.ID) == "" {
continue
}
if tc.Type != "" && tc.Type != "function" {
continue
}
name := strings.TrimSpace(tc.Function.Name)
if name == "" {
continue
}
inputItems = append(inputItems, map[string]any{
"type": "function_call",
"call_id": tc.ID,
"name": name,
"arguments": tc.Function.Arguments,
})
}
}
continue
}
parts := msg.ParseContent()
contentParts := make([]map[string]any, 0, len(parts))
for _, part := range parts {
switch part.Type {
case dto.ContentTypeText:
contentParts = append(contentParts, map[string]any{
"type": "input_text",
"text": part.Text,
})
case dto.ContentTypeImageURL:
contentParts = append(contentParts, map[string]any{
"type": "input_image",
"image_url": normalizeChatImageURLToString(part.ImageUrl),
})
case dto.ContentTypeInputAudio:
contentParts = append(contentParts, map[string]any{
"type": "input_audio",
"input_audio": part.InputAudio,
})
case dto.ContentTypeFile:
contentParts = append(contentParts, map[string]any{
"type": "input_file",
"file": part.File,
})
case dto.ContentTypeVideoUrl:
contentParts = append(contentParts, map[string]any{
"type": "input_video",
"video_url": part.VideoUrl,
})
default:
contentParts = append(contentParts, map[string]any{
"type": part.Type,
})
}
}
item["content"] = contentParts
inputItems = append(inputItems, item)
if role == "assistant" {
for _, tc := range msg.ParseToolCalls() {
if strings.TrimSpace(tc.ID) == "" {
continue
}
if tc.Type != "" && tc.Type != "function" {
continue
}
name := strings.TrimSpace(tc.Function.Name)
if name == "" {
continue
}
inputItems = append(inputItems, map[string]any{
"type": "function_call",
"call_id": tc.ID,
"name": name,
"arguments": tc.Function.Arguments,
})
}
}
}
inputRaw, err := common.Marshal(inputItems)
if err != nil {
return nil, err
}
var instructionsRaw json.RawMessage
if len(instructionsParts) > 0 {
instructions := strings.Join(instructionsParts, "\n\n")
instructionsRaw, _ = common.Marshal(instructions)
}
var toolsRaw json.RawMessage
if req.Tools != nil {
tools := make([]map[string]any, 0, len(req.Tools))
for _, tool := range req.Tools {
switch tool.Type {
case "function":
tools = append(tools, map[string]any{
"type": "function",
"name": tool.Function.Name,
"description": tool.Function.Description,
"parameters": tool.Function.Parameters,
})
default:
// Best-effort: keep original tool shape for unknown types.
var m map[string]any
if b, err := common.Marshal(tool); err == nil {
_ = common.Unmarshal(b, &m)
}
if len(m) == 0 {
m = map[string]any{"type": tool.Type}
}
tools = append(tools, m)
}
}
toolsRaw, _ = common.Marshal(tools)
}
var toolChoiceRaw json.RawMessage
if req.ToolChoice != nil {
switch v := req.ToolChoice.(type) {
case string:
toolChoiceRaw, _ = common.Marshal(v)
default:
var m map[string]any
if b, err := common.Marshal(v); err == nil {
_ = common.Unmarshal(b, &m)
}
if m == nil {
toolChoiceRaw, _ = common.Marshal(v)
} else if t, _ := m["type"].(string); t == "function" {
// Chat: {"type":"function","function":{"name":"..."}}
// Responses: {"type":"function","name":"..."}
if name, ok := m["name"].(string); ok && name != "" {
toolChoiceRaw, _ = common.Marshal(map[string]any{
"type": "function",
"name": name,
})
} else if fn, ok := m["function"].(map[string]any); ok {
if name, ok := fn["name"].(string); ok && name != "" {
toolChoiceRaw, _ = common.Marshal(map[string]any{
"type": "function",
"name": name,
})
} else {
toolChoiceRaw, _ = common.Marshal(v)
}
} else {
toolChoiceRaw, _ = common.Marshal(v)
}
} else {
toolChoiceRaw, _ = common.Marshal(v)
}
}
}
var parallelToolCallsRaw json.RawMessage
if req.ParallelTooCalls != nil {
parallelToolCallsRaw, _ = common.Marshal(*req.ParallelTooCalls)
}
var textRaw json.RawMessage
if req.ResponseFormat != nil && req.ResponseFormat.Type != "" {
textRaw, _ = common.Marshal(map[string]any{
"format": req.ResponseFormat,
})
}
maxOutputTokens := req.MaxTokens
if req.MaxCompletionTokens > maxOutputTokens {
maxOutputTokens = req.MaxCompletionTokens
}
var topP *float64
if req.TopP != 0 {
topP = common.GetPointer(req.TopP)
}
out := &dto.OpenAIResponsesRequest{
Model: req.Model,
Input: inputRaw,
Instructions: instructionsRaw,
MaxOutputTokens: maxOutputTokens,
Stream: req.Stream,
Temperature: req.Temperature,
Text: textRaw,
ToolChoice: toolChoiceRaw,
Tools: toolsRaw,
TopP: topP,
User: req.User,
ParallelToolCalls: parallelToolCallsRaw,
Store: req.Store,
Metadata: req.Metadata,
}
if req.ReasoningEffort != "" && req.ReasoningEffort != "none" {
out.Reasoning = &dto.Reasoning{
Effort: req.ReasoningEffort,
}
}
return out, nil
}


@@ -0,0 +1,18 @@
package openaicompat
import "github.com/QuantumNous/new-api/setting/model_setting"
func ShouldChatCompletionsUseResponsesPolicy(policy model_setting.ChatCompletionsToResponsesPolicy, channelID int, model string) bool {
if !policy.IsChannelEnabled(channelID) {
return false
}
return matchAnyRegex(policy.ModelPatterns, model)
}
func ShouldChatCompletionsUseResponsesGlobal(channelID int, model string) bool {
return ShouldChatCompletionsUseResponsesPolicy(
model_setting.GetGlobalSettings().ChatCompletionsToResponsesPolicy,
channelID,
model,
)
}
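Putting the two checks together, the conversion fires only when the channel gate passes and at least one model pattern matches. A minimal standalone sketch of the same decision, with the policy struct simplified for illustration (the fields mirror `ChatCompletionsToResponsesPolicy`, but this is not the repository type):

```go
package main

import (
	"fmt"
	"regexp"
	"slices"
)

type policy struct {
	Enabled       bool
	AllChannels   bool
	ChannelIDs    []int
	ModelPatterns []string
}

// shouldConvert reports whether a chat request on the given channel/model
// should be rewritten into a Responses API request.
func (p policy) shouldConvert(channelID int, model string) bool {
	if !p.Enabled {
		return false
	}
	if !p.AllChannels {
		// channelID 0 means "unknown channel": never enabled per-channel.
		if channelID == 0 || !slices.Contains(p.ChannelIDs, channelID) {
			return false
		}
	}
	for _, pat := range p.ModelPatterns {
		if pat == "" {
			continue
		}
		if re, err := regexp.Compile(pat); err == nil && re.MatchString(model) {
			return true
		}
	}
	return false
}

func main() {
	p := policy{Enabled: true, ChannelIDs: []int{7}, ModelPatterns: []string{`^gpt-5`}}
	fmt.Println(p.shouldConvert(7, "gpt-5-codex")) // true  (channel listed, model matches)
	fmt.Println(p.shouldConvert(8, "gpt-5-codex")) // false (channel not listed)
}
```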


@@ -0,0 +1,33 @@
package openaicompat
import (
"regexp"
"sync"
)
var compiledRegexCache sync.Map // map[string]*regexp.Regexp
func matchAnyRegex(patterns []string, s string) bool {
if len(patterns) == 0 || s == "" {
return false
}
for _, pattern := range patterns {
if pattern == "" {
continue
}
re, ok := compiledRegexCache.Load(pattern)
if !ok {
compiled, err := regexp.Compile(pattern)
if err != nil {
// Treat invalid patterns as non-matching to avoid breaking runtime traffic.
continue
}
re = compiled
compiledRegexCache.Store(pattern, re)
}
if re.(*regexp.Regexp).MatchString(s) {
return true
}
}
return false
}
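The cache above compiles each pattern at most once per process; invalid patterns are retried on every call, which keeps bad user configuration from breaking traffic. A standalone sketch of the same compile-once pattern — using `LoadOrStore` rather than `Store` additionally guarantees that concurrent first callers end up sharing one compiled value:

```go
package main

import (
	"fmt"
	"regexp"
	"sync"
)

var cache sync.Map // map[string]*regexp.Regexp

// match compiles pattern on first use, caches the result, and treats
// invalid patterns as non-matching.
func match(pattern, s string) bool {
	v, ok := cache.Load(pattern)
	if !ok {
		re, err := regexp.Compile(pattern)
		if err != nil {
			return false // invalid pattern: never matches, never cached
		}
		v, _ = cache.LoadOrStore(pattern, re)
	}
	return v.(*regexp.Regexp).MatchString(s)
}

func main() {
	fmt.Println(match(`^gpt-5`, "gpt-5-codex")) // true
	fmt.Println(match(`[`, "anything"))         // false (invalid pattern)
}
```

`sync.Map` is a good fit here because the map is written once per pattern and then read many times across requests.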


@@ -0,0 +1,133 @@
package openaicompat
import (
"errors"
"strings"
"github.com/QuantumNous/new-api/dto"
)
func ResponsesResponseToChatCompletionsResponse(resp *dto.OpenAIResponsesResponse, id string) (*dto.OpenAITextResponse, *dto.Usage, error) {
if resp == nil {
return nil, nil, errors.New("response is nil")
}
text := ExtractOutputTextFromResponses(resp)
usage := &dto.Usage{}
if resp.Usage != nil {
if resp.Usage.InputTokens != 0 {
usage.PromptTokens = resp.Usage.InputTokens
usage.InputTokens = resp.Usage.InputTokens
}
if resp.Usage.OutputTokens != 0 {
usage.CompletionTokens = resp.Usage.OutputTokens
usage.OutputTokens = resp.Usage.OutputTokens
}
if resp.Usage.TotalTokens != 0 {
usage.TotalTokens = resp.Usage.TotalTokens
} else {
usage.TotalTokens = usage.PromptTokens + usage.CompletionTokens
}
if resp.Usage.InputTokensDetails != nil {
usage.PromptTokensDetails.CachedTokens = resp.Usage.InputTokensDetails.CachedTokens
usage.PromptTokensDetails.ImageTokens = resp.Usage.InputTokensDetails.ImageTokens
usage.PromptTokensDetails.AudioTokens = resp.Usage.InputTokensDetails.AudioTokens
}
if resp.Usage.CompletionTokenDetails.ReasoningTokens != 0 {
usage.CompletionTokenDetails.ReasoningTokens = resp.Usage.CompletionTokenDetails.ReasoningTokens
}
}
created := resp.CreatedAt
var toolCalls []dto.ToolCallResponse
if text == "" && len(resp.Output) > 0 {
for _, out := range resp.Output {
if out.Type != "function_call" {
continue
}
name := strings.TrimSpace(out.Name)
if name == "" {
continue
}
callId := strings.TrimSpace(out.CallId)
if callId == "" {
callId = strings.TrimSpace(out.ID)
}
toolCalls = append(toolCalls, dto.ToolCallResponse{
ID: callId,
Type: "function",
Function: dto.FunctionResponse{
Name: name,
Arguments: out.Arguments,
},
})
}
}
finishReason := "stop"
if len(toolCalls) > 0 {
finishReason = "tool_calls"
}
msg := dto.Message{
Role: "assistant",
Content: text,
}
if len(toolCalls) > 0 {
msg.SetToolCalls(toolCalls)
msg.Content = ""
}
out := &dto.OpenAITextResponse{
Id: id,
Object: "chat.completion",
Created: created,
Model: resp.Model,
Choices: []dto.OpenAITextResponseChoice{
{
Index: 0,
Message: msg,
FinishReason: finishReason,
},
},
Usage: *usage,
}
return out, usage, nil
}
func ExtractOutputTextFromResponses(resp *dto.OpenAIResponsesResponse) string {
if resp == nil || len(resp.Output) == 0 {
return ""
}
var sb strings.Builder
// Prefer assistant message outputs.
for _, out := range resp.Output {
if out.Type != "message" {
continue
}
if out.Role != "" && out.Role != "assistant" {
continue
}
for _, c := range out.Content {
if c.Type == "output_text" && c.Text != "" {
sb.WriteString(c.Text)
}
}
}
if sb.Len() > 0 {
return sb.String()
}
for _, out := range resp.Output {
for _, c := range out.Content {
if c.Text != "" {
sb.WriteString(c.Text)
}
}
}
return sb.String()
}
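The extraction walks the `output` array, preferring assistant `message` items with `output_text` parts before falling back to any text. A self-contained sketch of that preferred pass, assuming only the documented Responses payload shape (the `outItem` and `extractText` names are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

type outItem struct {
	Type    string `json:"type"`
	Role    string `json:"role"`
	Content []struct {
		Type string `json:"type"`
		Text string `json:"text"`
	} `json:"content"`
}

// extractText concatenates output_text parts from assistant message items.
func extractText(raw []byte) string {
	var resp struct {
		Output []outItem `json:"output"`
	}
	if err := json.Unmarshal(raw, &resp); err != nil {
		return ""
	}
	var sb strings.Builder
	for _, out := range resp.Output {
		if out.Type != "message" || (out.Role != "" && out.Role != "assistant") {
			continue
		}
		for _, c := range out.Content {
			if c.Type == "output_text" {
				sb.WriteString(c.Text)
			}
		}
	}
	return sb.String()
}

func main() {
	raw := []byte(`{"output":[{"type":"message","role":"assistant","content":[{"type":"output_text","text":"Hello"}]}]}`)
	fmt.Println(extractText(raw)) // Hello
}
```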


@@ -1,14 +1,36 @@
package model_setting
import (
"slices"
"strings"
"github.com/QuantumNous/new-api/setting/config"
)
type ChatCompletionsToResponsesPolicy struct {
Enabled bool `json:"enabled"`
AllChannels bool `json:"all_channels"`
ChannelIDs []int `json:"channel_ids,omitempty"`
ModelPatterns []string `json:"model_patterns,omitempty"`
}
func (p ChatCompletionsToResponsesPolicy) IsChannelEnabled(channelID int) bool {
if !p.Enabled {
return false
}
if p.AllChannels {
return true
}
if channelID == 0 || len(p.ChannelIDs) == 0 {
return false
}
return slices.Contains(p.ChannelIDs, channelID)
}
type GlobalSettings struct {
PassThroughRequestEnabled bool `json:"pass_through_request_enabled"`
ThinkingModelBlacklist []string `json:"thinking_model_blacklist"`
ChatCompletionsToResponsesPolicy ChatCompletionsToResponsesPolicy `json:"chat_completions_to_responses_policy"`
}
// Default configuration
@@ -18,6 +40,10 @@ var defaultOpenaiSettings = GlobalSettings{
"moonshotai/kimi-k2-thinking",
"kimi-k2-thinking",
},
ChatCompletionsToResponsesPolicy: ChatCompletionsToResponsesPolicy{
Enabled: false,
AllChannels: true,
},
}
// Global instance


@@ -0,0 +1,147 @@
package operation_setting
import (
"fmt"
"sort"
"strconv"
"strings"
)
type StatusCodeRange struct {
Start int
End int
}
var AutomaticDisableStatusCodeRanges = []StatusCodeRange{{Start: 401, End: 401}}
func AutomaticDisableStatusCodesToString() string {
if len(AutomaticDisableStatusCodeRanges) == 0 {
return ""
}
parts := make([]string, 0, len(AutomaticDisableStatusCodeRanges))
for _, r := range AutomaticDisableStatusCodeRanges {
if r.Start == r.End {
parts = append(parts, strconv.Itoa(r.Start))
continue
}
parts = append(parts, fmt.Sprintf("%d-%d", r.Start, r.End))
}
return strings.Join(parts, ",")
}
func AutomaticDisableStatusCodesFromString(s string) error {
ranges, err := ParseHTTPStatusCodeRanges(s)
if err != nil {
return err
}
AutomaticDisableStatusCodeRanges = ranges
return nil
}
func ShouldDisableByStatusCode(code int) bool {
if code < 100 || code > 599 {
return false
}
for _, r := range AutomaticDisableStatusCodeRanges {
if code < r.Start {
return false
}
if code <= r.End {
return true
}
}
return false
}
func ParseHTTPStatusCodeRanges(input string) ([]StatusCodeRange, error) {
input = strings.TrimSpace(input)
if input == "" {
return nil, nil
}
// Normalize full-width commas so Chinese-style input splits the same way.
input = strings.NewReplacer(",", ",").Replace(input)
segments := strings.Split(input, ",")
var ranges []StatusCodeRange
var invalid []string
for _, seg := range segments {
seg = strings.TrimSpace(seg)
if seg == "" {
continue
}
r, err := parseHTTPStatusCodeToken(seg)
if err != nil {
invalid = append(invalid, seg)
continue
}
ranges = append(ranges, r)
}
if len(invalid) > 0 {
return nil, fmt.Errorf("invalid http status code rules: %s", strings.Join(invalid, ", "))
}
if len(ranges) == 0 {
return nil, nil
}
sort.Slice(ranges, func(i, j int) bool {
if ranges[i].Start == ranges[j].Start {
return ranges[i].End < ranges[j].End
}
return ranges[i].Start < ranges[j].Start
})
merged := []StatusCodeRange{ranges[0]}
for _, r := range ranges[1:] {
last := &merged[len(merged)-1]
if r.Start <= last.End+1 {
if r.End > last.End {
last.End = r.End
}
continue
}
merged = append(merged, r)
}
return merged, nil
}
func parseHTTPStatusCodeToken(token string) (StatusCodeRange, error) {
token = strings.TrimSpace(token)
token = strings.ReplaceAll(token, " ", "")
if token == "" {
return StatusCodeRange{}, fmt.Errorf("empty token")
}
if strings.Contains(token, "-") {
parts := strings.Split(token, "-")
if len(parts) != 2 || parts[0] == "" || parts[1] == "" {
return StatusCodeRange{}, fmt.Errorf("invalid range token: %s", token)
}
start, err := strconv.Atoi(parts[0])
if err != nil {
return StatusCodeRange{}, fmt.Errorf("invalid range start: %s", token)
}
end, err := strconv.Atoi(parts[1])
if err != nil {
return StatusCodeRange{}, fmt.Errorf("invalid range end: %s", token)
}
if start > end {
return StatusCodeRange{}, fmt.Errorf("range start > end: %s", token)
}
if start < 100 || end > 599 {
return StatusCodeRange{}, fmt.Errorf("range out of bounds: %s", token)
}
return StatusCodeRange{Start: start, End: end}, nil
}
code, err := strconv.Atoi(token)
if err != nil {
return StatusCodeRange{}, fmt.Errorf("invalid status code: %s", token)
}
if code < 100 || code > 599 {
return StatusCodeRange{}, fmt.Errorf("status code out of bounds: %s", token)
}
return StatusCodeRange{Start: code, End: code}, nil
}
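`ShouldDisableByStatusCode` can return early as soon as `code < r.Start` only because `ParseHTTPStatusCodeRanges` hands back ranges that are sorted and merged. That sort-and-merge normalization, extracted into a runnable sketch:

```go
package main

import (
	"fmt"
	"sort"
)

type StatusCodeRange struct{ Start, End int }

// mergeRanges mirrors the normalization step in ParseHTTPStatusCodeRanges:
// sort by Start, then fold overlapping or adjacent ranges together.
func mergeRanges(ranges []StatusCodeRange) []StatusCodeRange {
	if len(ranges) == 0 {
		return nil
	}
	sort.Slice(ranges, func(i, j int) bool {
		if ranges[i].Start == ranges[j].Start {
			return ranges[i].End < ranges[j].End
		}
		return ranges[i].Start < ranges[j].Start
	})
	merged := []StatusCodeRange{ranges[0]}
	for _, r := range ranges[1:] {
		last := &merged[len(merged)-1]
		if r.Start <= last.End+1 {
			if r.End > last.End {
				last.End = r.End
			}
			continue
		}
		merged = append(merged, r)
	}
	return merged
}

func main() {
	in := []StatusCodeRange{{500, 505}, {504, 504}, {401, 401}, {403, 403}, {402, 402}}
	fmt.Println(mergeRanges(in)) // [{401 403} {500 505}]
}
```

Adjacent ranges merge as well (`r.Start <= last.End+1`), so `401,402` collapses into a single `401-402` range rather than two singletons.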


@@ -0,0 +1,52 @@
package operation_setting
import (
"testing"
"github.com/stretchr/testify/require"
)
func TestParseHTTPStatusCodeRanges_CommaSeparated(t *testing.T) {
ranges, err := ParseHTTPStatusCodeRanges("401,403,500-599")
require.NoError(t, err)
require.Equal(t, []StatusCodeRange{
{Start: 401, End: 401},
{Start: 403, End: 403},
{Start: 500, End: 599},
}, ranges)
}
func TestParseHTTPStatusCodeRanges_MergeAndNormalize(t *testing.T) {
ranges, err := ParseHTTPStatusCodeRanges("500-505,504,401,403,402")
require.NoError(t, err)
require.Equal(t, []StatusCodeRange{
{Start: 401, End: 403},
{Start: 500, End: 505},
}, ranges)
}
func TestParseHTTPStatusCodeRanges_Invalid(t *testing.T) {
_, err := ParseHTTPStatusCodeRanges("99,600,foo,500-400,500-")
require.Error(t, err)
}
func TestParseHTTPStatusCodeRanges_NoComma_IsInvalid(t *testing.T) {
_, err := ParseHTTPStatusCodeRanges("401 403")
require.Error(t, err)
}
func TestShouldDisableByStatusCode(t *testing.T) {
orig := AutomaticDisableStatusCodeRanges
t.Cleanup(func() { AutomaticDisableStatusCodeRanges = orig })
AutomaticDisableStatusCodeRanges = []StatusCodeRange{
{Start: 401, End: 403},
{Start: 500, End: 599},
}
require.True(t, ShouldDisableByStatusCode(401))
require.True(t, ShouldDisableByStatusCode(403))
require.False(t, ShouldDisableByStatusCode(404))
require.True(t, ShouldDisableByStatusCode(500))
require.False(t, ShouldDisableByStatusCode(200))
}


@@ -130,6 +130,20 @@ func (e *NewAPIError) Error() string {
return e.Err.Error()
}
func (e *NewAPIError) ErrorWithStatusCode() string {
if e == nil {
return ""
}
msg := e.Error()
if e.StatusCode == 0 {
return msg
}
if msg == "" {
return fmt.Sprintf("status_code=%d", e.StatusCode)
}
return fmt.Sprintf("status_code=%d, %s", e.StatusCode, msg)
}
func (e *NewAPIError) MaskSensitiveError() string {
if e == nil {
return ""
@@ -144,6 +158,20 @@ func (e *NewAPIError) MaskSensitiveError() string {
return common.MaskSensitiveInfo(errStr)
}
func (e *NewAPIError) MaskSensitiveErrorWithStatusCode() string {
if e == nil {
return ""
}
msg := e.MaskSensitiveError()
if e.StatusCode == 0 {
return msg
}
if msg == "" {
return fmt.Sprintf("status_code=%d", e.StatusCode)
}
return fmt.Sprintf("status_code=%d, %s", e.StatusCode, msg)
}
func (e *NewAPIError) SetMessage(message string) {
e.Err = errors.New(message)
}


@@ -59,6 +59,11 @@ import { SiDiscord } from 'react-icons/si';
const LoginForm = () => {
let navigate = useNavigate();
const { t } = useTranslation();
const githubButtonTextKeyByState = {
idle: '使用 GitHub 继续',
redirecting: '正在跳转 GitHub...',
timeout: '请求超时,请刷新页面后重新发起 GitHub 登录',
};
const [inputs, setInputs] = useState({
username: '',
password: '',
@@ -90,9 +95,10 @@ const LoginForm = () => {
const [agreedToTerms, setAgreedToTerms] = useState(false);
const [hasUserAgreement, setHasUserAgreement] = useState(false);
const [hasPrivacyPolicy, setHasPrivacyPolicy] = useState(false);
const [githubButtonState, setGithubButtonState] = useState('idle');
const [githubButtonDisabled, setGithubButtonDisabled] = useState(false);
const githubTimeoutRef = useRef(null);
const githubButtonText = t(githubButtonTextKeyByState[githubButtonState]);
const logo = getLogo();
const systemName = getSystemName();
@@ -284,13 +290,13 @@ const LoginForm = () => {
}
setGithubLoading(true);
setGithubButtonDisabled(true);
setGithubButtonState('redirecting');
if (githubTimeoutRef.current) {
clearTimeout(githubTimeoutRef.current);
}
githubTimeoutRef.current = setTimeout(() => {
setGithubLoading(false);
setGithubButtonState('timeout');
setGithubButtonDisabled(true);
}, 20000);
try {


@@ -57,6 +57,11 @@ import { SiDiscord } from 'react-icons/si';
const RegisterForm = () => {
let navigate = useNavigate();
const { t } = useTranslation();
const githubButtonTextKeyByState = {
idle: '使用 GitHub 继续',
redirecting: '正在跳转 GitHub...',
timeout: '请求超时,请刷新页面后重新发起 GitHub 登录',
};
const [inputs, setInputs] = useState({
username: '',
password: '',
@@ -88,9 +93,10 @@ const RegisterForm = () => {
const [agreedToTerms, setAgreedToTerms] = useState(false);
const [hasUserAgreement, setHasUserAgreement] = useState(false);
const [hasPrivacyPolicy, setHasPrivacyPolicy] = useState(false);
const [githubButtonState, setGithubButtonState] = useState('idle');
const [githubButtonDisabled, setGithubButtonDisabled] = useState(false);
const githubTimeoutRef = useRef(null);
const githubButtonText = t(githubButtonTextKeyByState[githubButtonState]);
const logo = getLogo();
const systemName = getSystemName();
@@ -251,13 +257,13 @@ const RegisterForm = () => {
}
setGithubLoading(true);
setGithubButtonDisabled(true);
setGithubButtonState('redirecting');
if (githubTimeoutRef.current) {
clearTimeout(githubTimeoutRef.current);
}
githubTimeoutRef.current = setTimeout(() => {
setGithubLoading(false);
setGithubButtonState('timeout');
setGithubButtonDisabled(true);
}, 20000);
try {


@@ -39,6 +39,7 @@ const ModelSetting = () => {
'claude.thinking_adapter_budget_tokens_percentage': 0.8,
'global.pass_through_request_enabled': false,
'global.thinking_model_blacklist': '[]',
'global.chat_completions_to_responses_policy': '{}',
'general_setting.ping_interval_enabled': false,
'general_setting.ping_interval_seconds': 60,
'gemini.thinking_adapter_enabled': false,
@@ -59,10 +60,16 @@ const ModelSetting = () => {
item.key === 'claude.model_headers_settings' ||
item.key === 'claude.default_max_tokens' ||
item.key === 'gemini.supported_imagine_models' ||
item.key === 'global.thinking_model_blacklist' ||
item.key === 'global.chat_completions_to_responses_policy'
) {
if (item.value !== '') {
try {
item.value = JSON.stringify(JSON.parse(item.value), null, 2);
} catch (e) {
// Keep raw value so user can fix it, and avoid crashing the page.
console.error(`Invalid JSON for option ${item.key}:`, e);
}
}
}
// Keep boolean config keys ending with enabled/Enabled so UI parses correctly.


@@ -70,6 +70,7 @@ const OperationSetting = () => {
AutomaticDisableChannelEnabled: false,
AutomaticEnableChannelEnabled: false,
AutomaticDisableKeywords: '',
AutomaticDisableStatusCodes: '401',
'monitor_setting.auto_test_channel_enabled': false,
'monitor_setting.auto_test_channel_minutes': 10 /* Check-in settings */,
'checkin_setting.enabled': false,


@@ -0,0 +1,151 @@
/*
Copyright (C) 2025 QuantumNous
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
For commercial licensing, please contact support@quantumnous.com
*/
import React, { useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { Modal, Button, Space, Typography, Input, Banner } from '@douyinfe/semi-ui';
import { API, copy, showError, showSuccess } from '../../../../helpers';
const { Text } = Typography;
const CodexOAuthModal = ({ visible, onCancel, onSuccess }) => {
const { t } = useTranslation();
const [loading, setLoading] = useState(false);
const [authorizeUrl, setAuthorizeUrl] = useState('');
const [input, setInput] = useState('');
const startOAuth = async () => {
setLoading(true);
try {
const res = await API.post('/api/channel/codex/oauth/start', {}, { skipErrorHandler: true });
if (!res?.data?.success) {
console.error('Codex OAuth start failed:', res?.data?.message);
throw new Error(t('启动授权失败'));
}
const url = res?.data?.data?.authorize_url || '';
if (!url) {
console.error('Codex OAuth start response missing authorize_url:', res?.data);
throw new Error(t('响应缺少授权链接'));
}
setAuthorizeUrl(url);
window.open(url, '_blank', 'noopener,noreferrer');
showSuccess(t('已打开授权页面'));
} catch (error) {
showError(error?.message || t('启动授权失败'));
} finally {
setLoading(false);
}
};
const completeOAuth = async () => {
if (!input || !input.trim()) {
showError(t('请先粘贴回调 URL'));
return;
}
setLoading(true);
try {
const res = await API.post(
'/api/channel/codex/oauth/complete',
{ input },
{ skipErrorHandler: true },
);
if (!res?.data?.success) {
console.error('Codex OAuth complete failed:', res?.data?.message);
throw new Error(t('授权失败'));
}
const key = res?.data?.data?.key || '';
if (!key) {
console.error('Codex OAuth complete response missing key:', res?.data);
throw new Error(t('响应缺少凭据'));
}
onSuccess && onSuccess(key);
showSuccess(t('已生成授权凭据'));
onCancel && onCancel();
} catch (error) {
showError(error?.message || t('授权失败'));
} finally {
setLoading(false);
}
};
useEffect(() => {
if (!visible) return;
setAuthorizeUrl('');
setInput('');
}, [visible]);
return (
<Modal
title={t('Codex 授权')}
visible={visible}
onCancel={onCancel}
maskClosable={false}
closeOnEsc
width={720}
footer={
<Space>
<Button theme='borderless' onClick={onCancel} disabled={loading}>
{t('取消')}
</Button>
<Button theme='solid' type='primary' onClick={completeOAuth} loading={loading}>
{t('生成并填入')}
</Button>
</Space>
}
>
<Space vertical spacing='tight' style={{ width: '100%' }}>
<Banner
type='info'
description={t(
'1) 点击「打开授权页面」完成登录;2) 浏览器会跳转到 localhost,页面打不开也没关系;3) 复制地址栏完整 URL 粘贴到下方;4) 点击「生成并填入」。',
)}
/>
<Space wrap>
<Button type='primary' onClick={startOAuth} loading={loading}>
{t('打开授权页面')}
</Button>
<Button
theme='outline'
disabled={!authorizeUrl || loading}
onClick={() => copy(authorizeUrl)}
>
{t('复制授权链接')}
</Button>
</Space>
<Input
value={input}
onChange={(value) => setInput(value)}
placeholder={t('请粘贴完整回调 URL(包含 code 与 state)')}
showClear
/>
<Text type='tertiary' size='small'>
{t('说明:生成结果是可直接粘贴到渠道密钥里的 JSON,包含 access_token / refresh_token / account_id。')}
</Text>
</Space>
</Modal>
);
};
export default CodexOAuthModal;


@@ -0,0 +1,190 @@
/*
Copyright (C) 2025 QuantumNous
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
For commercial licensing, please contact support@quantumnous.com
*/
import React from 'react';
import { Modal, Button, Progress, Tag, Typography } from '@douyinfe/semi-ui';
const { Text } = Typography;
const clampPercent = (value) => {
const v = Number(value);
if (!Number.isFinite(v)) return 0;
return Math.max(0, Math.min(100, v));
};
const pickStrokeColor = (percent) => {
const p = clampPercent(percent);
if (p >= 95) return '#ef4444';
if (p >= 80) return '#f59e0b';
return '#3b82f6';
};
const formatDurationSeconds = (seconds, t) => {
const tt = typeof t === 'function' ? t : (v) => v;
const s = Number(seconds);
if (!Number.isFinite(s) || s <= 0) return '-';
const total = Math.floor(s);
const hours = Math.floor(total / 3600);
const minutes = Math.floor((total % 3600) / 60);
const secs = total % 60;
if (hours > 0) return `${hours}${tt('小时')} ${minutes}${tt('分钟')}`;
if (minutes > 0) return `${minutes}${tt('分钟')} ${secs}${tt('秒')}`;
return `${secs}${tt('秒')}`;
};
const formatUnixSeconds = (unixSeconds) => {
const v = Number(unixSeconds);
if (!Number.isFinite(v) || v <= 0) return '-';
try {
return new Date(v * 1000).toLocaleString();
} catch (error) {
return String(unixSeconds);
}
};
const RateLimitWindowCard = ({ t, title, windowData }) => {
const tt = typeof t === 'function' ? t : (v) => v;
const percent = clampPercent(windowData?.used_percent ?? 0);
const resetAt = windowData?.reset_at;
const resetAfterSeconds = windowData?.reset_after_seconds;
const limitWindowSeconds = windowData?.limit_window_seconds;
return (
<div className='rounded-lg border border-semi-color-border bg-semi-color-bg-0 p-3'>
<div className='flex items-center justify-between gap-2'>
<div className='font-medium'>{title}</div>
<Text type='tertiary' size='small'>
{tt('重置时间:')}
{formatUnixSeconds(resetAt)}
</Text>
</div>
<div className='mt-2'>
<Progress
percent={percent}
stroke={pickStrokeColor(percent)}
showInfo={true}
/>
</div>
<div className='mt-1 flex flex-wrap items-center gap-2 text-xs text-semi-color-text-2'>
<div>
{tt('已使用:')}
{percent}%
</div>
<div>
{tt('距离重置:')}
{formatDurationSeconds(resetAfterSeconds, tt)}
</div>
<div>
{tt('窗口:')}
{formatDurationSeconds(limitWindowSeconds, tt)}
</div>
</div>
</div>
);
};
export const openCodexUsageModal = ({ t, record, payload, onCopy }) => {
const tt = typeof t === 'function' ? t : (v) => v;
const data = payload?.data ?? null;
const rateLimit = data?.rate_limit ?? {};
const primary = rateLimit?.primary_window ?? null;
const secondary = rateLimit?.secondary_window ?? null;
const allowed = !!rateLimit?.allowed;
const limitReached = !!rateLimit?.limit_reached;
const upstreamStatus = payload?.upstream_status;
const statusTag =
allowed && !limitReached ? (
<Tag color='green'>{tt('可用')}</Tag>
) : (
<Tag color='red'>{tt('受限')}</Tag>
);
const rawText =
typeof data === 'string' ? data : JSON.stringify(data ?? payload, null, 2);
Modal.info({
title: (
<div className='flex items-center gap-2'>
<span>{tt('Codex 用量')}</span>
{statusTag}
</div>
),
centered: true,
width: 900,
style: { maxWidth: '95vw' },
content: (
<div className='flex flex-col gap-3'>
<div className='flex flex-wrap items-center justify-between gap-2'>
<Text type='tertiary' size='small'>
{tt('渠道:')}
{record?.name || '-'} ({tt('编号:')}
{record?.id || '-'})
</Text>
<Text type='tertiary' size='small'>
{tt('上游状态码:')}
{upstreamStatus ?? '-'}
</Text>
</div>
<div className='grid grid-cols-1 gap-3 md:grid-cols-2'>
<RateLimitWindowCard
t={tt}
title={tt('5小时窗口')}
windowData={primary}
/>
<RateLimitWindowCard
t={tt}
title={tt('每周窗口')}
windowData={secondary}
/>
</div>
<div>
<div className='mb-1 flex items-center justify-between gap-2'>
<div className='text-sm font-medium'>{tt('原始 JSON')}</div>
<Button
size='small'
type='primary'
theme='outline'
onClick={() => onCopy?.(rawText)}
disabled={!rawText}
>
{tt('复制')}
</Button>
</div>
<pre className='max-h-[50vh] overflow-auto rounded-lg bg-semi-color-fill-0 p-3 text-xs text-semi-color-text-0'>
{rawText}
</pre>
</div>
</div>
),
footer: (
<div className='flex justify-end gap-2'>
<Button type='primary' theme='solid' onClick={() => Modal.destroyAll()}>
{tt('关闭')}
</Button>
</div>
),
});
};


@@ -56,6 +56,7 @@ import {
} from '../../../../helpers';
import ModelSelectModal from './ModelSelectModal';
import OllamaModelModal from './OllamaModelModal';
import CodexOAuthModal from './CodexOAuthModal';
import JSONEditor from '../../../common/ui/JSONEditor';
import SecureVerificationModal from '../../../common/modals/SecureVerificationModal';
import ChannelKeyDisplay from '../../../common/ui/ChannelKeyDisplay';
@@ -92,7 +93,7 @@ const REGION_EXAMPLE = {
// Channel types that support and are adapted to fetching the model list via API
const MODEL_FETCHABLE_TYPES = new Set([
1, 4, 14, 34, 17, 26, 27, 24, 47, 25, 20, 23, 31, 40, 42, 48, 43,
]);
function type2secretPrompt(type) {
@@ -114,6 +115,8 @@ function type2secretPrompt(type) {
return '按照如下格式输入: AccessKey|SecretKey, 如果上游是New API则直接输ApiKey';
case 51:
return '按照如下格式输入: AccessKey|SecretAccessKey';
case 57:
return '请输入 JSON 格式的 OAuth 凭据(必须包含 access_token 和 account_id)';
default:
return '请输入渠道对应的鉴权密钥';
}
@@ -199,17 +202,11 @@ const EditChannelModal = (props) => {
if (!trimmed) return [];
try {
const parsed = JSON.parse(trimmed);
if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
return [];
}
const values = Object.values(parsed)
.map((value) => (typeof value === 'string' ? value.trim() : undefined))
.filter((value) => value);
return Array.from(new Set(values));
} catch (error) {
@@ -218,6 +215,9 @@ const EditChannelModal = (props) => {
}, [inputs.model_mapping]);
const [isIonetChannel, setIsIonetChannel] = useState(false);
const [ionetMetadata, setIonetMetadata] = useState(null);
const [codexOAuthModalVisible, setCodexOAuthModalVisible] = useState(false);
const [codexCredentialRefreshing, setCodexCredentialRefreshing] =
useState(false);
// Key display state
const [keyDisplayState, setKeyDisplayState] = useState({
@@ -505,10 +505,34 @@ const EditChannelModal = (props) => {
// Reset manual input mode state
setUseManualInput(false);
if (value === 57) {
setBatch(false);
setMultiToSingle(false);
setMultiKeyMode('random');
setVertexKeys([]);
setVertexFileList([]);
if (formApiRef.current) {
formApiRef.current.setValue('vertex_files', []);
}
setInputs((prev) => ({ ...prev, vertex_files: [] }));
}
}
//setAutoBan
};
const formatJsonField = (fieldName) => {
const rawValue = (inputs?.[fieldName] ?? '').trim();
if (!rawValue) return;
try {
const parsed = JSON.parse(rawValue);
handleInputChange(fieldName, JSON.stringify(parsed, null, 2));
} catch (error) {
showError(`${t('JSON格式错误')}: ${error.message}`);
}
};
const loadChannel = async () => {
setLoading(true);
let res = await API.get(`/api/channel/${channelId}`);
@@ -816,6 +840,32 @@ const EditChannelModal = (props) => {
}
};
const handleCodexOAuthGenerated = (key) => {
handleInputChange('key', key);
formatJsonField('key');
};
const handleRefreshCodexCredential = async () => {
if (!isEdit) return;
setCodexCredentialRefreshing(true);
try {
const res = await API.post(
`/api/channel/${channelId}/codex/refresh`,
{},
{ skipErrorHandler: true },
);
if (!res?.data?.success) {
throw new Error(res?.data?.message || 'Failed to refresh credential');
}
showSuccess(t('凭证已刷新'));
} catch (error) {
showError(error.message || t('刷新失败'));
} finally {
setCodexCredentialRefreshing(false);
}
};
useEffect(() => {
if (inputs.type !== 45) {
doubaoApiClickCountRef.current = 0;
@@ -1064,6 +1114,47 @@ const EditChannelModal = (props) => {
const formValues = formApiRef.current ? formApiRef.current.getValues() : {};
let localInputs = { ...formValues };
if (localInputs.type === 57) {
if (batch) {
showInfo(t('Codex 渠道不支持批量创建'));
return;
}
const rawKey = (localInputs.key || '').trim();
if (!isEdit && rawKey === '') {
showInfo(t('请输入密钥!'));
return;
}
if (rawKey !== '') {
if (!verifyJSON(rawKey)) {
showInfo(t('密钥必须是合法的 JSON 格式!'));
return;
}
try {
const parsed = JSON.parse(rawKey);
if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
showInfo(t('密钥必须是 JSON 对象'));
return;
}
const accessToken = String(parsed.access_token || '').trim();
const accountId = String(parsed.account_id || '').trim();
if (!accessToken) {
showInfo(t('密钥 JSON 必须包含 access_token'));
return;
}
if (!accountId) {
showInfo(t('密钥 JSON 必须包含 account_id'));
return;
}
localInputs.key = JSON.stringify(parsed);
} catch (error) {
showInfo(t('密钥必须是合法的 JSON 格式!'));
return;
}
}
}
if (localInputs.type === 41) {
const keyType = localInputs.vertex_key_type || 'json';
if (keyType === 'api_key') {
@@ -1395,7 +1486,7 @@ const EditChannelModal = (props) => {
}
};
-  const batchAllowed = !isEdit || isMultiKeyChannel;
+  const batchAllowed = (!isEdit || isMultiKeyChannel) && inputs.type !== 57;
const batchExtra = batchAllowed ? (
<Space>
{!isEdit && (
@@ -1878,8 +1969,94 @@ const EditChannelModal = (props) => {
)
) : (
<>
-                {inputs.type === 41 &&
-                (inputs.vertex_key_type || 'json') === 'json' ? (
+                {inputs.type === 57 ? (
<>
<Form.TextArea
field='key'
label={
isEdit
? t('密钥(编辑模式下,保存的密钥不会显示)')
: t('密钥')
}
placeholder={t(
'请输入 JSON 格式的 OAuth 凭据,例如:\n{\n "access_token": "...",\n "account_id": "..." \n}',
)}
rules={
isEdit
? []
: [{ required: true, message: t('请输入密钥') }]
}
autoComplete='new-password'
onChange={(value) => handleInputChange('key', value)}
disabled={isIonetLocked}
extraText={
<div className='flex flex-col gap-2'>
<Text type='tertiary' size='small'>
{t(
'仅支持 JSON 对象,必须包含 access_token 与 account_id',
)}
</Text>
<Space wrap spacing='tight'>
<Button
size='small'
type='primary'
theme='outline'
onClick={() =>
setCodexOAuthModalVisible(true)
}
disabled={isIonetLocked}
>
{t('Codex 授权')}
</Button>
{isEdit && (
<Button
size='small'
type='primary'
theme='outline'
onClick={handleRefreshCodexCredential}
loading={codexCredentialRefreshing}
disabled={isIonetLocked}
>
{t('刷新凭证')}
</Button>
)}
<Button
size='small'
type='primary'
theme='outline'
onClick={() => formatJsonField('key')}
disabled={isIonetLocked}
>
{t('格式化')}
</Button>
{isEdit && (
<Button
size='small'
type='primary'
theme='outline'
onClick={handleShow2FAModal}
disabled={isIonetLocked}
>
{t('查看密钥')}
</Button>
)}
{batchExtra}
</Space>
</div>
}
autosize
showClear
/>
<CodexOAuthModal
visible={codexOAuthModalVisible}
onCancel={() => setCodexOAuthModalVisible(false)}
onSuccess={handleCodexOAuthGenerated}
/>
</>
) : inputs.type === 41 &&
(inputs.vertex_key_type || 'json') === 'json' ? (
<>
{!batch && (
<div className='flex items-center justify-between mb-3'>
@@ -2812,6 +2989,12 @@ const EditChannelModal = (props) => {
>
{t('新格式模板')}
</Text>
<Text
className='!text-semi-color-primary cursor-pointer'
onClick={() => formatJsonField('param_override')}
>
{t('格式化')}
</Text>
</div>
}
showClear
@@ -2852,6 +3035,12 @@ const EditChannelModal = (props) => {
>
{t('填入模板')}
</Text>
<Text
className='!text-semi-color-primary cursor-pointer'
onClick={() => formatJsonField('header_override')}
>
{t('格式化')}
</Text>
</div>
<div>
<Text type='tertiary' size='small'>
@@ -3181,7 +3370,9 @@ const EditChannelModal = (props) => {
? inputs.models.map(String)
: [];
const incoming = modelIds.map(String);
-    const nextModels = Array.from(new Set([...existingModels, ...incoming]));
+    const nextModels = Array.from(
+      new Set([...existingModels, ...incoming]),
+    );
handleInputChange('models', nextModels);
if (formApiRef.current) {

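The type-57 branch above validates the Codex key inline before submit: the key must parse as a JSON object containing non-empty `access_token` and `account_id`, and is re-serialized before storage. A self-contained sketch of that check as a pure helper (the name `validateCodexKey` is illustrative, not part of the PR):

```javascript
// Hypothetical helper mirroring the inline type-57 key check above:
// the key must be a JSON object with non-empty access_token and account_id.
function validateCodexKey(rawKey) {
  const trimmed = String(rawKey ?? '').trim();
  if (trimmed === '') return { ok: false, reason: 'empty' };
  let parsed;
  try {
    parsed = JSON.parse(trimmed);
  } catch {
    return { ok: false, reason: 'invalid JSON' };
  }
  if (!parsed || typeof parsed !== 'object' || Array.isArray(parsed)) {
    return { ok: false, reason: 'not a JSON object' };
  }
  if (!String(parsed.access_token || '').trim()) {
    return { ok: false, reason: 'missing access_token' };
  }
  if (!String(parsed.account_id || '').trim()) {
    return { ok: false, reason: 'missing account_id' };
  }
  // Re-serialize so the stored key is normalized JSON, as the modal does.
  return { ok: true, key: JSON.stringify(parsed) };
}
```

Extracting the check this way would also let the OAuth modal reuse it, but the diff keeps it inline in the submit handler.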

@@ -265,6 +265,11 @@ const ModelTestModal = ({
placeholder={t('选择端点类型')}
/>
</div>
<Typography.Text type='tertiary' size='small' className='block mb-2'>
{t(
'说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。',
)}
</Typography.Text>
{/* 搜索与操作按钮 */}
<div className='flex items-center justify-end gap-2 w-full mb-2'>


@@ -17,7 +17,9 @@ along with this program. If not, see <https://www.gnu.org/licenses/>.
For commercial licensing, please contact support@quantumnous.com
*/
-import React from 'react';
+import React, { useState } from 'react';
+import { Banner, Button, Modal } from '@douyinfe/semi-ui';
+import { IconAlertTriangle, IconClose } from '@douyinfe/semi-icons';
import CardPro from '../../common/ui/CardPro';
import ModelsTable from './ModelsTable';
import ModelsActions from './ModelsActions';
@@ -29,6 +31,9 @@ import { useModelsData } from '../../../hooks/models/useModelsData';
import { useIsMobile } from '../../../hooks/common/useIsMobile';
import { createCardProPagination } from '../../../helpers/utils';
const MARKETPLACE_DISPLAY_NOTICE_STORAGE_KEY =
'models_marketplace_display_notice_dismissed';
const ModelsPage = () => {
const modelsData = useModelsData();
const isMobile = useIsMobile();
@@ -71,6 +76,37 @@ const ModelsPage = () => {
t,
} = modelsData;
const [showMarketplaceDisplayNotice, setShowMarketplaceDisplayNotice] =
useState(() => {
try {
return (
localStorage.getItem(MARKETPLACE_DISPLAY_NOTICE_STORAGE_KEY) !== '1'
);
} catch (_) {
return true;
}
});
const confirmCloseMarketplaceDisplayNotice = () => {
Modal.confirm({
title: t('确认关闭提示'),
content: t(
'关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?',
),
okText: t('关闭提示'),
cancelText: t('取消'),
okButtonProps: {
type: 'danger',
},
onOk: () => {
try {
localStorage.setItem(MARKETPLACE_DISPLAY_NOTICE_STORAGE_KEY, '1');
} catch (_) {}
setShowMarketplaceDisplayNotice(false);
},
});
};
return (
<>
<EditModelModal
@@ -94,6 +130,33 @@ const ModelsPage = () => {
}}
/>
{showMarketplaceDisplayNotice ? (
<div style={{ position: 'relative', marginBottom: 12 }}>
<Banner
type='warning'
closeIcon={null}
icon={
<IconAlertTriangle
size='large'
style={{ color: 'var(--semi-color-warning)' }}
/>
}
description={t(
'提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。',
)}
style={{ marginBottom: 0 }}
/>
<Button
theme='borderless'
size='small'
type='tertiary'
icon={<IconClose aria-hidden={true} />}
onClick={confirmCloseMarketplaceDisplayNotice}
style={{ position: 'absolute', top: 8, right: 8 }}
aria-label={t('关闭')}
/>
</div>
) : null}
<CardPro
type='type3'
tabsArea={<ModelsTabs {...modelsData} />}

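The marketplace-notice logic above persists dismissal in `localStorage` under `models_marketplace_display_notice_dismissed`, defaulting to visible when storage access throws. A framework-free sketch of the same persistence behavior (helper names are illustrative; the component does this inline with `useState` and `Modal.confirm`):

```javascript
const STORAGE_KEY = 'models_marketplace_display_notice_dismissed';

// Read the flag defensively: localStorage access can throw (e.g. storage
// disabled), in which case the notice defaults to visible, as above.
function shouldShowNotice(storage) {
  try {
    return storage.getItem(STORAGE_KEY) !== '1';
  } catch {
    return true;
  }
}

function dismissNotice(storage) {
  try {
    storage.setItem(STORAGE_KEY, '1');
  } catch {
    // Ignore write failures; the notice simply reappears next visit.
  }
}

// In-memory stand-in for window.localStorage, for illustration only.
const backing = new Map();
const storage = {
  getItem: (k) => (backing.has(k) ? backing.get(k) : null),
  setItem: (k, v) => backing.set(k, String(v)),
};
```

The per-browser scope noted in the confirm dialog follows directly from using `localStorage`: the flag never leaves the client.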

@@ -31,9 +31,10 @@ import {
Avatar,
Col,
Row,
+  Tooltip,
} from '@douyinfe/semi-ui';
import { Save, X, FileText } from 'lucide-react';
-import { IconLink } from '@douyinfe/semi-icons';
+import { IconInfoCircle, IconLink } from '@douyinfe/semi-icons';
import { API, showError, showSuccess } from '../../../../helpers';
import { useTranslation } from 'react-i18next';
import { useIsMobile } from '../../../../hooks/common/useIsMobile';
@@ -447,7 +448,22 @@ const EditModelModal = (props) => {
<Col span={24}>
<JSONEditor
field='endpoints'
-                  label={t('端点映射')}
+                  label={
+                    <span className='inline-flex items-center gap-2'>
+                      <span>{t('端点映射')}</span>
+                      <Tooltip
+                        position='top'
+                        content={t(
+                          '提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。',
+                        )}
+                      >
+                        <IconInfoCircle
+                          size='small'
+                          className='text-gray-400 cursor-help'
+                        />
+                      </Tooltip>
+                    </span>
+                  }
placeholder={
'{\n "openai": {"path": "/v1/chat/completions", "method": "POST"}\n}'
}


@@ -184,6 +184,11 @@ export const CHANNEL_OPTIONS = [
color: 'blue',
label: 'Replicate',
},
{
value: 57,
color: 'blue',
label: 'Codex (OpenAI OAuth)',
},
];
export const MODEL_TABLE_PAGE_SIZE = 10;


@@ -29,3 +29,4 @@ export * from './token';
export * from './boolean';
export * from './dashboard';
export * from './passkey';
export * from './statusCodeRules';


@@ -301,6 +301,7 @@ export function getChannelIcon(channelType) {
switch (channelType) {
case 1: // OpenAI
case 3: // Azure OpenAI
case 57: // Codex
return <OpenAI size={iconSize} />;
case 2: // Midjourney Proxy
case 5: // Midjourney Proxy Plus


@@ -0,0 +1,96 @@
export function parseHttpStatusCodeRules(input) {
const raw = (input ?? '').toString().trim();
if (raw.length === 0) {
return {
ok: true,
ranges: [],
tokens: [],
normalized: '',
invalidTokens: [],
};
}
  // Normalize full-width separators (e.g. ,、;) to ASCII commas before
  // splitting. The original character class lost its contents in extraction;
  // this reconstruction covers the separators the UI placeholders use.
  const sanitized = raw.replace(/[,、;;]/g, ',');
  const segments = sanitized.split(',');
const ranges = [];
const invalidTokens = [];
for (const segment of segments) {
const trimmed = segment.trim();
if (!trimmed) continue;
const parsed = parseToken(trimmed);
if (!parsed) invalidTokens.push(trimmed);
else ranges.push(parsed);
}
if (invalidTokens.length > 0) {
return {
ok: false,
ranges: [],
tokens: [],
normalized: raw,
invalidTokens,
};
}
const merged = mergeRanges(ranges);
const tokens = merged.map((r) => (r.start === r.end ? `${r.start}` : `${r.start}-${r.end}`));
const normalized = tokens.join(',');
return {
ok: true,
ranges: merged,
tokens,
normalized,
invalidTokens: [],
};
}
function parseToken(token) {
const cleaned = (token ?? '').toString().trim().replaceAll(' ', '');
if (!cleaned) return null;
if (cleaned.includes('-')) {
const parts = cleaned.split('-');
if (parts.length !== 2) return null;
const [a, b] = parts;
if (!isNumber(a) || !isNumber(b)) return null;
const start = Number.parseInt(a, 10);
const end = Number.parseInt(b, 10);
if (!Number.isFinite(start) || !Number.isFinite(end)) return null;
if (start > end) return null;
if (start < 100 || end > 599) return null;
return { start, end };
}
if (!isNumber(cleaned)) return null;
const code = Number.parseInt(cleaned, 10);
if (!Number.isFinite(code)) return null;
if (code < 100 || code > 599) return null;
return { start: code, end: code };
}
function isNumber(s) {
return typeof s === 'string' && /^\d+$/.test(s);
}
function mergeRanges(ranges) {
if (!Array.isArray(ranges) || ranges.length === 0) return [];
const sorted = [...ranges].sort((a, b) => (a.start !== b.start ? a.start - b.start : a.end - b.end));
const merged = [sorted[0]];
for (let i = 1; i < sorted.length; i += 1) {
const current = sorted[i];
const last = merged[merged.length - 1];
if (current.start <= last.end + 1) {
last.end = Math.max(last.end, current.end);
continue;
}
merged.push({ ...current });
}
return merged;
}
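The parser above normalizes status-code input and merges touching ranges. A condensed re-implementation for illustration (it mirrors the observable behavior of `parseHttpStatusCodeRules` but omits the `tokens` and `invalidTokens` bookkeeping):

```javascript
// Condensed sketch of parseHttpStatusCodeRules: validate each token as a
// 3-digit code or inclusive range within 100-599, then sort and merge
// touching or overlapping ranges into a normalized comma-joined string.
function parseRules(input) {
  const ranges = [];
  for (const token of String(input ?? '').split(',')) {
    const s = token.trim();
    if (!s) continue;
    const m = /^(\d{3})(?:-(\d{3}))?$/.exec(s);
    if (!m) return { ok: false, normalized: input };
    const start = Number(m[1]);
    const end = Number(m[2] ?? m[1]);
    if (start < 100 || end > 599 || start > end) {
      return { ok: false, normalized: input };
    }
    ranges.push({ start, end });
  }
  ranges.sort((a, b) => a.start - b.start || a.end - b.end);
  const merged = [];
  for (const r of ranges) {
    const last = merged[merged.length - 1];
    // Adjacent ranges (last.end + 1 === r.start) merge too, as in mergeRanges.
    if (last && r.start <= last.end + 1) last.end = Math.max(last.end, r.end);
    else merged.push({ ...r });
  }
  return {
    ok: true,
    normalized: merged
      .map((r) => (r.start === r.end ? `${r.start}` : `${r.start}-${r.end}`))
      .join(','),
  };
}

// Touching ranges merge: 500-502 and 503-599 collapse into 500-599.
console.log(parseRules('429, 500-502, 503-599, 401').normalized);
```

The merge step is why the settings page can show a clean tag list even when the admin types overlapping or unsorted ranges.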


@@ -36,6 +36,7 @@ import {
import { useIsMobile } from '../common/useIsMobile';
import { useTableCompactMode } from '../common/useTableCompactMode';
import { Modal, Button } from '@douyinfe/semi-ui';
import { openCodexUsageModal } from '../../components/table/channels/modals/CodexUsageModal';
export const useChannelsData = () => {
const { t } = useTranslation();
@@ -745,6 +746,32 @@ export const useChannelsData = () => {
};
const updateChannelBalance = async (record) => {
if (record?.type === 57) {
try {
const res = await API.get(`/api/channel/${record.id}/codex/usage`, {
skipErrorHandler: true,
});
if (!res?.data?.success) {
console.error('Codex usage fetch failed:', res?.data?.message);
showError(t('获取用量失败'));
}
openCodexUsageModal({
t,
record,
payload: res?.data,
onCopy: async (text) => {
const ok = await copy(text);
if (ok) showSuccess(t('已复制'));
else showError(t('复制失败'));
},
});
} catch (error) {
console.error('Codex usage fetch error:', error);
showError(t('获取用量失败'));
}
return;
}
const res = await API.get(`/api/channel/update_balance/${record.id}/`);
const { success, message, balance } = res.data;
if (success) {


@@ -1923,6 +1923,10 @@
"自动测试所有通道间隔时间": "Auto test interval for all channels",
"自动禁用": "Auto disabled",
"自动禁用关键词": "Automatic disable keywords",
"自动禁用状态码": "Auto-disable status codes",
"自动禁用状态码格式不正确": "Invalid auto-disable status code format",
"支持填写单个状态码或范围(含首尾),使用逗号分隔": "Supports single status codes or inclusive ranges; separate with commas",
"例如401, 403, 429, 500-599": "e.g. 401,403,429,500-599",
"自动选择": "Auto Select",
"自定义充值数量选项": "Custom Recharge Amount Options",
"自定义充值数量选项不是合法的 JSON 数组": "Custom recharge amount options is not a valid JSON array",
@@ -2582,6 +2586,17 @@
"签到奖励的最小额度": "Minimum quota for check-in rewards",
"签到最大额度": "Maximum check-in quota",
"签到奖励的最大额度": "Maximum quota for check-in rewards",
-    "保存签到设置": "Save check-in settings"
+    "保存签到设置": "Save check-in settings",
"ChatCompletions→Responses 兼容配置Beta": "ChatCompletions→Responses Compatibility (Beta)",
"提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。": "Notice: This feature is beta. The configuration structure and behavior may change in the future. Do not use in production.",
"填充模板(指定渠道)": "Fill template (selected channels)",
"填充模板(全渠道)": "Fill template (all channels)",
"格式化 JSON": "Format JSON",
"提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。": "Notice: This configuration only affects how models are displayed in the Model Marketplace and does not impact actual model invocation or routing. To configure real invocation behavior, please go to Channel Management.",
"确认关闭提示": "Confirm close",
"关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?": "After closing, this notice will no longer be shown (only for this browser). Are you sure you want to close it?",
"关闭提示": "Close notice",
"说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。": "Note: Tests on this page use non-streaming requests. If a channel only supports streaming responses, tests may fail. Please rely on actual usage.",
"提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。": "Notice: Endpoint mapping is for Model Marketplace display only and does not affect real model invocation. To configure real invocation, please go to Channel Management."
}
}


@@ -2593,6 +2593,17 @@
"签到奖励的最小额度": "Quota minimum pour les récompenses d'enregistrement",
"签到最大额度": "Quota maximum d'enregistrement",
"签到奖励的最大额度": "Quota maximum pour les récompenses d'enregistrement",
-    "保存签到设置": "Enregistrer les paramètres d'enregistrement"
+    "保存签到设置": "Enregistrer les paramètres d'enregistrement",
"提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。": "Remarque : cette configuration n'affecte que l'affichage des modèles dans la place de marché des modèles et n'a aucun impact sur l'invocation ou le routage réels. Pour configurer le comportement réel des appels, veuillez aller dans « Gestion des canaux ».",
"确认关闭提示": "Confirmer la fermeture",
"关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?": "Après fermeture, cet avertissement ne sera plus affiché (uniquement pour ce navigateur). Voulez-vous vraiment le fermer ?",
"ChatCompletions→Responses 兼容配置Beta": "Compatibilité ChatCompletions→Responses (bêta)",
"提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。": "Remarque : cette fonctionnalité est en version bêta. La structure de configuration et le comportement peuvent changer à l’avenir. Ne l’utilisez pas en production.",
"填充模板(指定渠道)": "Remplir le modèle (canaux sélectionnés)",
"填充模板(全渠道)": "Remplir le modèle (tous les canaux)",
"格式化 JSON": "Formater le JSON",
"关闭提示": "Fermer l’avertissement",
"说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。": "Remarque : les tests sur cette page utilisent des requêtes non-streaming. Si un canal ne prend en charge que les réponses en streaming, les tests peuvent échouer. Veuillez vous référer à l’usage réel.",
"提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。": "Remarque : la correspondance des endpoints sert uniquement à l’affichage dans la place de marché des modèles et n’affecte pas l’invocation réelle. Pour configurer l’invocation réelle, veuillez aller dans « Gestion des canaux »."
}
}


@@ -2556,7 +2556,6 @@
"默认补全倍率": "デフォルト補完倍率",
"每日签到": "毎日のチェックイン",
"今日已签到,累计签到": "本日チェックイン済み、累計チェックイン",
-    "天": "日",
"每日签到可获得随机额度奖励": "毎日のチェックインでランダムなクォータ報酬を獲得できます",
"今日已签到": "本日チェックイン済み",
"立即签到": "今すぐチェックイン",
@@ -2577,6 +2576,17 @@
"签到奖励的最小额度": "チェックイン報酬の最小クォータ",
"签到最大额度": "チェックイン最大クォータ",
"签到奖励的最大额度": "チェックイン報酬の最大クォータ",
-    "保存签到设置": "チェックイン設定を保存"
+    "保存签到设置": "チェックイン設定を保存",
"提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。": "注意: ここでの設定は「モデル広場」での表示にのみ影響し、実際の呼び出しやルーティングには影響しません。実際の呼び出しを設定する場合は、「チャネル管理」で設定してください。",
"确认关闭提示": "閉じる確認",
"关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?": "閉じると、このお知らせは今後表示されません(このブラウザのみ)。閉じてもよろしいですか?",
"ChatCompletions→Responses 兼容配置Beta": "ChatCompletions→Responses 互換設定(ベータ)",
"提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。": "注意: この機能はベータ版です。今後、設定構造や挙動が変更される可能性があります。本番環境では使用しないでください。",
"填充模板(指定渠道)": "テンプレートを入力(指定チャネル)",
"填充模板(全渠道)": "テンプレートを入力(全チャネル)",
"格式化 JSON": "JSON を整形",
"关闭提示": "お知らせを閉じる",
"说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。": "注意: このページのテストは非ストリーミングリクエストです。チャネルがストリーミング応答のみ対応の場合、テストが失敗することがあります。実際の利用結果を優先してください。",
"提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。": "注意: エンドポイントマッピングは「モデル広場」での表示専用で、実際の呼び出しには影響しません。実際の呼び出し設定は「チャネル管理」で行ってください。"
}
}


@@ -2586,7 +2586,6 @@
"默认补全倍率": "Default completion ratio",
"每日签到": "Ежедневная регистрация",
"今日已签到,累计签到": "Зарегистрирован сегодня, всего регистраций",
-    "天": "дней",
"每日签到可获得随机额度奖励": "Ежедневная регистрация награждает случайной квотой",
"今日已签到": "Зарегистрирован сегодня",
"立即签到": "Зарегистрироваться сейчас",
@@ -2607,6 +2606,17 @@
"签到奖励的最小额度": "Минимальная квота для наград за регистрацию",
"签到最大额度": "Максимальная квота регистрации",
"签到奖励的最大额度": "Максимальная квота для наград за регистрацию",
-    "保存签到设置": "Сохранить настройки регистрации"
+    "保存签到设置": "Сохранить настройки регистрации",
"提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。": "Примечание: эта настройка влияет только на отображение моделей в «Маркетплейсе моделей» и не влияет на фактический вызов или маршрутизацию. Чтобы настроить реальное поведение вызовов, перейдите в «Управление каналами».",
"确认关闭提示": "Подтвердить закрытие",
"关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?": "После закрытия это уведомление больше не будет показываться (только в этом браузере). Закрыть?",
"ChatCompletions→Responses 兼容配置Beta": "Совместимость ChatCompletions→Responses (бета)",
"提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。": "Примечание: это бета-функция. Структура конфигурации и поведение могут измениться в будущем. Не используйте в продакшене.",
"填充模板(指定渠道)": "Заполнить шаблон (выбранные каналы)",
"填充模板(全渠道)": "Заполнить шаблон (все каналы)",
"格式化 JSON": "Форматировать JSON",
"关闭提示": "Закрыть уведомление",
"说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。": "Примечание: тесты на этой странице используют нестриминговые запросы. Если канал поддерживает только стриминговые ответы, тест может завершиться неудачей. Ориентируйтесь на реальное использование.",
"提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。": "Примечание: сопоставление endpoint'ов используется только для отображения в «Маркетплейсе моделей» и не влияет на реальный вызов. Чтобы настроить реальное поведение вызовов, перейдите в «Управление каналами»."
}
}


@@ -1547,7 +1547,6 @@
"此项可选,用于覆盖请求头参数": "Tùy chọn, được sử dụng để ghi đè tham số tiêu đề yêu cầu.",
"此项可选用于通过自定义API地址来进行 API 调用,末尾不要带/v1和/": "Tùy chọn cho các cuộc gọi API thông qua địa chỉ API tùy chỉnh, không thêm /v1 và / ở cuối",
"每容器GPU数": "GPUs per Container",
-    "每日签到": "Đăng nhập hàng ngày",
"每日签到获得": "Nhận được từ đăng nhập hàng ngày",
"每隔多少分钟测试一次所有通道": "Bao nhiêu phút kiểm tra tất cả các kênh một lần",
"比率": "Tỷ lệ",
@@ -3137,7 +3136,6 @@
"默认补全倍率": "Tỷ lệ hoàn thành mặc định",
"每日签到": "Đăng nhập hàng ngày",
"今日已签到,累计签到": "Đã đăng nhập hôm nay, tổng số lần đăng nhập",
-    "天": "ngày",
"每日签到可获得随机额度奖励": "Đăng nhập hàng ngày để nhận phần thưởng hạn mức ngẫu nhiên",
"今日已签到": "Đã đăng nhập hôm nay",
"立即签到": "Đăng nhập ngay",
@@ -3158,6 +3156,17 @@
"签到奖励的最小额度": "Hạn mức tối thiểu cho phần thưởng đăng nhập",
"签到最大额度": "Hạn mức đăng nhập tối đa",
"签到奖励的最大额度": "Hạn mức tối đa cho phần thưởng đăng nhập",
-    "保存签到设置": "Lưu cài đặt đăng nhập"
+    "保存签到设置": "Lưu cài đặt đăng nhập",
"提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。": "Lưu ý: Cấu hình tại đây chỉ ảnh hưởng đến cách hiển thị trong \"Chợ mô hình\" và không ảnh hưởng đến việc gọi hoặc định tuyến thực tế. Nếu cần cấu hình hành vi gọi thực tế, vui lòng thiết lập trong \"Quản lý kênh\".",
"确认关闭提示": "Xác nhận đóng",
"关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?": "Sau khi đóng, thông báo này sẽ không còn hiển thị nữa (chỉ với trình duyệt này). Bạn có chắc muốn đóng không?",
"ChatCompletions→Responses 兼容配置Beta": "Tương thích ChatCompletions→Responses (Beta)",
"提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。": "Lưu ý: Đây là tính năng beta. Cấu trúc cấu hình và hành vi có thể thay đổi trong tương lai. Không dùng trong môi trường production.",
"填充模板(指定渠道)": "Điền mẫu (kênh được chọn)",
"填充模板(全渠道)": "Điền mẫu (tất cả kênh)",
"格式化 JSON": "Định dạng JSON",
"关闭提示": "Đóng thông báo",
"说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。": "Lưu ý: Bài kiểm tra trên trang này sử dụng yêu cầu không streaming. Nếu kênh chỉ hỗ trợ phản hồi streaming, bài kiểm tra có thể thất bại. Vui lòng dựa vào sử dụng thực tế.",
"提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。": "Lưu ý: Ánh xạ endpoint chỉ dùng để hiển thị trong \"Chợ mô hình\" và không ảnh hưởng đến việc gọi thực tế. Để cấu hình gọi thực tế, vui lòng vào \"Quản lý kênh\"."
}
}


@@ -1909,6 +1909,10 @@
"自动测试所有通道间隔时间": "自动测试所有通道间隔时间",
"自动禁用": "自动禁用",
"自动禁用关键词": "自动禁用关键词",
"自动禁用状态码": "自动禁用状态码",
"自动禁用状态码格式不正确": "自动禁用状态码格式不正确",
"支持填写单个状态码或范围(含首尾),使用逗号分隔": "支持填写单个状态码或范围(含首尾),使用逗号分隔",
"例如401, 403, 429, 500-599": "例如401,403,429,500-599",
"自动选择": "自动选择",
"自定义充值数量选项": "自定义充值数量选项",
"自定义充值数量选项不是合法的 JSON 数组": "自定义充值数量选项不是合法的 JSON 数组",
@@ -2568,6 +2572,17 @@
"签到奖励的最小额度": "签到奖励的最小额度",
"签到最大额度": "签到最大额度",
"签到奖励的最大额度": "签到奖励的最大额度",
-    "保存签到设置": "保存签到设置"
+    "保存签到设置": "保存签到设置",
"ChatCompletions→Responses 兼容配置Beta": "ChatCompletions→Responses 兼容配置Beta",
"提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。": "提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。",
"填充模板(指定渠道)": "填充模板(指定渠道)",
"填充模板(全渠道)": "填充模板(全渠道)",
"格式化 JSON": "格式化 JSON",
"提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。": "提示:此处配置仅用于控制「模型广场」对用户的展示效果,不会影响模型的实际调用与路由。若需配置真实调用行为,请前往「渠道管理」进行设置。",
"确认关闭提示": "确认关闭提示",
"关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?": "关闭后将不再显示此提示(仅对当前浏览器生效)。确定要关闭吗?",
"关闭提示": "关闭提示",
"说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。": "说明:本页测试为非流式请求;若渠道仅支持流式返回,可能出现测试失败,请以实际使用为准。",
"提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。": "提示:端点映射仅用于模型广场展示,不会影响模型真实调用。如需配置真实调用,请前往「渠道管理」。"
}
}


@@ -18,7 +18,16 @@ For commercial licensing, please contact support@quantumnous.com
*/
import React, { useEffect, useState, useRef } from 'react';
-import { Button, Col, Form, Row, Spin, Banner } from '@douyinfe/semi-ui';
+import {
+  Button,
+  Col,
+  Form,
+  Row,
+  Spin,
+  Banner,
+  Tag,
+  Divider,
+} from '@douyinfe/semi-ui';
import {
compareObjects,
API,
@@ -35,9 +44,31 @@ const thinkingExample = JSON.stringify(
2,
);
const chatCompletionsToResponsesPolicyExample = JSON.stringify(
{
enabled: true,
all_channels: false,
channel_ids: [1, 2],
model_patterns: ['^gpt-4o.*$', '^gpt-5.*$'],
},
null,
2,
);
const chatCompletionsToResponsesPolicyAllChannelsExample = JSON.stringify(
{
enabled: true,
all_channels: true,
model_patterns: ['^gpt-4o.*$', '^gpt-5.*$'],
},
null,
2,
);
const defaultGlobalSettingInputs = {
'global.pass_through_request_enabled': false,
'global.thinking_model_blacklist': '[]',
'global.chat_completions_to_responses_policy': '{}',
'general_setting.ping_interval_enabled': false,
'general_setting.ping_interval_seconds': 60,
};
@@ -49,12 +80,28 @@ export default function SettingGlobalModel(props) {
const [inputs, setInputs] = useState(defaultGlobalSettingInputs);
const refForm = useRef();
const [inputsRow, setInputsRow] = useState(defaultGlobalSettingInputs);
const chatCompletionsToResponsesPolicyKey =
'global.chat_completions_to_responses_policy';
const setChatCompletionsToResponsesPolicyValue = (value) => {
setInputs((prev) => ({
...prev,
[chatCompletionsToResponsesPolicyKey]: value,
}));
if (refForm.current) {
refForm.current.setValue(chatCompletionsToResponsesPolicyKey, value);
}
};
const normalizeValueBeforeSave = (key, value) => {
if (key === 'global.thinking_model_blacklist') {
const text = typeof value === 'string' ? value.trim() : '';
return text === '' ? '[]' : value;
}
if (key === 'global.chat_completions_to_responses_policy') {
const text = typeof value === 'string' ? value.trim() : '';
return text === '' ? '{}' : value;
}
return value;
};
@@ -108,6 +155,16 @@ export default function SettingGlobalModel(props) {
value = defaultGlobalSettingInputs[key];
}
}
if (key === 'global.chat_completions_to_responses_policy') {
try {
value =
value && String(value).trim() !== ''
? JSON.stringify(JSON.parse(value), null, 2)
: defaultGlobalSettingInputs[key];
} catch (error) {
value = defaultGlobalSettingInputs[key];
}
}
currentInputs[key] = value;
} else {
currentInputs[key] = defaultGlobalSettingInputs[key];
@@ -180,7 +237,134 @@ export default function SettingGlobalModel(props) {
</Col>
</Row>
-        <Form.Section text={t('连接保活设置')}>
+        <Form.Section
+          text={
+            <span
+              style={{
+                fontSize: 14,
+                fontWeight: 600,
+                display: 'inline-flex',
+                alignItems: 'center',
+                gap: 8,
+                flexWrap: 'wrap',
+              }}
+            >
+              {t('ChatCompletions→Responses 兼容配置')}
+              <Tag color='orange' size='small'>
+                测试版
+              </Tag>
+            </span>
+          }
+        >
<Row style={{ marginTop: 10 }}>
<Col span={24}>
<Banner
type='warning'
description={t(
'提示:该功能为测试版,未来配置结构与功能行为可能发生变更,请勿在生产环境使用。',
)}
/>
</Col>
</Row>
<Row style={{ marginTop: 10 }}>
<Col span={24}>
<Form.TextArea
label={t('参数配置')}
field={chatCompletionsToResponsesPolicyKey}
placeholder={
t('例如(指定渠道):') +
'\n' +
chatCompletionsToResponsesPolicyExample +
'\n\n' +
t('例如(全渠道):') +
'\n' +
chatCompletionsToResponsesPolicyAllChannelsExample
}
rows={8}
rules={[
{
validator: (rule, value) => {
if (!value || value.trim() === '') return true;
return verifyJSON(value);
},
message: t('不是合法的 JSON 字符串'),
},
]}
onChange={(value) =>
setInputs((prev) => ({
...prev,
[chatCompletionsToResponsesPolicyKey]: value,
}))
}
/>
</Col>
</Row>
<Row style={{ marginTop: 10, marginBottom: 16 }}>
<Col span={24}>
<div
style={{
display: 'flex',
gap: 8,
flexWrap: 'wrap',
alignItems: 'center',
}}
>
<Button
type='secondary'
size='small'
onClick={() =>
setChatCompletionsToResponsesPolicyValue(
chatCompletionsToResponsesPolicyExample,
)
}
>
{t('填充模板(指定渠道)')}
</Button>
<Button
type='secondary'
size='small'
onClick={() =>
setChatCompletionsToResponsesPolicyValue(
chatCompletionsToResponsesPolicyAllChannelsExample,
)
}
>
{t('填充模板(全渠道)')}
</Button>
<Button
type='secondary'
size='small'
onClick={() => {
const raw = inputs[chatCompletionsToResponsesPolicyKey];
if (!raw || String(raw).trim() === '') return;
try {
const formatted = JSON.stringify(
JSON.parse(raw),
null,
2,
);
setChatCompletionsToResponsesPolicyValue(formatted);
} catch (error) {
showError(t('不是合法的 JSON 字符串'));
}
}}
>
{t('格式化 JSON')}
</Button>
</div>
</Col>
</Row>
</Form.Section>
<Form.Section
text={
<span style={{ fontSize: 14, fontWeight: 600 }}>
{t('连接保活设置')}
</span>
}
>
<Row style={{ marginTop: 10 }}>
<Col span={24}>
<Banner

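The policy templates above suggest how a request would be matched: the conversion applies when the policy is enabled, the channel is covered (either `all_channels` or listed in `channel_ids`), and the model matches one of the `model_patterns` regexes. A hedged sketch of that evaluation (field names come from the templates; the matcher itself is an assumption — the server-side logic is not shown in this diff):

```javascript
// Assumed semantics of the ChatCompletions→Responses policy JSON:
// enabled gate, channel coverage check, then regex match on the model name.
function policyApplies(policy, channelId, model) {
  if (!policy || !policy.enabled) return false;
  const channelCovered =
    policy.all_channels === true ||
    (Array.isArray(policy.channel_ids) &&
      policy.channel_ids.includes(channelId));
  if (!channelCovered) return false;
  const patterns = Array.isArray(policy.model_patterns)
    ? policy.model_patterns
    : [];
  return patterns.some((p) => new RegExp(p).test(model));
}

// The "selected channels" template from the settings page above.
const policy = {
  enabled: true,
  all_channels: false,
  channel_ids: [1, 2],
  model_patterns: ['^gpt-4o.*$', '^gpt-5.*$'],
};
```

Under this reading, the empty-object default (`'{}'`) disables the conversion entirely, since `enabled` is falsy.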

@@ -18,19 +18,29 @@ For commercial licensing, please contact support@quantumnous.com
*/
import React, { useEffect, useState, useRef } from 'react';
-import { Button, Col, Form, Row, Spin } from '@douyinfe/semi-ui';
+import {
+  Button,
+  Col,
+  Form,
+  Row,
+  Spin,
+  Tag,
+  Typography,
+} from '@douyinfe/semi-ui';
import {
compareObjects,
API,
showError,
showSuccess,
showWarning,
parseHttpStatusCodeRules,
verifyJSON,
} from '../../../helpers';
import { useTranslation } from 'react-i18next';
export default function SettingsMonitoring(props) {
const { t } = useTranslation();
const { Text } = Typography;
const [loading, setLoading] = useState(false);
const [inputs, setInputs] = useState({
ChannelDisableThreshold: '',
@@ -38,21 +48,37 @@ export default function SettingsMonitoring(props) {
AutomaticDisableChannelEnabled: false,
AutomaticEnableChannelEnabled: false,
AutomaticDisableKeywords: '',
AutomaticDisableStatusCodes: '401',
'monitor_setting.auto_test_channel_enabled': false,
'monitor_setting.auto_test_channel_minutes': 10,
});
const refForm = useRef();
const [inputsRow, setInputsRow] = useState(inputs);
const parsedAutoDisableStatusCodes = parseHttpStatusCodeRules(
inputs.AutomaticDisableStatusCodes || '',
);
function onSubmit() {
const updateArray = compareObjects(inputs, inputsRow);
if (!updateArray.length) return showWarning(t('你似乎并没有修改什么'));
if (!parsedAutoDisableStatusCodes.ok) {
const details =
parsedAutoDisableStatusCodes.invalidTokens &&
parsedAutoDisableStatusCodes.invalidTokens.length > 0
? `: ${parsedAutoDisableStatusCodes.invalidTokens.join(', ')}`
: '';
return showError(`${t('自动禁用状态码格式不正确')}${details}`);
}
const requestQueue = updateArray.map((item) => {
let value = '';
if (typeof inputs[item.key] === 'boolean') {
value = String(inputs[item.key]);
} else {
-        value = inputs[item.key];
+        if (item.key === 'AutomaticDisableStatusCodes') {
+          value = parsedAutoDisableStatusCodes.normalized;
+        } else {
+          value = inputs[item.key];
+        }
}
return API.put('/api/option/', {
key: item.key,
@@ -207,6 +233,45 @@ export default function SettingsMonitoring(props) {
</Row>
<Row gutter={16}>
<Col xs={24} sm={16}>
<Form.Input
label={t('自动禁用状态码')}
placeholder={t('例如401, 403, 429, 500-599')}
extraText={t(
'支持填写单个状态码或范围(含首尾),使用逗号分隔',
)}
field={'AutomaticDisableStatusCodes'}
onChange={(value) =>
setInputs({ ...inputs, AutomaticDisableStatusCodes: value })
}
/>
{parsedAutoDisableStatusCodes.ok &&
parsedAutoDisableStatusCodes.tokens.length > 0 && (
<div
style={{
display: 'flex',
flexWrap: 'wrap',
gap: 8,
marginTop: 8,
}}
>
{parsedAutoDisableStatusCodes.tokens.map((token) => (
<Tag key={token} size='small'>
{token}
</Tag>
))}
</div>
)}
{!parsedAutoDisableStatusCodes.ok && (
<Text type='danger' style={{ display: 'block', marginTop: 8 }}>
{t('自动禁用状态码格式不正确')}
{parsedAutoDisableStatusCodes.invalidTokens &&
parsedAutoDisableStatusCodes.invalidTokens.length > 0
? `: ${parsedAutoDisableStatusCodes.invalidTokens.join(
', ',
)}`
: ''}
</Text>
)}
<Form.TextArea
label={t('自动禁用关键词')}
placeholder={t('一行一个,不区分大小写')}