Compare commits

41 Commits

Author SHA1 Message Date
CaIon
18aa0de323 fix: add support for gpt-5.4 model in model_ratio.go 2026-03-06 11:43:05 +08:00
Seefs
f0e938a513 Merge pull request #3120 from nekohy/main
feats: repair the thinking of claude to openrouter convert
2026-03-05 18:10:46 +08:00
Seefs
db8243bb36 Merge pull request #3130 from feitianbubu/pr/ce221d98d71ab7aec3eb60f27ca33dbb4dc9610a
fix: fetch model add header passthrough rule key check
2026-03-05 18:09:44 +08:00
feitianbubu
1b85b183e6 fix: fetch model add header passthrough rule key check 2026-03-05 17:49:36 +08:00
Calcium-Ion
56c971691b Merge pull request #3129 from seefs001/feature/param-override-wildcard-path
Feature/param override wildcard path
2026-03-05 16:53:39 +08:00
Seefs
9d4ea49984 chore: remove top-right field guide entry in param override editor 2026-03-05 16:43:15 +08:00
Seefs
16e4ce52e3 feat: add wildcard path support and improve param override templates/editor 2026-03-05 16:39:34 +08:00
Nekohy
de12d6df05 delete some if 2026-03-05 06:24:22 +08:00
Nekohy
5b264f3a57 feats: repair the thinking of claude to openrouter convert 2026-03-05 06:12:48 +08:00
CaIon
887a929d65 fix: add multilingual support for meta description in index.html 2026-03-04 18:19:19 +08:00
Calcium-Ion
34262dc8c3 Merge pull request #3093 from feitianbubu/pr/92ad4854fcb501216dd9f2155c19f0556e4655bc
fix: update task billing log content to include reason
2026-03-04 18:13:59 +08:00
CaIon
ddffccc499 fix: update meta description for improved clarity and accuracy 2026-03-04 18:07:17 +08:00
CaIon
c31f9db61e feat: enhance PricingTags and SelectableButtonGroup with new badge styles and color variants 2026-03-04 00:36:04 +08:00
CaIon
3b65c32573 fix: improve error message for unsupported image generation models 2026-03-04 00:36:03 +08:00
Calcium-Ion
196f534c41 Merge pull request #3096 from seefs001/fix/auto-fetch-upstream-model-tips
Fix/auto fetch upstream model tips
2026-03-03 14:47:43 +08:00
Seefs
40c36b1a30 fix: count ignored models from unselected items in upstream update toast 2026-03-03 14:29:43 +08:00
Calcium-Ion
ae1c8e4173 fix: use default model price for radio price model (#3090) 2026-03-03 14:29:03 +08:00
Seefs
429b7428f4 fix: remove extra spaces 2026-03-03 14:08:43 +08:00
Seefs
0a804f0e70 fix: refine upstream update ignore UX and detect behavior 2026-03-03 14:00:48 +08:00
feitianbubu
5f3c5f14d4 fix: update task billing log content to include reason 2026-03-03 12:37:43 +08:00
feitianbubu
d12cc3a8da fix: use default model price for radio price model 2026-03-03 11:22:04 +08:00
Seefs
e71f5a45f2 feat: auto fetch upstream models (#2979)
* feat: add upstream model update detection with scheduled sync and manual apply flows

* feat: support upstream model removal sync and selectable deletes in update modal

* feat: add detect-only upstream updates and show compact +/- model badges

* feat: improve upstream model update UX

* feat: improve upstream model update UX

* fix: respect model_mapping in upstream update detection

* feat: improve upstream update modal to prevent missed add/remove actions

* feat: add admin upstream model update notifications with digest and truncation

* fix: avoid repeated partial-submit confirmation in upstream update modal

* feat: improve ui/ux

* feat: suppress upstream update alerts for unchanged channel-count within 24h

* fix: submit upstream update choices even when no models are selected

* feat: improve upstream model update flow and split frontend updater

* fix merge conflict
2026-03-02 22:01:53 +08:00
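At its core, the detection flow described in these commits is a set difference between a channel's local model list and the upstream model list, minus ignored models (the merged code also excludes model-mapping aliases). A minimal, self-contained sketch with illustrative names — not the repository's actual helpers:

```go
package main

import (
	"fmt"
	"strings"
)

// pendingModelChanges is a simplified sketch of the detection step:
// upstream models missing locally become pending additions, and local
// models absent upstream become pending removals. Ignored models are
// skipped. Names here are illustrative, not the repo's.
func pendingModelChanges(local, upstream, ignored []string) (add, remove []string) {
	localSet := make(map[string]bool)
	for _, m := range local {
		localSet[strings.TrimSpace(m)] = true
	}
	upstreamSet := make(map[string]bool)
	for _, m := range upstream {
		upstreamSet[strings.TrimSpace(m)] = true
	}
	ignoredSet := make(map[string]bool)
	for _, m := range ignored {
		ignoredSet[strings.TrimSpace(m)] = true
	}
	for _, m := range upstream {
		m = strings.TrimSpace(m)
		if !localSet[m] && !ignoredSet[m] {
			add = append(add, m)
		}
	}
	for _, m := range local {
		m = strings.TrimSpace(m)
		if !upstreamSet[m] {
			remove = append(remove, m)
		}
	}
	return add, remove
}

func main() {
	add, remove := pendingModelChanges(
		[]string{"gpt-4o", "old-model"},
		[]string{"gpt-4o", "new-model"},
		nil,
	)
	fmt.Println(add, remove)
}
```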
Calcium-Ion
d36f4205a9 Merge pull request #3081 from BenLampson/main
Return error when model price/ratio unset
2026-03-02 22:01:21 +08:00
Calcium-Ion
e593c11eab Merge pull request #3037 from RedwindA/fix/token-model-limits-length
fix: change token model_limits column from varchar(1024) to text
2026-03-02 22:00:21 +08:00
CaIon
477e9cf7db feat: add AionUI to chat settings and built-in templates 2026-03-02 21:19:04 +08:00
Calcium-Ion
1d3dcc0afa Merge pull request #3083 from QuantumNous/revert-3077-fix/aws-non-empty-text
Revert "fix: aws text content blocks must be non-empty"
2026-03-02 19:43:28 +08:00
Seefs
b1b3def081 Revert "fix: aws text content blocks must be non-empty" 2026-03-02 19:43:00 +08:00
Calcium-Ion
4298891ffe Merge pull request #3082 from QuantumNous/revert-3080-fix/aws-non-empty-text
Revert "Fix/aws non empty text"
2026-03-02 19:42:58 +08:00
Seefs
9be9943224 Revert "Fix/aws non empty text" 2026-03-02 19:40:53 +08:00
Calcium-Ion
5dcbcd9cad fix: tool responses (#3080) 2026-03-02 19:23:50 +08:00
Seefs
032a3ec7df fix: tool responses 2026-03-02 19:22:37 +08:00
Fat Person
4b439ad3be Return error when model price/ratio unset
#3079
Change ModelPriceHelperPerCall to return (PriceData, error) and stop silently falling back to a default price. If a model price is not configured the helper now returns an error (unless the user has AcceptUnsetRatioModel enabled and a ratio exists). Propagate this error to callers: Midjourney handlers now return a MidjourneyResponse with Code 4 and the error message, and task submission returns a wrapped task error with HTTP 400. Also extract remix video_id in ResolveOriginTask for remix actions. This enforces explicit model price/ratio configuration and surfaces configuration issues to clients.
2026-03-02 19:09:48 +08:00
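The behavior change described above — returning an error instead of silently falling back to a default price — can be sketched roughly as follows. `PriceData`, the lookup table, and the function name are illustrative stand-ins, not the repository's actual types:

```go
package main

import (
	"errors"
	"fmt"
)

// PriceData is a stand-in for the repo's pricing struct.
type PriceData struct {
	UsePrice bool
	Price    float64
}

// Hypothetical per-call price configuration.
var configuredPrices = map[string]float64{"gpt-4o": 0.005}

// modelPricePerCall sketches the commit's pattern: an unconfigured model
// now yields an error rather than a default price, unless the caller
// explicitly accepts unset ratios (mirroring AcceptUnsetRatioModel).
func modelPricePerCall(modelName string, acceptUnsetRatio bool) (PriceData, error) {
	price, ok := configuredPrices[modelName]
	if !ok {
		if acceptUnsetRatio {
			// Fall back to ratio-based billing instead of a fixed price.
			return PriceData{UsePrice: false}, nil
		}
		return PriceData{}, errors.New("model price not configured: " + modelName)
	}
	return PriceData{UsePrice: true, Price: price}, nil
}

func main() {
	if _, err := modelPricePerCall("unknown-model", false); err != nil {
		fmt.Println("error:", err)
	}
}
```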
Seefs
0689600103 Merge pull request #3066 from seefs001/fix/aws-header-override
Fix/aws header override
2026-03-02 18:54:56 +08:00
CaIon
f2c5acf815 fix: handle rate limits and improve error response parsing in video task updates 2026-03-02 17:11:57 +08:00
Seefs
1043a3088c Merge pull request #3077 from seefs001/fix/aws-non-empty-text
fix: aws text content blocks must be non-empty
2026-03-02 16:33:03 +08:00
Seefs
550fbe516d fix: default empty input_json_delta arguments to {} for tool call parsing 2026-03-02 15:51:55 +08:00
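A streamed tool call can end with an empty accumulated `input_json_delta`, which then fails JSON parsing downstream; defaulting the buffer to `{}` keeps the arguments valid. A hedged sketch of that normalization (the function name is illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// normalizeToolArguments sketches the fix: an empty (or whitespace-only)
// accumulated argument delta becomes the empty JSON object, so a later
// json.Unmarshal of the tool arguments cannot fail on "".
func normalizeToolArguments(raw string) string {
	if strings.TrimSpace(raw) == "" {
		return "{}"
	}
	return raw
}

func main() {
	args := normalizeToolArguments("")
	var parsed map[string]any
	err := json.Unmarshal([]byte(args), &parsed)
	fmt.Println(args, err == nil)
}
```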
Seefs
d826dd2c16 fix: preserve tool_use on malformed tool arguments to keep tool_result pairing valid 2026-03-02 15:41:03 +08:00
Seefs
17d1224141 fix: aws text content blocks must be non-empty 2026-03-02 15:31:37 +08:00
RedwindA
43e068c0c0 fix: enhance migrateTokenModelLimitsToText function to return errors and improve migration checks 2026-02-28 19:08:03 +08:00
RedwindA
52c29e7582 fix: migrate model_limits column from varchar(1024) to text for existing tables 2026-02-28 18:49:06 +08:00
RedwindA
db8534b4a3 fix: change token model_limits column from varchar(1024) to text
Fixes #3033 — users with many model limits hit PostgreSQL's varchar
length constraint. The text type is supported across all three
databases (SQLite, MySQL, PostgreSQL) with no length restriction.
2026-02-27 14:47:20 +08:00
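The commit above amounts to a dialect-specific column widening. A hedged sketch that only builds the DDL strings — the `tokens` table name is an assumption for illustration, and SQLite needs no change because its TEXT affinity has no length cap:

```go
package main

import "fmt"

// alterModelLimitsSQL sketches the migration behind the commit: widen
// model_limits from varchar(1024) to text, per database dialect. The
// table name "tokens" is assumed here for illustration only.
func alterModelLimitsSQL(dialect string) (string, error) {
	switch dialect {
	case "postgres":
		return "ALTER TABLE tokens ALTER COLUMN model_limits TYPE text", nil
	case "mysql":
		return "ALTER TABLE tokens MODIFY COLUMN model_limits text", nil
	case "sqlite":
		// No-op: SQLite TEXT columns are unbounded already.
		return "", nil
	}
	return "", fmt.Errorf("unsupported dialect %q", dialect)
}

func main() {
	sql, _ := alterModelLimitsSQL("postgres")
	fmt.Println(sql)
}
```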
60 changed files with 3553 additions and 437 deletions

@@ -13,6 +13,7 @@ import (
 	"github.com/QuantumNous/new-api/constant"
 	"github.com/QuantumNous/new-api/dto"
 	"github.com/QuantumNous/new-api/model"
+	relaychannel "github.com/QuantumNous/new-api/relay/channel"
 	"github.com/QuantumNous/new-api/relay/channel/gemini"
 	"github.com/QuantumNous/new-api/relay/channel/ollama"
 	"github.com/QuantumNous/new-api/service"
@@ -183,6 +184,9 @@ func buildFetchModelsHeaders(channel *model.Channel, key string) (http.Header, e
 	headerOverride := channel.GetHeaderOverride()
 	for k, v := range headerOverride {
+		if relaychannel.IsHeaderPassthroughRuleKey(k) {
+			continue
+		}
 		str, ok := v.(string)
 		if !ok {
 			return nil, fmt.Errorf("invalid header override for key %s", k)
@@ -209,157 +213,14 @@ func FetchUpstreamModels(c *gin.Context) {
 		return
 	}
-	baseURL := constant.ChannelBaseURLs[channel.Type]
-	if channel.GetBaseURL() != "" {
-		baseURL = channel.GetBaseURL()
-	}
-	// 对于 Ollama 渠道,使用特殊处理
-	if channel.Type == constant.ChannelTypeOllama {
-		key := strings.Split(channel.Key, "\n")[0]
-		models, err := ollama.FetchOllamaModels(baseURL, key)
-		if err != nil {
-			c.JSON(http.StatusOK, gin.H{
-				"success": false,
-				"message": fmt.Sprintf("获取Ollama模型失败: %s", err.Error()),
-			})
-			return
-		}
-		result := OpenAIModelsResponse{
-			Data: make([]OpenAIModel, 0, len(models)),
-		}
-		for _, modelInfo := range models {
-			metadata := map[string]any{}
-			if modelInfo.Size > 0 {
-				metadata["size"] = modelInfo.Size
-			}
-			if modelInfo.Digest != "" {
-				metadata["digest"] = modelInfo.Digest
-			}
-			if modelInfo.ModifiedAt != "" {
-				metadata["modified_at"] = modelInfo.ModifiedAt
-			}
-			details := modelInfo.Details
-			if details.ParentModel != "" || details.Format != "" || details.Family != "" || len(details.Families) > 0 || details.ParameterSize != "" || details.QuantizationLevel != "" {
-				metadata["details"] = modelInfo.Details
-			}
-			if len(metadata) == 0 {
-				metadata = nil
-			}
-			result.Data = append(result.Data, OpenAIModel{
-				ID:       modelInfo.Name,
-				Object:   "model",
-				Created:  0,
-				OwnedBy:  "ollama",
-				Metadata: metadata,
-			})
-		}
-		c.JSON(http.StatusOK, gin.H{
-			"success": true,
-			"data":    result.Data,
-		})
-		return
-	}
-	// 对于 Gemini 渠道,使用特殊处理
-	if channel.Type == constant.ChannelTypeGemini {
-		// 获取用于请求的可用密钥(多密钥渠道优先使用启用状态的密钥)
-		key, _, apiErr := channel.GetNextEnabledKey()
-		if apiErr != nil {
-			c.JSON(http.StatusOK, gin.H{
-				"success": false,
-				"message": fmt.Sprintf("获取渠道密钥失败: %s", apiErr.Error()),
-			})
-			return
-		}
-		key = strings.TrimSpace(key)
-		models, err := gemini.FetchGeminiModels(baseURL, key, channel.GetSetting().Proxy)
-		if err != nil {
-			c.JSON(http.StatusOK, gin.H{
-				"success": false,
-				"message": fmt.Sprintf("获取Gemini模型失败: %s", err.Error()),
-			})
-			return
-		}
-		c.JSON(http.StatusOK, gin.H{
-			"success": true,
-			"message": "",
-			"data":    models,
-		})
-		return
-	}
-	var url string
-	switch channel.Type {
-	case constant.ChannelTypeAli:
-		url = fmt.Sprintf("%s/compatible-mode/v1/models", baseURL)
-	case constant.ChannelTypeZhipu_v4:
-		if plan, ok := constant.ChannelSpecialBases[baseURL]; ok && plan.OpenAIBaseURL != "" {
-			url = fmt.Sprintf("%s/models", plan.OpenAIBaseURL)
-		} else {
-			url = fmt.Sprintf("%s/api/paas/v4/models", baseURL)
-		}
-	case constant.ChannelTypeVolcEngine:
-		if plan, ok := constant.ChannelSpecialBases[baseURL]; ok && plan.OpenAIBaseURL != "" {
-			url = fmt.Sprintf("%s/v1/models", plan.OpenAIBaseURL)
-		} else {
-			url = fmt.Sprintf("%s/v1/models", baseURL)
-		}
-	case constant.ChannelTypeMoonshot:
-		if plan, ok := constant.ChannelSpecialBases[baseURL]; ok && plan.OpenAIBaseURL != "" {
-			url = fmt.Sprintf("%s/models", plan.OpenAIBaseURL)
-		} else {
-			url = fmt.Sprintf("%s/v1/models", baseURL)
-		}
-	default:
-		url = fmt.Sprintf("%s/v1/models", baseURL)
-	}
-	// 获取用于请求的可用密钥(多密钥渠道优先使用启用状态的密钥)
-	key, _, apiErr := channel.GetNextEnabledKey()
-	if apiErr != nil {
+	ids, err := fetchChannelUpstreamModelIDs(channel)
+	if err != nil {
 		c.JSON(http.StatusOK, gin.H{
 			"success": false,
-			"message": fmt.Sprintf("获取渠道密钥失败: %s", apiErr.Error()),
+			"message": fmt.Sprintf("获取模型列表失败: %s", err.Error()),
 		})
 		return
 	}
-	key = strings.TrimSpace(key)
-	headers, err := buildFetchModelsHeaders(channel, key)
-	if err != nil {
-		common.ApiError(c, err)
-		return
-	}
-	body, err := GetResponseBody("GET", url, channel, headers)
-	if err != nil {
-		common.ApiError(c, err)
-		return
-	}
-	var result OpenAIModelsResponse
-	if err = json.Unmarshal(body, &result); err != nil {
-		c.JSON(http.StatusOK, gin.H{
-			"success": false,
-			"message": fmt.Sprintf("解析响应失败: %s", err.Error()),
-		})
-		return
-	}
-	var ids []string
-	for _, model := range result.Data {
-		id := model.ID
-		if channel.Type == constant.ChannelTypeGemini {
-			id = strings.TrimPrefix(id, "models/")
-		}
-		ids = append(ids, id)
-	}
 	c.JSON(http.StatusOK, gin.H{
 		"success": true,

@@ -0,0 +1,975 @@
package controller

import (
	"fmt"
	"net/http"
	"slices"
	"strings"
	"sync"
	"sync/atomic"
	"time"

	"github.com/QuantumNous/new-api/common"
	"github.com/QuantumNous/new-api/constant"
	"github.com/QuantumNous/new-api/dto"
	"github.com/QuantumNous/new-api/model"
	"github.com/QuantumNous/new-api/relay/channel/gemini"
	"github.com/QuantumNous/new-api/relay/channel/ollama"
	"github.com/QuantumNous/new-api/service"

	"github.com/gin-gonic/gin"
	"github.com/samber/lo"
)

const (
	channelUpstreamModelUpdateTaskDefaultIntervalMinutes  = 30
	channelUpstreamModelUpdateTaskBatchSize               = 100
	channelUpstreamModelUpdateMinCheckIntervalSeconds     = 300
	channelUpstreamModelUpdateNotifySuppressWindowSeconds = 86400
	channelUpstreamModelUpdateNotifyMaxChannelDetails     = 8
	channelUpstreamModelUpdateNotifyMaxModelDetails       = 12
	channelUpstreamModelUpdateNotifyMaxFailedChannelIDs   = 10
)
var (
	channelUpstreamModelUpdateTaskOnce    sync.Once
	channelUpstreamModelUpdateTaskRunning atomic.Bool
	channelUpstreamModelUpdateNotifyState = struct {
		sync.Mutex
		lastNotifiedAt      int64
		lastChangedChannels int
		lastFailedChannels  int
	}{}
)
type applyChannelUpstreamModelUpdatesRequest struct {
	ID           int      `json:"id"`
	AddModels    []string `json:"add_models"`
	RemoveModels []string `json:"remove_models"`
	IgnoreModels []string `json:"ignore_models"`
}

type applyAllChannelUpstreamModelUpdatesResult struct {
	ChannelID             int      `json:"channel_id"`
	ChannelName           string   `json:"channel_name"`
	AddedModels           []string `json:"added_models"`
	RemovedModels         []string `json:"removed_models"`
	RemainingModels       []string `json:"remaining_models"`
	RemainingRemoveModels []string `json:"remaining_remove_models"`
}

type detectChannelUpstreamModelUpdatesResult struct {
	ChannelID       int      `json:"channel_id"`
	ChannelName     string   `json:"channel_name"`
	AddModels       []string `json:"add_models"`
	RemoveModels    []string `json:"remove_models"`
	LastCheckTime   int64    `json:"last_check_time"`
	AutoAddedModels int      `json:"auto_added_models"`
}

type upstreamModelUpdateChannelSummary struct {
	ChannelName string
	AddCount    int
	RemoveCount int
}
func normalizeModelNames(models []string) []string {
	return lo.Uniq(lo.FilterMap(models, func(model string, _ int) (string, bool) {
		trimmed := strings.TrimSpace(model)
		return trimmed, trimmed != ""
	}))
}

func mergeModelNames(base []string, appended []string) []string {
	merged := normalizeModelNames(base)
	seen := make(map[string]struct{}, len(merged))
	for _, model := range merged {
		seen[model] = struct{}{}
	}
	for _, model := range normalizeModelNames(appended) {
		if _, ok := seen[model]; ok {
			continue
		}
		seen[model] = struct{}{}
		merged = append(merged, model)
	}
	return merged
}

func subtractModelNames(base []string, removed []string) []string {
	removeSet := make(map[string]struct{}, len(removed))
	for _, model := range normalizeModelNames(removed) {
		removeSet[model] = struct{}{}
	}
	return lo.Filter(normalizeModelNames(base), func(model string, _ int) bool {
		_, ok := removeSet[model]
		return !ok
	})
}

func intersectModelNames(base []string, allowed []string) []string {
	allowedSet := make(map[string]struct{}, len(allowed))
	for _, model := range normalizeModelNames(allowed) {
		allowedSet[model] = struct{}{}
	}
	return lo.Filter(normalizeModelNames(base), func(model string, _ int) bool {
		_, ok := allowedSet[model]
		return ok
	})
}

func applySelectedModelChanges(originModels []string, addModels []string, removeModels []string) []string {
	// Add wins when the same model appears in both selected lists.
	normalizedAdd := normalizeModelNames(addModels)
	normalizedRemove := subtractModelNames(normalizeModelNames(removeModels), normalizedAdd)
	return subtractModelNames(mergeModelNames(originModels, normalizedAdd), normalizedRemove)
}
func normalizeChannelModelMapping(channel *model.Channel) map[string]string {
	if channel == nil || channel.ModelMapping == nil {
		return nil
	}
	rawMapping := strings.TrimSpace(*channel.ModelMapping)
	if rawMapping == "" || rawMapping == "{}" {
		return nil
	}
	parsed := make(map[string]string)
	if err := common.UnmarshalJsonStr(rawMapping, &parsed); err != nil {
		return nil
	}
	normalized := make(map[string]string, len(parsed))
	for source, target := range parsed {
		normalizedSource := strings.TrimSpace(source)
		normalizedTarget := strings.TrimSpace(target)
		if normalizedSource == "" || normalizedTarget == "" {
			continue
		}
		normalized[normalizedSource] = normalizedTarget
	}
	if len(normalized) == 0 {
		return nil
	}
	return normalized
}
func collectPendingUpstreamModelChangesFromModels(
	localModels []string,
	upstreamModels []string,
	ignoredModels []string,
	modelMapping map[string]string,
) (pendingAddModels []string, pendingRemoveModels []string) {
	localSet := make(map[string]struct{})
	localModels = normalizeModelNames(localModels)
	upstreamModels = normalizeModelNames(upstreamModels)
	for _, modelName := range localModels {
		localSet[modelName] = struct{}{}
	}
	upstreamSet := make(map[string]struct{}, len(upstreamModels))
	for _, modelName := range upstreamModels {
		upstreamSet[modelName] = struct{}{}
	}
	ignoredSet := make(map[string]struct{})
	for _, modelName := range normalizeModelNames(ignoredModels) {
		ignoredSet[modelName] = struct{}{}
	}
	redirectSourceSet := make(map[string]struct{}, len(modelMapping))
	redirectTargetSet := make(map[string]struct{}, len(modelMapping))
	for source, target := range modelMapping {
		redirectSourceSet[source] = struct{}{}
		redirectTargetSet[target] = struct{}{}
	}
	coveredUpstreamSet := make(map[string]struct{}, len(localSet)+len(redirectTargetSet))
	for modelName := range localSet {
		coveredUpstreamSet[modelName] = struct{}{}
	}
	for modelName := range redirectTargetSet {
		coveredUpstreamSet[modelName] = struct{}{}
	}
	pendingAdd := lo.Filter(upstreamModels, func(modelName string, _ int) bool {
		if _, ok := coveredUpstreamSet[modelName]; ok {
			return false
		}
		if _, ok := ignoredSet[modelName]; ok {
			return false
		}
		return true
	})
	pendingRemove := lo.Filter(localModels, func(modelName string, _ int) bool {
		// Redirect source models are virtual aliases and should not be removed
		// only because they are absent from upstream model list.
		if _, ok := redirectSourceSet[modelName]; ok {
			return false
		}
		_, ok := upstreamSet[modelName]
		return !ok
	})
	return normalizeModelNames(pendingAdd), normalizeModelNames(pendingRemove)
}
func collectPendingUpstreamModelChanges(channel *model.Channel, settings dto.ChannelOtherSettings) (pendingAddModels []string, pendingRemoveModels []string, err error) {
	upstreamModels, err := fetchChannelUpstreamModelIDs(channel)
	if err != nil {
		return nil, nil, err
	}
	pendingAddModels, pendingRemoveModels = collectPendingUpstreamModelChangesFromModels(
		channel.GetModels(),
		upstreamModels,
		settings.UpstreamModelUpdateIgnoredModels,
		normalizeChannelModelMapping(channel),
	)
	return pendingAddModels, pendingRemoveModels, nil
}

func getUpstreamModelUpdateMinCheckIntervalSeconds() int64 {
	interval := int64(common.GetEnvOrDefault(
		"CHANNEL_UPSTREAM_MODEL_UPDATE_MIN_CHECK_INTERVAL_SECONDS",
		channelUpstreamModelUpdateMinCheckIntervalSeconds,
	))
	if interval < 0 {
		return channelUpstreamModelUpdateMinCheckIntervalSeconds
	}
	return interval
}
func fetchChannelUpstreamModelIDs(channel *model.Channel) ([]string, error) {
	baseURL := constant.ChannelBaseURLs[channel.Type]
	if channel.GetBaseURL() != "" {
		baseURL = channel.GetBaseURL()
	}
	if channel.Type == constant.ChannelTypeOllama {
		key := strings.TrimSpace(strings.Split(channel.Key, "\n")[0])
		models, err := ollama.FetchOllamaModels(baseURL, key)
		if err != nil {
			return nil, err
		}
		return normalizeModelNames(lo.Map(models, func(item ollama.OllamaModel, _ int) string {
			return item.Name
		})), nil
	}
	if channel.Type == constant.ChannelTypeGemini {
		key, _, apiErr := channel.GetNextEnabledKey()
		if apiErr != nil {
			return nil, fmt.Errorf("获取渠道密钥失败: %w", apiErr)
		}
		key = strings.TrimSpace(key)
		models, err := gemini.FetchGeminiModels(baseURL, key, channel.GetSetting().Proxy)
		if err != nil {
			return nil, err
		}
		return normalizeModelNames(models), nil
	}
	var url string
	switch channel.Type {
	case constant.ChannelTypeAli:
		url = fmt.Sprintf("%s/compatible-mode/v1/models", baseURL)
	case constant.ChannelTypeZhipu_v4:
		if plan, ok := constant.ChannelSpecialBases[baseURL]; ok && plan.OpenAIBaseURL != "" {
			url = fmt.Sprintf("%s/models", plan.OpenAIBaseURL)
		} else {
			url = fmt.Sprintf("%s/api/paas/v4/models", baseURL)
		}
	case constant.ChannelTypeVolcEngine:
		if plan, ok := constant.ChannelSpecialBases[baseURL]; ok && plan.OpenAIBaseURL != "" {
			url = fmt.Sprintf("%s/v1/models", plan.OpenAIBaseURL)
		} else {
			url = fmt.Sprintf("%s/v1/models", baseURL)
		}
	case constant.ChannelTypeMoonshot:
		if plan, ok := constant.ChannelSpecialBases[baseURL]; ok && plan.OpenAIBaseURL != "" {
			url = fmt.Sprintf("%s/models", plan.OpenAIBaseURL)
		} else {
			url = fmt.Sprintf("%s/v1/models", baseURL)
		}
	default:
		url = fmt.Sprintf("%s/v1/models", baseURL)
	}
	key, _, apiErr := channel.GetNextEnabledKey()
	if apiErr != nil {
		return nil, fmt.Errorf("获取渠道密钥失败: %w", apiErr)
	}
	key = strings.TrimSpace(key)
	headers, err := buildFetchModelsHeaders(channel, key)
	if err != nil {
		return nil, err
	}
	body, err := GetResponseBody(http.MethodGet, url, channel, headers)
	if err != nil {
		return nil, err
	}
	var result OpenAIModelsResponse
	if err := common.Unmarshal(body, &result); err != nil {
		return nil, err
	}
	ids := lo.Map(result.Data, func(item OpenAIModel, _ int) string {
		if channel.Type == constant.ChannelTypeGemini {
			return strings.TrimPrefix(item.ID, "models/")
		}
		return item.ID
	})
	return normalizeModelNames(ids), nil
}
func updateChannelUpstreamModelSettings(channel *model.Channel, settings dto.ChannelOtherSettings, updateModels bool) error {
	channel.SetOtherSettings(settings)
	updates := map[string]interface{}{
		"settings": channel.OtherSettings,
	}
	if updateModels {
		updates["models"] = channel.Models
	}
	return model.DB.Model(&model.Channel{}).Where("id = ?", channel.Id).Updates(updates).Error
}
func checkAndPersistChannelUpstreamModelUpdates(
	channel *model.Channel,
	settings *dto.ChannelOtherSettings,
	force bool,
	allowAutoApply bool,
) (modelsChanged bool, autoAdded int, err error) {
	now := common.GetTimestamp()
	if !force {
		minInterval := getUpstreamModelUpdateMinCheckIntervalSeconds()
		if settings.UpstreamModelUpdateLastCheckTime > 0 &&
			now-settings.UpstreamModelUpdateLastCheckTime < minInterval {
			return false, 0, nil
		}
	}
	pendingAddModels, pendingRemoveModels, fetchErr := collectPendingUpstreamModelChanges(channel, *settings)
	settings.UpstreamModelUpdateLastCheckTime = now
	if fetchErr != nil {
		if err = updateChannelUpstreamModelSettings(channel, *settings, false); err != nil {
			return false, 0, err
		}
		return false, 0, fetchErr
	}
	if allowAutoApply && settings.UpstreamModelUpdateAutoSyncEnabled && len(pendingAddModels) > 0 {
		originModels := normalizeModelNames(channel.GetModels())
		mergedModels := mergeModelNames(originModels, pendingAddModels)
		if len(mergedModels) > len(originModels) {
			channel.Models = strings.Join(mergedModels, ",")
			autoAdded = len(mergedModels) - len(originModels)
			modelsChanged = true
		}
		settings.UpstreamModelUpdateLastDetectedModels = []string{}
	} else {
		settings.UpstreamModelUpdateLastDetectedModels = pendingAddModels
	}
	settings.UpstreamModelUpdateLastRemovedModels = pendingRemoveModels
	if err = updateChannelUpstreamModelSettings(channel, *settings, modelsChanged); err != nil {
		return false, autoAdded, err
	}
	if modelsChanged {
		if err = channel.UpdateAbilities(nil); err != nil {
			return true, autoAdded, err
		}
	}
	return modelsChanged, autoAdded, nil
}
func refreshChannelRuntimeCache() {
	if common.MemoryCacheEnabled {
		func() {
			defer func() {
				if r := recover(); r != nil {
					common.SysLog(fmt.Sprintf("InitChannelCache panic: %v", r))
				}
			}()
			model.InitChannelCache()
		}()
	}
	service.ResetProxyClientCache()
}

func shouldSendUpstreamModelUpdateNotification(now int64, changedChannels int, failedChannels int) bool {
	if changedChannels <= 0 && failedChannels <= 0 {
		return true
	}
	channelUpstreamModelUpdateNotifyState.Lock()
	defer channelUpstreamModelUpdateNotifyState.Unlock()
	if channelUpstreamModelUpdateNotifyState.lastNotifiedAt > 0 &&
		now-channelUpstreamModelUpdateNotifyState.lastNotifiedAt < channelUpstreamModelUpdateNotifySuppressWindowSeconds &&
		channelUpstreamModelUpdateNotifyState.lastChangedChannels == changedChannels &&
		channelUpstreamModelUpdateNotifyState.lastFailedChannels == failedChannels {
		return false
	}
	channelUpstreamModelUpdateNotifyState.lastNotifiedAt = now
	channelUpstreamModelUpdateNotifyState.lastChangedChannels = changedChannels
	channelUpstreamModelUpdateNotifyState.lastFailedChannels = failedChannels
	return true
}
func buildUpstreamModelUpdateTaskNotificationContent(
	checkedChannels int,
	changedChannels int,
	detectedAddModels int,
	detectedRemoveModels int,
	autoAddedModels int,
	failedChannelIDs []int,
	channelSummaries []upstreamModelUpdateChannelSummary,
	addModelSamples []string,
	removeModelSamples []string,
) string {
	var builder strings.Builder
	failedChannels := len(failedChannelIDs)
	builder.WriteString(fmt.Sprintf(
		"上游模型巡检摘要:检测渠道 %d 个,发现变更 %d 个,新增 %d 个,删除 %d 个,自动同步新增 %d 个,失败 %d 个。",
		checkedChannels,
		changedChannels,
		detectedAddModels,
		detectedRemoveModels,
		autoAddedModels,
		failedChannels,
	))
	if len(channelSummaries) > 0 {
		displayCount := min(len(channelSummaries), channelUpstreamModelUpdateNotifyMaxChannelDetails)
		builder.WriteString(fmt.Sprintf("\n\n变更渠道明细展示 %d/%d", displayCount, len(channelSummaries)))
		for _, summary := range channelSummaries[:displayCount] {
			builder.WriteString(fmt.Sprintf("\n- %s (+%d / -%d)", summary.ChannelName, summary.AddCount, summary.RemoveCount))
		}
		if len(channelSummaries) > displayCount {
			builder.WriteString(fmt.Sprintf("\n- 其余 %d 个渠道已省略", len(channelSummaries)-displayCount))
		}
	}
	normalizedAddModelSamples := normalizeModelNames(addModelSamples)
	if len(normalizedAddModelSamples) > 0 {
		displayCount := min(len(normalizedAddModelSamples), channelUpstreamModelUpdateNotifyMaxModelDetails)
		builder.WriteString(fmt.Sprintf("\n\n新增模型示例展示 %d/%d%s",
			displayCount,
			len(normalizedAddModelSamples),
			strings.Join(normalizedAddModelSamples[:displayCount], ", "),
		))
		if len(normalizedAddModelSamples) > displayCount {
			builder.WriteString(fmt.Sprintf("(其余 %d 个已省略)", len(normalizedAddModelSamples)-displayCount))
		}
	}
	normalizedRemoveModelSamples := normalizeModelNames(removeModelSamples)
	if len(normalizedRemoveModelSamples) > 0 {
		displayCount := min(len(normalizedRemoveModelSamples), channelUpstreamModelUpdateNotifyMaxModelDetails)
		builder.WriteString(fmt.Sprintf("\n\n删除模型示例展示 %d/%d%s",
			displayCount,
			len(normalizedRemoveModelSamples),
			strings.Join(normalizedRemoveModelSamples[:displayCount], ", "),
		))
		if len(normalizedRemoveModelSamples) > displayCount {
			builder.WriteString(fmt.Sprintf("(其余 %d 个已省略)", len(normalizedRemoveModelSamples)-displayCount))
		}
	}
	if failedChannels > 0 {
		displayCount := min(failedChannels, channelUpstreamModelUpdateNotifyMaxFailedChannelIDs)
		displayIDs := lo.Map(failedChannelIDs[:displayCount], func(channelID int, _ int) string {
			return fmt.Sprintf("%d", channelID)
		})
		builder.WriteString(fmt.Sprintf(
			"\n\n失败渠道 ID展示 %d/%d%s",
			displayCount,
			failedChannels,
			strings.Join(displayIDs, ", "),
		))
		if failedChannels > displayCount {
			builder.WriteString(fmt.Sprintf("(其余 %d 个已省略)", failedChannels-displayCount))
		}
	}
	return builder.String()
}
func runChannelUpstreamModelUpdateTaskOnce() {
	if !channelUpstreamModelUpdateTaskRunning.CompareAndSwap(false, true) {
		return
	}
	defer channelUpstreamModelUpdateTaskRunning.Store(false)
	checkedChannels := 0
	failedChannels := 0
	failedChannelIDs := make([]int, 0)
	changedChannels := 0
	detectedAddModels := 0
	detectedRemoveModels := 0
	autoAddedModels := 0
	channelSummaries := make([]upstreamModelUpdateChannelSummary, 0)
	addModelSamples := make([]string, 0)
	removeModelSamples := make([]string, 0)
	refreshNeeded := false
	lastID := 0
	for {
		var channels []*model.Channel
		query := model.DB.
			Select("id", "name", "type", "key", "status", "base_url", "models", "settings", "setting", "other", "group", "priority", "weight", "tag", "channel_info", "header_override").
			Where("status = ?", common.ChannelStatusEnabled).
			Order("id asc").
			Limit(channelUpstreamModelUpdateTaskBatchSize)
		if lastID > 0 {
			query = query.Where("id > ?", lastID)
		}
		err := query.Find(&channels).Error
		if err != nil {
			common.SysLog(fmt.Sprintf("upstream model update task query failed: %v", err))
			break
		}
		if len(channels) == 0 {
			break
		}
		lastID = channels[len(channels)-1].Id
		for _, channel := range channels {
			if channel == nil {
				continue
			}
			settings := channel.GetOtherSettings()
			if !settings.UpstreamModelUpdateCheckEnabled {
				continue
			}
			checkedChannels++
			modelsChanged, autoAdded, err := checkAndPersistChannelUpstreamModelUpdates(channel, &settings, false, true)
			if err != nil {
				failedChannels++
				failedChannelIDs = append(failedChannelIDs, channel.Id)
				common.SysLog(fmt.Sprintf("upstream model update check failed: channel_id=%d channel_name=%s err=%v", channel.Id, channel.Name, err))
				continue
			}
			currentAddModels := normalizeModelNames(settings.UpstreamModelUpdateLastDetectedModels)
			currentRemoveModels := normalizeModelNames(settings.UpstreamModelUpdateLastRemovedModels)
			currentAddCount := len(currentAddModels) + autoAdded
			currentRemoveCount := len(currentRemoveModels)
			detectedAddModels += currentAddCount
			detectedRemoveModels += currentRemoveCount
			if currentAddCount > 0 || currentRemoveCount > 0 {
				changedChannels++
				channelSummaries = append(channelSummaries, upstreamModelUpdateChannelSummary{
					ChannelName: channel.Name,
					AddCount:    currentAddCount,
					RemoveCount: currentRemoveCount,
				})
			}
			addModelSamples = mergeModelNames(addModelSamples, currentAddModels)
			removeModelSamples = mergeModelNames(removeModelSamples, currentRemoveModels)
			if modelsChanged {
				refreshNeeded = true
			}
			autoAddedModels += autoAdded
			if common.RequestInterval > 0 {
				time.Sleep(common.RequestInterval)
			}
		}
		if len(channels) < channelUpstreamModelUpdateTaskBatchSize {
			break
		}
	}
	if refreshNeeded {
		refreshChannelRuntimeCache()
	}
	if checkedChannels > 0 || common.DebugEnabled {
		common.SysLog(fmt.Sprintf(
			"upstream model update task done: checked_channels=%d changed_channels=%d detected_add_models=%d detected_remove_models=%d failed_channels=%d auto_added_models=%d",
			checkedChannels,
			changedChannels,
			detectedAddModels,
			detectedRemoveModels,
			failedChannels,
			autoAddedModels,
		))
	}
	if changedChannels > 0 || failedChannels > 0 {
		now := common.GetTimestamp()
		if !shouldSendUpstreamModelUpdateNotification(now, changedChannels, failedChannels) {
			common.SysLog(fmt.Sprintf(
				"upstream model update notification skipped in 24h window: changed_channels=%d failed_channels=%d",
				changedChannels,
				failedChannels,
			))
			return
		}
		service.NotifyUpstreamModelUpdateWatchers(
			"上游模型巡检通知",
			buildUpstreamModelUpdateTaskNotificationContent(
				checkedChannels,
				changedChannels,
				detectedAddModels,
				detectedRemoveModels,
				autoAddedModels,
				failedChannelIDs,
				channelSummaries,
				addModelSamples,
				removeModelSamples,
			),
		)
	}
}
func StartChannelUpstreamModelUpdateTask() {
	channelUpstreamModelUpdateTaskOnce.Do(func() {
		if !common.IsMasterNode {
			return
		}
		if !common.GetEnvOrDefaultBool("CHANNEL_UPSTREAM_MODEL_UPDATE_TASK_ENABLED", true) {
			common.SysLog("upstream model update task disabled by CHANNEL_UPSTREAM_MODEL_UPDATE_TASK_ENABLED")
			return
		}
		intervalMinutes := common.GetEnvOrDefault(
			"CHANNEL_UPSTREAM_MODEL_UPDATE_TASK_INTERVAL_MINUTES",
			channelUpstreamModelUpdateTaskDefaultIntervalMinutes,
		)
		if intervalMinutes < 1 {
			intervalMinutes = channelUpstreamModelUpdateTaskDefaultIntervalMinutes
		}
		interval := time.Duration(intervalMinutes) * time.Minute
		go func() {
			common.SysLog(fmt.Sprintf("upstream model update task started: interval=%s", interval))
			runChannelUpstreamModelUpdateTaskOnce()
			ticker := time.NewTicker(interval)
			defer ticker.Stop()
			for range ticker.C {
				runChannelUpstreamModelUpdateTaskOnce()
			}
		}()
	})
}
func ApplyChannelUpstreamModelUpdates(c *gin.Context) {
	var req applyChannelUpstreamModelUpdatesRequest
	if err := c.ShouldBindJSON(&req); err != nil {
		common.ApiError(c, err)
		return
	}
	if req.ID <= 0 {
		c.JSON(http.StatusOK, gin.H{
			"success": false,
			"message": "invalid channel id",
		})
		return
	}
	channel, err := model.GetChannelById(req.ID, true)
	if err != nil {
		common.ApiError(c, err)
		return
	}
	beforeSettings := channel.GetOtherSettings()
	ignoredModels := intersectModelNames(req.IgnoreModels, beforeSettings.UpstreamModelUpdateLastDetectedModels)
	addedModels, removedModels, remainingModels, remainingRemoveModels, modelsChanged, err := applyChannelUpstreamModelUpdates(
		channel,
		req.AddModels,
		req.IgnoreModels,
		req.RemoveModels,
	)
	if err != nil {
		common.ApiError(c, err)
		return
	}
	if modelsChanged {
		refreshChannelRuntimeCache()
	}
	c.JSON(http.StatusOK, gin.H{
		"success": true,
		"message": "",
		"data": gin.H{
			"id":                      channel.Id,
			"added_models":            addedModels,
			"removed_models":          removedModels,
			"ignored_models":          ignoredModels,
			"remaining_models":        remainingModels,
			"remaining_remove_models": remainingRemoveModels,
			"models":                  channel.Models,
			"settings":                channel.OtherSettings,
		},
	})
}
func DetectChannelUpstreamModelUpdates(c *gin.Context) {
var req applyChannelUpstreamModelUpdatesRequest
if err := c.ShouldBindJSON(&req); err != nil {
common.ApiError(c, err)
return
}
if req.ID <= 0 {
c.JSON(http.StatusOK, gin.H{
"success": false,
"message": "invalid channel id",
})
return
}
channel, err := model.GetChannelById(req.ID, true)
if err != nil {
common.ApiError(c, err)
return
}
settings := channel.GetOtherSettings()
modelsChanged, autoAdded, err := checkAndPersistChannelUpstreamModelUpdates(channel, &settings, true, false)
if err != nil {
common.ApiError(c, err)
return
}
if modelsChanged {
refreshChannelRuntimeCache()
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": detectChannelUpstreamModelUpdatesResult{
ChannelID: channel.Id,
ChannelName: channel.Name,
AddModels: normalizeModelNames(settings.UpstreamModelUpdateLastDetectedModels),
RemoveModels: normalizeModelNames(settings.UpstreamModelUpdateLastRemovedModels),
LastCheckTime: settings.UpstreamModelUpdateLastCheckTime,
AutoAddedModels: autoAdded,
},
})
}
func applyChannelUpstreamModelUpdates(
channel *model.Channel,
addModelsInput []string,
ignoreModelsInput []string,
removeModelsInput []string,
) (
addedModels []string,
removedModels []string,
remainingModels []string,
remainingRemoveModels []string,
modelsChanged bool,
err error,
) {
settings := channel.GetOtherSettings()
pendingAddModels := normalizeModelNames(settings.UpstreamModelUpdateLastDetectedModels)
pendingRemoveModels := normalizeModelNames(settings.UpstreamModelUpdateLastRemovedModels)
addModels := intersectModelNames(addModelsInput, pendingAddModels)
ignoreModels := intersectModelNames(ignoreModelsInput, pendingAddModels)
removeModels := intersectModelNames(removeModelsInput, pendingRemoveModels)
removeModels = subtractModelNames(removeModels, addModels)
originModels := normalizeModelNames(channel.GetModels())
nextModels := applySelectedModelChanges(originModels, addModels, removeModels)
modelsChanged = !slices.Equal(originModels, nextModels)
if modelsChanged {
channel.Models = strings.Join(nextModels, ",")
}
settings.UpstreamModelUpdateIgnoredModels = mergeModelNames(settings.UpstreamModelUpdateIgnoredModels, ignoreModels)
if len(addModels) > 0 {
settings.UpstreamModelUpdateIgnoredModels = subtractModelNames(settings.UpstreamModelUpdateIgnoredModels, addModels)
}
remainingModels = subtractModelNames(pendingAddModels, append(addModels, ignoreModels...))
remainingRemoveModels = subtractModelNames(pendingRemoveModels, removeModels)
settings.UpstreamModelUpdateLastDetectedModels = remainingModels
settings.UpstreamModelUpdateLastRemovedModels = remainingRemoveModels
settings.UpstreamModelUpdateLastCheckTime = common.GetTimestamp()
if err := updateChannelUpstreamModelSettings(channel, settings, modelsChanged); err != nil {
return nil, nil, nil, nil, false, err
}
if modelsChanged {
if err := channel.UpdateAbilities(nil); err != nil {
return addModels, removeModels, remainingModels, remainingRemoveModels, true, err
}
}
return addModels, removeModels, remainingModels, remainingRemoveModels, modelsChanged, nil
}
func collectPendingApplyUpstreamModelChanges(settings dto.ChannelOtherSettings) (pendingAddModels []string, pendingRemoveModels []string) {
return normalizeModelNames(settings.UpstreamModelUpdateLastDetectedModels), normalizeModelNames(settings.UpstreamModelUpdateLastRemovedModels)
}
func findEnabledChannelsAfterID(lastID int, batchSize int) ([]*model.Channel, error) {
var channels []*model.Channel
query := model.DB.
Select("id", "name", "type", "key", "status", "base_url", "models", "settings", "setting", "other", "group", "priority", "weight", "tag", "channel_info", "header_override").
Where("status = ?", common.ChannelStatusEnabled).
Order("id asc").
Limit(batchSize)
if lastID > 0 {
query = query.Where("id > ?", lastID)
}
return channels, query.Find(&channels).Error
}
func ApplyAllChannelUpstreamModelUpdates(c *gin.Context) {
results := make([]applyAllChannelUpstreamModelUpdatesResult, 0)
failed := make([]int, 0)
refreshNeeded := false
addedModelCount := 0
removedModelCount := 0
lastID := 0
for {
channels, err := findEnabledChannelsAfterID(lastID, channelUpstreamModelUpdateTaskBatchSize)
if err != nil {
common.ApiError(c, err)
return
}
if len(channels) == 0 {
break
}
lastID = channels[len(channels)-1].Id
for _, channel := range channels {
if channel == nil {
continue
}
settings := channel.GetOtherSettings()
if !settings.UpstreamModelUpdateCheckEnabled {
continue
}
pendingAddModels, pendingRemoveModels := collectPendingApplyUpstreamModelChanges(settings)
if len(pendingAddModels) == 0 && len(pendingRemoveModels) == 0 {
continue
}
addedModels, removedModels, remainingModels, remainingRemoveModels, modelsChanged, err := applyChannelUpstreamModelUpdates(
channel,
pendingAddModels,
nil,
pendingRemoveModels,
)
if err != nil {
failed = append(failed, channel.Id)
continue
}
if modelsChanged {
refreshNeeded = true
}
addedModelCount += len(addedModels)
removedModelCount += len(removedModels)
results = append(results, applyAllChannelUpstreamModelUpdatesResult{
ChannelID: channel.Id,
ChannelName: channel.Name,
AddedModels: addedModels,
RemovedModels: removedModels,
RemainingModels: remainingModels,
RemainingRemoveModels: remainingRemoveModels,
})
}
if len(channels) < channelUpstreamModelUpdateTaskBatchSize {
break
}
}
if refreshNeeded {
refreshChannelRuntimeCache()
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": gin.H{
"processed_channels": len(results),
"added_models": addedModelCount,
"removed_models": removedModelCount,
"failed_channel_ids": failed,
"results": results,
},
})
}
func DetectAllChannelUpstreamModelUpdates(c *gin.Context) {
results := make([]detectChannelUpstreamModelUpdatesResult, 0)
failed := make([]int, 0)
detectedAddCount := 0
detectedRemoveCount := 0
refreshNeeded := false
lastID := 0
for {
channels, err := findEnabledChannelsAfterID(lastID, channelUpstreamModelUpdateTaskBatchSize)
if err != nil {
common.ApiError(c, err)
return
}
if len(channels) == 0 {
break
}
lastID = channels[len(channels)-1].Id
for _, channel := range channels {
if channel == nil {
continue
}
settings := channel.GetOtherSettings()
if !settings.UpstreamModelUpdateCheckEnabled {
continue
}
modelsChanged, autoAdded, err := checkAndPersistChannelUpstreamModelUpdates(channel, &settings, true, false)
if err != nil {
failed = append(failed, channel.Id)
continue
}
if modelsChanged {
refreshNeeded = true
}
addModels := normalizeModelNames(settings.UpstreamModelUpdateLastDetectedModels)
removeModels := normalizeModelNames(settings.UpstreamModelUpdateLastRemovedModels)
detectedAddCount += len(addModels)
detectedRemoveCount += len(removeModels)
results = append(results, detectChannelUpstreamModelUpdatesResult{
ChannelID: channel.Id,
ChannelName: channel.Name,
AddModels: addModels,
RemoveModels: removeModels,
LastCheckTime: settings.UpstreamModelUpdateLastCheckTime,
AutoAddedModels: autoAdded,
})
}
if len(channels) < channelUpstreamModelUpdateTaskBatchSize {
break
}
}
if refreshNeeded {
refreshChannelRuntimeCache()
}
c.JSON(http.StatusOK, gin.H{
"success": true,
"message": "",
"data": gin.H{
"processed_channels": len(results),
"failed_channel_ids": failed,
"detected_add_models": detectedAddCount,
"detected_remove_models": detectedRemoveCount,
"channel_detected_results": results,
},
})
}


@@ -0,0 +1,167 @@
package controller
import (
"testing"
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/model"
"github.com/stretchr/testify/require"
)
func TestNormalizeModelNames(t *testing.T) {
result := normalizeModelNames([]string{
" gpt-4o ",
"",
"gpt-4o",
"gpt-4.1",
" ",
})
require.Equal(t, []string{"gpt-4o", "gpt-4.1"}, result)
}
func TestMergeModelNames(t *testing.T) {
result := mergeModelNames(
[]string{"gpt-4o", "gpt-4.1"},
[]string{"gpt-4.1", " gpt-4.1-mini ", "gpt-4o"},
)
require.Equal(t, []string{"gpt-4o", "gpt-4.1", "gpt-4.1-mini"}, result)
}
func TestSubtractModelNames(t *testing.T) {
result := subtractModelNames(
[]string{"gpt-4o", "gpt-4.1", "gpt-4.1-mini"},
[]string{"gpt-4.1", "not-exists"},
)
require.Equal(t, []string{"gpt-4o", "gpt-4.1-mini"}, result)
}
func TestIntersectModelNames(t *testing.T) {
result := intersectModelNames(
[]string{"gpt-4o", "gpt-4.1", "gpt-4.1", "not-exists"},
[]string{"gpt-4.1", "gpt-4o-mini", "gpt-4o"},
)
require.Equal(t, []string{"gpt-4o", "gpt-4.1"}, result)
}
func TestApplySelectedModelChanges(t *testing.T) {
t.Run("add and remove together", func(t *testing.T) {
result := applySelectedModelChanges(
[]string{"gpt-4o", "gpt-4.1", "claude-3"},
[]string{"gpt-4.1-mini"},
[]string{"claude-3"},
)
require.Equal(t, []string{"gpt-4o", "gpt-4.1", "gpt-4.1-mini"}, result)
})
t.Run("add wins when conflict with remove", func(t *testing.T) {
result := applySelectedModelChanges(
[]string{"gpt-4o"},
[]string{"gpt-4.1"},
[]string{"gpt-4.1"},
)
require.Equal(t, []string{"gpt-4o", "gpt-4.1"}, result)
})
}
func TestCollectPendingApplyUpstreamModelChanges(t *testing.T) {
settings := dto.ChannelOtherSettings{
UpstreamModelUpdateLastDetectedModels: []string{" gpt-4o ", "gpt-4o", "gpt-4.1"},
UpstreamModelUpdateLastRemovedModels: []string{" old-model ", "", "old-model"},
}
pendingAddModels, pendingRemoveModels := collectPendingApplyUpstreamModelChanges(settings)
require.Equal(t, []string{"gpt-4o", "gpt-4.1"}, pendingAddModels)
require.Equal(t, []string{"old-model"}, pendingRemoveModels)
}
func TestNormalizeChannelModelMapping(t *testing.T) {
modelMapping := `{
" alias-model ": " upstream-model ",
"": "invalid",
"invalid-target": ""
}`
channel := &model.Channel{
ModelMapping: &modelMapping,
}
result := normalizeChannelModelMapping(channel)
require.Equal(t, map[string]string{
"alias-model": "upstream-model",
}, result)
}
func TestCollectPendingUpstreamModelChangesFromModels_WithModelMapping(t *testing.T) {
pendingAddModels, pendingRemoveModels := collectPendingUpstreamModelChangesFromModels(
[]string{"alias-model", "gpt-4o", "stale-model"},
[]string{"gpt-4o", "gpt-4.1", "mapped-target"},
[]string{"gpt-4.1"},
map[string]string{
"alias-model": "mapped-target",
},
)
require.Equal(t, []string{}, pendingAddModels)
require.Equal(t, []string{"stale-model"}, pendingRemoveModels)
}
func TestBuildUpstreamModelUpdateTaskNotificationContent_OmitOverflowDetails(t *testing.T) {
channelSummaries := make([]upstreamModelUpdateChannelSummary, 0, 12)
for i := 0; i < 12; i++ {
channelSummaries = append(channelSummaries, upstreamModelUpdateChannelSummary{
ChannelName: "channel-" + string(rune('A'+i)),
AddCount: i + 1,
RemoveCount: i,
})
}
content := buildUpstreamModelUpdateTaskNotificationContent(
24,
12,
56,
21,
9,
[]int{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12},
channelSummaries,
[]string{
"gpt-4.1", "gpt-4.1-mini", "o3", "o4-mini", "gemini-2.5-pro", "claude-3.7-sonnet",
"qwen-max", "deepseek-r1", "llama-3.3-70b", "mistral-large", "command-r-plus", "doubao-pro-32k",
"hunyuan-large",
},
[]string{
"gpt-3.5-turbo", "claude-2.1", "gemini-1.5-pro", "mixtral-8x7b", "qwen-plus", "glm-4",
"yi-large", "moonshot-v1", "doubao-lite",
},
)
require.Contains(t, content, "其余 4 个渠道已省略")
require.Contains(t, content, "其余 1 个已省略")
require.Contains(t, content, "失败渠道 ID展示 10/12")
require.Contains(t, content, "其余 2 个已省略")
}
func TestShouldSendUpstreamModelUpdateNotification(t *testing.T) {
channelUpstreamModelUpdateNotifyState.Lock()
channelUpstreamModelUpdateNotifyState.lastNotifiedAt = 0
channelUpstreamModelUpdateNotifyState.lastChangedChannels = 0
channelUpstreamModelUpdateNotifyState.lastFailedChannels = 0
channelUpstreamModelUpdateNotifyState.Unlock()
baseTime := int64(2000000)
require.True(t, shouldSendUpstreamModelUpdateNotification(baseTime, 6, 0))
require.False(t, shouldSendUpstreamModelUpdateNotification(baseTime+3600, 6, 0))
require.True(t, shouldSendUpstreamModelUpdateNotification(baseTime+3600, 7, 0))
require.False(t, shouldSendUpstreamModelUpdateNotification(baseTime+7200, 7, 0))
require.True(t, shouldSendUpstreamModelUpdateNotification(baseTime+8000, 0, 3))
require.False(t, shouldSendUpstreamModelUpdateNotification(baseTime+9000, 0, 3))
require.True(t, shouldSendUpstreamModelUpdateNotification(baseTime+10000, 0, 4))
require.True(t, shouldSendUpstreamModelUpdateNotification(baseTime+90000, 7, 0))
require.True(t, shouldSendUpstreamModelUpdateNotification(baseTime+90001, 0, 0))
}


@@ -1032,17 +1032,18 @@ func TopUp(c *gin.Context) {
}
type UpdateUserSettingRequest struct {
QuotaWarningType string `json:"notify_type"`
QuotaWarningThreshold float64 `json:"quota_warning_threshold"`
WebhookUrl string `json:"webhook_url,omitempty"`
WebhookSecret string `json:"webhook_secret,omitempty"`
NotificationEmail string `json:"notification_email,omitempty"`
BarkUrl string `json:"bark_url,omitempty"`
GotifyUrl string `json:"gotify_url,omitempty"`
GotifyToken string `json:"gotify_token,omitempty"`
GotifyPriority int `json:"gotify_priority,omitempty"`
UpstreamModelUpdateNotifyEnabled *bool `json:"upstream_model_update_notify_enabled,omitempty"`
AcceptUnsetModelRatioModel bool `json:"accept_unset_model_ratio_model"`
RecordIpLog bool `json:"record_ip_log"`
}
func UpdateUserSetting(c *gin.Context) {
@@ -1132,13 +1133,19 @@ func UpdateUserSetting(c *gin.Context) {
common.ApiError(c, err)
return
}
existingSettings := user.GetSetting()
upstreamModelUpdateNotifyEnabled := existingSettings.UpstreamModelUpdateNotifyEnabled
if user.Role >= common.RoleAdminUser && req.UpstreamModelUpdateNotifyEnabled != nil {
upstreamModelUpdateNotifyEnabled = *req.UpstreamModelUpdateNotifyEnabled
}
// build the settings
settings := dto.UserSetting{
NotifyType: req.QuotaWarningType,
QuotaWarningThreshold: req.QuotaWarningThreshold,
UpstreamModelUpdateNotifyEnabled: upstreamModelUpdateNotifyEnabled,
AcceptUnsetRatioModel: req.AcceptUnsetModelRatioModel,
RecordIpLog: req.RecordIpLog,
}
// if the notify type is webhook, add webhook-related settings


@@ -24,16 +24,22 @@ const (
)
type ChannelOtherSettings struct {
AzureResponsesVersion string `json:"azure_responses_version,omitempty"`
VertexKeyType VertexKeyType `json:"vertex_key_type,omitempty"` // "json" or "api_key"
OpenRouterEnterprise *bool `json:"openrouter_enterprise,omitempty"`
ClaudeBetaQuery bool `json:"claude_beta_query,omitempty"` // whether Claude channels force-append ?beta=true
AllowServiceTier bool `json:"allow_service_tier,omitempty"` // whether to pass service_tier through (filtered by default to avoid extra billing)
AllowInferenceGeo bool `json:"allow_inference_geo,omitempty"` // whether to pass inference_geo through (Claude only; filtered by default for data-residency compliance)
AllowSafetyIdentifier bool `json:"allow_safety_identifier,omitempty"` // whether to pass safety_identifier through (filtered by default to protect user privacy)
DisableStore bool `json:"disable_store,omitempty"` // whether to block store pass-through (allowed by default; disabling it may break Codex)
AllowIncludeObfuscation bool `json:"allow_include_obfuscation,omitempty"` // whether to pass stream_options.include_obfuscation through (filtered by default so stream obfuscation stays enabled)
AwsKeyType AwsKeyType `json:"aws_key_type,omitempty"`
UpstreamModelUpdateCheckEnabled bool `json:"upstream_model_update_check_enabled,omitempty"` // whether to check for upstream model updates
UpstreamModelUpdateAutoSyncEnabled bool `json:"upstream_model_update_auto_sync_enabled,omitempty"` // whether to auto-sync upstream model updates
UpstreamModelUpdateLastCheckTime int64 `json:"upstream_model_update_last_check_time,omitempty"` // last check timestamp
UpstreamModelUpdateLastDetectedModels []string `json:"upstream_model_update_last_detected_models,omitempty"` // models detected as addable in the last check
UpstreamModelUpdateLastRemovedModels []string `json:"upstream_model_update_last_removed_models,omitempty"` // models detected as removable in the last check
UpstreamModelUpdateIgnoredModels []string `json:"upstream_model_update_ignored_models,omitempty"` // manually ignored models
}
func (s *ChannelOtherSettings) IsOpenRouterEnterprise() bool {


@@ -218,6 +218,11 @@ type ClaudeRequest struct {
ServiceTier string `json:"service_tier,omitempty"`
}
// OutputConfigForEffort exists only to extract the effort field
type OutputConfigForEffort struct {
Effort string `json:"effort,omitempty"`
}
// createClaudeFileSource creates the correctly typed FileSource based on the data content
func createClaudeFileSource(data string) *types.FileSource {
if strings.HasPrefix(data, "http://") || strings.HasPrefix(data, "https://") {
@@ -409,6 +414,15 @@ func (c *ClaudeRequest) GetTools() []any {
}
}
func (c *ClaudeRequest) GetEfforts() string {
var outputConfig OutputConfigForEffort
if err := json.Unmarshal(c.OutputConfig, &outputConfig); err == nil {
return outputConfig.Effort
}
return ""
}
// ProcessTools processes the tool list, with type-assertion support
func ProcessTools(tools []any) ([]*Tool, []*ClaudeWebSearchTool) {
var normalTools []*Tool
@@ -434,7 +448,7 @@ func ProcessTools(tools []any) ([]*Tool, []*ClaudeWebSearchTool) {
}
type Thinking struct {
Type string `json:"type,omitempty"`
BudgetTokens *int `json:"budget_tokens,omitempty"`
}


@@ -1,20 +1,21 @@
package dto
type UserSetting struct {
NotifyType string `json:"notify_type,omitempty"` // quota warning notification type
QuotaWarningThreshold float64 `json:"quota_warning_threshold,omitempty"` // quota warning threshold
WebhookUrl string `json:"webhook_url,omitempty"` // webhook URL
WebhookSecret string `json:"webhook_secret,omitempty"` // webhook secret
NotificationEmail string `json:"notification_email,omitempty"` // notification email address
BarkUrl string `json:"bark_url,omitempty"` // Bark push URL
GotifyUrl string `json:"gotify_url,omitempty"` // Gotify server address
GotifyToken string `json:"gotify_token,omitempty"` // Gotify application token
GotifyPriority int `json:"gotify_priority"` // Gotify message priority
UpstreamModelUpdateNotifyEnabled bool `json:"upstream_model_update_notify_enabled,omitempty"` // whether to receive scheduled upstream model update check notifications (admins only)
AcceptUnsetRatioModel bool `json:"accept_unset_model_ratio_model,omitempty"` // whether to accept models without a configured price ratio
RecordIpLog bool `json:"record_ip_log,omitempty"` // whether to record request and error log IPs
SidebarModules string `json:"sidebar_modules,omitempty"` // sidebar module configuration
BillingPreference string `json:"billing_preference,omitempty"` // billing preference (subscription/wallet)
Language string `json:"language,omitempty"` // user language preference (zh, en)
}
var (

go.mod

@@ -8,10 +8,10 @@ require (
github.com/abema/go-mp4 v1.4.1
github.com/andybalholm/brotli v1.1.1
github.com/anknown/ahocorasick v0.0.0-20190904063843-d75dbd5169c0
github.com/aws/aws-sdk-go-v2 v1.41.2
github.com/aws/aws-sdk-go-v2/credentials v1.19.10
github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.50.0
github.com/aws/smithy-go v1.24.2
github.com/bytedance/gopkg v0.1.3
github.com/gin-contrib/cors v1.7.2
github.com/gin-contrib/gzip v0.0.6
@@ -62,9 +62,9 @@ require (
require (
github.com/DmitriyVTitov/size v1.5.0 // indirect
github.com/anknown/darts v0.0.0-20151216065714-83ff685239e6 // indirect
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.5 // indirect
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.18 // indirect
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.18 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/boombuler/barcode v1.1.0 // indirect
github.com/bytedance/sonic v1.14.1 // indirect

go.sum

@@ -12,18 +12,34 @@ github.com/anknown/darts v0.0.0-20151216065714-83ff685239e6 h1:HblK3eJHq54yET63q
github.com/anknown/darts v0.0.0-20151216065714-83ff685239e6/go.mod h1:pbiaLIeYLUbgMY1kwEAdwO6UKD5ZNwdPGQlwokS9fe8=
github.com/aws/aws-sdk-go-v2 v1.37.2 h1:xkW1iMYawzcmYFYEV0UCMxc8gSsjCGEhBXQkdQywVbo=
github.com/aws/aws-sdk-go-v2 v1.37.2/go.mod h1:9Q0OoGQoboYIAJyslFyF1f5K1Ryddop8gqMhWx/n4Wg=
github.com/aws/aws-sdk-go-v2 v1.41.2 h1:LuT2rzqNQsauaGkPK/7813XxcZ3o3yePY0Iy891T2ls=
github.com/aws/aws-sdk-go-v2 v1.41.2/go.mod h1:IvvlAZQXvTXznUPfRVfryiG1fbzE2NGK6m9u39YQ+S4=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.0 h1:6GMWV6CNpA/6fbFHnoAjrv4+LGfyTqZz2LtCHnspgDg=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.0/go.mod h1:/mXlTIVG9jbxkqDnr5UQNQxW1HRYxeGklkM9vAFeabg=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.5 h1:zWFmPmgw4sveAYi1mRqG+E/g0461cJ5M4bJ8/nc6d3Q=
github.com/aws/aws-sdk-go-v2/aws/protocol/eventstream v1.7.5/go.mod h1:nVUlMLVV8ycXSb7mSkcNu9e3v/1TJq2RTlrPwhYWr5c=
github.com/aws/aws-sdk-go-v2/credentials v1.17.11 h1:YuIB1dJNf1Re822rriUOTxopaHHvIq0l/pX3fwO+Tzs=
github.com/aws/aws-sdk-go-v2/credentials v1.17.11/go.mod h1:AQtFPsDH9bI2O+71anW6EKL+NcD7LG3dpKGMV4SShgo=
github.com/aws/aws-sdk-go-v2/credentials v1.19.10 h1:EEhmEUFCE1Yhl7vDhNOI5OCL/iKMdkkYFTRpZXNw7m8=
github.com/aws/aws-sdk-go-v2/credentials v1.19.10/go.mod h1:RnnlFCAlxQCkN2Q379B67USkBMu1PipEEiibzYN5UTE=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.2 h1:sPiRHLVUIIQcoVZTNwqQcdtjkqkPopyYmIX0M5ElRf4=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.2/go.mod h1:ik86P3sgV+Bk7c1tBFCwI3VxMoSEwl4YkRB9xn1s340=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.18 h1:F43zk1vemYIqPAwhjTjYIz0irU2EY7sOb/F5eJ3HuyM=
github.com/aws/aws-sdk-go-v2/internal/configsources v1.4.18/go.mod h1:w1jdlZXrGKaJcNoL+Nnrj+k5wlpGXqnNrKoP22HvAug=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.2 h1:ZdzDAg075H6stMZtbD2o+PyB933M/f20e9WmCBC17wA=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.2/go.mod h1:eE1IIzXG9sdZCB0pNNpMpsYTLl4YdOQD3njiVN1e/E4=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.18 h1:xCeWVjj0ki0l3nruoyP2slHsGArMxeiiaoPN5QZH6YQ=
github.com/aws/aws-sdk-go-v2/internal/endpoints/v2 v2.7.18/go.mod h1:r/eLGuGCBw6l36ZRWiw6PaZwPXb6YOj+i/7MizNl5/k=
github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.33.0 h1:JzidOz4Hcn2RbP5fvIS1iAP+DcRv5VJtgixbEYDsI5g=
github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.33.0/go.mod h1:9A4/PJYlWjvjEzzoOLGQjkLt4bYK9fRWi7uz1GSsAcA=
github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.50.0 h1:TDKR8ACRw7G+GFaQlhoy6biu+8q6ZtSddQCy9avMdMI=
github.com/aws/aws-sdk-go-v2/service/bedrockruntime v1.50.0/go.mod h1:XlhOh5Ax/lesqN4aZCUgj9vVJed5VoXYHHFYGAlJEwU=
github.com/aws/smithy-go v1.22.5 h1:P9ATCXPMb2mPjYBgueqJNCA5S9UfktsW0tTxi+a7eqw=
github.com/aws/smithy-go v1.22.5/go.mod h1:t1ufH5HMublsJYulve2RKmHDC15xu1f26kHCp/HgceI=
github.com/aws/smithy-go v1.24.1 h1:VbyeNfmYkWoxMVpGUAbQumkODcYmfMRfZ8yQiH30SK0=
github.com/aws/smithy-go v1.24.1/go.mod h1:LEj2LM3rBRQJxPZTB4KuzZkaZYnZPnvgIhb4pu07mx0=
github.com/aws/smithy-go v1.24.2 h1:FzA3bu/nt/vDvmnkg+R8Xl46gmzEDam6mZ1hzmwXFng=
github.com/aws/smithy-go v1.24.2/go.mod h1:YE2RhdIuDbA5E5bTdciG9KrW3+TiEONeUWCqxX9i1Fc=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/boombuler/barcode v1.0.1-0.20190219062509-6c824513bacc/go.mod h1:paBWMcWSl3LHKBqUq+rly7CNSldXjb2rDl3JlRe0mD8=


@@ -121,6 +121,9 @@ func main() {
return a
}
// Channel upstream model update check task
controller.StartChannelUpstreamModelUpdateTask()
if common.IsMasterNode && constant.UpdateTask {
gopool.Go(func() {
controller.UpdateMidjourneyTaskBulk()


@@ -250,6 +250,10 @@ func InitLogDB() (err error) {
func migrateDB() error {
// Migrate price_amount column from float/double to decimal for existing tables
migrateSubscriptionPlanPriceAmount()
// Migrate model_limits column from varchar to text for existing tables
if err := migrateTokenModelLimitsToText(); err != nil {
return err
}
err := DB.AutoMigrate(
&Channel{},
@@ -445,6 +449,59 @@ PRIMARY KEY (` + "`id`" + `)
return nil
}
// migrateTokenModelLimitsToText migrates model_limits column from varchar(1024) to text
// This is safe to run multiple times - it checks the column type first
func migrateTokenModelLimitsToText() error {
// SQLite uses type affinity, so TEXT and VARCHAR are effectively the same — no migration needed
if common.UsingSQLite {
return nil
}
tableName := "tokens"
columnName := "model_limits"
if !DB.Migrator().HasTable(tableName) {
return nil
}
if !DB.Migrator().HasColumn(&Token{}, columnName) {
return nil
}
var alterSQL string
if common.UsingPostgreSQL {
var dataType string
if err := DB.Raw(`SELECT data_type FROM information_schema.columns
WHERE table_schema = current_schema() AND table_name = ? AND column_name = ?`,
tableName, columnName).Scan(&dataType).Error; err != nil {
common.SysLog(fmt.Sprintf("Warning: failed to query metadata for %s.%s: %v", tableName, columnName, err))
} else if dataType == "text" {
return nil
}
alterSQL = fmt.Sprintf(`ALTER TABLE %s ALTER COLUMN %s TYPE text`, tableName, columnName)
} else if common.UsingMySQL {
var columnType string
if err := DB.Raw(`SELECT COLUMN_TYPE FROM information_schema.columns
WHERE table_schema = DATABASE() AND table_name = ? AND column_name = ?`,
tableName, columnName).Scan(&columnType).Error; err != nil {
common.SysLog(fmt.Sprintf("Warning: failed to query metadata for %s.%s: %v", tableName, columnName, err))
} else if strings.ToLower(columnType) == "text" {
return nil
}
alterSQL = fmt.Sprintf("ALTER TABLE %s MODIFY COLUMN %s text", tableName, columnName)
} else {
return nil
}
if alterSQL != "" {
if err := DB.Exec(alterSQL).Error; err != nil {
return fmt.Errorf("failed to migrate %s.%s to text: %w", tableName, columnName, err)
}
common.SysLog(fmt.Sprintf("Successfully migrated %s.%s to text", tableName, columnName))
}
return nil
}
// migrateSubscriptionPlanPriceAmount migrates price_amount column from float/double to decimal(10,6)
// This is safe to run multiple times - it checks the column type first
func migrateSubscriptionPlanPriceAmount() {
@@ -471,9 +528,11 @@ func migrateSubscriptionPlanPriceAmount() {
if common.UsingPostgreSQL {
// PostgreSQL: Check if already decimal/numeric
var dataType string
DB.Raw(`SELECT data_type FROM information_schema.columns
WHERE table_name = ? AND column_name = ?`, tableName, columnName).Scan(&dataType)
if dataType == "numeric" {
if err := DB.Raw(`SELECT data_type FROM information_schema.columns
WHERE table_schema = current_schema() AND table_name = ? AND column_name = ?`,
tableName, columnName).Scan(&dataType).Error; err != nil {
common.SysLog(fmt.Sprintf("Warning: failed to query metadata for %s.%s: %v", tableName, columnName, err))
} else if dataType == "numeric" {
return // Already decimal/numeric
}
alterSQL = fmt.Sprintf(`ALTER TABLE %s ALTER COLUMN %s TYPE decimal(10,6) USING %s::decimal(10,6)`,
@@ -481,10 +540,11 @@ func migrateSubscriptionPlanPriceAmount() {
} else if common.UsingMySQL {
// MySQL: Check if already decimal
var columnType string
DB.Raw(`SELECT COLUMN_TYPE FROM information_schema.columns
WHERE table_schema = DATABASE() AND table_name = ? AND column_name = ?`,
tableName, columnName).Scan(&columnType)
if strings.HasPrefix(strings.ToLower(columnType), "decimal") {
if err := DB.Raw(`SELECT COLUMN_TYPE FROM information_schema.columns
WHERE table_schema = DATABASE() AND table_name = ? AND column_name = ?`,
tableName, columnName).Scan(&columnType).Error; err != nil {
common.SysLog(fmt.Sprintf("Warning: failed to query metadata for %s.%s: %v", tableName, columnName, err))
} else if strings.HasPrefix(strings.ToLower(columnType), "decimal") {
return // Already decimal
}
alterSQL = fmt.Sprintf("ALTER TABLE %s MODIFY COLUMN %s decimal(10,6) NOT NULL DEFAULT 0",


@@ -23,7 +23,7 @@ type Token struct {
RemainQuota int `json:"remain_quota" gorm:"default:0"`
UnlimitedQuota bool `json:"unlimited_quota"`
ModelLimitsEnabled bool `json:"model_limits_enabled"`
ModelLimits string `json:"model_limits" gorm:"type:varchar(1024);default:''"`
ModelLimits string `json:"model_limits" gorm:"type:text"`
AllowIps *string `json:"allow_ips" gorm:"default:''"`
UsedQuota int `json:"used_quota" gorm:"default:0"` // used quota
Group string `json:"group" gorm:"default:''"`


@@ -100,6 +100,9 @@ func getHeaderPassthroughRegex(pattern string) (*regexp.Regexp, error) {
return compiled, nil
}
func IsHeaderPassthroughRuleKey(key string) bool {
return isHeaderPassthroughRuleKey(key)
}
func isHeaderPassthroughRuleKey(key string) bool {
key = strings.TrimSpace(key)
if key == "" {
@@ -267,6 +270,10 @@ func processHeaderOverride(info *common.RelayInfo, c *gin.Context) (map[string]s
return headerOverride, nil
}
func ResolveHeaderOverride(info *common.RelayInfo, c *gin.Context) (map[string]string, error) {
return processHeaderOverride(info, c)
}
func applyHeaderOverrideToRequest(req *http.Request, headerOverride map[string]string) {
if req == nil {
return


@@ -27,6 +27,7 @@ type AwsClaudeRequest struct {
ToolChoice any `json:"tool_choice,omitempty"`
Thinking *dto.Thinking `json:"thinking,omitempty"`
OutputConfig json.RawMessage `json:"output_config,omitempty"`
//Metadata json.RawMessage `json:"metadata,omitempty"`
}
func formatRequest(requestBody io.Reader, requestHeader http.Header) (*AwsClaudeRequest, error) {


@@ -11,6 +11,7 @@ import (
"github.com/QuantumNous/new-api/common"
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/relay/channel"
"github.com/QuantumNous/new-api/relay/channel/claude"
relaycommon "github.com/QuantumNous/new-api/relay/common"
"github.com/QuantumNous/new-api/relay/helper"
@@ -106,6 +107,13 @@ func doAwsClientRequest(c *gin.Context, info *relaycommon.RelayInfo, a *Adaptor,
// initialize an empty request header
requestHeader := http.Header{}
a.SetupRequestHeader(c, &requestHeader, info)
headerOverride, err := channel.ResolveHeaderOverride(info, c)
if err != nil {
return nil, err
}
for key, value := range headerOverride {
requestHeader.Set(key, value)
}
if isNovaModel(awsModelId) {
var novaReq *NovaRequest

View File

@@ -0,0 +1,55 @@
package aws
import (
"bytes"
"net/http"
"net/http/httptest"
"testing"
"github.com/QuantumNous/new-api/common"
relaycommon "github.com/QuantumNous/new-api/relay/common"
"github.com/aws/aws-sdk-go-v2/service/bedrockruntime"
"github.com/gin-gonic/gin"
"github.com/stretchr/testify/require"
)
func TestDoAwsClientRequest_AppliesRuntimeHeaderOverrideToAnthropicBeta(t *testing.T) {
t.Parallel()
gin.SetMode(gin.TestMode)
recorder := httptest.NewRecorder()
ctx, _ := gin.CreateTestContext(recorder)
ctx.Request = httptest.NewRequest(http.MethodPost, "/v1/messages", nil)
info := &relaycommon.RelayInfo{
OriginModelName: "claude-3-5-sonnet-20240620",
IsStream: false,
UseRuntimeHeadersOverride: true,
RuntimeHeadersOverride: map[string]any{
"anthropic-beta": "computer-use-2025-01-24",
},
ChannelMeta: &relaycommon.ChannelMeta{
ApiKey: "access-key|secret-key|us-east-1",
UpstreamModelName: "claude-3-5-sonnet-20240620",
},
}
requestBody := bytes.NewBufferString(`{"messages":[{"role":"user","content":"hello"}],"max_tokens":128}`)
adaptor := &Adaptor{}
_, err := doAwsClientRequest(ctx, info, adaptor, requestBody)
require.NoError(t, err)
awsReq, ok := adaptor.AwsReq.(*bedrockruntime.InvokeModelInput)
require.True(t, ok)
var payload map[string]any
require.NoError(t, common.Unmarshal(awsReq.Body, &payload))
anthropicBeta, exists := payload["anthropic_beta"]
require.True(t, exists)
values, ok := anthropicBeta.([]any)
require.True(t, ok)
require.Equal(t, []any{"computer-use-2025-01-24"}, values)
}

View File

@@ -59,7 +59,7 @@ func (a *Adaptor) ConvertAudioRequest(c *gin.Context, info *relaycommon.RelayInf
func (a *Adaptor) ConvertImageRequest(c *gin.Context, info *relaycommon.RelayInfo, request dto.ImageRequest) (any, error) {
if !strings.HasPrefix(info.UpstreamModelName, "imagen") {
return nil, errors.New("not supported model for image generation")
return nil, errors.New("unsupported model for image generation; only imagen models are supported")
}
// convert size to aspect ratio but allow user to specify aspect ratio

View File

@@ -298,6 +298,7 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
}
reasoning := openrouter.RequestReasoning{
Enabled: true,
MaxTokens: *thinking.BudgetTokens,
}

View File

@@ -3,6 +3,7 @@ package openrouter
import "encoding/json"
type RequestReasoning struct {
Enabled bool `json:"enabled"`
// One of the following (not both):
Effort string `json:"effort,omitempty"` // Can be "high", "medium", or "low" (OpenAI-style)
MaxTokens int `json:"max_tokens,omitempty"` // Specific token limit (Anthropic-style)

View File

@@ -20,6 +20,7 @@ type VertexAIClaudeRequest struct {
ToolChoice any `json:"tool_choice,omitempty"`
Thinking *dto.Thinking `json:"thinking,omitempty"`
OutputConfig json.RawMessage `json:"output_config,omitempty"`
//Metadata json.RawMessage `json:"metadata,omitempty"`
}
func copyRequest(req *dto.ClaudeRequest, version string) *VertexAIClaudeRequest {

View File

@@ -5,6 +5,7 @@ import (
"fmt"
"net/http"
"regexp"
"sort"
"strconv"
"strings"
@@ -120,8 +121,18 @@ func ApplyParamOverride(jsonData []byte, paramOverride map[string]interface{}, c
// try to interpret the override as the operations format
if operations, ok := tryParseOperations(paramOverride); ok {
legacyOverride := buildLegacyParamOverride(paramOverride)
workingJSON := jsonData
var err error
if len(legacyOverride) > 0 {
workingJSON, err = applyOperationsLegacy(workingJSON, legacyOverride)
if err != nil {
return nil, err
}
}
// apply the operations-based method
result, err := applyOperations(string(jsonData), operations, conditionContext)
result, err := applyOperations(string(workingJSON), operations, conditionContext)
return []byte(result), err
}
@@ -129,6 +140,20 @@ func ApplyParamOverride(jsonData []byte, paramOverride map[string]interface{}, c
return applyOperationsLegacy(jsonData, paramOverride)
}
func buildLegacyParamOverride(paramOverride map[string]interface{}) map[string]interface{} {
if len(paramOverride) == 0 {
return nil
}
legacy := make(map[string]interface{}, len(paramOverride))
for key, value := range paramOverride {
if strings.EqualFold(strings.TrimSpace(key), "operations") {
continue
}
legacy[key] = value
}
return legacy
}
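The hunk above splits a mixed override into a legacy part (plain keys) and an operations part, applying the legacy merge first so that operations run afterwards and win on conflicting paths. A minimal standalone sketch of that ordering (the helper names here are illustrative, not the repository's API):

```go
package main

import "fmt"

// applyMixed sketches the order ApplyParamOverride uses for mixed
// overrides: non-"operations" keys are merged into the body first,
// then each operation runs, so operations see (and can override)
// the legacy results.
func applyMixed(doc, override map[string]interface{}, ops []func(map[string]interface{})) map[string]interface{} {
	for k, v := range override {
		if k == "operations" {
			continue // handled by the operations pass below
		}
		doc[k] = v // legacy pass: plain keys overwrite the request body
	}
	for _, op := range ops {
		op(doc) // operations pass: runs second, so it wins on conflicts
	}
	return doc
}

func main() {
	doc := map[string]interface{}{"model": "openai/gpt-4", "temperature": 0.7}
	override := map[string]interface{}{"temperature": 0.2, "model": "legacy-model"}
	setModel := func(d map[string]interface{}) { d["model"] = "op-model" }
	out := applyMixed(doc, override, []func(map[string]interface{}){setModel})
	fmt.Println(out["model"], out["temperature"]) // prints op-model 0.2
}
```

This mirrors the behavior pinned down by `TestApplyParamOverrideMixedLegacyAndOperationsConflictPrefersOperations` below: the legacy `"model": "legacy-model"` is overwritten by the `set` operation.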
func ApplyParamOverrideWithRelayInfo(jsonData []byte, info *RelayInfo) ([]byte, error) {
paramOverride := getParamOverrideMap(info)
if len(paramOverride) == 0 {
@@ -463,15 +488,35 @@ func applyOperations(jsonStr string, operations []ParamOperation, conditionConte
}
// resolve negative indexes in the path
opPath := processNegativeIndex(result, op.Path)
var opPaths []string
if isPathBasedOperation(op.Mode) {
opPaths, err = resolveOperationPaths(result, opPath)
if err != nil {
return "", err
}
if len(opPaths) == 0 {
continue
}
}
switch op.Mode {
case "delete":
result, err = sjson.Delete(result, opPath)
case "set":
if op.KeepOrigin && gjson.Get(result, opPath).Exists() {
continue
for _, path := range opPaths {
result, err = deleteValue(result, path)
if err != nil {
break
}
}
case "set":
for _, path := range opPaths {
if op.KeepOrigin && gjson.Get(result, path).Exists() {
continue
}
result, err = sjson.Set(result, path, op.Value)
if err != nil {
break
}
}
result, err = sjson.Set(result, opPath, op.Value)
case "move":
opFrom := processNegativeIndex(result, op.From)
opTo := processNegativeIndex(result, op.To)
@@ -484,27 +529,82 @@ func applyOperations(jsonStr string, operations []ParamOperation, conditionConte
opTo := processNegativeIndex(result, op.To)
result, err = copyValue(result, opFrom, opTo)
case "prepend":
result, err = modifyValue(result, opPath, op.Value, op.KeepOrigin, true)
for _, path := range opPaths {
result, err = modifyValue(result, path, op.Value, op.KeepOrigin, true)
if err != nil {
break
}
}
case "append":
result, err = modifyValue(result, opPath, op.Value, op.KeepOrigin, false)
for _, path := range opPaths {
result, err = modifyValue(result, path, op.Value, op.KeepOrigin, false)
if err != nil {
break
}
}
case "trim_prefix":
result, err = trimStringValue(result, opPath, op.Value, true)
for _, path := range opPaths {
result, err = trimStringValue(result, path, op.Value, true)
if err != nil {
break
}
}
case "trim_suffix":
result, err = trimStringValue(result, opPath, op.Value, false)
for _, path := range opPaths {
result, err = trimStringValue(result, path, op.Value, false)
if err != nil {
break
}
}
case "ensure_prefix":
result, err = ensureStringAffix(result, opPath, op.Value, true)
for _, path := range opPaths {
result, err = ensureStringAffix(result, path, op.Value, true)
if err != nil {
break
}
}
case "ensure_suffix":
result, err = ensureStringAffix(result, opPath, op.Value, false)
for _, path := range opPaths {
result, err = ensureStringAffix(result, path, op.Value, false)
if err != nil {
break
}
}
case "trim_space":
result, err = transformStringValue(result, opPath, strings.TrimSpace)
for _, path := range opPaths {
result, err = transformStringValue(result, path, strings.TrimSpace)
if err != nil {
break
}
}
case "to_lower":
result, err = transformStringValue(result, opPath, strings.ToLower)
for _, path := range opPaths {
result, err = transformStringValue(result, path, strings.ToLower)
if err != nil {
break
}
}
case "to_upper":
result, err = transformStringValue(result, opPath, strings.ToUpper)
for _, path := range opPaths {
result, err = transformStringValue(result, path, strings.ToUpper)
if err != nil {
break
}
}
case "replace":
result, err = replaceStringValue(result, opPath, op.From, op.To)
for _, path := range opPaths {
result, err = replaceStringValue(result, path, op.From, op.To)
if err != nil {
break
}
}
case "regex_replace":
result, err = regexReplaceStringValue(result, opPath, op.From, op.To)
for _, path := range opPaths {
result, err = regexReplaceStringValue(result, path, op.From, op.To)
if err != nil {
break
}
}
case "return_error":
returnErr, parseErr := parseParamOverrideReturnError(op.Value)
if parseErr != nil {
@@ -512,7 +612,12 @@ func applyOperations(jsonStr string, operations []ParamOperation, conditionConte
}
return "", returnErr
case "prune_objects":
result, err = pruneObjects(result, opPath, contextJSON, op.Value)
for _, path := range opPaths {
result, err = pruneObjects(result, path, contextJSON, op.Value)
if err != nil {
break
}
}
case "set_header":
err = setHeaderOverrideInContext(context, op.Path, op.Value, op.KeepOrigin)
if err == nil {
@@ -1150,6 +1255,92 @@ func copyValue(jsonStr, fromPath, toPath string) (string, error) {
return sjson.Set(jsonStr, toPath, sourceValue.Value())
}
func isPathBasedOperation(mode string) bool {
switch mode {
case "delete", "set", "prepend", "append", "trim_prefix", "trim_suffix", "ensure_prefix", "ensure_suffix", "trim_space", "to_lower", "to_upper", "replace", "regex_replace", "prune_objects":
return true
default:
return false
}
}
func resolveOperationPaths(jsonStr, path string) ([]string, error) {
if !strings.Contains(path, "*") {
return []string{path}, nil
}
return expandWildcardPaths(jsonStr, path)
}
func expandWildcardPaths(jsonStr, path string) ([]string, error) {
var root interface{}
if err := common.Unmarshal([]byte(jsonStr), &root); err != nil {
return nil, err
}
segments := strings.Split(path, ".")
paths := collectWildcardPaths(root, segments, nil)
return lo.Uniq(paths), nil
}
func collectWildcardPaths(node interface{}, segments []string, prefix []string) []string {
if len(segments) == 0 {
return []string{strings.Join(prefix, ".")}
}
segment := strings.TrimSpace(segments[0])
if segment == "" {
return nil
}
isLast := len(segments) == 1
if segment == "*" {
switch typed := node.(type) {
case map[string]interface{}:
keys := lo.Keys(typed)
sort.Strings(keys)
return lo.FlatMap(keys, func(key string, _ int) []string {
return collectWildcardPaths(typed[key], segments[1:], append(prefix, key))
})
case []interface{}:
return lo.FlatMap(lo.Range(len(typed)), func(index int, _ int) []string {
return collectWildcardPaths(typed[index], segments[1:], append(prefix, strconv.Itoa(index)))
})
default:
return nil
}
}
switch typed := node.(type) {
case map[string]interface{}:
if isLast {
return []string{strings.Join(append(prefix, segment), ".")}
}
next, exists := typed[segment]
if !exists {
return nil
}
return collectWildcardPaths(next, segments[1:], append(prefix, segment))
case []interface{}:
index, err := strconv.Atoi(segment)
if err != nil || index < 0 || index >= len(typed) {
return nil
}
if isLast {
return []string{strings.Join(append(prefix, segment), ".")}
}
return collectWildcardPaths(typed[index], segments[1:], append(prefix, segment))
default:
return nil
}
}
func deleteValue(jsonStr, path string) (string, error) {
if strings.TrimSpace(path) == "" {
return jsonStr, nil
}
return sjson.Delete(jsonStr, path)
}
func modifyValue(jsonStr, path string, value interface{}, keepOrigin, isPrepend bool) (string, error) {
current := gjson.Get(jsonStr, path)
switch {

View File

@@ -2,6 +2,7 @@ package common
import (
"encoding/json"
"fmt"
"reflect"
"testing"
@@ -9,6 +10,7 @@ import (
"github.com/QuantumNous/new-api/dto"
"github.com/QuantumNous/new-api/setting/model_setting"
"github.com/samber/lo"
)
func TestApplyParamOverrideTrimPrefix(t *testing.T) {
@@ -74,6 +76,48 @@ func TestApplyParamOverrideTrimNoop(t *testing.T) {
assertJSONEqual(t, `{"model":"gpt-4","temperature":0.7}`, string(out))
}
func TestApplyParamOverrideMixedLegacyAndOperations(t *testing.T) {
input := []byte(`{"model":"openai/gpt-4","temperature":0.7}`)
override := map[string]interface{}{
"temperature": 0.2,
"top_p": 0.95,
"operations": []interface{}{
map[string]interface{}{
"path": "model",
"mode": "trim_prefix",
"value": "openai/",
},
},
}
out, err := ApplyParamOverride(input, override, nil)
if err != nil {
t.Fatalf("ApplyParamOverride returned error: %v", err)
}
assertJSONEqual(t, `{"model":"gpt-4","temperature":0.2,"top_p":0.95}`, string(out))
}
func TestApplyParamOverrideMixedLegacyAndOperationsConflictPrefersOperations(t *testing.T) {
input := []byte(`{"model":"openai/gpt-4","temperature":0.7}`)
override := map[string]interface{}{
"model": "legacy-model",
"temperature": 0.2,
"operations": []interface{}{
map[string]interface{}{
"path": "model",
"mode": "set",
"value": "op-model",
},
},
}
out, err := ApplyParamOverride(input, override, nil)
if err != nil {
t.Fatalf("ApplyParamOverride returned error: %v", err)
}
assertJSONEqual(t, `{"model":"op-model","temperature":0.2}`, string(out))
}
func TestApplyParamOverrideTrimRequiresValue(t *testing.T) {
// trim_prefix requires a value, for example:
// {"operations":[{"path":"model","mode":"trim_prefix"}]}
@@ -200,6 +244,224 @@ func TestApplyParamOverrideDelete(t *testing.T) {
}
}
func TestApplyParamOverrideDeleteWildcardPath(t *testing.T) {
input := []byte(`{"tools":[{"type":"bash","custom":{"input_examples":["a"],"other":1}},{"type":"code","custom":{"input_examples":["b"]}},{"type":"noop","custom":{"other":2}}]}`)
override := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"path": "tools.*.custom.input_examples",
"mode": "delete",
},
},
}
out, err := ApplyParamOverride(input, override, nil)
if err != nil {
t.Fatalf("ApplyParamOverride returned error: %v", err)
}
assertJSONEqual(t, `{"tools":[{"type":"bash","custom":{"other":1}},{"type":"code","custom":{}},{"type":"noop","custom":{"other":2}}]}`, string(out))
}
func TestApplyParamOverrideSetWildcardPath(t *testing.T) {
input := []byte(`{"tools":[{"custom":{"tag":"A"}},{"custom":{"tag":"B"}},{"custom":{"tag":"C"}}]}`)
override := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"path": "tools.*.custom.enabled",
"mode": "set",
"value": true,
},
},
}
out, err := ApplyParamOverride(input, override, nil)
if err != nil {
t.Fatalf("ApplyParamOverride returned error: %v", err)
}
var got struct {
Tools []struct {
Custom struct {
Enabled bool `json:"enabled"`
} `json:"custom"`
} `json:"tools"`
}
if err := json.Unmarshal(out, &got); err != nil {
t.Fatalf("failed to unmarshal output JSON: %v", err)
}
if !lo.EveryBy(got.Tools, func(item struct {
Custom struct {
Enabled bool `json:"enabled"`
} `json:"custom"`
}) bool {
return item.Custom.Enabled
}) {
t.Fatalf("expected wildcard set to enable all tools, got: %s", string(out))
}
}
func TestApplyParamOverrideTrimSpaceWildcardPath(t *testing.T) {
input := []byte(`{"tools":[{"custom":{"name":" alpha "}},{"custom":{"name":" beta"}},{"custom":{"name":"gamma "}}]}`)
override := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"path": "tools.*.custom.name",
"mode": "trim_space",
},
},
}
out, err := ApplyParamOverride(input, override, nil)
if err != nil {
t.Fatalf("ApplyParamOverride returned error: %v", err)
}
var got struct {
Tools []struct {
Custom struct {
Name string `json:"name"`
} `json:"custom"`
} `json:"tools"`
}
if err := json.Unmarshal(out, &got); err != nil {
t.Fatalf("failed to unmarshal output JSON: %v", err)
}
names := lo.Map(got.Tools, func(item struct {
Custom struct {
Name string `json:"name"`
} `json:"custom"`
}, _ int) string {
return item.Custom.Name
})
if !reflect.DeepEqual(names, []string{"alpha", "beta", "gamma"}) {
t.Fatalf("unexpected names after wildcard trim_space: %v", names)
}
}
func TestApplyParamOverrideDeleteWildcardEqualsIndexedPaths(t *testing.T) {
input := []byte(`{"tools":[{"custom":{"input_examples":["a"],"other":1}},{"custom":{"input_examples":["b"],"other":2}},{"custom":{"input_examples":["c"],"other":3}}]}`)
wildcardOverride := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"path": "tools.*.custom.input_examples",
"mode": "delete",
},
},
}
indexedOverride := map[string]interface{}{
"operations": lo.Map(lo.Range(3), func(index int, _ int) interface{} {
return map[string]interface{}{
"path": fmt.Sprintf("tools.%d.custom.input_examples", index),
"mode": "delete",
}
}),
}
wildcardOut, err := ApplyParamOverride(input, wildcardOverride, nil)
if err != nil {
t.Fatalf("wildcard ApplyParamOverride returned error: %v", err)
}
indexedOut, err := ApplyParamOverride(input, indexedOverride, nil)
if err != nil {
t.Fatalf("indexed ApplyParamOverride returned error: %v", err)
}
assertJSONEqual(t, string(indexedOut), string(wildcardOut))
}
func TestApplyParamOverrideSetWildcardKeepOrigin(t *testing.T) {
input := []byte(`{"tools":[{"custom":{"tag":"A"}},{"custom":{"tag":"B","enabled":false}},{"custom":{"tag":"C"}}]}`)
override := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"path": "tools.*.custom.enabled",
"mode": "set",
"value": true,
"keep_origin": true,
},
},
}
out, err := ApplyParamOverride(input, override, nil)
if err != nil {
t.Fatalf("ApplyParamOverride returned error: %v", err)
}
var got struct {
Tools []struct {
Custom struct {
Enabled bool `json:"enabled"`
} `json:"custom"`
} `json:"tools"`
}
if err := json.Unmarshal(out, &got); err != nil {
t.Fatalf("failed to unmarshal output JSON: %v", err)
}
enabledValues := lo.Map(got.Tools, func(item struct {
Custom struct {
Enabled bool `json:"enabled"`
} `json:"custom"`
}, _ int) bool {
return item.Custom.Enabled
})
if !reflect.DeepEqual(enabledValues, []bool{true, false, true}) {
t.Fatalf("unexpected enabled values after wildcard keep_origin set: %v", enabledValues)
}
}
func TestApplyParamOverrideTrimSpaceMultiWildcardPath(t *testing.T) {
input := []byte(`{"tools":[{"custom":{"items":[{"name":" alpha "},{"name":" beta "}]}},{"custom":{"items":[{"name":" gamma"}]}}]}`)
override := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"path": "tools.*.custom.items.*.name",
"mode": "trim_space",
},
},
}
out, err := ApplyParamOverride(input, override, nil)
if err != nil {
t.Fatalf("ApplyParamOverride returned error: %v", err)
}
var got struct {
Tools []struct {
Custom struct {
Items []struct {
Name string `json:"name"`
} `json:"items"`
} `json:"custom"`
} `json:"tools"`
}
if err := json.Unmarshal(out, &got); err != nil {
t.Fatalf("failed to unmarshal output JSON: %v", err)
}
names := lo.FlatMap(got.Tools, func(tool struct {
Custom struct {
Items []struct {
Name string `json:"name"`
} `json:"items"`
} `json:"custom"`
}, _ int) []string {
return lo.Map(tool.Custom.Items, func(item struct {
Name string `json:"name"`
}, _ int) string {
return item.Name
})
})
if !reflect.DeepEqual(names, []string{"alpha", "beta", "gamma"}) {
t.Fatalf("unexpected names after multi wildcard trim_space: %v", names)
}
}
func TestApplyParamOverrideSet(t *testing.T) {
input := []byte(`{"model":"gpt-4","temperature":0.7}`)
override := map[string]interface{}{
@@ -219,6 +481,42 @@ func TestApplyParamOverrideSet(t *testing.T) {
assertJSONEqual(t, `{"model":"gpt-4","temperature":0.1}`, string(out))
}
func TestApplyParamOverrideSetWithDescriptionKeepsCompatibility(t *testing.T) {
input := []byte(`{"model":"gpt-4","temperature":0.7}`)
overrideWithoutDesc := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"path": "temperature",
"mode": "set",
"value": 0.1,
},
},
}
overrideWithDesc := map[string]interface{}{
"operations": []interface{}{
map[string]interface{}{
"description": "set temperature for deterministic output",
"path": "temperature",
"mode": "set",
"value": 0.1,
},
},
}
outWithoutDesc, err := ApplyParamOverride(input, overrideWithoutDesc, nil)
if err != nil {
t.Fatalf("ApplyParamOverride without description returned error: %v", err)
}
outWithDesc, err := ApplyParamOverride(input, overrideWithDesc, nil)
if err != nil {
t.Fatalf("ApplyParamOverride with description returned error: %v", err)
}
assertJSONEqual(t, string(outWithoutDesc), string(outWithDesc))
assertJSONEqual(t, `{"model":"gpt-4","temperature":0.1}`, string(outWithDesc))
}
func TestApplyParamOverrideSetKeepOrigin(t *testing.T) {
input := []byte(`{"model":"gpt-4","temperature":0.7}`)
override := map[string]interface{}{
@@ -1429,6 +1727,44 @@ func TestApplyParamOverrideWithRelayInfoSyncRuntimeHeaders(t *testing.T) {
}
}
func TestApplyParamOverrideWithRelayInfoMixedLegacyAndOperations(t *testing.T) {
info := &RelayInfo{
RequestHeaders: map[string]string{
"Originator": "Codex CLI",
},
ChannelMeta: &ChannelMeta{
ParamOverride: map[string]interface{}{
"temperature": 0.2,
"operations": []interface{}{
map[string]interface{}{
"mode": "pass_headers",
"value": []interface{}{"Originator"},
},
},
},
HeadersOverride: map[string]interface{}{
"X-Static": "legacy-static",
},
},
}
out, err := ApplyParamOverrideWithRelayInfo([]byte(`{"model":"gpt-5","temperature":0.7}`), info)
if err != nil {
t.Fatalf("ApplyParamOverrideWithRelayInfo returned error: %v", err)
}
assertJSONEqual(t, `{"model":"gpt-5","temperature":0.2}`, string(out))
if !info.UseRuntimeHeadersOverride {
t.Fatalf("expected runtime header override to be enabled")
}
if info.RuntimeHeadersOverride["x-static"] != "legacy-static" {
t.Fatalf("expected x-static to be preserved, got: %v", info.RuntimeHeadersOverride["x-static"])
}
if info.RuntimeHeadersOverride["originator"] != "Codex CLI" {
t.Fatalf("expected originator header to be passed, got: %v", info.RuntimeHeadersOverride["originator"])
}
}
func TestApplyParamOverrideWithRelayInfoMoveAndCopyHeaders(t *testing.T) {
info := &RelayInfo{
ChannelMeta: &ChannelMeta{

View File

@@ -140,18 +140,31 @@ func ModelPriceHelper(c *gin.Context, info *relaycommon.RelayInfo, promptTokens
}
// ModelPriceHelperPerCall is the per-call pricing helper (MJ, Task)
func ModelPriceHelperPerCall(c *gin.Context, info *relaycommon.RelayInfo) types.PriceData {
func ModelPriceHelperPerCall(c *gin.Context, info *relaycommon.RelayInfo) (types.PriceData, error) {
groupRatioInfo := HandleGroupRatio(c, info)
modelPrice, success := ratio_setting.GetModelPrice(info.OriginModelName, true)
// if no price is configured, use the default price
// if no price is configured, check the model ratio configuration
if !success {
// fall back to a default price so ratio-billed models remain usable
defaultPrice, ok := ratio_setting.GetDefaultModelPriceMap()[info.OriginModelName]
if !ok {
modelPrice = 0.1
} else {
if ok {
modelPrice = defaultPrice
} else {
// no ratio configured either, and unset ratios are not accepted, so return an error
_, ratioSuccess, matchName := ratio_setting.GetModelRatio(info.OriginModelName)
acceptUnsetRatio := info.UserSetting.AcceptUnsetRatioModel
if !ratioSuccess && !acceptUnsetRatio {
return types.PriceData{}, fmt.Errorf("model %s ratio or price not configured; please contact the administrator to set it or enable self-use mode", matchName)
}
// price not configured but a ratio is, so use the default pre-consumed price
modelPrice = float64(common.PreConsumedQuota) / common.QuotaPerUnit
}
}
quota := int(modelPrice * common.QuotaPerUnit * groupRatioInfo.GroupRatio)
@@ -170,7 +183,7 @@ func ModelPriceHelperPerCall(c *gin.Context, info *relaycommon.RelayInfo) types.
Quota: quota,
GroupRatioInfo: groupRatioInfo,
}
return priceData
return priceData, nil
}
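The fallback order in the revised `ModelPriceHelperPerCall` can be reduced to a small decision function. A sketch under illustrative names (none of these are the repository's real signatures): explicit price, then default-price map, then a ratio-backed pre-consumed price, with an error only when neither a ratio nor self-use mode covers the model.

```go
package main

import "fmt"

// resolvePerCallPrice sketches the per-call pricing fallback chain:
//  1. an explicitly configured price wins;
//  2. otherwise a default-price map entry is used;
//  3. otherwise a configured ratio (or self-use mode) falls back to the
//     pre-consumed price, computed as preConsumedQuota / quotaPerUnit;
//  4. with nothing configured and self-use disabled, it is an error.
func resolvePerCallPrice(price float64, hasPrice bool,
	defaultPrice float64, hasDefault bool,
	hasRatio, acceptUnsetRatio bool,
	preConsumedQuota, quotaPerUnit float64) (float64, error) {
	if hasPrice {
		return price, nil
	}
	if hasDefault {
		return defaultPrice, nil
	}
	if !hasRatio && !acceptUnsetRatio {
		return 0, fmt.Errorf("model ratio or price not configured")
	}
	return preConsumedQuota / quotaPerUnit, nil
}

func main() {
	// ratio configured but no price: falls through to pre-consumed pricing
	p, err := resolvePerCallPrice(0, false, 0, false, true, false, 500, 500000)
	fmt.Println(p, err) // prints 0.001 <nil>
}
```

Because the function now returns an error instead of silently defaulting to `0.1`, every caller (SwapFace, Midjourney submit, task submit) has to handle the error, which is exactly what the hunks further down do.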
func ContainPriceOrRatio(modelName string) bool {

View File

@@ -229,7 +229,7 @@ func GetAndValidOpenAIImageRequest(c *gin.Context, relayMode int) (*dto.ImageReq
func GetAndValidateClaudeRequest(c *gin.Context) (textRequest *dto.ClaudeRequest, err error) {
textRequest = &dto.ClaudeRequest{}
err = c.ShouldBindJSON(textRequest)
err = common.UnmarshalBodyReusable(c, textRequest)
if err != nil {
return nil, err
}

View File

@@ -186,7 +186,13 @@ func RelaySwapFace(c *gin.Context, info *relaycommon.RelayInfo) *dto.MidjourneyR
}
modelName := service.CovertMjpActionToModelName(constant.MjActionSwapFace)
priceData := helper.ModelPriceHelperPerCall(c, info)
priceData, err := helper.ModelPriceHelperPerCall(c, info)
if err != nil {
return &dto.MidjourneyResponse{
Code: 4,
Description: err.Error(),
}
}
userQuota, err := model.GetUserQuota(info.UserId, false)
if err != nil {
@@ -487,7 +493,13 @@ func RelayMidjourneySubmit(c *gin.Context, relayInfo *relaycommon.RelayInfo) *dt
modelName := service.CovertMjpActionToModelName(midjRequest.Action)
priceData := helper.ModelPriceHelperPerCall(c, relayInfo)
priceData, err := helper.ModelPriceHelperPerCall(c, relayInfo)
if err != nil {
return &dto.MidjourneyResponse{
Code: 4,
Description: err.Error(),
}
}
userQuota, err := model.GetUserQuota(relayInfo.UserId, false)
if err != nil {

View File

@@ -41,6 +41,8 @@ func ResolveOriginTask(c *gin.Context, info *relaycommon.RelayInfo) *dto.TaskErr
if strings.Contains(path, "/v1/videos/") && strings.HasSuffix(path, "/remix") {
info.Action = constant.TaskActionRemix
}
// extract the video_id for remix tasks
if info.Action == constant.TaskActionRemix {
videoID := c.Param("video_id")
if strings.TrimSpace(videoID) == "" {
@@ -176,7 +178,11 @@ func RelayTaskSubmit(c *gin.Context, info *relaycommon.RelayInfo) (*TaskSubmitRe
// 4. price calculation: base model price
info.OriginModelName = modelName
info.PriceData = helper.ModelPriceHelperPerCall(c, info)
priceData, err := helper.ModelPriceHelperPerCall(c, info)
if err != nil {
return nil, service.TaskErrorWrapper(err, "model_price_error", http.StatusBadRequest)
}
info.PriceData = priceData
// 5. billing estimation: let the adapter supply OtherRatios (duration, resolution, etc.) from the user request
// must be called after ModelPriceHelperPerCall (which rebuilds PriceData)

View File

@@ -237,6 +237,10 @@ func SetApiRouter(router *gin.Engine) {
channelRoute.GET("/tag/models", controller.GetTagModels)
channelRoute.POST("/copy/:id", controller.CopyChannel)
channelRoute.POST("/multi_key/manage", controller.ManageMultiKeys)
channelRoute.POST("/upstream_updates/apply", controller.ApplyChannelUpstreamModelUpdates)
channelRoute.POST("/upstream_updates/apply_all", controller.ApplyAllChannelUpstreamModelUpdates)
channelRoute.POST("/upstream_updates/detect", controller.DetectChannelUpstreamModelUpdates)
channelRoute.POST("/upstream_updates/detect_all", controller.DetectAllChannelUpstreamModelUpdates)
}
tokenRoute := apiRouter.Group("/token")
tokenRoute.Use(middleware.UserAuth())

View File

@@ -436,11 +436,46 @@ func mergeChannelOverride(base map[string]interface{}, tpl map[string]interface{
}
out := cloneStringAnyMap(base)
for k, v := range tpl {
if strings.EqualFold(strings.TrimSpace(k), "operations") {
baseOps, hasBaseOps := extractParamOperations(out[k])
tplOps, hasTplOps := extractParamOperations(v)
if hasTplOps {
if hasBaseOps {
out[k] = append(tplOps, baseOps...)
} else {
out[k] = tplOps
}
continue
}
}
if _, exists := out[k]; exists {
continue
}
out[k] = v
}
return out
}
func extractParamOperations(value interface{}) ([]interface{}, bool) {
switch ops := value.(type) {
case []interface{}:
if len(ops) == 0 {
return []interface{}{}, true
}
cloned := make([]interface{}, 0, len(ops))
cloned = append(cloned, ops...)
return cloned, true
case []map[string]interface{}:
cloned := make([]interface{}, 0, len(ops))
for _, op := range ops {
cloned = append(cloned, op)
}
return cloned, true
default:
return nil, false
}
}
func appendChannelAffinityTemplateAdminInfo(c *gin.Context, meta channelAffinityMeta) {
if c == nil {
return

View File

@@ -56,7 +56,7 @@ func TestApplyChannelAffinityOverrideTemplate_MergeTemplate(t *testing.T) {
merged, applied := ApplyChannelAffinityOverrideTemplate(ctx, base)
require.True(t, applied)
require.Equal(t, 0.2, merged["temperature"])
require.Equal(t, 0.7, merged["temperature"])
require.Equal(t, 0.95, merged["top_p"])
require.Equal(t, 2000, merged["max_tokens"])
require.Equal(t, 0.7, base["temperature"])
@@ -74,6 +74,48 @@ func TestApplyChannelAffinityOverrideTemplate_MergeTemplate(t *testing.T) {
require.EqualValues(t, 2, overrideInfo["param_override_keys"])
}
func TestApplyChannelAffinityOverrideTemplate_MergeOperations(t *testing.T) {
ctx := buildChannelAffinityTemplateContextForTest(channelAffinityMeta{
RuleName: "rule-with-ops-template",
ParamTemplate: map[string]interface{}{
"operations": []map[string]interface{}{
{
"mode": "pass_headers",
"value": []string{"Originator"},
},
},
},
})
base := map[string]interface{}{
"temperature": 0.7,
"operations": []map[string]interface{}{
{
"path": "model",
"mode": "trim_prefix",
"value": "openai/",
},
},
}
merged, applied := ApplyChannelAffinityOverrideTemplate(ctx, base)
require.True(t, applied)
require.Equal(t, 0.7, merged["temperature"])
opsAny, ok := merged["operations"]
require.True(t, ok)
ops, ok := opsAny.([]interface{})
require.True(t, ok)
require.Len(t, ops, 2)
firstOp, ok := ops[0].(map[string]interface{})
require.True(t, ok)
require.Equal(t, "pass_headers", firstOp["mode"])
secondOp, ok := ops[1].(map[string]interface{})
require.True(t, ok)
require.Equal(t, "trim_prefix", secondOp["mode"])
}
func TestChannelAffinityHitCodexTemplatePassHeadersEffective(t *testing.T) {
gin.SetMode(gin.TestMode)

View File

@@ -34,22 +34,34 @@ func ClaudeToOpenAIRequest(claudeRequest dto.ClaudeRequest, info *relaycommon.Re
isOpenRouter := info.ChannelType == constant.ChannelTypeOpenRouter
if claudeRequest.Thinking != nil && claudeRequest.Thinking.Type == "enabled" {
if isOpenRouter {
reasoning := openrouter.RequestReasoning{
MaxTokens: claudeRequest.Thinking.GetBudgetTokens(),
if isOpenRouter {
if effort := claudeRequest.GetEfforts(); effort != "" {
effortBytes, _ := json.Marshal(effort)
openAIRequest.Verbosity = effortBytes
}
if claudeRequest.Thinking != nil {
var reasoning openrouter.RequestReasoning
if claudeRequest.Thinking.Type == "enabled" {
reasoning = openrouter.RequestReasoning{
Enabled: true,
MaxTokens: claudeRequest.Thinking.GetBudgetTokens(),
}
} else if claudeRequest.Thinking.Type == "adaptive" {
reasoning = openrouter.RequestReasoning{
Enabled: true,
}
}
reasoningJSON, err := json.Marshal(reasoning)
if err != nil {
return nil, fmt.Errorf("failed to marshal reasoning: %w", err)
}
openAIRequest.Reasoning = reasoningJSON
} else {
thinkingSuffix := "-thinking"
if strings.HasSuffix(info.OriginModelName, thinkingSuffix) &&
!strings.HasSuffix(openAIRequest.Model, thinkingSuffix) {
openAIRequest.Model = openAIRequest.Model + thinkingSuffix
}
}
} else {
thinkingSuffix := "-thinking"
if strings.HasSuffix(info.OriginModelName, thinkingSuffix) &&
!strings.HasSuffix(openAIRequest.Model, thinkingSuffix) {
openAIRequest.Model = openAIRequest.Model + thinkingSuffix
}
}
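The restructured branch above converts Claude `thinking` into an OpenRouter `reasoning` object: `"enabled"` carries the token budget through as `max_tokens` (Anthropic-style), while `"adaptive"` turns reasoning on without a limit. A minimal sketch of just that mapping (the struct mirrors `openrouter.RequestReasoning`; `toReasoning` is a hypothetical helper, not the repository's API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

type reasoning struct {
	Enabled   bool `json:"enabled"`
	MaxTokens int  `json:"max_tokens,omitempty"`
}

// toReasoning sketches the conversion: "enabled" thinking carries the
// budget through as max_tokens, "adaptive" enables reasoning with no
// token limit, and any other type produces no reasoning payload.
func toReasoning(thinkingType string, budget int) ([]byte, bool) {
	switch thinkingType {
	case "enabled":
		b, _ := json.Marshal(reasoning{Enabled: true, MaxTokens: budget})
		return b, true
	case "adaptive":
		b, _ := json.Marshal(reasoning{Enabled: true})
		return b, true
	}
	return nil, false
}

func main() {
	b, _ := toReasoning("enabled", 1024)
	fmt.Println(string(b)) // prints {"enabled":true,"max_tokens":1024}
	b, _ = toReasoning("adaptive", 0)
	fmt.Println(string(b)) // prints {"enabled":true}
}
```

The `omitempty` on `max_tokens` is what keeps the adaptive case from sending `"max_tokens":0` upstream.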

View File

@@ -222,13 +222,13 @@ func RecalculateTaskQuota(ctx context.Context, task *model.Task, actualQuota int
}
other := taskBillingOther(task)
other["task_id"] = task.TaskID
other["reason"] = reason
//other["reason"] = reason
other["pre_consumed_quota"] = preConsumedQuota
other["actual_quota"] = actualQuota
model.RecordTaskBillingLog(model.RecordTaskBillingLogParams{
UserId: task.UserId,
LogType: logType,
Content: "",
Content: reason,
ChannelId: task.ChannelId,
ModelName: taskModelName(task),
Quota: logQuota,

View File

@@ -125,8 +125,8 @@ func makeTask(userId, channelId, quota, tokenId int, billingSource string, subsc
SubscriptionId: subscriptionId,
TokenId: tokenId,
BillingContext: &model.TaskBillingContext{
ModelPrice: 0.02,
GroupRatio: 1.0,
ModelPrice: 0.02,
GroupRatio: 1.0,
OriginModelName: "test-model",
},
},
@@ -615,9 +615,11 @@ type mockAdaptor struct {
adjustReturn int
}
func (m *mockAdaptor) Init(_ *relaycommon.RelayInfo) {}
func (m *mockAdaptor) FetchTask(string, string, map[string]any, string) (*http.Response, error) { return nil, nil }
func (m *mockAdaptor) ParseTaskResult([]byte) (*relaycommon.TaskInfo, error) { return nil, nil }
func (m *mockAdaptor) Init(_ *relaycommon.RelayInfo) {}
func (m *mockAdaptor) FetchTask(string, string, map[string]any, string) (*http.Response, error) {
return nil, nil
}
func (m *mockAdaptor) ParseTaskResult([]byte) (*relaycommon.TaskInfo, error) { return nil, nil }
func (m *mockAdaptor) AdjustBillingOnComplete(_ *model.Task, _ *relaycommon.TaskInfo) int {
return m.adjustReturn
}

View File

@@ -335,6 +335,8 @@ func updateVideoTasks(ctx context.Context, platform constant.TaskPlatform, chann
if err := updateVideoSingleTask(ctx, adaptor, cacheGetChannel, taskId, taskM); err != nil {
logger.LogError(ctx, fmt.Sprintf("Failed to update video task %s: %s", taskId, err.Error()))
}
// sleep 1 second between each task to avoid hitting rate limits of upstream platforms
time.Sleep(1 * time.Second)
}
return nil
}
@@ -388,15 +390,33 @@ func updateVideoSingleTask(ctx context.Context, adaptor TaskPollingAdaptor, ch *
task.Data = t.Data
} else if taskResult, err = adaptor.ParseTaskResult(responseBody); err != nil {
return fmt.Errorf("parseTaskResult failed for task %s: %w", taskId, err)
} else {
task.Data = redactVideoResponseBody(responseBody)
}
task.Data = redactVideoResponseBody(responseBody)
logger.LogDebug(ctx, fmt.Sprintf("updateVideoSingleTask taskResult: %+v", taskResult))
now := time.Now().Unix()
if taskResult.Status == "" {
taskResult = relaycommon.FailTaskInfo("upstream returned empty status")
//taskResult = relaycommon.FailTaskInfo("upstream returned empty status")
errorResult := &dto.GeneralErrorResponse{}
if err = common.Unmarshal(responseBody, &errorResult); err == nil {
openaiError := errorResult.TryToOpenAIError()
if openaiError != nil {
// Upstream returned a standard OpenAI error format: extract the error info and decide whether it means the task failed
if openaiError.Code == "429" {
// 429 usually means too many requests or rate limiting; don't treat it as task failure yet — keep the current status and wait for the next polling round
return nil
}
// Treat any other error as task failure: record the error info and update the task status
taskResult = relaycommon.FailTaskInfo("upstream returned error")
} else {
// unknown error format, log original response
logger.LogError(ctx, fmt.Sprintf("Task %s returned empty status with unrecognized error format, response: %s", taskId, string(responseBody)))
taskResult = relaycommon.FailTaskInfo("upstream returned unrecognized message")
}
}
}
shouldRefund := false
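The empty-status branch above can be sketched as a small classifier. This is a simplification under assumed types — the real code uses `common.Unmarshal`, `dto.GeneralErrorResponse`, and `TryToOpenAIError`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type openAIError struct {
	Code    string `json:"code"`
	Message string `json:"message"`
}

type errorResponse struct {
	Error *openAIError `json:"error"`
}

// classifyEmptyStatus sketches the hunk above: when upstream returns an
// empty task status, try to parse an OpenAI-style error body. A 429 is
// rate limiting, so keep the task as-is and retry next round; any other
// recognized error fails the task; an unrecognized body fails with a
// generic message.
func classifyEmptyStatus(body []byte) string {
	var resp errorResponse
	if err := json.Unmarshal(body, &resp); err == nil && resp.Error != nil {
		if resp.Error.Code == "429" {
			return "retry"
		}
		return "fail: upstream returned error"
	}
	return "fail: upstream returned unrecognized message"
}

func main() {
	fmt.Println(classifyEmptyStatus([]byte(`{"error":{"code":"429","message":"rate limited"}}`)))
	fmt.Println(classifyEmptyStatus([]byte(`{"error":{"code":"500","message":"oops"}}`)))
	fmt.Println(classifyEmptyStatus([]byte(`not json`)))
}
```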

View File

@@ -22,6 +22,32 @@ func NotifyRootUser(t string, subject string, content string) {
}
}
func NotifyUpstreamModelUpdateWatchers(subject string, content string) {
var users []model.User
if err := model.DB.
Select("id", "email", "role", "status", "setting").
Where("status = ? AND role >= ?", common.UserStatusEnabled, common.RoleAdminUser).
Find(&users).Error; err != nil {
common.SysLog(fmt.Sprintf("failed to query upstream update notification users: %s", err.Error()))
return
}
notification := dto.NewNotify(dto.NotifyTypeChannelUpdate, subject, content, nil)
sentCount := 0
for _, user := range users {
userSetting := user.GetSetting()
if !userSetting.UpstreamModelUpdateNotifyEnabled {
continue
}
if err := NotifyUser(user.Id, user.Email, userSetting, notification); err != nil {
common.SysLog(fmt.Sprintf("failed to notify user %d for upstream model update: %s", user.Id, err.Error()))
continue
}
sentCount++
}
common.SysLog(fmt.Sprintf("upstream model update notifications sent: %d", sentCount))
}
func NotifyUser(userId int, userEmail string, userSetting dto.UserSetting, data dto.Notify) error {
notifyType := userSetting.NotifyType
if notifyType == "" {

View File

@@ -13,9 +13,9 @@ var Chats = []map[string]string{
{
"Cherry Studio": "cherrystudio://providers/api-keys?v=1&data={cherryConfig}",
},
//{
// "AionUI": "aionui://provider/add?v=1&data={aionuiConfig}",
//},
{
"AionUI": "aionui://provider/add?v=1&data={aionuiConfig}",
},
{
"流畅阅读": "fluentread",
},

View File

@@ -471,6 +471,9 @@ func getHardcodedCompletionModelRatio(name string) (float64, bool) {
}
// gpt-5 match
if strings.HasPrefix(name, "gpt-5") {
if strings.HasPrefix(name, "gpt-5.4") {
return 6, true
}
return 8, true
}
// gpt-4.5-preview match

web/index.html vendored
View File

@@ -7,7 +7,13 @@
<meta name="theme-color" content="#ffffff" />
<meta
name="description"
content="OpenAI 接口聚合管理,支持多种渠道包括 Azure可用于二次分发管理 key仅单可执行文件已打包好 Docker 镜像,一键部署,开箱即用"
lang="zh"
content="统一的 AI 模型聚合与分发网关,支持将各类大语言模型跨格式转换为 OpenAI、Claude、Gemini 兼容接口,为个人与企业提供集中式模型管理与网关服务。"
/>
<meta
name="description"
lang="en"
content="A unified AI model hub for aggregation & distribution. It supports cross-converting various LLMs into OpenAI-compatible, Claude-compatible, or Gemini-compatible formats. A centralized gateway for personal and enterprise model management."
/>
<meta name="generator" content="new-api" />
<title>New API</title>

View File

@@ -23,7 +23,6 @@ import { useContainerWidth } from '../../../hooks/common/useContainerWidth';
import {
Divider,
Button,
Tag,
Row,
Col,
Collapsible,
@@ -46,6 +45,7 @@ import { IconChevronDown, IconChevronUp } from '@douyinfe/semi-icons';
* @param {number} collapseHeight Height when collapsed; defaults to 200
* @param {boolean} withCheckbox Whether to use a leading Checkbox to control the active state
* @param {boolean} loading Whether in loading state
* @param {string} variant Color variant: 'violet' | 'teal' | 'amber' | 'rose' | 'green'; defaults to blue when omitted
*/
const SelectableButtonGroup = ({
title,
@@ -58,6 +58,7 @@ const SelectableButtonGroup = ({
collapseHeight = 200,
withCheckbox = false,
loading = false,
variant,
}) => {
const [isOpen, setIsOpen] = useState(false);
const [skeletonCount] = useState(12);
@@ -178,9 +179,6 @@ const SelectableButtonGroup = ({
) : (
<Row gutter={gutterSize} style={{ lineHeight: '32px', ...style }}>
{items.map((item) => {
const isDisabled =
item.disabled ||
(typeof item.tagCount === 'number' && item.tagCount === 0);
const isActive = Array.isArray(activeValue)
? activeValue.includes(item.value)
: activeValue === item.value;
@@ -194,13 +192,11 @@ const SelectableButtonGroup = ({
}}
theme={isActive ? 'light' : 'outline'}
type={isActive ? 'primary' : 'tertiary'}
disabled={isDisabled}
className='sbg-button'
icon={
<Checkbox
checked={isActive}
onChange={() => onChange(item.value)}
disabled={isDisabled}
style={{ pointerEvents: 'auto' }}
/>
}
@@ -210,14 +206,9 @@ const SelectableButtonGroup = ({
{item.icon && <span className='sbg-icon'>{item.icon}</span>}
<ConditionalTooltipText text={item.label} />
{item.tagCount !== undefined && shouldShowTags && (
<Tag
className='sbg-tag'
color='white'
shape='circle'
size='small'
>
<span className={`sbg-badge ${isActive ? 'sbg-badge-active' : ''}`}>
{item.tagCount}
</Tag>
</span>
)}
</div>
</Button>
@@ -231,22 +222,16 @@ const SelectableButtonGroup = ({
onClick={() => onChange(item.value)}
theme={isActive ? 'light' : 'outline'}
type={isActive ? 'primary' : 'tertiary'}
disabled={isDisabled}
className='sbg-button'
style={{ width: '100%' }}
>
<div className='sbg-content'>
{item.icon && <span className='sbg-icon'>{item.icon}</span>}
<ConditionalTooltipText text={item.label} />
{item.tagCount !== undefined && shouldShowTags && (
<Tag
className='sbg-tag'
color='white'
shape='circle'
size='small'
>
{item.tagCount !== undefined && shouldShowTags && item.tagCount !== '' && (
<span className={`sbg-badge ${isActive ? 'sbg-badge-active' : ''}`}>
{item.tagCount}
</Tag>
</span>
)}
</div>
</Button>
@@ -258,7 +243,7 @@ const SelectableButtonGroup = ({
return (
<div
className={`mb-8 ${containerWidth <= 400 ? 'sbg-compact' : ''}`}
className={`mb-8 ${containerWidth <= 400 ? 'sbg-compact' : ''}${variant ? ` sbg-variant-${variant}` : ''}`}
ref={containerRef}
>
{title && (

View File

@@ -86,6 +86,7 @@ const PersonalSetting = () => {
gotifyUrl: '',
gotifyToken: '',
gotifyPriority: 5,
upstreamModelUpdateNotifyEnabled: false,
acceptUnsetModelRatioModel: false,
recordIpLog: false,
});
@@ -158,6 +159,8 @@ const PersonalSetting = () => {
gotifyToken: settings.gotify_token || '',
gotifyPriority:
settings.gotify_priority !== undefined ? settings.gotify_priority : 5,
upstreamModelUpdateNotifyEnabled:
settings.upstream_model_update_notify_enabled === true,
acceptUnsetModelRatioModel:
settings.accept_unset_model_ratio_model || false,
recordIpLog: settings.record_ip_log || false,
@@ -426,6 +429,8 @@ const PersonalSetting = () => {
const parsed = parseInt(notificationSettings.gotifyPriority);
return isNaN(parsed) ? 5 : parsed;
})(),
upstream_model_update_notify_enabled:
notificationSettings.upstreamModelUpdateNotifyEnabled === true,
accept_unset_model_ratio_model:
notificationSettings.acceptUnsetModelRatioModel,
record_ip_log: notificationSettings.recordIpLog,

View File

@@ -58,6 +58,7 @@ const NotificationSettings = ({
const formApiRef = useRef(null);
const [statusState] = useContext(StatusContext);
const [userState] = useContext(UserContext);
const isAdminOrRoot = (userState?.user?.role || 0) >= 10;
// State for the left sidebar settings
const [sidebarLoading, setSidebarLoading] = useState(false);
@@ -470,6 +471,21 @@ const NotificationSettings = ({
]}
/>
{isAdminOrRoot && (
<Form.Switch
field='upstreamModelUpdateNotifyEnabled'
label={t('接收上游模型更新通知')}
checkedText={t('开')}
uncheckedText={t('关')}
onChange={(value) =>
handleFormChange('upstreamModelUpdateNotifyEnabled', value)
}
extraText={t(
'仅管理员可用。开启后,当系统定时检测全部渠道发现上游模型变更或检测异常时,将按你选择的通知方式发送汇总通知;渠道或模型过多时会自动省略部分明细。',
)}
/>
)}
{/* 邮件通知设置 */}
{notificationSettings.warningType === 'email' && (
<Form.Input

View File

@@ -36,6 +36,10 @@ const ChannelsActions = ({
fixChannelsAbilities,
updateAllChannelsBalance,
deleteAllDisabledChannels,
applyAllUpstreamUpdates,
detectAllUpstreamUpdates,
detectAllUpstreamUpdatesLoading,
applyAllUpstreamUpdatesLoading,
compactMode,
setCompactMode,
idSort,
@@ -96,6 +100,8 @@ const ChannelsActions = ({
size='small'
type='tertiary'
className='w-full'
loading={detectAllUpstreamUpdatesLoading}
disabled={detectAllUpstreamUpdatesLoading}
onClick={() => {
Modal.confirm({
title: t('确定?'),
@@ -146,6 +152,46 @@ const ChannelsActions = ({
{t('更新所有已启用通道余额')}
</Button>
</Dropdown.Item>
<Dropdown.Item>
<Button
size='small'
type='tertiary'
className='w-full'
onClick={() => {
Modal.confirm({
title: t('确定?'),
content: t(
'确定要仅检测全部渠道上游模型更新吗?(不执行新增/删除)',
),
onOk: () => detectAllUpstreamUpdates(),
size: 'sm',
centered: true,
});
}}
>
{t('检测全部渠道上游更新')}
</Button>
</Dropdown.Item>
<Dropdown.Item>
<Button
size='small'
type='primary'
className='w-full'
loading={applyAllUpstreamUpdatesLoading}
disabled={applyAllUpstreamUpdatesLoading}
onClick={() => {
Modal.confirm({
title: t('确定?'),
content: t('确定要对全部渠道执行上游模型更新吗?'),
onOk: () => applyAllUpstreamUpdates(),
size: 'sm',
centered: true,
});
}}
>
{t('处理全部渠道上游更新')}
</Button>
</Dropdown.Item>
<Dropdown.Item>
<Button
size='small'

View File

@@ -37,8 +37,13 @@ import {
renderQuotaWithAmount,
showSuccess,
showError,
showInfo,
} from '../../../helpers';
import { CHANNEL_OPTIONS } from '../../../constants';
import {
CHANNEL_OPTIONS,
MODEL_FETCHABLE_CHANNEL_TYPES,
} from '../../../constants';
import { parseUpstreamUpdateMeta } from '../../../hooks/channels/upstreamUpdateUtils';
import {
IconTreeTriangleDown,
IconMore,
@@ -270,6 +275,35 @@ const isRequestPassThroughEnabled = (record) => {
}
};
const getUpstreamUpdateMeta = (record) => {
const supported =
!!record &&
record.children === undefined &&
MODEL_FETCHABLE_CHANNEL_TYPES.has(record.type);
if (!record || record.children !== undefined) {
return {
supported: false,
enabled: false,
pendingAddModels: [],
pendingRemoveModels: [],
};
}
const parsed =
record?.upstreamUpdateMeta && typeof record.upstreamUpdateMeta === 'object'
? record.upstreamUpdateMeta
: parseUpstreamUpdateMeta(record?.settings);
return {
supported,
enabled: parsed?.enabled === true,
pendingAddModels: Array.isArray(parsed?.pendingAddModels)
? parsed.pendingAddModels
: [],
pendingRemoveModels: Array.isArray(parsed?.pendingRemoveModels)
? parsed.pendingRemoveModels
: [],
};
};
export const getChannelsColumns = ({
t,
COLUMN_KEYS,
@@ -291,6 +325,8 @@ export const getChannelsColumns = ({
checkOllamaVersion,
setShowMultiKeyManageModal,
setCurrentMultiKeyChannel,
openUpstreamUpdateModal,
detectChannelUpstreamUpdates,
}) => {
return [
{
@@ -304,6 +340,14 @@ export const getChannelsColumns = ({
dataIndex: 'name',
render: (text, record, index) => {
const passThroughEnabled = isRequestPassThroughEnabled(record);
const upstreamUpdateMeta = getUpstreamUpdateMeta(record);
const pendingAddCount = upstreamUpdateMeta.pendingAddModels.length;
const pendingRemoveCount =
upstreamUpdateMeta.pendingRemoveModels.length;
const showUpstreamUpdateTag =
upstreamUpdateMeta.supported &&
upstreamUpdateMeta.enabled &&
(pendingAddCount > 0 || pendingRemoveCount > 0);
const nameNode =
record.remark && record.remark.trim() !== '' ? (
<Tooltip
@@ -339,26 +383,76 @@ export const getChannelsColumns = ({
<span>{text}</span>
);
if (!passThroughEnabled) {
if (!passThroughEnabled && !showUpstreamUpdateTag) {
return nameNode;
}
return (
<Space spacing={6} align='center'>
{nameNode}
<Tooltip
content={t(
'该渠道已开启请求透传:参数覆写、模型重定向、渠道适配等 NewAPI 内置功能将失效,非最佳实践;如因此产生问题,请勿提交 issue 反馈。',
)}
trigger='hover'
position='topLeft'
>
<span className='inline-flex items-center'>
<IconAlertTriangle
style={{ color: 'var(--semi-color-warning)' }}
/>
</span>
</Tooltip>
{passThroughEnabled && (
<Tooltip
content={t(
'该渠道已开启请求透传:参数覆写、模型重定向、渠道适配等 NewAPI 内置功能将失效,非最佳实践;如因此产生问题,请勿提交 issue 反馈。',
)}
trigger='hover'
position='topLeft'
>
<span className='inline-flex items-center'>
<IconAlertTriangle
style={{ color: 'var(--semi-color-warning)' }}
/>
</span>
</Tooltip>
)}
{showUpstreamUpdateTag && (
<Space spacing={4} align='center'>
{pendingAddCount > 0 ? (
<Tooltip content={t('点击处理新增模型')} position='top'>
<Tag
color='green'
type='light'
size='small'
shape='circle'
className='cursor-pointer transition-all duration-150 hover:opacity-85 hover:-translate-y-px active:scale-95'
onClick={(e) => {
e.stopPropagation();
openUpstreamUpdateModal(
record,
upstreamUpdateMeta.pendingAddModels,
upstreamUpdateMeta.pendingRemoveModels,
'add',
);
}}
>
+{pendingAddCount}
</Tag>
</Tooltip>
) : null}
{pendingRemoveCount > 0 ? (
<Tooltip content={t('点击处理删除模型')} position='top'>
<Tag
color='red'
type='light'
size='small'
shape='circle'
className='cursor-pointer transition-all duration-150 hover:opacity-85 hover:-translate-y-px active:scale-95'
onClick={(e) => {
e.stopPropagation();
openUpstreamUpdateModal(
record,
upstreamUpdateMeta.pendingAddModels,
upstreamUpdateMeta.pendingRemoveModels,
'remove',
);
}}
>
-{pendingRemoveCount}
</Tag>
</Tooltip>
) : null}
</Space>
)}
</Space>
);
},
@@ -585,6 +679,7 @@ export const getChannelsColumns = ({
fixed: 'right',
render: (text, record, index) => {
if (record.children === undefined) {
const upstreamUpdateMeta = getUpstreamUpdateMeta(record);
const moreMenuItems = [
{
node: 'item',
@@ -622,6 +717,43 @@ export const getChannelsColumns = ({
},
];
if (upstreamUpdateMeta.supported) {
moreMenuItems.push({
node: 'item',
name: t('仅检测上游模型更新'),
type: 'tertiary',
onClick: () => {
detectChannelUpstreamUpdates(record);
},
});
moreMenuItems.push({
node: 'item',
name: t('处理上游模型更新'),
type: 'tertiary',
onClick: () => {
if (!upstreamUpdateMeta.enabled) {
showInfo(t('该渠道未开启上游模型更新检测'));
return;
}
if (
upstreamUpdateMeta.pendingAddModels.length === 0 &&
upstreamUpdateMeta.pendingRemoveModels.length === 0
) {
showInfo(t('该渠道暂无可处理的上游模型更新'));
return;
}
openUpstreamUpdateModal(
record,
upstreamUpdateMeta.pendingAddModels,
upstreamUpdateMeta.pendingRemoveModels,
upstreamUpdateMeta.pendingAddModels.length > 0
? 'add'
: 'remove',
);
},
});
}
if (record.type === 4) {
moreMenuItems.unshift({
node: 'item',

View File

@@ -61,6 +61,8 @@ const ChannelsTable = (channelsData) => {
// Multi-key management
setShowMultiKeyManageModal,
setCurrentMultiKeyChannel,
openUpstreamUpdateModal,
detectChannelUpstreamUpdates,
} = channelsData;
// Get all columns
@@ -86,6 +88,8 @@ const ChannelsTable = (channelsData) => {
checkOllamaVersion,
setShowMultiKeyManageModal,
setCurrentMultiKeyChannel,
openUpstreamUpdateModal,
detectChannelUpstreamUpdates,
});
}, [
t,
@@ -108,6 +112,8 @@ const ChannelsTable = (channelsData) => {
checkOllamaVersion,
setShowMultiKeyManageModal,
setCurrentMultiKeyChannel,
openUpstreamUpdateModal,
detectChannelUpstreamUpdates,
]);
// Filter columns based on visibility settings

View File

@@ -33,6 +33,7 @@ import ColumnSelectorModal from './modals/ColumnSelectorModal';
import EditChannelModal from './modals/EditChannelModal';
import EditTagModal from './modals/EditTagModal';
import MultiKeyManageModal from './modals/MultiKeyManageModal';
import ChannelUpstreamUpdateModal from './modals/ChannelUpstreamUpdateModal';
import { createCardProPagination } from '../../../helpers/utils';
const ChannelsPage = () => {
@@ -63,6 +64,15 @@ const ChannelsPage = () => {
channel={channelsData.currentMultiKeyChannel}
onRefresh={channelsData.refresh}
/>
<ChannelUpstreamUpdateModal
visible={channelsData.showUpstreamUpdateModal}
addModels={channelsData.upstreamUpdateAddModels}
removeModels={channelsData.upstreamUpdateRemoveModels}
preferredTab={channelsData.upstreamUpdatePreferredTab}
confirmLoading={channelsData.upstreamApplyLoading}
onConfirm={channelsData.applyUpstreamUpdates}
onCancel={channelsData.closeUpstreamUpdateModal}
/>
{/* Main Content */}
{channelsData.globalPassThroughEnabled ? (

View File

@@ -0,0 +1,313 @@
/*
Copyright (C) 2025 QuantumNous
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
For commercial licensing, please contact support@quantumnous.com
*/
import React, { useEffect, useMemo, useState } from 'react';
import { useTranslation } from 'react-i18next';
import {
Modal,
Checkbox,
Empty,
Input,
Tabs,
Typography,
} from '@douyinfe/semi-ui';
import {
IllustrationNoResult,
IllustrationNoResultDark,
} from '@douyinfe/semi-illustrations';
import { IconSearch } from '@douyinfe/semi-icons';
import { useIsMobile } from '../../../../hooks/common/useIsMobile';
const normalizeModels = (models = []) =>
Array.from(
new Set(
(models || []).map((model) => String(model || '').trim()).filter(Boolean),
),
);
const filterByKeyword = (models = [], keyword = '') => {
const normalizedKeyword = String(keyword || '')
.trim()
.toLowerCase();
if (!normalizedKeyword) {
return models;
}
return models.filter((model) =>
String(model).toLowerCase().includes(normalizedKeyword),
);
};
const ChannelUpstreamUpdateModal = ({
visible,
addModels = [],
removeModels = [],
preferredTab = 'add',
confirmLoading = false,
onConfirm,
onCancel,
}) => {
const { t } = useTranslation();
const isMobile = useIsMobile();
const normalizedAddModels = useMemo(
() => normalizeModels(addModels),
[addModels],
);
const normalizedRemoveModels = useMemo(
() => normalizeModels(removeModels),
[removeModels],
);
const [selectedAddModels, setSelectedAddModels] = useState([]);
const [selectedRemoveModels, setSelectedRemoveModels] = useState([]);
const [keyword, setKeyword] = useState('');
const [activeTab, setActiveTab] = useState('add');
const [partialSubmitConfirmed, setPartialSubmitConfirmed] = useState(false);
const addTabEnabled = normalizedAddModels.length > 0;
const removeTabEnabled = normalizedRemoveModels.length > 0;
const filteredAddModels = useMemo(
() => filterByKeyword(normalizedAddModels, keyword),
[normalizedAddModels, keyword],
);
const filteredRemoveModels = useMemo(
() => filterByKeyword(normalizedRemoveModels, keyword),
[normalizedRemoveModels, keyword],
);
useEffect(() => {
if (!visible) {
return;
}
setSelectedAddModels([]);
setSelectedRemoveModels([]);
setKeyword('');
setPartialSubmitConfirmed(false);
const normalizedPreferredTab = preferredTab === 'remove' ? 'remove' : 'add';
if (normalizedPreferredTab === 'remove' && removeTabEnabled) {
setActiveTab('remove');
return;
}
if (normalizedPreferredTab === 'add' && addTabEnabled) {
setActiveTab('add');
return;
}
setActiveTab(addTabEnabled ? 'add' : 'remove');
}, [visible, addTabEnabled, removeTabEnabled, preferredTab]);
const currentModels =
activeTab === 'add' ? filteredAddModels : filteredRemoveModels;
const currentSelectedModels =
activeTab === 'add' ? selectedAddModels : selectedRemoveModels;
const currentSetSelectedModels =
activeTab === 'add' ? setSelectedAddModels : setSelectedRemoveModels;
const selectedAddCount = selectedAddModels.length;
const selectedRemoveCount = selectedRemoveModels.length;
const checkedCount = currentModels.filter((model) =>
currentSelectedModels.includes(model),
).length;
const isAllChecked =
currentModels.length > 0 && checkedCount === currentModels.length;
const isIndeterminate =
checkedCount > 0 && checkedCount < currentModels.length;
const handleToggleAllCurrent = (checked) => {
if (checked) {
const merged = normalizeModels([
...currentSelectedModels,
...currentModels,
]);
currentSetSelectedModels(merged);
return;
}
const currentSet = new Set(currentModels);
currentSetSelectedModels(
currentSelectedModels.filter((model) => !currentSet.has(model)),
);
};
const tabList = [
{
itemKey: 'add',
tab: `${t('新增模型')} (${selectedAddCount}/${normalizedAddModels.length})`,
disabled: !addTabEnabled,
},
{
itemKey: 'remove',
tab: `${t('删除模型')} (${selectedRemoveCount}/${normalizedRemoveModels.length})`,
disabled: !removeTabEnabled,
},
];
const submitSelectedChanges = () => {
onConfirm?.({
addModels: selectedAddModels,
removeModels: selectedRemoveModels,
});
};
const handleSubmit = () => {
const hasAnySelected = selectedAddCount > 0 || selectedRemoveCount > 0;
if (!hasAnySelected) {
submitSelectedChanges();
return;
}
const hasBothPending = addTabEnabled && removeTabEnabled;
const hasUnselectedAdd = addTabEnabled && selectedAddCount === 0;
const hasUnselectedRemove = removeTabEnabled && selectedRemoveCount === 0;
if (hasBothPending && (hasUnselectedAdd || hasUnselectedRemove)) {
if (partialSubmitConfirmed) {
submitSelectedChanges();
return;
}
const missingTab = hasUnselectedAdd ? 'add' : 'remove';
const missingType = hasUnselectedAdd ? t('新增') : t('删除');
const missingCount = hasUnselectedAdd
? normalizedAddModels.length
: normalizedRemoveModels.length;
setActiveTab(missingTab);
Modal.confirm({
title: t('仍有未处理项'),
content: t(
'你还没有处理{{type}}模型({{count}}个)。是否仅提交当前已勾选内容?',
{
type: missingType,
count: missingCount,
},
),
okText: t('仅提交已勾选'),
cancelText: t('去处理{{type}}', { type: missingType }),
centered: true,
onOk: () => {
setPartialSubmitConfirmed(true);
submitSelectedChanges();
},
});
return;
}
submitSelectedChanges();
};
return (
<Modal
visible={visible}
title={t('处理上游模型更新')}
okText={t('确定')}
cancelText={t('取消')}
size={isMobile ? 'full-width' : 'medium'}
centered
closeOnEsc
maskClosable
confirmLoading={confirmLoading}
onCancel={onCancel}
onOk={handleSubmit}
>
<div className='flex flex-col gap-3'>
<Typography.Text type='secondary' size='small'>
{t(
'可勾选需要执行的变更:新增会加入渠道模型列表,删除会从渠道模型列表移除。',
)}
</Typography.Text>
<Tabs
type='slash'
size='small'
tabList={tabList}
activeKey={activeTab}
onChange={(key) => setActiveTab(key)}
/>
<div className='flex items-center gap-3 text-xs text-gray-500'>
<span>
{t('新增已选 {{selected}} / {{total}}', {
selected: selectedAddCount,
total: normalizedAddModels.length,
})}
</span>
<span>
{t('删除已选 {{selected}} / {{total}}', {
selected: selectedRemoveCount,
total: normalizedRemoveModels.length,
})}
</span>
</div>
<Input
prefix={<IconSearch size={14} />}
placeholder={t('搜索模型')}
value={keyword}
onChange={(value) => setKeyword(value)}
showClear
/>
<div style={{ maxHeight: 320, overflowY: 'auto', paddingRight: 8 }}>
{currentModels.length === 0 ? (
<Empty
image={
<IllustrationNoResult style={{ width: 150, height: 150 }} />
}
darkModeImage={
<IllustrationNoResultDark style={{ width: 150, height: 150 }} />
}
description={t('暂无匹配模型')}
style={{ padding: 24 }}
/>
) : (
<Checkbox.Group
value={currentSelectedModels}
onChange={(values) =>
currentSetSelectedModels(normalizeModels(values))
}
>
<div className='grid grid-cols-1 md:grid-cols-2 gap-x-4'>
{currentModels.map((model) => (
<Checkbox
key={`${activeTab}:${model}`}
value={model}
className='my-1'
>
{model}
</Checkbox>
))}
</div>
</Checkbox.Group>
)}
</div>
<div className='flex items-center justify-end gap-2'>
<Typography.Text type='secondary' size='small'>
{t('已选择 {{selected}} / {{total}}', {
selected: checkedCount,
total: currentModels.length,
})}
</Typography.Text>
<Checkbox
checked={isAllChecked}
indeterminate={isIndeterminate}
aria-label={t('全选当前列表模型')}
onChange={(e) => handleToggleAllCurrent(e.target.checked)}
/>
</div>
</div>
</Modal>
);
};
export default ChannelUpstreamUpdateModal;

View File

@@ -27,7 +27,7 @@ import {
verifyJSON,
} from '../../../../helpers';
import { useIsMobile } from '../../../../hooks/common/useIsMobile';
import { CHANNEL_OPTIONS } from '../../../../constants';
import { CHANNEL_OPTIONS, MODEL_FETCHABLE_CHANNEL_TYPES } from '../../../../constants';
import {
SideSheet,
Space,
@@ -100,6 +100,7 @@ const REGION_EXAMPLE = {
'gemini-1.5-flash-002': 'europe-west2',
'claude-3-5-sonnet-20240620': 'europe-west1',
};
const UPSTREAM_DETECTED_MODEL_PREVIEW_LIMIT = 8;
const PARAM_OVERRIDE_LEGACY_TEMPLATE = {
temperature: 0,
@@ -203,6 +204,11 @@ const EditChannelModal = (props) => {
allow_include_obfuscation: false,
allow_inference_geo: false,
claude_beta_query: false,
upstream_model_update_check_enabled: false,
upstream_model_update_auto_sync_enabled: false,
upstream_model_update_last_check_time: 0,
upstream_model_update_last_detected_models: [],
upstream_model_update_ignored_models: '',
};
const [batch, setBatch] = useState(false);
const [multiToSingle, setMultiToSingle] = useState(false);
@@ -257,6 +263,23 @@ const EditChannelModal = (props) => {
return [];
}
}, [inputs.model_mapping]);
const upstreamDetectedModels = useMemo(
() =>
Array.from(
new Set(
(inputs.upstream_model_update_last_detected_models || [])
.map((model) => String(model || '').trim())
.filter(Boolean),
),
),
[inputs.upstream_model_update_last_detected_models],
);
const upstreamDetectedModelsPreview = useMemo(
() => upstreamDetectedModels.slice(0, UPSTREAM_DETECTED_MODEL_PREVIEW_LIMIT),
[upstreamDetectedModels],
);
const upstreamDetectedModelsOmittedCount =
upstreamDetectedModels.length - upstreamDetectedModelsPreview.length;
const modelSearchMatchedCount = useMemo(() => {
const keyword = modelSearchValue.trim();
if (!keyword) {
@@ -665,6 +688,14 @@ const EditChannelModal = (props) => {
}
};
const formatUnixTime = (timestamp) => {
const value = Number(timestamp || 0);
if (!value) {
return t('暂无');
}
return new Date(value * 1000).toLocaleString();
};
const copyParamOverrideJson = async () => {
const raw =
typeof inputs.param_override === 'string'
@@ -759,6 +790,10 @@ const EditChannelModal = (props) => {
}
};
const clearParamOverride = () => {
handleInputChange('param_override', '');
};
const loadChannel = async () => {
setLoading(true);
let res = await API.get(`/api/channel/${channelId}`);
@@ -850,6 +885,22 @@ const EditChannelModal = (props) => {
data.allow_inference_geo =
parsedSettings.allow_inference_geo || false;
data.claude_beta_query = parsedSettings.claude_beta_query || false;
data.upstream_model_update_check_enabled =
parsedSettings.upstream_model_update_check_enabled === true;
data.upstream_model_update_auto_sync_enabled =
parsedSettings.upstream_model_update_auto_sync_enabled === true;
data.upstream_model_update_last_check_time =
Number(parsedSettings.upstream_model_update_last_check_time) || 0;
data.upstream_model_update_last_detected_models = Array.isArray(
parsedSettings.upstream_model_update_last_detected_models,
)
? parsedSettings.upstream_model_update_last_detected_models
: [];
data.upstream_model_update_ignored_models = Array.isArray(
parsedSettings.upstream_model_update_ignored_models,
)
? parsedSettings.upstream_model_update_ignored_models.join(',')
: '';
} catch (error) {
console.error('解析其他设置失败:', error);
data.azure_responses_version = '';
@@ -863,6 +914,11 @@ const EditChannelModal = (props) => {
data.allow_include_obfuscation = false;
data.allow_inference_geo = false;
data.claude_beta_query = false;
data.upstream_model_update_check_enabled = false;
data.upstream_model_update_auto_sync_enabled = false;
data.upstream_model_update_last_check_time = 0;
data.upstream_model_update_last_detected_models = [];
data.upstream_model_update_ignored_models = '';
}
} else {
// Backward compatibility: old channels without settings default to json display
@@ -875,6 +931,11 @@ const EditChannelModal = (props) => {
data.allow_include_obfuscation = false;
data.allow_inference_geo = false;
data.claude_beta_query = false;
data.upstream_model_update_check_enabled = false;
data.upstream_model_update_auto_sync_enabled = false;
data.upstream_model_update_last_check_time = 0;
data.upstream_model_update_last_detected_models = [];
data.upstream_model_update_ignored_models = '';
}
if (
@@ -1005,7 +1066,7 @@ const EditChannelModal = (props) => {
const mappingKey = String(pairKey ?? '').trim();
if (!mappingKey) return;
if (!MODEL_FETCHABLE_TYPES.has(inputs.type)) {
if (!MODEL_FETCHABLE_CHANNEL_TYPES.has(inputs.type)) {
return;
}
@@ -1677,6 +1738,29 @@ const EditChannelModal = (props) => {
}
}
settings.upstream_model_update_check_enabled =
localInputs.upstream_model_update_check_enabled === true;
settings.upstream_model_update_auto_sync_enabled =
settings.upstream_model_update_check_enabled &&
localInputs.upstream_model_update_auto_sync_enabled === true;
settings.upstream_model_update_ignored_models = Array.from(
new Set(
String(localInputs.upstream_model_update_ignored_models || '')
.split(',')
.map((model) => model.trim())
.filter(Boolean),
),
);
if (
!Array.isArray(settings.upstream_model_update_last_detected_models) ||
!settings.upstream_model_update_check_enabled
) {
settings.upstream_model_update_last_detected_models = [];
}
if (typeof settings.upstream_model_update_last_check_time !== 'number') {
settings.upstream_model_update_last_check_time = 0;
}
localInputs.settings = JSON.stringify(settings);
// Remove fields that do not need to be sent to the backend
@@ -1698,6 +1782,11 @@ const EditChannelModal = (props) => {
delete localInputs.allow_include_obfuscation;
delete localInputs.allow_inference_geo;
delete localInputs.claude_beta_query;
delete localInputs.upstream_model_update_check_enabled;
delete localInputs.upstream_model_update_auto_sync_enabled;
delete localInputs.upstream_model_update_last_check_time;
delete localInputs.upstream_model_update_last_detected_models;
delete localInputs.upstream_model_update_ignored_models;
let res;
localInputs.auto_ban = localInputs.auto_ban ? 1 : 0;
@@ -3076,7 +3165,7 @@ const EditChannelModal = (props) => {
>
{t('填入所有模型')}
</Button>
{MODEL_FETCHABLE_TYPES.has(inputs.type) && (
{MODEL_FETCHABLE_CHANNEL_TYPES.has(inputs.type) && (
<Button
size='small'
type='tertiary'
@@ -3179,6 +3268,44 @@ const EditChannelModal = (props) => {
}
/>
{MODEL_FETCHABLE_CHANNEL_TYPES.has(inputs.type) && (
<>
<Form.Switch
field='upstream_model_update_check_enabled'
label={t('是否检测上游模型更新')}
checkedText={t('开')}
uncheckedText={t('关')}
onChange={(value) =>
handleChannelOtherSettingsChange(
'upstream_model_update_check_enabled',
value,
)
}
extraText={t(
'开启后由后端定时任务检测该渠道上游模型变化',
)}
/>
<div className='text-xs text-gray-500 mb-2'>
{t('上次检测时间')}:&nbsp;
{formatUnixTime(
inputs.upstream_model_update_last_check_time,
)}
</div>
<Form.Input
field='upstream_model_update_ignored_models'
label={t('已忽略模型')}
placeholder={t('例如gpt-4.1-nano,gpt-4o-mini')}
onChange={(value) =>
handleInputChange(
'upstream_model_update_ignored_models',
value,
)
}
showClear
/>
</>
)}
<Form.Input
field='test_model'
label={t('默认测试模型')}
@@ -3208,7 +3335,7 @@ const EditChannelModal = (props) => {
editorType='keyValue'
formApi={formApiRef.current}
renderStringValueSuffix={({ pairKey, value }) => {
if (!MODEL_FETCHABLE_TYPES.has(inputs.type)) {
if (!MODEL_FETCHABLE_CHANNEL_TYPES.has(inputs.type)) {
return null;
}
const disabled = !String(pairKey ?? '').trim();
@@ -3328,45 +3455,101 @@ const EditChannelModal = (props) => {
initValue={autoBan}
/>
<Form.Switch
field='upstream_model_update_auto_sync_enabled'
label={t('是否自动同步上游模型更新')}
checkedText={t('开')}
uncheckedText={t('关')}
disabled={!inputs.upstream_model_update_check_enabled}
onChange={(value) =>
handleChannelOtherSettingsChange(
'upstream_model_update_auto_sync_enabled',
value,
)
}
extraText={t(
'开启后检测到新增模型会自动加入当前渠道模型列表',
)}
/>
<div className='text-xs text-gray-500 mb-3'>
{t('上次检测到可加入模型')}:&nbsp;
{upstreamDetectedModels.length === 0 ? (
t('暂无')
) : (
<>
<Tooltip
position='topLeft'
content={
<div className='max-w-[640px] break-all text-xs leading-5'>
{upstreamDetectedModels.join(', ')}
</div>
}
>
<span className='cursor-help break-all'>
{upstreamDetectedModelsPreview.join(', ')}
</span>
</Tooltip>
<span className='ml-1 text-gray-400'>
{upstreamDetectedModelsOmittedCount > 0
? t('(共 {{total}} 个,省略 {{omit}} 个)', {
total: upstreamDetectedModels.length,
omit: upstreamDetectedModelsOmittedCount,
})
: t('(共 {{total}} 个)', {
total: upstreamDetectedModels.length,
})}
</span>
</>
)}
</div>
<div className='mb-4'>
<div className='flex items-center justify-between gap-2 mb-1'>
<Text className='text-sm font-medium'>{t('参数覆盖')}</Text>
<Space wrap>
<Button
size='small'
type='primary'
icon={<IconCode size={14} />}
onClick={() => setParamOverrideEditorVisible(true)}
>
{t('可视化编辑')}
</Button>
<Button
size='small'
onClick={() =>
applyParamOverrideTemplate('operations', 'fill')
}
>
{t('填充新模板')}
</Button>
<Button
size='small'
onClick={() =>
applyParamOverrideTemplate('legacy', 'fill')
}
>
{t('填充旧模板')}
</Button>
<Button
size='small'
type='tertiary'
onClick={clearParamOverride}
>
{t('清空')}
</Button>
</Space>
</div>
<Text type='tertiary' size='small'>
{t('此项可选,用于覆盖请求参数。不支持覆盖 stream 参数')}
</Text>
<div
className='mt-2 rounded-xl p-3'
style={{
backgroundColor: 'var(--semi-color-fill-0)',
border: '1px solid var(--semi-color-fill-2)',
}}
>
<div className='flex items-center justify-between mb-2'>
<Tag color={paramOverrideMeta.tagColor}>
@@ -3374,17 +3557,17 @@ const EditChannelModal = (props) => {
</Tag>
<Space spacing={8}>
<Button
size='small'
icon={<IconCopy />}
type='tertiary'
onClick={copyParamOverrideJson}
>
{t('复制')}
</Button>
<Button
size='small'
type='tertiary'
onClick={() => setParamOverrideEditorVisible(true)}
>
{t('编辑')}
</Button>
@@ -3397,82 +3580,81 @@ const EditChannelModal = (props) => {
</div>
<Form.TextArea
field='header_override'
label={t('请求头覆盖')}
placeholder={
t('此项可选,用于覆盖请求头参数') +
'\n' +
t('格式示例:') +
'\n{\n "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36 Edg/139.0.0.0",\n "Authorization": "Bearer {api_key}"\n}'
}
autosize
onChange={(value) =>
handleInputChange('header_override', value)
}
extraText={
<div className='flex flex-col gap-1'>
<div className='flex gap-2 flex-wrap items-center'>
<Text
className='!text-semi-color-primary cursor-pointer'
onClick={() =>
handleInputChange(
'header_override',
JSON.stringify(
{
'*': true,
're:^X-Trace-.*$': true,
'X-Foo': '{client_header:X-Foo}',
Authorization: 'Bearer {api_key}',
'User-Agent':
'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/139.0.0.0 Safari/537.36 Edg/139.0.0.0',
},
null,
2,
),
)
}
>
{t('填入模板')}
</Text>
<Text
className='!text-semi-color-primary cursor-pointer'
onClick={() =>
handleInputChange(
'header_override',
JSON.stringify(
{
'*': true,
},
null,
2,
),
)
}
>
{t('填入透传模版')}
</Text>
<Text
className='!text-semi-color-primary cursor-pointer'
onClick={() => formatJsonField('header_override')}
>
{t('格式化')}
</Text>
</div>
<div>
<Text type='tertiary' size='small'>
{t('支持变量:')}
</Text>
<div className='text-xs text-tertiary ml-2'>
<div>
{t('渠道密钥')}: {'{api_key}'}
</div>
</div>
</div>
</div>
}
showClear
/>
<JSONEditor
key={`status_code_mapping-${isEdit ? channelId : 'new'}`}
field='status_code_mapping'


@@ -276,6 +276,7 @@ const LEGACY_TEMPLATE = {
const OPERATION_TEMPLATE = {
operations: [
{
description: 'Set default temperature for openai/* models.',
path: 'temperature',
mode: 'set',
value: 0.7,
@@ -294,8 +295,9 @@ const OPERATION_TEMPLATE = {
const HEADER_PASSTHROUGH_TEMPLATE = {
operations: [
{
description: 'Pass through X-Request-Id header to upstream.',
mode: 'pass_headers',
value: ['X-Request-Id'],
keep_origin: true,
},
],
@@ -304,6 +306,8 @@ const HEADER_PASSTHROUGH_TEMPLATE = {
const GEMINI_IMAGE_4K_TEMPLATE = {
operations: [
{
description:
'Set imageSize to 4K when model contains gemini/image and ends with 4k.',
mode: 'set',
path: 'generationConfig.imageConfig.imageSize',
value: '4K',
@@ -311,7 +315,17 @@ const GEMINI_IMAGE_4K_TEMPLATE = {
{
path: 'original_model',
mode: 'contains',
value: 'gemini',
},
{
path: 'original_model',
mode: 'contains',
value: 'image',
},
{
path: 'original_model',
mode: 'suffix',
value: '4k',
},
],
logic: 'AND',
@@ -319,11 +333,13 @@ const GEMINI_IMAGE_4K_TEMPLATE = {
],
};
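Read literally, the Gemini 4K template above fires only when all three conditions on `original_model` hold under `AND` logic. A minimal sketch of how such `contains`/`suffix` conditions could combine (the `matchCondition`/`conditionsPass` names are ours for illustration; the real evaluation happens server-side and may differ):

```javascript
// Hypothetical condition evaluator for the template's `contains`/`suffix`
// modes with AND/OR logic; not the backend's actual implementation.
const matchCondition = (ctx, { path, mode, value }) => {
  const actual = String(ctx[path] ?? '');
  if (mode === 'contains') return actual.includes(value);
  if (mode === 'suffix') return actual.endsWith(value);
  return false;
};

const conditionsPass = (ctx, conditions, logic = 'AND') =>
  logic === 'AND'
    ? conditions.every((c) => matchCondition(ctx, c))
    : conditions.some((c) => matchCondition(ctx, c));

// The three conditions from the template above.
const conditions = [
  { path: 'original_model', mode: 'contains', value: 'gemini' },
  { path: 'original_model', mode: 'contains', value: 'image' },
  { path: 'original_model', mode: 'suffix', value: '4k' },
];

// 'gemini-3-pro-image-preview-4k' satisfies all three, so the
// imageSize=4K override would apply under this sketch.
conditionsPass({ original_model: 'gemini-3-pro-image-preview-4k' }, conditions);
```

Under this reading, any Gemini image model ending in `4k` matches, which is why the template no longer hard-codes `gemini-3-pro-image-preview`.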
const AWS_BEDROCK_ANTHROPIC_COMPAT_TEMPLATE = {
operations: [
{
description: 'Normalize anthropic-beta header tokens for Bedrock compatibility.',
mode: 'set_header',
path: 'anthropic-beta',
// https://github.com/BerriAI/litellm/blob/main/litellm/anthropic_beta_headers_config.json
value: {
'advanced-tool-use-2025-11-20': 'tool-search-tool-2025-10-19',
bash_20241022: null,
@@ -355,6 +371,11 @@ const AWS_BEDROCK_ANTHROPIC_BETA_OVERRIDE_TEMPLATE = {
'web-search-2025-03-05': null,
},
},
{
description: 'Remove all tools[*].custom.input_examples before upstream relay.',
mode: 'delete',
path: 'tools.*.custom.input_examples',
},
],
};
@@ -378,7 +399,7 @@ const TEMPLATE_PRESET_CONFIG = {
},
pass_headers_auth: {
group: 'scenario',
label: '请求头透传(X-Request-Id)',
kind: 'operations',
payload: HEADER_PASSTHROUGH_TEMPLATE,
},
@@ -402,9 +423,9 @@ const TEMPLATE_PRESET_CONFIG = {
},
aws_bedrock_anthropic_beta_override: {
group: 'scenario',
label: 'AWS Bedrock Claude 兼容模板',
kind: 'operations',
payload: AWS_BEDROCK_ANTHROPIC_COMPAT_TEMPLATE,
},
};
@@ -764,6 +785,7 @@ const createDefaultCondition = () => normalizeCondition({});
const normalizeOperation = (operation = {}) => ({
id: nextLocalId(),
description: typeof operation.description === 'string' ? operation.description : '',
path: typeof operation.path === 'string' ? operation.path : '',
mode: OPERATION_MODE_VALUES.has(operation.mode) ? operation.mode : 'set',
value_text: toValueText(operation.value),
@@ -1086,6 +1108,7 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
if (!keyword) return operations;
return operations.filter((operation) => {
const searchableText = [
operation.description,
operation.mode,
operation.path,
operation.from,
@@ -1151,10 +1174,14 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
const payloadOps = filteredOps.map((operation) => {
const mode = operation.mode || 'set';
const meta = MODE_META[mode] || MODE_META.set;
const descriptionValue = String(operation.description || '').trim();
const pathValue = operation.path.trim();
const fromValue = operation.from.trim();
const toValue = operation.to.trim();
const payload = { mode };
if (descriptionValue) {
payload.description = descriptionValue;
}
if (meta.path) {
payload.path = pathValue;
}
@@ -1563,6 +1590,7 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
if (index < 0) return prev;
const source = prev[index];
const cloned = normalizeOperation({
description: source.description,
path: source.path,
mode: source.mode,
value: parseLooseValue(source.value_text),
@@ -1812,14 +1840,6 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
{t('重置')}
</Button>
</Space>
</div>
</Card>
@@ -1891,7 +1911,7 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
<Input
value={operationSearch}
placeholder={t('搜索规则(描述 / 类型 / 路径 / 来源 / 目标)')}
onChange={(nextValue) =>
setOperationSearch(nextValue || '')
}
@@ -1958,6 +1978,23 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
>
{getOperationSummary(operation, index)}
</Text>
{String(operation.description || '').trim() ? (
<Text
type='tertiary'
size='small'
className='block mt-1'
style={{
lineHeight: 1.5,
wordBreak: 'break-word',
overflow: 'hidden',
display: '-webkit-box',
WebkitLineClamp: 2,
WebkitBoxOrient: 'vertical',
}}
>
{operation.description}
</Text>
) : null}
</div>
<Tag size='small' color='grey'>
{(operation.conditions || []).length}
@@ -2035,6 +2072,7 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
type='danger'
theme='borderless'
icon={<IconDelete />}
aria-label={t('删除规则')}
onClick={() =>
removeOperation(selectedOperation.id)
}
@@ -2085,6 +2123,25 @@ const ParamOverrideEditorModal = ({ visible, value, onSave, onCancel }) => {
>
{MODE_DESCRIPTIONS[mode] || ''}
</Text>
<div className='mt-2'>
<Text type='tertiary' size='small'>
{t('规则描述(可选)')}
</Text>
<Input
value={selectedOperation.description || ''}
placeholder={t('例如:清理工具参数,避免上游校验错误')}
onChange={(nextValue) =>
updateOperation(selectedOperation.id, {
description: nextValue || '',
})
}
maxLength={180}
showClear
/>
<Text type='tertiary' size='small' className='mt-1 block'>
{`${String(selectedOperation.description || '').length}/180`}
</Text>
</div>
{meta.value ? (
mode === 'return_error' && returnErrorDraft ? (


@@ -76,7 +76,6 @@ const PricingEndpointTypes = ({
value: 'all',
label: t('全部端点'),
tagCount: getEndpointTypeCount('all'),
},
...availableEndpointTypes.map((endpointType) => {
const count = getEndpointTypeCount(endpointType);
@@ -84,7 +83,6 @@ const PricingEndpointTypes = ({
value: endpointType,
label: getEndpointTypeLabel(endpointType),
tagCount: count,
};
}),
];
@@ -96,6 +94,7 @@ const PricingEndpointTypes = ({
activeValue={filterEndpointType}
onChange={setFilterEndpointType}
loading={loading}
variant='green'
t={t}
/>
);


@@ -52,20 +52,19 @@ const PricingGroups = ({
.length;
let ratioDisplay = '';
if (g === 'all') {
// ratioDisplay = t('全部');
} else {
const ratio = groupRatio[g];
if (ratio !== undefined && ratio !== null) {
ratioDisplay = `${ratio}x`;
} else {
ratioDisplay = '1x';
}
}
return {
value: g,
label: g === 'all' ? t('全部分组') : g,
tagCount: ratioDisplay,
};
});
@@ -76,6 +75,7 @@ const PricingGroups = ({
activeValue={filterGroup}
onChange={setFilterGroup}
loading={loading}
variant='teal'
t={t}
/>
);


@@ -52,6 +52,7 @@ const PricingQuotaTypes = ({
activeValue={filterQuotaType}
onChange={setFilterQuotaType}
loading={loading}
variant='amber'
t={t}
/>
);


@@ -78,7 +78,6 @@ const PricingTags = ({
value: 'all',
label: t('全部标签'),
tagCount: getTagCount('all'),
},
];
@@ -88,7 +87,6 @@ const PricingTags = ({
value: tag,
label: tag,
tagCount: count,
});
});
@@ -102,6 +100,7 @@ const PricingTags = ({
activeValue={filterTag}
onChange={setFilterTag}
loading={loading}
variant='rose'
t={t}
/>
);


@@ -83,7 +83,6 @@ const PricingVendors = ({
value: 'all',
label: t('全部供应商'),
tagCount: getVendorCount('all'),
},
];
@@ -96,7 +95,6 @@ const PricingVendors = ({
label: vendor,
icon: icon ? getLobeHubIcon(icon, 16) : null,
tagCount: count,
});
});
@@ -107,7 +105,6 @@ const PricingVendors = ({
value: 'unknown',
label: t('未知供应商'),
tagCount: count,
});
}
@@ -121,6 +118,7 @@ const PricingVendors = ({
activeValue={filterVendor}
onChange={setFilterVendor}
loading={loading}
variant='violet'
t={t}
/>
);


@@ -113,15 +113,6 @@ const PricingSidebar = ({
t={t}
/>
<PricingGroups
filterGroup={filterGroup}
setFilterGroup={handleGroupClick}
@@ -140,6 +131,15 @@ const PricingSidebar = ({
t={t}
/>
<PricingTags
filterTag={filterTag}
setFilterTag={setFilterTag}
models={tagModels}
allModels={categoryProps.models}
loading={loading}
t={t}
/>
<PricingEndpointTypes
filterEndpointType={filterEndpointType}
setFilterEndpointType={setFilterEndpointType}


@@ -96,15 +96,6 @@ const FilterModalContent = ({ sidebarProps, t }) => {
t={t}
/>
<PricingGroups
filterGroup={filterGroup}
setFilterGroup={setFilterGroup}
@@ -123,6 +114,15 @@ const FilterModalContent = ({ sidebarProps, t }) => {
t={t}
/>
<PricingTags
filterTag={filterTag}
setFilterTag={setFilterTag}
models={tagModels}
allModels={categoryProps.models}
loading={loading}
t={t}
/>
<PricingEndpointTypes
filterEndpointType={filterEndpointType}
setFilterEndpointType={setFilterEndpointType}


@@ -191,4 +191,9 @@ export const CHANNEL_OPTIONS = [
},
];
// Channel types that support upstream model list fetching in UI.
export const MODEL_FETCHABLE_CHANNEL_TYPES = new Set([
1, 4, 14, 34, 17, 26, 27, 24, 47, 25, 20, 23, 31, 40, 42, 48, 43,
]);
export const MODEL_TABLE_PAGE_SIZE = 10;
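As the comment notes, this set gates the model-fetch UI; the check reduces to plain `Set` membership. A standalone restatement (the `canFetchUpstreamModels` helper name is ours — the components inline `MODEL_FETCHABLE_CHANNEL_TYPES.has(inputs.type)` directly):

```javascript
// Channel types whose upstream model list the UI can fetch,
// copied verbatim from the constant above.
const MODEL_FETCHABLE_CHANNEL_TYPES = new Set([
  1, 4, 14, 34, 17, 26, 27, 24, 47, 25, 20, 23, 31, 40, 42, 48, 43,
]);

// Hypothetical helper illustrating the guard used throughout the editor.
const canFetchUpstreamModels = (channelType) =>
  MODEL_FETCHABLE_CHANNEL_TYPES.has(channelType);
```

Centralizing the list in one exported `Set` is what lets the "fetch models" button, the upstream-update switches, and the key-value suffix renderer all share a single source of truth.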


@@ -0,0 +1,56 @@
/*
Copyright (C) 2025 QuantumNous
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
For commercial licensing, please contact support@quantumnous.com
*/
export const normalizeModelList = (models = []) =>
Array.from(
new Set(
(models || []).map((model) => String(model || '').trim()).filter(Boolean),
),
);
export const parseUpstreamUpdateMeta = (settings) => {
let parsed = null;
if (settings && typeof settings === 'object') {
parsed = settings;
} else if (typeof settings === 'string') {
try {
parsed = JSON.parse(settings);
} catch (error) {
parsed = null;
}
}
if (!parsed || typeof parsed !== 'object') {
return {
enabled: false,
pendingAddModels: [],
pendingRemoveModels: [],
};
}
return {
enabled: parsed.upstream_model_update_check_enabled === true,
pendingAddModels: normalizeModelList(
parsed.upstream_model_update_last_detected_models,
),
pendingRemoveModels: normalizeModelList(
parsed.upstream_model_update_last_removed_models,
),
};
};
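The two helpers compose: `parseUpstreamUpdateMeta` tolerates object, JSON-string, and malformed input, funneling model arrays through `normalizeModelList` to trim, drop empties, and dedupe. A condensed standalone restatement (`export` keywords dropped so the sketch runs as-is):

```javascript
// Trim, drop empty entries, and deduplicate a model list.
const normalizeModelList = (models = []) =>
  Array.from(
    new Set(
      (models || []).map((m) => String(m || '').trim()).filter(Boolean),
    ),
  );

// Parse channel settings (object or JSON string) into upstream-update meta,
// degrading to a disabled shape on malformed input.
const parseUpstreamUpdateMeta = (settings) => {
  let parsed = null;
  if (settings && typeof settings === 'object') parsed = settings;
  else if (typeof settings === 'string') {
    try { parsed = JSON.parse(settings); } catch (e) { parsed = null; }
  }
  if (!parsed || typeof parsed !== 'object') {
    return { enabled: false, pendingAddModels: [], pendingRemoveModels: [] };
  }
  return {
    enabled: parsed.upstream_model_update_check_enabled === true,
    pendingAddModels: normalizeModelList(parsed.upstream_model_update_last_detected_models),
    pendingRemoveModels: normalizeModelList(parsed.upstream_model_update_last_removed_models),
  };
};

// Duplicates and surrounding whitespace collapse:
normalizeModelList([' gpt-4o ', 'gpt-4o', '', 'o3']); // → ['gpt-4o', 'o3']
```

Defaulting to the disabled shape on bad input means callers like the channels table can attach `upstreamUpdateMeta` to every row without guarding against unparsable `settings`.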


@@ -0,0 +1,309 @@
/*
Copyright (C) 2025 QuantumNous
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as
published by the Free Software Foundation, either version 3 of the
License, or (at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
For commercial licensing, please contact support@quantumnous.com
*/
import { useRef, useState } from 'react';
import { API, showError, showInfo, showSuccess } from '../../helpers';
import { normalizeModelList } from './upstreamUpdateUtils';
const getManualIgnoredModelCountFromSettings = (settings) => {
let parsed = null;
if (settings && typeof settings === 'object') {
parsed = settings;
} else if (typeof settings === 'string') {
try {
parsed = JSON.parse(settings);
} catch (error) {
parsed = null;
}
}
if (!parsed || typeof parsed !== 'object') {
return 0;
}
return normalizeModelList(parsed.upstream_model_update_ignored_models).length;
};
export const useChannelUpstreamUpdates = ({ t, refresh }) => {
const [showUpstreamUpdateModal, setShowUpstreamUpdateModal] = useState(false);
const [upstreamUpdateChannel, setUpstreamUpdateChannel] = useState(null);
const [upstreamUpdateAddModels, setUpstreamUpdateAddModels] = useState([]);
const [upstreamUpdateRemoveModels, setUpstreamUpdateRemoveModels] = useState(
[],
);
const [upstreamUpdatePreferredTab, setUpstreamUpdatePreferredTab] =
useState('add');
const [upstreamApplyLoading, setUpstreamApplyLoading] = useState(false);
const [detectAllUpstreamUpdatesLoading, setDetectAllUpstreamUpdatesLoading] =
useState(false);
const [applyAllUpstreamUpdatesLoading, setApplyAllUpstreamUpdatesLoading] =
useState(false);
const applyUpstreamUpdatesInFlightRef = useRef(false);
const detectChannelUpstreamUpdatesInFlightRef = useRef(false);
const detectAllUpstreamUpdatesInFlightRef = useRef(false);
const applyAllUpstreamUpdatesInFlightRef = useRef(false);
const openUpstreamUpdateModal = (
record,
pendingAddModels = [],
pendingRemoveModels = [],
preferredTab = 'add',
) => {
const normalizedAddModels = normalizeModelList(pendingAddModels);
const normalizedRemoveModels = normalizeModelList(pendingRemoveModels);
if (
!record?.id ||
(normalizedAddModels.length === 0 && normalizedRemoveModels.length === 0)
) {
showInfo(t('该渠道暂无可处理的上游模型更新'));
return;
}
setUpstreamUpdateChannel(record);
setUpstreamUpdateAddModels(normalizedAddModels);
setUpstreamUpdateRemoveModels(normalizedRemoveModels);
const normalizedPreferredTab = preferredTab === 'remove' ? 'remove' : 'add';
setUpstreamUpdatePreferredTab(normalizedPreferredTab);
setShowUpstreamUpdateModal(true);
};
const closeUpstreamUpdateModal = () => {
setShowUpstreamUpdateModal(false);
setUpstreamUpdateChannel(null);
setUpstreamUpdateAddModels([]);
setUpstreamUpdateRemoveModels([]);
setUpstreamUpdatePreferredTab('add');
};
const applyUpstreamUpdates = async ({
addModels: selectedAddModels = [],
removeModels: selectedRemoveModels = [],
} = {}) => {
if (applyUpstreamUpdatesInFlightRef.current) {
showInfo(t('正在处理,请稍候'));
return;
}
if (!upstreamUpdateChannel?.id) {
closeUpstreamUpdateModal();
return;
}
applyUpstreamUpdatesInFlightRef.current = true;
setUpstreamApplyLoading(true);
try {
const normalizedSelectedAddModels = normalizeModelList(selectedAddModels);
const normalizedSelectedRemoveModels =
normalizeModelList(selectedRemoveModels);
const selectedAddSet = new Set(normalizedSelectedAddModels);
const ignoreModels = upstreamUpdateAddModels.filter(
(model) => !selectedAddSet.has(model),
);
const res = await API.post(
'/api/channel/upstream_updates/apply',
{
id: upstreamUpdateChannel.id,
add_models: normalizedSelectedAddModels,
ignore_models: ignoreModels,
remove_models: normalizedSelectedRemoveModels,
},
{ skipErrorHandler: true },
);
const { success, message, data } = res.data || {};
if (!success) {
showError(message || t('操作失败'));
return;
}
const addedCount = data?.added_models?.length || 0;
const removedCount = data?.removed_models?.length || 0;
const totalIgnoredCount = getManualIgnoredModelCountFromSettings(
data?.settings,
);
const ignoredCount = normalizeModelList(ignoreModels).length;
showSuccess(
t(
'已处理上游模型更新:加入 {{added}} 个,删除 {{removed}} 个,本次忽略 {{ignored}} 个,当前已忽略模型 {{totalIgnored}} 个',
{
added: addedCount,
removed: removedCount,
ignored: ignoredCount,
totalIgnored: totalIgnoredCount,
},
),
);
closeUpstreamUpdateModal();
await refresh();
} catch (error) {
showError(
error?.response?.data?.message || error?.message || t('操作失败'),
);
} finally {
applyUpstreamUpdatesInFlightRef.current = false;
setUpstreamApplyLoading(false);
}
};
const applyAllUpstreamUpdates = async () => {
if (applyAllUpstreamUpdatesInFlightRef.current) {
showInfo(t('正在批量处理,请稍候'));
return;
}
applyAllUpstreamUpdatesInFlightRef.current = true;
setApplyAllUpstreamUpdatesLoading(true);
try {
const res = await API.post(
'/api/channel/upstream_updates/apply_all',
{},
{ skipErrorHandler: true },
);
const { success, message, data } = res.data || {};
if (!success) {
showError(message || t('批量处理失败'));
return;
}
const channelCount = data?.processed_channels || 0;
const addedCount = data?.added_models || 0;
const removedCount = data?.removed_models || 0;
const failedCount = (data?.failed_channel_ids || []).length;
showSuccess(
t(
'已批量处理上游模型更新:渠道 {{channels}} 个,加入 {{added}} 个,删除 {{removed}} 个,失败 {{fails}} 个',
{
channels: channelCount,
added: addedCount,
removed: removedCount,
fails: failedCount,
},
),
);
await refresh();
} catch (error) {
showError(
error?.response?.data?.message || error?.message || t('批量处理失败'),
);
} finally {
applyAllUpstreamUpdatesInFlightRef.current = false;
setApplyAllUpstreamUpdatesLoading(false);
}
};
const detectChannelUpstreamUpdates = async (channel) => {
if (detectChannelUpstreamUpdatesInFlightRef.current) {
showInfo(t('正在检测,请稍候'));
return;
}
if (!channel?.id) {
return;
}
detectChannelUpstreamUpdatesInFlightRef.current = true;
try {
const res = await API.post(
'/api/channel/upstream_updates/detect',
{
id: channel.id,
},
{ skipErrorHandler: true },
);
const { success, message, data } = res.data || {};
if (!success) {
showError(message || t('检测失败'));
return;
}
const addCount = data?.add_models?.length || 0;
const removeCount = data?.remove_models?.length || 0;
showSuccess(
t('检测完成:新增 {{add}} 个,删除 {{remove}} 个', {
add: addCount,
remove: removeCount,
}),
);
await refresh();
} catch (error) {
showError(
error?.response?.data?.message || error?.message || t('检测失败'),
);
} finally {
detectChannelUpstreamUpdatesInFlightRef.current = false;
}
};
const detectAllUpstreamUpdates = async () => {
if (detectAllUpstreamUpdatesInFlightRef.current) {
showInfo(t('正在批量检测,请稍候'));
return;
}
detectAllUpstreamUpdatesInFlightRef.current = true;
setDetectAllUpstreamUpdatesLoading(true);
try {
const res = await API.post(
'/api/channel/upstream_updates/detect_all',
{},
{ skipErrorHandler: true },
);
const { success, message, data } = res.data || {};
if (!success) {
showError(message || t('批量检测失败'));
return;
}
const channelCount = data?.processed_channels || 0;
const addCount = data?.detected_add_models || 0;
const removeCount = data?.detected_remove_models || 0;
const failedCount = (data?.failed_channel_ids || []).length;
showSuccess(
t(
'批量检测完成:渠道 {{channels}} 个,新增 {{add}} 个,删除 {{remove}} 个,失败 {{fails}} 个',
{
channels: channelCount,
add: addCount,
remove: removeCount,
fails: failedCount,
},
),
);
await refresh();
} catch (error) {
showError(
error?.response?.data?.message || error?.message || t('批量检测失败'),
);
} finally {
detectAllUpstreamUpdatesInFlightRef.current = false;
setDetectAllUpstreamUpdatesLoading(false);
}
};
return {
showUpstreamUpdateModal,
setShowUpstreamUpdateModal,
upstreamUpdateChannel,
upstreamUpdateAddModels,
upstreamUpdateRemoveModels,
upstreamUpdatePreferredTab,
upstreamApplyLoading,
detectAllUpstreamUpdatesLoading,
applyAllUpstreamUpdatesLoading,
openUpstreamUpdateModal,
closeUpstreamUpdateModal,
applyUpstreamUpdates,
applyAllUpstreamUpdates,
detectChannelUpstreamUpdates,
detectAllUpstreamUpdates,
};
};


@@ -35,6 +35,8 @@ import {
} from '../../constants';
import { useIsMobile } from '../common/useIsMobile';
import { useTableCompactMode } from '../common/useTableCompactMode';
import { useChannelUpstreamUpdates } from './useChannelUpstreamUpdates';
import { parseUpstreamUpdateMeta } from './upstreamUpdateUtils';
import { Modal, Button } from '@douyinfe/semi-ui';
import { openCodexUsageModal } from '../../components/table/channels/modals/CodexUsageModal';
@@ -235,6 +237,9 @@ export const useChannelsData = () => {
let channelTags = {};
for (let i = 0; i < channels.length; i++) {
channels[i].upstreamUpdateMeta = parseUpstreamUpdateMeta(
channels[i].settings,
);
channels[i].key = '' + channels[i].id;
if (!enableTagMode) {
channelDates.push(channels[i]);
@@ -432,6 +437,8 @@ export const useChannelsData = () => {
}
};
const upstreamUpdates = useChannelUpstreamUpdates({ t, refresh });
// Channel management
const manageChannel = async (id, action, record, value) => {
let data = { id };
@@ -1194,6 +1201,7 @@ export const useChannelsData = () => {
setShowMultiKeyManageModal,
currentMultiKeyChannel,
setCurrentMultiKeyChannel,
...upstreamUpdates,
// Form
formApi,

96 web/src/index.css vendored

@@ -344,6 +344,102 @@ code {
text-overflow: ellipsis;
}
/* Badge for count/multiplier in filter buttons */
.sbg-badge {
display: inline-flex;
align-items: center;
justify-content: center;
flex-shrink: 0;
min-width: 18px;
height: 18px;
padding: 0 6px;
border-radius: 9px;
font-size: 11px;
font-weight: 600;
font-variant-numeric: tabular-nums;
line-height: 1;
background-color: var(--semi-color-fill-0);
color: var(--semi-color-text-2);
transition: background-color 0.15s ease, color 0.15s ease;
}
.sbg-badge-active {
background-color: var(--semi-color-primary-light-active);
color: var(--semi-color-primary);
}
/* ---- SelectableButtonGroup color variants ---- */
.sbg-variant-violet {
--semi-color-primary: #6d28d9;
--semi-color-primary-light-default: rgba(124, 58, 237, 0.08);
--semi-color-primary-light-hover: rgba(124, 58, 237, 0.15);
--semi-color-primary-light-active: rgba(124, 58, 237, 0.22);
}
.sbg-variant-teal {
--semi-color-primary: #0f766e;
--semi-color-primary-light-default: rgba(20, 184, 166, 0.08);
--semi-color-primary-light-hover: rgba(20, 184, 166, 0.15);
--semi-color-primary-light-active: rgba(20, 184, 166, 0.22);
}
.sbg-variant-amber {
--semi-color-primary: #b45309;
--semi-color-primary-light-default: rgba(245, 158, 11, 0.08);
--semi-color-primary-light-hover: rgba(245, 158, 11, 0.15);
--semi-color-primary-light-active: rgba(245, 158, 11, 0.22);
}
.sbg-variant-rose {
--semi-color-primary: #be123c;
--semi-color-primary-light-default: rgba(244, 63, 94, 0.08);
--semi-color-primary-light-hover: rgba(244, 63, 94, 0.15);
--semi-color-primary-light-active: rgba(244, 63, 94, 0.22);
}
.sbg-variant-green {
--semi-color-primary: #047857;
--semi-color-primary-light-default: rgba(16, 185, 129, 0.08);
--semi-color-primary-light-hover: rgba(16, 185, 129, 0.15);
--semi-color-primary-light-active: rgba(16, 185, 129, 0.22);
}
/* Dark mode: lighter text, slightly stronger backgrounds */
html.dark .sbg-variant-violet {
--semi-color-primary: #a78bfa;
--semi-color-primary-light-default: rgba(139, 92, 246, 0.14);
--semi-color-primary-light-hover: rgba(139, 92, 246, 0.22);
--semi-color-primary-light-active: rgba(139, 92, 246, 0.3);
}
html.dark .sbg-variant-teal {
--semi-color-primary: #2dd4bf;
--semi-color-primary-light-default: rgba(45, 212, 191, 0.14);
--semi-color-primary-light-hover: rgba(45, 212, 191, 0.22);
--semi-color-primary-light-active: rgba(45, 212, 191, 0.3);
}
html.dark .sbg-variant-amber {
--semi-color-primary: #fbbf24;
--semi-color-primary-light-default: rgba(251, 191, 36, 0.14);
--semi-color-primary-light-hover: rgba(251, 191, 36, 0.22);
--semi-color-primary-light-active: rgba(251, 191, 36, 0.3);
}
html.dark .sbg-variant-rose {
--semi-color-primary: #fb7185;
--semi-color-primary-light-default: rgba(251, 113, 133, 0.14);
--semi-color-primary-light-hover: rgba(251, 113, 133, 0.22);
--semi-color-primary-light-active: rgba(251, 113, 133, 0.3);
}
html.dark .sbg-variant-green {
--semi-color-primary: #34d399;
--semi-color-primary-light-default: rgba(52, 211, 153, 0.14);
--semi-color-primary-light-hover: rgba(52, 211, 153, 0.22);
--semi-color-primary-light-active: rgba(52, 211, 153, 0.3);
}
/* Tabs组件样式 */
.semi-tabs-content {
padding: 0 !important;


@@ -68,6 +68,7 @@ export default function SettingsChats(props) {
const BUILTIN_TEMPLATES = [
{ name: 'Cherry Studio', url: 'cherrystudio://providers/api-keys?v=1&data={cherryConfig}' },
{ name: 'AionUI', url: 'aionui://provider/add?v=1&data={aionuiConfig}' },
{ name: '流畅阅读', url: 'fluentread' },
{ name: 'CC Switch', url: 'ccswitch' },
{ name: 'Lobe Chat', url: 'https://chat-preview.lobehub.com/?settings={"keyVaults":{"openai":{"apiKey":"{key}","baseURL":"{address}/v1"}}}' },