mirror of https://github.com/QuantumNous/new-api.git
synced 2026-03-31 07:45:44 +00:00

Compare commits

1 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 13ff448049 | |

260 LOGGING.md Normal file

@@ -0,0 +1,260 @@
# Logging System

This project uses the Go standard library's `log/slog` for structured logging.

## 📋 Features

### 1. Standard file layout

- **Current log file**: `oneapi.log`, the file written in real time
- **Archived log files**: `oneapi.2024-01-02-153045.log`, historical logs produced by rotation

### 2. Automatic log rotation

Log files rotate automatically in these situations:

- **By size**: when the log file exceeds the configured size (default 100MB)
- **Date check at startup**: if the existing log file was created on an earlier date, it is rotated when the program starts
- **Automatic cleanup**: only the most recent N log files are kept (default 7)

### 3. Structured logging

Every log entry carries structured fields:

```
time=2024-01-02T15:30:45 level=INFO msg="user logged in" request_id=abc123 user_id=1001
```

### 4. Multiple output formats

- **Text format** (default): human-readable text
- **JSON format**: easily parsed by log analysis tools

### 5. Flexible log levels

Four log levels are supported:

- `DEBUG`: debugging details
- `INFO`: general information
- `WARN`: warnings
- `ERROR`: errors
## ⚙️ Configuration

### Environment variables

Note: `--log-dir` is a command-line flag rather than an environment variable; it is listed here because the rest of the configuration only takes effect once a log directory is set.

```bash
# Log directory (required; without it, logs go only to the console)
--log-dir=./logs

# Log level (optional, default: INFO, except in DEBUG mode)
export LOG_LEVEL=DEBUG  # one of: DEBUG, INFO, WARN, ERROR

# Log format (optional, default: text)
export LOG_FORMAT=json  # one of: text, json

# Maximum size of a single log file (optional, default: 100, unit: MB)
export LOG_MAX_SIZE_MB=200

# Number of log files to keep (optional, default: 7)
export LOG_MAX_FILES=14

# Enable debug mode (forces the log level to DEBUG)
export DEBUG=true
```

### Command-line flags

```bash
# Specify the log directory at startup
./new-api --log-dir=./logs

# Without a log directory, logs go only to the console
./new-api
```
## 📝 Usage Examples

### Basic usage

```go
import (
    "context"

    "github.com/QuantumNous/new-api/logger"
)

// Info-level log
logger.LogInfo(ctx, "user registered successfully")

// Warning-level log
logger.LogWarn(ctx, "API rate limit approaching")

// Error-level log
logger.LogError(ctx, "failed to connect to database")

// Debug-level log (emitted only in DEBUG mode)
logger.LogDebug(ctx, "processing request with params: %v", params)

// System logs (no context)
logger.LogSystemInfo("application started")
logger.LogSystemError("critical system error")
```

### Example output

**Text format** (human-readable):
```
[INFO] 2024/01/02 - 15:30:45 | SYSTEM | application started
[INFO] 2024/01/02 - 15:30:46 | abc123 | user registered successfully
[WARN] 2024/01/02 - 15:30:47 | def456 | API rate limit approaching | remaining=10, limit=100
[ERROR] 2024/01/02 - 15:30:48 | ghi789 | failed to connect to database | error="connection timeout"
```

Format: `[LEVEL] time | request ID or component | message | extra attributes (if any)`
**JSON format**:
```json
{"time":"2024-01-02 15:30:45","level":"INFO","msg":"application started","request_id":"SYSTEM"}
{"time":"2024-01-02 15:30:46","level":"INFO","msg":"user registered successfully","request_id":"abc123"}
{"time":"2024-01-02 15:30:47","level":"WARN","msg":"API rate limit approaching","request_id":"def456"}
```

## 📂 Log File Layout

```
logs/
├── oneapi.log                    # current active log file
├── oneapi.2024-01-01-090000.log  # yesterday's log
├── oneapi.2024-01-01-150000.log  # yesterday afternoon (after a size-based rotation)
├── oneapi.2023-12-31-090000.log  # older log
└── ...                           # up to the configured number of archives
```

## 🔄 Log Rotation

### Rotation triggers

1. **Size check**: the file size is checked once every 1000 log writes
2. **Date check at startup**: on startup, if the log file's modification date is not today, it is rotated
3. **Automatic cleanup**: rotation deletes archives beyond the retention limit

> **Note**: the date is not re-checked while the process is running. If you need daily rotation, either:
> - restart the service daily with a scheduled task (e.g. cron), or
> - configure a smaller maximum file size so rotation happens by size
### Rotation procedure

1. When rotation is triggered, close the current log file
2. Rename `oneapi.log` to `oneapi.YYYY-MM-DD-HHmmss.log`
3. Create a new `oneapi.log`
4. Asynchronously delete archives beyond the retention limit
5. Record the rotation event in the new log file
## 🎯 Best Practices

### 1. Production

```bash
# Use INFO to avoid excessive debug output
export LOG_LEVEL=INFO

# Use JSON so log analysis tools can parse entries
export LOG_FORMAT=json

# Pick a suitable file size and retention count
export LOG_MAX_SIZE_MB=500
export LOG_MAX_FILES=30

# Specify the log directory
./new-api --log-dir=/var/log/oneapi
```

### 2. Development

```bash
# Use DEBUG to see detailed output
export DEBUG=true

# Use text format for readability
export LOG_FORMAT=text

# Smaller files and shorter retention
export LOG_MAX_SIZE_MB=50
export LOG_MAX_FILES=7

./new-api --log-dir=./logs
```
### 3. Containers

```bash
# Write only to stdout and let the container runtime manage logs
./new-api

# Or use JSON for log collection systems
export LOG_FORMAT=json
./new-api
```

## 🔍 Log Analysis

### Analyzing text logs with grep

```bash
# Find errors
grep '\[ERROR\]' logs/oneapi.log

# Find every log line for a specific request
grep 'abc123' logs/*.log

# Follow recent warnings and errors
tail -f logs/oneapi.log | grep -E '\[(WARN|ERROR)\]'

# Find lines containing a keyword
grep 'database' logs/oneapi.log

# Today's errors
grep "\[ERROR\] $(date +%Y/%m/%d)" logs/oneapi.log
```
### Analyzing JSON logs with jq

```bash
# Extract all errors
jq 'select(.level=="ERROR")' logs/oneapi.log

# Count entries per level
jq -r '.level' logs/oneapi.log | sort | uniq -c

# Entries within a time range
jq 'select(.time >= "2024-01-02 15:00:00" and .time <= "2024-01-02 16:00:00")' logs/oneapi.log
```

## 📊 Performance

1. **Asynchronous rotation**: rotation runs in a background goroutine and does not block the caller
2. **Batched rotation checks**: the rotation condition is checked once per 1000 writes, reducing I/O overhead
3. **Read-write lock**: a `sync.RWMutex` guards the logger for better concurrency
4. **Low allocation**: `slog` avoids heap allocations in most common cases
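Item 2 above (check only once every N writes) can be sketched with a counter. This sketch uses an atomic counter so the idea is safe to demonstrate under concurrency; the actual code in `logger/logger.go` guards its counter differently:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

const checkRotateInterval = 1000 // run the size check once per 1000 writes

var writeCount atomic.Int64

// afterWrite is called once per log write; it returns true only on the
// writes where the (relatively expensive) rotation check should run.
func afterWrite() bool {
	return writeCount.Add(1)%checkRotateInterval == 0
}

func main() {
	var checks int64
	var wg sync.WaitGroup
	// 10 goroutines x 500 writes = 5000 writes total.
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < 500; j++ {
				if afterWrite() {
					atomic.AddInt64(&checks, 1)
				}
			}
		}()
	}
	wg.Wait()
	fmt.Println(checks) // 5000 writes trigger the check 5 times
}
```

Amortizing one `Stat` call over 1000 writes keeps the per-write cost at a single counter increment.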
## 🚨 Troubleshooting

### Log file not created

- Check that the log directory exists and is writable
- Confirm `--log-dir` was passed at startup

### Too many log files

- Adjust the `LOG_MAX_FILES` environment variable
- Manually delete archives you no longer need

### Wrong log level

- Check that `LOG_LEVEL` is set correctly
- Check the value of `DEBUG` (it overrides LOG_LEVEL)

## 📖 Related Documentation

- [Go slog documentation](https://pkg.go.dev/log/slog)
- [Structured logging best practices](https://go.dev/blog/slog)
@@ -3,7 +3,7 @@ package common
 import (
 	"flag"
 	"fmt"
-	"log"
+	"log/slog"
 	"os"
 	"path/filepath"
 	"strconv"
@@ -43,9 +43,10 @@ func InitEnv() {
 	if os.Getenv("SESSION_SECRET") != "" {
 		ss := os.Getenv("SESSION_SECRET")
 		if ss == "random_string" {
-			log.Println("WARNING: SESSION_SECRET is set to the default value 'random_string', please change it to a random string.")
-			log.Println("警告:SESSION_SECRET被设置为默认值'random_string',请修改为随机字符串。")
-			log.Fatal("Please set SESSION_SECRET to a random string.")
+			slog.Warn("SESSION_SECRET is set to the default value 'random_string', please change it to a random string.")
+			slog.Warn("警告:SESSION_SECRET被设置为默认值'random_string',请修改为随机字符串。")
+			slog.Error("Please set SESSION_SECRET to a random string.")
+			os.Exit(1)
 		} else {
 			SessionSecret = ss
 		}
@@ -62,12 +63,14 @@ func InitEnv() {
 	var err error
 	*LogDir, err = filepath.Abs(*LogDir)
 	if err != nil {
-		log.Fatal(err)
+		slog.Error("failed to get absolute path for log directory", "error", err)
+		os.Exit(1)
 	}
 	if _, err := os.Stat(*LogDir); os.IsNotExist(err) {
 		err = os.Mkdir(*LogDir, 0777)
 		if err != nil {
-			log.Fatal(err)
+			slog.Error("failed to create log directory", "error", err)
+			os.Exit(1)
 		}
 	}
 }
@@ -2,6 +2,7 @@ package common
 
 import (
 	"fmt"
+	"log/slog"
 	"os"
 	"time"
 
@@ -9,18 +10,16 @@ import (
 )
 
 func SysLog(s string) {
-	t := time.Now()
-	_, _ = fmt.Fprintf(gin.DefaultWriter, "[SYS] %v | %s \n", t.Format("2006/01/02 - 15:04:05"), s)
+	slog.Info(s, "component", "system")
 }
 
 func SysError(s string) {
-	t := time.Now()
-	_, _ = fmt.Fprintf(gin.DefaultErrorWriter, "[SYS] %v | %s \n", t.Format("2006/01/02 - 15:04:05"), s)
+	slog.Error(s, "component", "system")
 }
 
 func FatalLog(v ...any) {
-	t := time.Now()
-	_, _ = fmt.Fprintf(gin.DefaultErrorWriter, "[FATAL] %v | %v \n", t.Format("2006/01/02 - 15:04:05"), v)
+	msg := fmt.Sprint(v...)
+	slog.Error(msg, "component", "system", "level", "fatal")
 	os.Exit(1)
 }
584 logger/logger.go
@@ -5,9 +5,11 @@ import (
 	"encoding/json"
 	"fmt"
 	"io"
-	"log"
+	"log/slog"
 	"os"
 	"path/filepath"
+	"sort"
+	"strings"
 	"sync"
 	"time"
@@ -19,81 +21,561 @@ import (
 )
 
 const (
-	loggerINFO  = "INFO"
-	loggerWarn  = "WARN"
-	loggerError = "ERR"
-	loggerDebug = "DEBUG"
+	// Log rotation configuration
+	defaultMaxLogSize   = 100 * 1024 * 1024 // 100MB
+	defaultMaxLogFiles  = 7                 // keep the 7 most recent log files
+	defaultLogFileName  = "newapi.log"
+	checkRotateInterval = 1000 // check for rotation once every 1000 writes
 )
 
-const maxLogCount = 1000000
-
-var logCount int
-var setupLogLock sync.Mutex
-var setupLogWorking bool
+var (
+	logMutex        sync.RWMutex
+	rotateCheckLock sync.Mutex
+	defaultLogger   *slog.Logger
+	logFile         *os.File
+	logFilePath     string
+	logDirPath      string
+	writeCount      int64
+	maxLogSize      int64 = defaultMaxLogSize
+	maxLogFiles     int   = defaultMaxLogFiles
+	useJSONFormat   bool
+)
+
+func init() {
+	// Initialize with a text handler to stdout
+	handler := createHandler(os.Stdout)
+	defaultLogger = slog.New(handler)
+	slog.SetDefault(defaultLogger)
+}
 
-func SetupLogger() {
-	defer func() {
-		setupLogWorking = false
-	}()
-	if *common.LogDir != "" {
-		ok := setupLogLock.TryLock()
-		if !ok {
-			log.Println("setup log is already working")
-			return
-		}
-		defer func() {
-			setupLogLock.Unlock()
-		}()
-		logPath := filepath.Join(*common.LogDir, fmt.Sprintf("oneapi-%s.log", time.Now().Format("20060102150405")))
-		fd, err := os.OpenFile(logPath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
-		if err != nil {
-			log.Fatal("failed to open log file")
-		}
-		gin.DefaultWriter = io.MultiWriter(os.Stdout, fd)
-		gin.DefaultErrorWriter = io.MultiWriter(os.Stderr, fd)
-	}
-}
+// SetupLogger initializes the logging system
+func SetupLogger() {
+	logMutex.Lock()
+	defer logMutex.Unlock()
+
+	// Read configuration from environment variables
+	if maxSize := os.Getenv("LOG_MAX_SIZE_MB"); maxSize != "" {
+		if size, err := fmt.Sscanf(maxSize, "%d", &maxLogSize); err == nil && size > 0 {
+			maxLogSize = maxLogSize * 1024 * 1024 // convert to bytes
+		}
+	}
+	if maxFiles := os.Getenv("LOG_MAX_FILES"); maxFiles != "" {
+		fmt.Sscanf(maxFiles, "%d", &maxLogFiles)
+	}
+	if os.Getenv("LOG_FORMAT") == "json" {
+		useJSONFormat = true
+	}
+
+	if *common.LogDir == "" {
+		// Without a configured log directory, write only to stdout
+		handler := createHandler(os.Stdout)
+		defaultLogger = slog.New(handler)
+		slog.SetDefault(defaultLogger)
+		return
+	}
+
+	logDirPath = *common.LogDir
+	logFilePath = filepath.Join(logDirPath, defaultLogFileName)
+
+	// Check whether the log file needs a date-based rotation (startup only)
+	if err := checkAndRotateOnStartup(); err != nil {
+		slog.Error("failed to check log file on startup", "error", err)
+	}
+
+	// Open or create the log file
+	if err := openLogFile(); err != nil {
+		slog.Error("failed to open log file", "error", err)
+		return
+	}
+
+	// Multi-writer output (console + file)
+	multiWriter := io.MultiWriter(os.Stdout, logFile)
+
+	// Update gin's default writers
+	gin.DefaultWriter = multiWriter
+	gin.DefaultErrorWriter = multiWriter
+
+	// Update the slog handler
+	handler := createHandler(multiWriter)
+	defaultLogger = slog.New(handler)
+	slog.SetDefault(defaultLogger)
+
+	slog.Info("logger initialized",
+		"log_dir", logDirPath,
+		"max_size_mb", maxLogSize/(1024*1024),
+		"max_files", maxLogFiles,
+		"format", getLogFormat())
+}
+
+// createHandler creates the slog handler
+func createHandler(w io.Writer) slog.Handler {
+	if useJSONFormat {
+		opts := &slog.HandlerOptions{
+			Level: getLogLevel(),
+		}
+		return slog.NewJSONHandler(w, opts)
+	}
+	return NewReadableTextHandler(w, getLogLevel())
+}
+// ReadableTextHandler is a custom human-readable text handler
+type ReadableTextHandler struct {
+	w     io.Writer
+	level slog.Level
+	mu    sync.Mutex
+}
+
+// NewReadableTextHandler creates a new readable text handler
+func NewReadableTextHandler(w io.Writer, level slog.Level) *ReadableTextHandler {
+	return &ReadableTextHandler{
+		w:     w,
+		level: level,
+	}
+}
 
-func LogInfo(ctx context.Context, msg string) {
-	logHelper(ctx, loggerINFO, msg)
-}
+// Enabled reports whether the given level is enabled
+func (h *ReadableTextHandler) Enabled(_ context.Context, level slog.Level) bool {
+	return level >= h.level
+}
 
-func LogWarn(ctx context.Context, msg string) {
-	logHelper(ctx, loggerWarn, msg)
-}
+// Handle formats and writes one log record
+func (h *ReadableTextHandler) Handle(_ context.Context, r slog.Record) error {
+	h.mu.Lock()
+	defer h.mu.Unlock()
+
+	// Format: [LEVEL] YYYY/MM/DD - HH:mm:ss | request_id | message | key=value ...
+	buf := make([]byte, 0, 256)
+
+	// Level
+	level := r.Level.String()
+	switch r.Level {
+	case slog.LevelDebug:
+		level = "DEBUG"
+	case slog.LevelInfo:
+		level = "INFO"
+	case slog.LevelWarn:
+		level = "WARN"
+	case slog.LevelError:
+		level = "ERROR"
+	}
+	buf = append(buf, '[')
+	buf = append(buf, level...)
+	buf = append(buf, "] "...)
+
+	// Time
+	buf = append(buf, r.Time.Format("2006/01/02 - 15:04:05")...)
+	buf = append(buf, " | "...)
+
+	// Extract request_id and component
+	var requestID, component string
+	otherAttrs := make([]slog.Attr, 0)
+
+	r.Attrs(func(a slog.Attr) bool {
+		switch a.Key {
+		case "request_id":
+			requestID = a.Value.String()
+		case "component":
+			component = a.Value.String()
+		default:
+			otherAttrs = append(otherAttrs, a)
+		}
+		return true
+	})
+
+	// Emit the request_id or component
+	if requestID != "" {
+		buf = append(buf, requestID...)
+		buf = append(buf, " | "...)
+	} else if component != "" {
+		buf = append(buf, component...)
+		buf = append(buf, " | "...)
+	}
+
+	// Message
+	buf = append(buf, r.Message...)
+
+	// Remaining attributes
+	if len(otherAttrs) > 0 {
+		buf = append(buf, " | "...)
+		for i, a := range otherAttrs {
+			if i > 0 {
+				buf = append(buf, ", "...)
+			}
+			buf = append(buf, a.Key...)
+			buf = append(buf, '=')
+			buf = appendValue(buf, a.Value)
+		}
+	}
+
+	buf = append(buf, '\n')
+	_, err := h.w.Write(buf)
+	return err
+}
 
-func LogError(ctx context.Context, msg string) {
-	logHelper(ctx, loggerError, msg)
-}
+// appendValue appends a formatted attribute value to the buffer
+func appendValue(buf []byte, v slog.Value) []byte {
+	switch v.Kind() {
+	case slog.KindString:
+		s := v.String()
+		// Quote strings containing whitespace or special characters
+		if strings.ContainsAny(s, " \t\n\r,=") {
+			buf = append(buf, '"')
+			buf = append(buf, s...)
+			buf = append(buf, '"')
+		} else {
+			buf = append(buf, s...)
+		}
+	case slog.KindInt64:
+		buf = append(buf, fmt.Sprintf("%d", v.Int64())...)
+	case slog.KindUint64:
+		buf = append(buf, fmt.Sprintf("%d", v.Uint64())...)
+	case slog.KindFloat64:
+		buf = append(buf, fmt.Sprintf("%g", v.Float64())...)
+	case slog.KindBool:
+		buf = append(buf, fmt.Sprintf("%t", v.Bool())...)
+	case slog.KindDuration:
+		buf = append(buf, v.Duration().String()...)
+	case slog.KindTime:
+		buf = append(buf, v.Time().Format("2006-01-02 15:04:05")...)
+	default:
+		buf = append(buf, fmt.Sprintf("%v", v.Any())...)
+	}
+	return buf
+}
 
-func LogDebug(ctx context.Context, msg string, args ...any) {
+// WithAttrs returns a handler carrying the given attributes
+func (h *ReadableTextHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
+	// Simplified implementation: With is not supported
+	return h
+}
+
+// WithGroup returns a handler using the given group
+func (h *ReadableTextHandler) WithGroup(name string) slog.Handler {
+	// Simplified implementation: groups are not supported
+	return h
+}
+// checkAndRotateOnStartup checks at startup whether the log file needs a date-based rotation
+func checkAndRotateOnStartup() error {
+	// Check whether the log file exists
+	fileInfo, err := os.Stat(logFilePath)
+	if err != nil {
+		if os.IsNotExist(err) {
+			// No file, nothing to rotate
+			return nil
+		}
+		return fmt.Errorf("failed to stat log file: %w", err)
+	}
+
+	// Get the file's modification time
+	modTime := fileInfo.ModTime()
+	modDate := modTime.Format("2006-01-02")
+	today := time.Now().Format("2006-01-02")
+
+	// Rotate if the file's date is not today
+	if modDate != today {
+		// Build the archive name from the file's modification date
+		timestamp := modTime.Format("2006-01-02-150405")
+		archivePath := filepath.Join(logDirPath, fmt.Sprintf("newapi.%s.log", timestamp))
+
+		// Rename the log file
+		if err := os.Rename(logFilePath, archivePath); err != nil {
+			return fmt.Errorf("failed to archive old log file: %w", err)
+		}
+
+		slog.Info("rotated old log file on startup",
+			"archive", archivePath,
+			"reason", "date changed")
+
+		// Clean up old log files
+		gopool.Go(func() {
+			cleanOldLogFiles()
+		})
+	}
+
+	return nil
+}
+
+// openLogFile opens the log file
+func openLogFile() error {
+	// Close the previous log file
+	if logFile != nil {
+		logFile.Close()
+	}
+
+	// Open the new log file
+	fd, err := os.OpenFile(logFilePath, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
+	if err != nil {
+		return fmt.Errorf("failed to open log file: %w", err)
+	}
+
+	logFile = fd
+	writeCount = 0
+	return nil
+}
+
+// rotateLogFile rotates the log file
+func rotateLogFile() error {
+	if logFile == nil {
+		return nil
+	}
+
+	rotateCheckLock.Lock()
+	defer rotateCheckLock.Unlock()
+
+	// Stat the current log file
+	fileInfo, err := logFile.Stat()
+	if err != nil {
+		return fmt.Errorf("failed to stat log file: %w", err)
+	}
+
+	// Rotate only when the size limit is reached
+	if fileInfo.Size() < maxLogSize {
+		return nil
+	}
+
+	// Close the current log file
+	logFile.Close()
+
+	// Build the archive name
+	timestamp := time.Now().Format("2006-01-02-150405")
+	archivePath := filepath.Join(logDirPath, fmt.Sprintf("newapi.%s.log", timestamp))
+
+	// Rename the current log file to the archive name
+	if err := os.Rename(logFilePath, archivePath); err != nil {
+		// If renaming fails, fall back to copying
+		if copyErr := copyFile(logFilePath, archivePath); copyErr != nil {
+			return fmt.Errorf("failed to archive log file: %w", err)
+		}
+		os.Truncate(logFilePath, 0)
+	}
+
+	// Clean up old log files
+	gopool.Go(func() {
+		cleanOldLogFiles()
+	})
+
+	// Open a fresh log file
+	if err := openLogFile(); err != nil {
+		return err
+	}
+
+	// Rebuild the log outputs
+	multiWriter := io.MultiWriter(os.Stdout, logFile)
+	gin.DefaultWriter = multiWriter
+	gin.DefaultErrorWriter = multiWriter
+
+	handler := createHandler(multiWriter)
+	logMutex.Lock()
+	defaultLogger = slog.New(handler)
+	slog.SetDefault(defaultLogger)
+	logMutex.Unlock()
+
+	slog.Info("log file rotated",
+		"reason", "size limit reached",
+		"archive", archivePath)
+
+	return nil
+}
+
+// cleanOldLogFiles removes old archived log files
+func cleanOldLogFiles() {
+	if logDirPath == "" {
+		return
+	}
+
+	files, err := os.ReadDir(logDirPath)
+	if err != nil {
+		slog.Error("failed to read log directory", "error", err)
+		return
+	}
+
+	// Collect all archived log files
+	var logFiles []os.DirEntry
+	for _, file := range files {
+		if !file.IsDir() && strings.HasPrefix(file.Name(), "newapi.") &&
+			strings.HasSuffix(file.Name(), ".log") &&
+			file.Name() != defaultLogFileName {
+			logFiles = append(logFiles, file)
+		}
+	}
+
+	// If there are more archives than the limit, delete the oldest
+	if len(logFiles) > maxLogFiles {
+		// Sort by name (file names embed a timestamp)
+		sort.Slice(logFiles, func(i, j int) bool {
+			return logFiles[i].Name() < logFiles[j].Name()
+		})
+
+		// Delete the oldest files
+		deleteCount := len(logFiles) - maxLogFiles
+		for i := 0; i < deleteCount; i++ {
+			filePath := filepath.Join(logDirPath, logFiles[i].Name())
+			if err := os.Remove(filePath); err != nil {
+				slog.Error("failed to remove old log file",
+					"file", filePath,
+					"error", err)
+			} else {
+				slog.Info("removed old log file", "file", logFiles[i].Name())
+			}
+		}
+	}
+}
+// copyFile copies a file
+func copyFile(src, dst string) error {
+	sourceFile, err := os.Open(src)
+	if err != nil {
+		return err
+	}
+	defer sourceFile.Close()
+
+	destFile, err := os.Create(dst)
+	if err != nil {
+		return err
+	}
+	defer destFile.Close()
+
+	_, err = io.Copy(destFile, sourceFile)
+	return err
+}
+
+// getLogLevel resolves the log level
+func getLogLevel() slog.Level {
+	// Configurable via environment variable
+	if level := os.Getenv("LOG_LEVEL"); level != "" {
+		switch strings.ToUpper(level) {
+		case "DEBUG":
+			return slog.LevelDebug
+		case "INFO":
+			return slog.LevelInfo
+		case "WARN", "WARNING":
+			return slog.LevelWarn
+		case "ERROR":
+			return slog.LevelError
+		}
+	}
+
+	if common.DebugEnabled {
-		if len(args) > 0 {
-			msg = fmt.Sprintf(msg, args...)
-		}
-		logHelper(ctx, loggerDebug, msg)
+		return slog.LevelDebug
+	}
+	return slog.LevelInfo
+}
+
+// getLogFormat reports the configured log format
+func getLogFormat() string {
+	if useJSONFormat {
+		return "json"
+	}
+	return "text"
+}
+
+// checkAndRotateLog checks whether the log should rotate
+func checkAndRotateLog() {
+	if logFile == nil {
+		return
+	}
+
+	writeCount++
+	if writeCount%checkRotateInterval == 0 {
+		gopool.Go(func() {
+			if err := rotateLogFile(); err != nil {
+				slog.Error("failed to rotate log file", "error", err)
+			}
+		})
+	}
+}
 
-func logHelper(ctx context.Context, level string, msg string) {
-	writer := gin.DefaultErrorWriter
-	if level == loggerINFO {
-		writer = gin.DefaultWriter
-	}
+// LogInfo records an info-level log
+func LogInfo(ctx context.Context, msg string) {
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	id := getRequestID(ctx)
+	logMutex.RLock()
+	logger := defaultLogger
+	logMutex.RUnlock()
+	logger.InfoContext(ctx, msg, "request_id", id)
+	checkAndRotateLog()
+}
+
+// LogWarn records a warning-level log
+func LogWarn(ctx context.Context, msg string) {
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	id := getRequestID(ctx)
+	logMutex.RLock()
+	logger := defaultLogger
+	logMutex.RUnlock()
+	logger.WarnContext(ctx, msg, "request_id", id)
+	checkAndRotateLog()
+}
+
+// LogError records an error-level log
+func LogError(ctx context.Context, msg string) {
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	id := getRequestID(ctx)
+	logMutex.RLock()
+	logger := defaultLogger
+	logMutex.RUnlock()
+	logger.ErrorContext(ctx, msg, "request_id", id)
+	checkAndRotateLog()
+}
+
+// LogSystemInfo records a system info log
+func LogSystemInfo(msg string) {
+	logMutex.RLock()
+	logger := defaultLogger
+	logMutex.RUnlock()
+	logger.Info(msg, "request_id", "SYSTEM")
+	checkAndRotateLog()
+}
+
+// LogSystemError records a system error log
+func LogSystemError(msg string) {
+	logMutex.RLock()
+	logger := defaultLogger
+	logMutex.RUnlock()
+	logger.Error(msg, "request_id", "SYSTEM")
+	checkAndRotateLog()
+}
+
+// LogDebug records a debug-level log
+func LogDebug(ctx context.Context, msg string, args ...any) {
+	if !common.DebugEnabled && getLogLevel() > slog.LevelDebug {
+		return
+	}
+
+	if ctx == nil {
+		ctx = context.Background()
+	}
+	id := getRequestID(ctx)
+	if len(args) > 0 {
+		msg = fmt.Sprintf(msg, args...)
+	}
+	logMutex.RLock()
+	logger := defaultLogger
+	logMutex.RUnlock()
+	logger.DebugContext(ctx, msg, "request_id", id)
+	checkAndRotateLog()
+}
+
+// getRequestID extracts the request ID from the context
+func getRequestID(ctx context.Context) string {
+	if ctx == nil {
+		return "SYSTEM"
+	}
+	id := ctx.Value(common.RequestIdKey)
+	if id == nil {
-		id = "SYSTEM"
+		return "SYSTEM"
 	}
-	now := time.Now()
-	_, _ = fmt.Fprintf(writer, "[%s] %v | %s | %s \n", level, now.Format("2006/01/02 - 15:04:05"), id, msg)
-	logCount++ // we don't need accurate count, so no lock here
-	if logCount > maxLogCount && !setupLogWorking {
-		logCount = 0
-		setupLogWorking = true
-		gopool.Go(func() {
-			SetupLogger()
-		})
-	}
-}
+	if strID, ok := id.(string); ok {
+		return strID
+	}
+	return "SYSTEM"
+}
 
 func LogQuota(quota int) string {
8 main.go
@@ -4,7 +4,7 @@ import (
 	"bytes"
 	"embed"
 	"fmt"
-	"log"
+	"log/slog"
 	"net/http"
 	"os"
 	"strconv"
@@ -118,7 +118,9 @@ func main() {
 
 	if os.Getenv("ENABLE_PPROF") == "true" {
 		gopool.Go(func() {
-			log.Println(http.ListenAndServe("0.0.0.0:8005", nil))
+			if err := http.ListenAndServe("0.0.0.0:8005", nil); err != nil {
+				slog.Error("pprof server failed", "error", err)
+			}
 		})
 		go common.Monitor()
 		common.SysLog("pprof enabled")
@@ -127,7 +129,7 @@ func main() {
 	// Initialize HTTP server
 	server := gin.New()
 	server.Use(gin.CustomRecovery(func(c *gin.Context, err any) {
-		common.SysLog(fmt.Sprintf("panic detected: %v", err))
+		logger.LogSystemError(fmt.Sprintf("panic detected: %v", err))
 		c.JSON(http.StatusInternalServerError, gin.H{
 			"error": gin.H{
 				"message": fmt.Sprintf("Panic detected, error: %v. Please submit a issue here: https://github.com/Calcium-Ion/new-api", err),
@@ -138,11 +138,9 @@ func (channel *Channel) GetNextEnabledKey() (string, int, *types.NewAPIError) {
 			enabledIdx = append(enabledIdx, i)
 		}
 	}
-	// If no specific status list or none enabled, return an explicit error so caller can
-	// properly handle a channel with no available keys (e.g. mark channel disabled).
-	// Returning the first key here caused requests to keep using an already-disabled key.
+	// If no specific status list or none enabled, fall back to first key
 	if len(enabledIdx) == 0 {
-		return "", 0, types.NewError(errors.New("no enabled keys"), types.ErrorCodeChannelNoAvailableKey)
+		return keys[0], 0, nil
 	}
 
 	switch channel.ChannelInfo.MultiKeyMode {
@@ -189,9 +189,7 @@ func RequestOpenAI2ClaudeMessage(c *gin.Context, textRequest dto.GeneralOpenAIRe
 		// https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking#important-considerations-when-using-extended-thinking
 		claudeRequest.TopP = 0
 		claudeRequest.Temperature = common.GetPointer[float64](1.0)
-		if !model_setting.ShouldPreserveThinkingSuffix(textRequest.Model) {
-			claudeRequest.Model = strings.TrimSuffix(textRequest.Model, "-thinking")
-		}
+		claudeRequest.Model = strings.TrimSuffix(textRequest.Model, "-thinking")
 	}
 
 	if textRequest.ReasoningEffort != "" {
@@ -127,8 +127,7 @@ func (a *Adaptor) Init(info *relaycommon.RelayInfo) {
 
 func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
 
-	if model_setting.GetGeminiSettings().ThinkingAdapterEnabled &&
-		!model_setting.ShouldPreserveThinkingSuffix(info.OriginModelName) {
+	if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
 		// New logic: handle the -thinking-<budget> format
 		if strings.Contains(info.UpstreamModelName, "-thinking-") {
 			parts := strings.Split(info.UpstreamModelName, "-thinking-")
@@ -27,7 +27,6 @@ import (
 	"github.com/QuantumNous/new-api/relay/common_handler"
 	relayconstant "github.com/QuantumNous/new-api/relay/constant"
 	"github.com/QuantumNous/new-api/service"
-	"github.com/QuantumNous/new-api/setting/model_setting"
 	"github.com/QuantumNous/new-api/types"
 
 	"github.com/gin-gonic/gin"
@@ -225,8 +224,7 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 		request.Usage = json.RawMessage(`{"include":true}`)
 	}
 	// Adapt OpenRouter's thinking suffix
-	if !model_setting.ShouldPreserveThinkingSuffix(info.OriginModelName) &&
-		strings.HasSuffix(info.UpstreamModelName, "-thinking") {
+	if strings.HasSuffix(info.UpstreamModelName, "-thinking") {
 		info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-thinking")
 		request.Model = info.UpstreamModelName
 		if len(request.Reasoning) == 0 {
@@ -168,8 +168,7 @@ func (a *Adaptor) getRequestUrl(info *relaycommon.RelayInfo, modelName, suffix s
 func (a *Adaptor) GetRequestURL(info *relaycommon.RelayInfo) (string, error) {
 	suffix := ""
 	if a.RequestMode == RequestModeGemini {
-		if model_setting.GetGeminiSettings().ThinkingAdapterEnabled &&
-			!model_setting.ShouldPreserveThinkingSuffix(info.OriginModelName) {
+		if model_setting.GetGeminiSettings().ThinkingAdapterEnabled {
 			// New logic: handle the -thinking-<budget> format
 			if strings.Contains(info.UpstreamModelName, "-thinking-") {
 				parts := strings.Split(info.UpstreamModelName, "-thinking-")
@@ -16,7 +16,6 @@ import (
 	"github.com/QuantumNous/new-api/relay/channel/openai"
 	relaycommon "github.com/QuantumNous/new-api/relay/common"
 	"github.com/QuantumNous/new-api/relay/constant"
-	"github.com/QuantumNous/new-api/setting/model_setting"
 	"github.com/QuantumNous/new-api/types"
 
 	"github.com/gin-gonic/gin"
@@ -292,9 +291,7 @@ func (a *Adaptor) ConvertOpenAIRequest(c *gin.Context, info *relaycommon.RelayIn
 		return nil, errors.New("request is nil")
 	}
 
-	if !model_setting.ShouldPreserveThinkingSuffix(info.OriginModelName) &&
-		strings.HasSuffix(info.UpstreamModelName, "-thinking") &&
-		strings.HasPrefix(info.UpstreamModelName, "deepseek") {
+	if strings.HasSuffix(info.UpstreamModelName, "-thinking") && strings.HasPrefix(info.UpstreamModelName, "deepseek") {
 		info.UpstreamModelName = strings.TrimSuffix(info.UpstreamModelName, "-thinking")
 		request.Model = info.UpstreamModelName
 		request.THINKING = json.RawMessage(`{"type": "enabled"}`)
@@ -67,9 +67,7 @@ func ClaudeHelper(c *gin.Context, info *relaycommon.RelayInfo) (newAPIError *typ
 		request.TopP = 0
 		request.Temperature = common.GetPointer[float64](1.0)
 	}
-	if !model_setting.ShouldPreserveThinkingSuffix(info.OriginModelName) {
-		request.Model = strings.TrimSuffix(request.Model, "-thinking")
-	}
+	request.Model = strings.TrimSuffix(request.Model, "-thinking")
 	info.UpstreamModelName = request.Model
 }
@@ -1,23 +1,16 @@
 package model_setting
 
 import (
-	"strings"
-
 	"github.com/QuantumNous/new-api/setting/config"
 )
 
 type GlobalSettings struct {
-	PassThroughRequestEnabled bool     `json:"pass_through_request_enabled"`
-	ThinkingModelBlacklist    []string `json:"thinking_model_blacklist"`
+	PassThroughRequestEnabled bool `json:"pass_through_request_enabled"`
 }
 
 // default configuration
 var defaultOpenaiSettings = GlobalSettings{
 	PassThroughRequestEnabled: false,
-	ThinkingModelBlacklist: []string{
-		"moonshotai/kimi-k2-thinking",
-		"kimi-k2-thinking",
-	},
 }
 
 // global instance
@@ -31,18 +24,3 @@ func init() {
 func GetGlobalSettings() *GlobalSettings {
 	return &globalSettings
 }
-
-// ShouldPreserveThinkingSuffix reports whether the model is configured to keep its -thinking/-nothinking suffix
-func ShouldPreserveThinkingSuffix(modelName string) bool {
-	target := strings.TrimSpace(modelName)
-	if target == "" {
-		return false
-	}
-
-	for _, entry := range globalSettings.ThinkingModelBlacklist {
-		if strings.TrimSpace(entry) == target {
-			return true
-		}
-	}
-	return false
-}
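The removed `ShouldPreserveThinkingSuffix` helper above reduces to a trimmed exact-match lookup against the blacklist. A self-contained sketch of that logic (the package-level slice here stands in for `globalSettings.ThinkingModelBlacklist`; everything else follows the diff):

```go
package main

import (
	"fmt"
	"strings"
)

// thinkingModelBlacklist stands in for globalSettings.ThinkingModelBlacklist.
var thinkingModelBlacklist = []string{
	"moonshotai/kimi-k2-thinking",
	"kimi-k2-thinking",
}

// ShouldPreserveThinkingSuffix reports whether the model is configured to
// keep its -thinking/-nothinking suffix: a trimmed, exact (case-sensitive)
// match against the blacklist entries.
func ShouldPreserveThinkingSuffix(modelName string) bool {
	target := strings.TrimSpace(modelName)
	if target == "" {
		return false
	}
	for _, entry := range thinkingModelBlacklist {
		if strings.TrimSpace(entry) == target {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(ShouldPreserveThinkingSuffix("kimi-k2-thinking")) // true
	fmt.Println(ShouldPreserveThinkingSuffix("gpt-4o-thinking"))  // false
}
```

Note the match is exact rather than prefix- or suffix-based, so `"kimi-k2-thinking-v2"` would not be preserved unless listed explicitly.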
@@ -37,7 +37,6 @@ const ModelSetting = () => {
     'claude.default_max_tokens': '',
     'claude.thinking_adapter_budget_tokens_percentage': 0.8,
     'global.pass_through_request_enabled': false,
-    'global.thinking_model_blacklist': '[]',
     'general_setting.ping_interval_enabled': false,
     'general_setting.ping_interval_seconds': 60,
     'gemini.thinking_adapter_enabled': false,
@@ -57,8 +56,7 @@
         item.key === 'gemini.version_settings' ||
         item.key === 'claude.model_headers_settings' ||
         item.key === 'claude.default_max_tokens' ||
-        item.key === 'gemini.supported_imagine_models' ||
-        item.key === 'global.thinking_model_blacklist'
+        item.key === 'gemini.supported_imagine_models'
       ) {
         if (item.value !== '') {
           item.value = JSON.stringify(JSON.parse(item.value), null, 2);
@@ -44,7 +44,7 @@ const PricingTags = ({
     (allModels.length > 0 ? allModels : models).forEach((model) => {
       if (model.tags) {
         model.tags
-          .split(/[,;|]+/) // comma, semicolon, or pipe (keeps spaces, allowing multi-word tags like "open weights")
+          .split(/[,;|\s]+/) // comma, semicolon, pipe, or whitespace
           .map((tag) => tag.trim())
           .filter(Boolean)
           .forEach((tag) => tagSet.add(tag.toLowerCase()));
@@ -64,7 +64,7 @@
       if (!model.tags) return false;
       return model.tags
         .toLowerCase()
-        .split(/[,;|]+/)
+        .split(/[,;|\s]+/)
         .map((tg) => tg.trim())
         .includes(tagLower);
     }).length;
@@ -128,7 +128,7 @@ export const useModelPricingData = () => {
       if (!model.tags) return false;
       const tagsArr = model.tags
         .toLowerCase()
-        .split(/[,;|]+/)
+        .split(/[,;|\s]+/)
         .map((tag) => tag.trim())
         .filter(Boolean);
       return tagsArr.includes(tagLower);
@@ -23,7 +23,7 @@ import { useMemo } from 'react';
 const normalizeTags = (tags = '') =>
   tags
     .toLowerCase()
-    .split(/[,;|]+/)
+    .split(/[,;|\s]+/)
     .map((t) => t.trim())
     .filter(Boolean);
 
@@ -561,9 +561,6 @@
   "启用绘图功能": "Enable drawing function",
   "启用请求体透传功能": "Enable request body pass-through functionality",
   "启用请求透传": "Enable request pass-through",
-  "禁用思考处理的模型列表": "Models skipping thinking handling",
-  "列出的模型将不会自动添加或移除-thinking/-nothinking 后缀": "Models in this list will not automatically add or remove the -thinking/-nothinking suffix.",
-  "请输入JSON数组,如 [\"model-a\",\"model-b\"]": "Enter a JSON array, e.g. [\"model-a\",\"model-b\"]",
   "启用额度消费日志记录": "Enable quota consumption logging",
   "启用验证": "Enable Authentication",
   "周": "week",
@@ -564,9 +564,6 @@
   "启用绘图功能": "Activer la fonction de dessin",
   "启用请求体透传功能": "Activer la fonctionnalité de transmission du corps de la requête",
   "启用请求透传": "Activer la transmission de la requête",
-  "禁用思考处理的模型列表": "Liste noire des modèles pour le traitement thinking",
-  "列出的模型将不会自动添加或移除-thinking/-nothinking 后缀": "Les modèles listés ici n'ajouteront ni ne retireront automatiquement le suffixe -thinking/-nothinking.",
-  "请输入JSON数组,如 [\"model-a\",\"model-b\"]": "Saisissez un tableau JSON, par ex. [\"model-a\",\"model-b\"]",
   "启用额度消费日志记录": "Activer la journalisation de la consommation de quota",
   "启用验证": "Activer l'authentification",
   "周": "semaine",
@@ -561,9 +561,6 @@
   "启用绘图功能": "画像生成機能を有効にする",
   "启用请求体透传功能": "リクエストボディのパススルー機能を有効にします。",
   "启用请求透传": "リクエストパススルーを有効にする",
-  "禁用思考处理的模型列表": "Thinking処理を無効化するモデル一覧",
-  "列出的模型将不会自动添加或移除-thinking/-nothinking 后缀": "ここに含まれるモデルでは-thinking/-nothinkingサフィックスを自動的に追加・削除しません。",
-  "请输入JSON数组,如 [\"model-a\",\"model-b\"]": "JSON配列を入力してください(例:[\"model-a\",\"model-b\"])",
   "启用额度消费日志记录": "クォータ消費のログ記録を有効にする",
   "启用验证": "認証を有効にする",
   "周": "週",
@@ -567,9 +567,6 @@
   "启用绘图功能": "Включить функцию рисования",
   "启用请求体透传功能": "Включить функцию прозрачной передачи тела запроса",
   "启用请求透传": "Включить прозрачную передачу запросов",
-  "禁用思考处理的模型列表": "Список моделей без обработки thinking",
-  "列出的模型将不会自动添加或移除-thinking/-nothinking 后缀": "Для этих моделей суффиксы -thinking/-nothinking не будут добавляться или удаляться автоматически.",
-  "请输入JSON数组,如 [\"model-a\",\"model-b\"]": "Введите JSON-массив, например [\"model-a\",\"model-b\"]",
   "启用额度消费日志记录": "Включить журналирование потребления квоты",
   "启用验证": "Включить проверку",
   "周": "Неделя",
@@ -558,9 +558,6 @@
   "启用绘图功能": "启用绘图功能",
   "启用请求体透传功能": "启用请求体透传功能",
   "启用请求透传": "启用请求透传",
-  "禁用思考处理的模型列表": "禁用思考处理的模型列表",
-  "列出的模型将不会自动添加或移除-thinking/-nothinking 后缀": "列出的模型将不会自动添加或移除-thinking/-nothinking 后缀",
-  "请输入JSON数组,如 [\"model-a\",\"model-b\"]": "请输入JSON数组,如 [\"model-a\",\"model-b\"]",
   "启用额度消费日志记录": "启用额度消费日志记录",
   "启用验证": "启用验证",
   "周": "周",
@@ -29,44 +29,23 @@ import {
 } from '../../../helpers';
 import { useTranslation } from 'react-i18next';
 
-const thinkingExample = JSON.stringify(
-  ['moonshotai/kimi-k2-thinking', 'kimi-k2-thinking'],
-  null,
-  2,
-);
-
-const defaultGlobalSettingInputs = {
-  'global.pass_through_request_enabled': false,
-  'global.thinking_model_blacklist': '[]',
-  'general_setting.ping_interval_enabled': false,
-  'general_setting.ping_interval_seconds': 60,
-};
-
 export default function SettingGlobalModel(props) {
   const { t } = useTranslation();
 
   const [loading, setLoading] = useState(false);
-  const [inputs, setInputs] = useState(defaultGlobalSettingInputs);
+  const [inputs, setInputs] = useState({
+    'global.pass_through_request_enabled': false,
+    'general_setting.ping_interval_enabled': false,
+    'general_setting.ping_interval_seconds': 60,
+  });
   const refForm = useRef();
-  const [inputsRow, setInputsRow] = useState(defaultGlobalSettingInputs);
-
-  const normalizeValueBeforeSave = (key, value) => {
-    if (key === 'global.thinking_model_blacklist') {
-      const text = typeof value === 'string' ? value.trim() : '';
-      return text === '' ? '[]' : value;
-    }
-    return value;
-  };
+  const [inputsRow, setInputsRow] = useState(inputs);
 
   function onSubmit() {
     const updateArray = compareObjects(inputs, inputsRow);
     if (!updateArray.length) return showWarning(t('你似乎并没有修改什么'));
     const requestQueue = updateArray.map((item) => {
-      const normalizedValue = normalizeValueBeforeSave(
-        item.key,
-        inputs[item.key],
-      );
-      let value = String(normalizedValue);
+      let value = String(inputs[item.key]);
 
       return API.put('/api/option/', {
         key: item.key,
@@ -95,30 +74,14 @@ export default function SettingGlobalModel(props) {
 
   useEffect(() => {
     const currentInputs = {};
-    for (const key of Object.keys(defaultGlobalSettingInputs)) {
-      if (props.options[key] !== undefined) {
-        let value = props.options[key];
-        if (key === 'global.thinking_model_blacklist') {
-          try {
-            value =
-              value && String(value).trim() !== ''
-                ? JSON.stringify(JSON.parse(value), null, 2)
-                : defaultGlobalSettingInputs[key];
-          } catch (error) {
-            value = defaultGlobalSettingInputs[key];
-          }
-        }
-        currentInputs[key] = value;
-      } else {
-        currentInputs[key] = defaultGlobalSettingInputs[key];
+    for (let key in props.options) {
+      if (Object.keys(inputs).includes(key)) {
+        currentInputs[key] = props.options[key];
       }
     }
 
     setInputs(currentInputs);
     setInputsRow(structuredClone(currentInputs));
-    if (refForm.current) {
-      refForm.current.setValues(currentInputs);
-    }
+    refForm.current.setValues(currentInputs);
   }, [props.options]);
 
   return (
@@ -147,38 +110,6 @@ export default function SettingGlobalModel(props) {
           />
         </Col>
       </Row>
-      <Row>
-        <Col span={24}>
-          <Form.TextArea
-            label={t('禁用思考处理的模型列表')}
-            field={'global.thinking_model_blacklist'}
-            placeholder={
-              t('例如:') +
-              '\n' +
-              thinkingExample
-            }
-            rows={4}
-            rules={[
-              {
-                validator: (rule, value) => {
-                  if (!value || value.trim() === '') return true;
-                  return verifyJSON(value);
-                },
-                message: t('不是合法的 JSON 字符串'),
-              },
-            ]}
-            extraText={t(
-              '列出的模型将不会自动添加或移除-thinking/-nothinking 后缀',
-            )}
-            onChange={(value) =>
-              setInputs({
-                ...inputs,
-                'global.thinking_model_blacklist': value,
-              })
-            }
-          />
-        </Col>
-      </Row>
 
       <Form.Section text={t('连接保活设置')}>
         <Row style={{ marginTop: 10 }}>