diff --git a/CHANGELOG.md b/CHANGELOG.md
index 6853fa126f4..ea8651a56fe 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -15,6 +15,7 @@ Docs: https://docs.openclaw.ai
 
 ### Fixes
 
+- Memory: switch default local embedding model to the QAT `embeddinggemma-300m-qat-Q8_0` variant for better quality at the same footprint. (#15429) Thanks @azade-c.
 - Agents/Compaction: centralize exec default resolution in the shared tool factory so per-agent `tools.exec` overrides (host/security/ask/node and related defaults) persist across compaction retries. (#15833) Thanks @napetrov.
 - Voice Call: route webhook runtime event handling through shared manager event logic so rejected inbound hangups are idempotent in production, with regression tests for duplicate reject events and provider-call-ID remapping parity. (#15892) Thanks @dcantu96.
 - CLI/Completion: route plugin-load logs to stderr and write generated completion scripts directly to stdout to avoid `source <(openclaw completion ...)` corruption. (#15481) Thanks @arosstale.
diff --git a/docs/concepts/memory.md b/docs/concepts/memory.md
index 9ad902c6c4e..8907ef28c78 100644
--- a/docs/concepts/memory.md
+++ b/docs/concepts/memory.md
@@ -535,7 +535,7 @@ Notes:
 
 ### Local embedding auto-download
 
-- Default local embedding model: `hf:ggml-org/embeddinggemma-300M-GGUF/embeddinggemma-300M-Q8_0.gguf` (~0.6 GB).
+- Default local embedding model: `hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF/embeddinggemma-300m-qat-Q8_0.gguf` (~0.6 GB).
 - When `memorySearch.provider = "local"`, `node-llama-cpp` resolves `modelPath`; if the GGUF is missing it **auto-downloads** to the cache (or `local.modelCacheDir` if set), then loads it. Downloads resume on retry.
 - Native build requirement: run `pnpm approve-builds`, pick `node-llama-cpp`, then `pnpm rebuild node-llama-cpp`.
 - Fallback: if local setup fails and `memorySearch.fallback = "openai"`, we automatically switch to remote embeddings (`openai/text-embedding-3-small` unless overridden) and record the reason.
diff --git a/src/memory/embeddings.test.ts b/src/memory/embeddings.test.ts
index c9326da43cf..9603aede3a5 100644
--- a/src/memory/embeddings.test.ts
+++ b/src/memory/embeddings.test.ts
@@ -313,6 +313,7 @@ describe("local embedding normalization", () => {
   it("normalizes local embeddings to magnitude ~1.0", async () => {
     const unnormalizedVector = [2.35, 3.45, 0.63, 4.3, 1.2, 5.1, 2.8, 3.9];
+    const resolveModelFileMock = vi.fn(async () => "/fake/model.gguf");
 
     importNodeLlamaCppMock.mockResolvedValue({
       getLlama: async () => ({
@@ -324,7 +325,7 @@ describe("local embedding normalization", () => {
           }),
         }),
       }),
-      resolveModelFile: async () => "/fake/model.gguf",
+      resolveModelFile: resolveModelFileMock,
       LlamaLogLevel: { error: 0 },
     });
 
@@ -340,6 +341,10 @@ describe("local embedding normalization", () => {
 
     const magnitude = Math.sqrt(embedding.reduce((sum, x) => sum + x * x, 0));
     expect(magnitude).toBeCloseTo(1.0, 5);
+    expect(resolveModelFileMock).toHaveBeenCalledWith(
+      "hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF/embeddinggemma-300m-qat-Q8_0.gguf",
+      undefined,
+    );
   });
 
   it("handles zero vector without division by zero", async () => {