fix(memory): align QAT default docs/tests (#15429) (thanks @azade-c)

Peter Steinberger
2026-02-14 02:46:11 +01:00
parent 5219f74615
commit 61b5133264
3 changed files with 8 additions and 2 deletions


@@ -15,6 +15,7 @@ Docs: https://docs.openclaw.ai
 ### Fixes
+- Memory: switch default local embedding model to the QAT `embeddinggemma-300m-qat-Q8_0` variant for better quality at the same footprint. (#15429) Thanks @azade-c.
 - Agents/Compaction: centralize exec default resolution in the shared tool factory so per-agent `tools.exec` overrides (host/security/ask/node and related defaults) persist across compaction retries. (#15833) Thanks @napetrov.
 - Voice Call: route webhook runtime event handling through shared manager event logic so rejected inbound hangups are idempotent in production, with regression tests for duplicate reject events and provider-call-ID remapping parity. (#15892) Thanks @dcantu96.
 - CLI/Completion: route plugin-load logs to stderr and write generated completion scripts directly to stdout to avoid `source <(openclaw completion ...)` corruption. (#15481) Thanks @arosstale.
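
The CLI/Completion fix above relies on keeping the two output streams separate: anything written to stdout is captured by `source <(openclaw completion ...)`, so diagnostics must go to stderr. A minimal TypeScript sketch of that idea follows; the names (`buildCompletion`, the alias line, the log message) are hypothetical illustrations, not the CLI's actual code.

```typescript
// Hypothetical sketch: keep the completion script (stdout) separate from
// plugin-load logs (stderr) so `source <(openclaw completion ...)` only
// ever evaluates the script itself.
interface CompletionOutput {
  script: string; // destined for stdout: captured and sourced by the shell
  logs: string[]; // destined for stderr: visible to the user, never sourced
}

function buildCompletion(): CompletionOutput {
  return {
    script: 'alias oc="openclaw"', // placeholder completion script line
    logs: ["loading plugin foo"], // placeholder diagnostic message
  };
}

const out = buildCompletion();
out.logs.forEach((line) => process.stderr.write(line + "\n")); // stderr only
process.stdout.write(out.script + "\n"); // stdout stays a clean script
```

Mixing the two (e.g. `console.log` for plugin-load messages) would make the shell try to execute the log text, which is exactly the corruption the fix avoids.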


@@ -535,7 +535,7 @@ Notes:
 ### Local embedding auto-download
-- Default local embedding model: `hf:ggml-org/embeddinggemma-300M-GGUF/embeddinggemma-300M-Q8_0.gguf` (~0.6 GB).
+- Default local embedding model: `hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF/embeddinggemma-300m-qat-Q8_0.gguf` (~0.6 GB).
 - When `memorySearch.provider = "local"`, `node-llama-cpp` resolves `modelPath`; if the GGUF is missing it **auto-downloads** to the cache (or `local.modelCacheDir` if set), then loads it. Downloads resume on retry.
 - Native build requirement: run `pnpm approve-builds`, pick `node-llama-cpp`, then `pnpm rebuild node-llama-cpp`.
 - Fallback: if local setup fails and `memorySearch.fallback = "openai"`, we automatically switch to remote embeddings (`openai/text-embedding-3-small` unless overridden) and record the reason.
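
Putting the settings above together, here is a hedged config sketch. The field names (`memorySearch.provider`, `memorySearch.fallback`, `local.modelCacheDir`) come from the notes above; the exact nesting and the cache path are assumptions for illustration only.

```json
{
  "memorySearch": {
    "provider": "local",
    "fallback": "openai",
    "local": {
      "modelCacheDir": "~/.cache/openclaw/models"
    }
  }
}
```

With this shape, a missing GGUF triggers the auto-download into `modelCacheDir`, and a failed native setup falls back to remote embeddings as described above.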


@@ -313,6 +313,7 @@ describe("local embedding normalization", () => {
 it("normalizes local embeddings to magnitude ~1.0", async () => {
   const unnormalizedVector = [2.35, 3.45, 0.63, 4.3, 1.2, 5.1, 2.8, 3.9];
+  const resolveModelFileMock = vi.fn(async () => "/fake/model.gguf");
   importNodeLlamaCppMock.mockResolvedValue({
     getLlama: async () => ({
@@ -324,7 +325,7 @@ describe("local embedding normalization", () => {
         }),
       }),
     }),
-    resolveModelFile: async () => "/fake/model.gguf",
+    resolveModelFile: resolveModelFileMock,
     LlamaLogLevel: { error: 0 },
   });
@@ -340,6 +341,10 @@ describe("local embedding normalization", () => {
   const magnitude = Math.sqrt(embedding.reduce((sum, x) => sum + x * x, 0));
   expect(magnitude).toBeCloseTo(1.0, 5);
+  expect(resolveModelFileMock).toHaveBeenCalledWith(
+    "hf:ggml-org/embeddinggemma-300m-qat-q8_0-GGUF/embeddinggemma-300m-qat-Q8_0.gguf",
+    undefined,
+  );
 });
 it("handles zero vector without division by zero", async () => {
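
The two tests above check L2 normalization (unit magnitude) and the zero-vector edge case. A standalone TypeScript sketch of that logic, using the same sample vector as the test; `normalizeEmbedding` is a hypothetical name, not the repo's actual implementation:

```typescript
// Hedged sketch of L2 normalization as exercised by the tests above.
// Divides each component by the vector's Euclidean magnitude; a zero
// vector is returned unchanged to avoid division by zero.
function normalizeEmbedding(vec: number[]): number[] {
  const magnitude = Math.sqrt(vec.reduce((sum, x) => sum + x * x, 0));
  if (magnitude === 0) return vec.slice(); // zero vector: nothing to scale
  return vec.map((x) => x / magnitude);
}

// Same sample vector as the regression test.
const raw = [2.35, 3.45, 0.63, 4.3, 1.2, 5.1, 2.8, 3.9];
const unit = normalizeEmbedding(raw);
const mag = Math.sqrt(unit.reduce((sum, x) => sum + x * x, 0));
```

After normalization, `mag` is ~1.0 within floating-point tolerance, which is exactly what `toBeCloseTo(1.0, 5)` asserts.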