* fix: remove post-compaction audit injection (Layer 3)
Remove the post-compaction read audit that injects fake system messages
into conversations after context compaction. This audit:
- Hardcodes WORKFLOW_AUTO.md (a file that doesn't exist in standard
workspaces) as a required read after every compaction
- Leaks raw regex syntax (`memory\/\d{4}-\d{2}-\d{2}\.md`) in
user-facing warning messages
- Injects messages via enqueueSystemEvent that appear as user-role
messages, tricking agents into reading attacker-controlled files
- Creates a persistent prompt injection vector (see #27697)
Layer 1 (compaction summary) and Layer 2 (workspace context refresh
from AGENTS.md via post-compaction-context.ts) remain intact and are
sufficient for post-compaction context recovery.
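For context, the removed audit behaved roughly like the sketch below; the message
text and helper shape are reconstructed for illustration, not quoted from the
deleted source:

```ts
// Illustrative reconstruction of the removed Layer 3 audit (not the deleted source).
// After compaction it enqueued a system event that surfaced as a user-role message,
// telling the agent to re-read hardcoded and pattern-matched files.
function runPostCompactionAudit(enqueueSystemEvent: (text: string) => void): void {
  const requiredRead = "WORKFLOW_AUTO.md"; // hardcoded; absent in standard workspaces
  const memoryPattern = /memory\/\d{4}-\d{2}-\d{2}\.md/; // raw source leaked into the warning

  enqueueSystemEvent(
    `Context was compacted. Re-read ${requiredRead} and any file matching ` +
      `${memoryPattern.source} before continuing.`,
  );
}
```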
Deleted files:
- src/auto-reply/reply/post-compaction-audit.ts
- src/auto-reply/reply/post-compaction-audit.test.ts
Modified files:
- src/auto-reply/reply/agent-runner.ts (removed imports, audit map,
flag setting, and Layer 3 audit block)
Fixes #27697, fixes #26851, fixes #20484, fixes #22339, fixes #25600
Relates to #26461
* fix: resolve lint failures from post-compaction audit removal
* Tests: add regression for removed post-compaction audit warnings
---------
Co-authored-by: Wilfred (OpenClaw Agent) <jay@openclaw.dev>
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
When an agent triggers a gateway restart in supervised mode, the process
exits, expecting launchd KeepAlive to respawn it. But ThrottleInterval
(default 10s, or 60s on older installs) can delay or prevent that restart.
Now calls triggerOpenClawRestart() to issue an explicit launchctl
kickstart before exiting, ensuring immediate respawn. Falls back to
in-process restart if kickstart fails.
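A minimal sketch of the new behavior, assuming a Node child_process call and a
hypothetical launchd service label (the real triggerOpenClawRestart() and label
may differ):

```ts
import { execFile } from "node:child_process";

// Sketch only: the real triggerOpenClawRestart() lives in the gateway code, and
// the launchd service target below is an assumption, not taken from the source.
const SERVICE_TARGET = "gui/501/com.openclaw.gateway";

function triggerOpenClawRestart(fallbackInProcessRestart: () => void): void {
  // `launchctl kickstart -k` kills and immediately respawns the service,
  // sidestepping the KeepAlive ThrottleInterval delay.
  execFile("launchctl", ["kickstart", "-k", SERVICE_TARGET], (err) => {
    if (err) {
      // kickstart unavailable or failed: restart in-process instead of exiting.
      fallbackInProcessRestart();
      return;
    }
    process.exit(0); // launchd respawns the gateway right away
  });
}
```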
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(onboard): increase verification timeout and reduce max_tokens for custom provider probes
The onboard wizard sends a chat-completion request to verify custom
providers. With max_tokens: 1024 and a 10 s timeout, large local
models (e.g. Qwen3.5-27B on llama.cpp) routinely time out because
the server needs to load the model and generate up to 1024 tokens
before responding.
Changes:
- Raise VERIFY_TIMEOUT_MS from 10 s to 30 s
- Lower max_tokens from 1024 to 1 (verification only needs a single
token to confirm the API is reachable and the model ID is valid)
- Add explicit stream: false to both OpenAI and Anthropic probes (see the sketch below)
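A minimal sketch of what the adjusted OpenAI-style probe now amounts to
(VERIFY_TIMEOUT_MS is named in this commit; the endpoint path, request shape,
and helper name are assumptions, not the wizard's actual code):

```ts
const VERIFY_TIMEOUT_MS = 30_000; // raised from 10_000

// Sketch only: base URL, auth handling, and prompt text are illustrative.
async function verifyCustomProvider(baseUrl: string, apiKey: string, model: string): Promise<boolean> {
  const res = await fetch(`${baseUrl}/chat/completions`, {
    method: "POST",
    headers: { "content-type": "application/json", authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: "ping" }],
      max_tokens: 1, // a single token proves the model loads and answers
      stream: false, // explicit, so the server never holds the connection open streaming
    }),
    signal: AbortSignal.timeout(VERIFY_TIMEOUT_MS),
  });
  return res.ok;
}
```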
Closes #27346
Made-with: Cursor
* Changelog: note custom-provider onboarding verification fix
---------
Co-authored-by: Philipp Spiess <hello@philippspiess.com>
Land contributor PR #29032 by @maloqab with Slack native alias docs, integration tests, and changelog entry.
Co-authored-by: maloqab <mitebaloqab@gmail.com>
Fix tool-call lookup failures when models emit whitespace-padded names by normalizing
both transcript history and live-streamed embedded-runner tool calls before dispatch.
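The normalization itself is a trim applied to the tool-call name before lookup;
a minimal sketch with an illustrative registry (names here are not the
embedded runner's actual identifiers):

```ts
// Trim whitespace padding so " read_file " and "read_file" resolve to the same tool.
function normalizeToolCallName(name: string): string {
  return name.trim();
}

// Illustrative registry and lookup; the embedded runner's real dispatch differs.
const toolRegistry = new Map<string, () => void>([["read_file", () => {}]]);
const handler = toolRegistry.get(normalizeToolCallName("  read_file "));
```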
Co-authored-by: wangchunyue <80630709+openperf@users.noreply.github.com>
Co-authored-by: Sid <sidqin0410@gmail.com>
Co-authored-by: Philipp Spiess <hello@philippspiess.com>
When Ollama responds successfully but returns zero models (e.g. on Linux
with the bundled `ollama-stub.service`), `discoverOllamaModels` was
logging at `warn` level:
`[agents/model-providers] No Ollama models found on local instance`
This appeared on every agent invocation even when Ollama was not
intentionally configured, polluting production logs. An empty model
list is a normal operational state — it warrants at most a debug
note, not a warning.
Fix: change `log.warn` → `log.debug` for the zero-models branch.
The error paths (HTTP failure, fetch exception) remain at `warn`
since those indicate genuine connectivity problems.
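A paraphrased sketch of the resulting branches in `discoverOllamaModels` (only
the log-level change is from this commit; the fetch shape, error messages, and
logger stand-in are assumptions):

```ts
// Minimal logger stand-in; the module's real logger is assumed, not shown.
const log = { warn: console.warn, debug: console.debug };

async function discoverOllamaModels(baseUrl = "http://127.0.0.1:11434"): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    if (!res.ok) {
      log.warn("[agents/model-providers] Ollama responded with HTTP", res.status);
      return [];
    }
    const body = (await res.json()) as { models?: Array<{ name: string }> };
    const models = body.models ?? [];
    if (models.length === 0) {
      // Normal state (e.g. stub service, no models pulled): debug instead of warn.
      log.debug("[agents/model-providers] No Ollama models found on local instance");
      return [];
    }
    return models.map((m) => m.name);
  } catch (err) {
    log.warn("[agents/model-providers] Failed to reach local Ollama instance", err);
    return [];
  }
}
```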
Closes #26354