createOllamaStreamFn() only accepted baseUrl and ignored custom headers
configured in models.providers.<provider>.headers. This caused 403
errors when an Ollama endpoint sits behind a reverse proxy that requires
auth headers (e.g. X-OLLAMA-KEY via HAProxy).
Add an optional defaultHeaders parameter to createOllamaStreamFn() and
merge those headers into every fetch request. Provider headers from the
config are now passed through at the call site in the embedded runner.
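A minimal sketch of the intended shape, assuming createOllamaStreamFn wraps fetch directly; the exact signature, endpoint path, and internals of the real function are illustrative, not taken from the codebase:

```ts
// Sketch only: real signature and stream handling may differ.
type StreamFn = (body: unknown, signal?: AbortSignal) => Promise<Response>;

export function createOllamaStreamFn(
  baseUrl: string,
  defaultHeaders?: Record<string, string>, // from models.providers.<provider>.headers
): StreamFn {
  return async (body, signal) => {
    return fetch(`${baseUrl}/api/chat`, {
      method: "POST",
      // Merge configured headers into every request; required defaults
      // (e.g. Content-Type) are spread last so they win on conflict.
      headers: {
        ...defaultHeaders,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(body),
      signal,
    });
  };
}
```

At the call site in the embedded runner the provider's configured headers would then be forwarded along the lines of `createOllamaStreamFn(provider.baseUrl, provider.headers)` (names illustrative).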
Fixes #24285
Qwen 3 (and potentially other reasoning-capable models served via Ollama)
returns its final answer in a `reasoning` field with an empty `content`
field. This causes blank/empty responses since OpenClaw only reads `content`.
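For illustration, a response of the kind described, with the final answer in `reasoning` and an empty `content` (field names as described above; values invented):

```ts
// Illustrative Qwen 3 chat response: answer lives in `reasoning`, `content` is empty.
const exampleResponse = {
  model: "qwen3",
  message: {
    role: "assistant",
    content: "",
    reasoning: "The capital of France is Paris.",
  },
  done: true,
};
```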
Changes:
- Add `reasoning?` to OllamaChatResponse message type
- Fall back to `reasoning` when `content` is empty in buildAssistantMessage
- Accumulate `reasoning` chunks during streaming when `content` is empty
This allows Qwen 3 to work correctly both with and without /no_think mode.
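A rough sketch of the fallback, assuming a buildAssistantMessage helper that receives the parsed message and a per-request accumulator for streaming; surrounding types and names are simplified stand-ins, not the actual OpenClaw API:

```ts
interface OllamaChatMessage {
  role: string;
  content: string;
  reasoning?: string; // populated by reasoning-capable models such as Qwen 3
}

// Prefer `content`; fall back to `reasoning` when `content` is empty.
function buildAssistantMessage(msg: OllamaChatMessage): { role: "assistant"; text: string } {
  const text = msg.content.trim().length > 0 ? msg.content : msg.reasoning ?? "";
  return { role: "assistant", text };
}

// Streaming: accumulate `reasoning` chunks only while no `content` has arrived,
// and surface whichever of the two ends up non-empty.
function accumulateChunk(
  acc: { content: string; reasoning: string },
  msg: OllamaChatMessage,
): string {
  acc.content += msg.content;
  if (acc.content.length === 0 && msg.reasoning) {
    acc.reasoning += msg.reasoning;
  }
  return acc.content.length > 0 ? acc.content : acc.reasoning;
}
```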