mirror of
https://github.com/openclaw/openclaw.git
synced 2026-04-18 23:27:26 +00:00
feat(slack): add native text streaming support
Adds support for Slack's Agents & AI Apps text streaming APIs (chat.startStream, chat.appendStream, chat.stopStream) to deliver LLM responses as a single updating message instead of separate messages per block.

Changes:
- New src/slack/streaming.ts with stream lifecycle helpers using the SDK's ChatStreamer (client.chatStream())
- New 'streaming' config option on SlackAccountConfig
- Updated dispatch.ts to route block replies through the stream when enabled, with graceful fallback to normal delivery
- Docs in docs/channels/slack.md covering setup and requirements

The streaming integration works by intercepting the deliver callback in the reply dispatcher. When streaming is enabled and a thread context exists, the first text delivery starts a stream, subsequent deliveries append to it, and the stream is finalized after dispatch completes. Media payloads and error cases fall back to normal message delivery.

Refs:
- https://docs.slack.dev/ai/developing-ai-apps#streaming
- https://docs.slack.dev/reference/methods/chat.startStream
- https://docs.slack.dev/reference/methods/chat.appendStream
- https://docs.slack.dev/reference/methods/chat.stopStream
@@ -563,6 +563,40 @@ Common failures:
For triage flow: [/channels/troubleshooting](/channels/troubleshooting).
## Text streaming
Slack's [Agents & AI Apps](https://docs.slack.dev/ai/developing-ai-apps) feature includes native text streaming APIs that let your app stream responses word-by-word (similar to ChatGPT) instead of waiting for the full response.
Enable it per-account:
```yaml
channels:
  slack:
    streaming: true
```
### Requirements
1. **Agents & AI Apps** must be toggled on in your [Slack app settings](https://api.slack.com/apps). This automatically adds the `assistant:write` scope.
2. Streaming only works **within threads** (DM threads, channel threads). Messages without a thread context fall back to normal delivery automatically.
3. Block streaming (`blockStreaming`) is automatically enabled when `streaming` is active so the LLM's incremental output feeds into the stream.
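The gating rule from requirement 2 can be sketched as a small predicate. This is an illustrative TypeScript snippet, not the project's actual code; the names `IncomingContext` and `shouldStream` are hypothetical.

```typescript
// Hypothetical helper: streaming is attempted only when the account
// option is on AND the incoming message carries a thread context.
interface IncomingContext {
  streamingEnabled: boolean;
  threadTs?: string; // present for DM threads and channel threads
}

function shouldStream(ctx: IncomingContext): boolean {
  return ctx.streamingEnabled && typeof ctx.threadTs === "string";
}
```

Messages that fail this check simply take the normal delivery path, which is why no extra configuration is needed for non-threaded channels.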
### Behavior
- On the first text block the bot calls `chat.startStream` to create a single updating message.
- Subsequent text blocks are appended via `chat.appendStream`.
- When the reply is complete the stream is finalized with `chat.stopStream`.
- Media attachments (images, files) are delivered as separate messages alongside the stream.
- If a streaming API call fails, the bot gracefully falls back to normal message delivery for the remainder of the response.
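The lifecycle above can be sketched as a small state machine. This is a hedged TypeScript illustration of the behavior described, assuming a minimal client interface; the real implementation lives in `src/slack/streaming.ts` and uses the SDK's `ChatStreamer`, and the names `StreamClient` and `ReplyStream` are invented for this sketch.

```typescript
// Minimal client surface assumed by the sketch (not the real SDK types).
interface StreamClient {
  startStream(channel: string, threadTs: string): Promise<string>; // returns the stream message ts
  appendStream(channel: string, ts: string, text: string): Promise<void>;
  stopStream(channel: string, ts: string): Promise<void>;
  postMessage(channel: string, threadTs: string, text: string): Promise<void>; // normal delivery
}

class ReplyStream {
  private ts: string | null = null;
  private failed = false;

  constructor(
    private client: StreamClient,
    private channel: string,
    private threadTs: string,
  ) {}

  // First text block starts the stream; later blocks append to it.
  async deliver(text: string): Promise<void> {
    if (this.failed) {
      await this.client.postMessage(this.channel, this.threadTs, text);
      return;
    }
    try {
      if (this.ts === null) {
        this.ts = await this.client.startStream(this.channel, this.threadTs);
      }
      await this.client.appendStream(this.channel, this.ts, text);
    } catch {
      // Any streaming failure flips the rest of the reply to normal delivery.
      // (A fuller version would also try to close a partially opened stream.)
      this.failed = true;
      await this.client.postMessage(this.channel, this.threadTs, text);
    }
  }

  // Called once after dispatch completes.
  async finalize(): Promise<void> {
    if (this.ts !== null && !this.failed) {
      await this.client.stopStream(this.channel, this.ts);
    }
  }
}
```

For a two-block reply this produces the call sequence start, append, append, stop; if `startStream` throws, every block is posted as an ordinary message instead.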
### Relevant Slack API methods
| Method                                                                            | Purpose                   |
| --------------------------------------------------------------------------------- | ------------------------- |
| [`chat.startStream`](https://docs.slack.dev/reference/methods/chat.startStream)   | Start a new text stream   |
| [`chat.appendStream`](https://docs.slack.dev/reference/methods/chat.appendStream) | Append text to the stream |
| [`chat.stopStream`](https://docs.slack.dev/reference/methods/chat.stopStream)     | Finalize the stream       |
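To make the call shapes concrete, here is a hedged sketch that only builds request payloads (no network calls). The parameter names shown (`channel`, `thread_ts`, `ts`, `markdown_text`) follow the linked reference pages, but verify them against the current Slack docs before relying on this; `streamCall` is a hypothetical helper, not part of any SDK.

```typescript
// Hypothetical payload builder for the three streaming methods.
type SlackArgs = Record<string, string>;

function streamCall(
  method: "chat.startStream" | "chat.appendStream" | "chat.stopStream",
  args: SlackArgs,
): { url: string; body: string } {
  return {
    url: `https://slack.com/api/${method}`,
    body: JSON.stringify(args),
  };
}

// Assumed parameter shapes, per the linked reference pages:
const start = streamCall("chat.startStream", {
  channel: "C0123456789",
  thread_ts: "1700000000.000100", // streaming requires a thread context
});
const append = streamCall("chat.appendStream", {
  channel: "C0123456789",
  ts: "1700000000.000200", // ts returned by chat.startStream
  markdown_text: "partial text",
});
const stop = streamCall("chat.stopStream", {
  channel: "C0123456789",
  ts: "1700000000.000200",
});
```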
## Notes
- Mention gating is controlled via `channels.slack.channels` (set `requireMention` to `true`); `agents.list[].groupChat.mentionPatterns` (or `messages.groupChat.mentionPatterns`) also count as mentions.
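As a rough illustration of how those keys fit together, the fragment below uses only the key names mentioned above; the exact nesting of per-channel entries and the channel/pattern values are assumptions for illustration, so check them against the full configuration reference.

```yaml
channels:
  slack:
    channels:
      "#support":          # per-channel entry (placeholder name)
        requireMention: true
agents:
  list:
    - groupChat:
        mentionPatterns:   # matches here also count as mentions
          - "@mybot"
```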