How to Summarize Slack Channels Daily with OpenClaw in Paradime
Feb 26, 2026
How to Build an Automated Slack Channel Digest with Paradime, OpenClaw, and the Slack SDK
Your team's most important decisions are buried in Slack threads. Here's how to surface them automatically — every single morning.
The Silent Killer: Lost Context in Data Teams
Picture this: it's Monday morning. A new analytics engineer joins your team. They ask a simple question — "Why did we change the grain on fct_orders last sprint?"
Nobody remembers. The decision was made in a Slack thread three weeks ago, buried under 200 messages in #data-engineering. The Confluence page still describes the old schema. The person who made the call? On PTO.
This isn't a hypothetical. It's the daily reality for data teams everywhere:
Stale documentation — Wiki pages written months ago that no longer reflect reality. Models get refactored, but the docs describing them don't.
Missing context — Critical architectural decisions made in Slack DMs and threads that never reach any permanent record.
Tribal knowledge — The "just ask Sarah" problem. According to Alation, 96% of companies report losing critical tribal knowledge from staffing changes.
Figure 1: How context evaporates across the three most common knowledge channels in data teams.
The fix isn't "write more docs." The fix is automating context capture — extracting decisions, action items, and open questions from where they naturally happen (Slack) and routing them to where they're permanently useful.
In this guide, you'll build exactly that: an OpenClaw skill backed by the Slack SDK that reads your data team's channels every 24 hours, summarizes the key decisions and action items, and delivers a daily digest — all orchestrated on a cron schedule through Paradime's Bolt.
What is Paradime?
Paradime is the all-in-one AI-native platform for data engineering that replaces dbt Cloud™. Think of it as Cursor for Data — it gives analytics teams a single environment to code, ship, fix, and scale data pipelines using dbt™ and Python.
For this guide, the feature that matters most is Bolt — Paradime's production orchestration engine. Bolt lets you schedule dbt™ commands, Python scripts, and custom workflows using cron expressions, trigger-based dependencies, merge events, or API calls.
Key Bolt capabilities relevant to this tutorial:
Cron scheduling with timezone support and standard 5-field expressions
Schedules as code via `paradime_schedules.yml` alongside your `dbt_project.yml`
Slack notifications built in — send pass/fail alerts to any channel
Environment variable overrides per schedule
AI-powered debugging with DinoAI for when things break
Here's what a Bolt schedule looks like as code:
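A minimal sketch of a `paradime_schedules.yml` entry. The exact keys (`slack_on`, `slack_notify`, `timezone`) should be verified against the Paradime docs linked below; treat this as illustrative rather than a copy-paste config:

```yaml
version: 1
schedules:
  - name: daily_morning_run
    schedule: "0 8 * * 1-5"      # standard 5-field cron: 8 AM, Mon-Fri
    timezone: America/New_York   # timezone-aware scheduling
    environment: production
    commands:
      - dbt run
      - dbt test
    slack_on: [failed]           # built-in Slack notifications
    slack_notify: ["#data-engineering"]
```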
📖 Full documentation: Paradime Bolt — Schedules as Code
What is OpenClaw?
OpenClaw is an open-source autonomous AI agent that runs on your own hardware. Unlike cloud-hosted AI assistants, OpenClaw operates locally, connects to LLMs like Claude or GPT via API, and uses a skills-based architecture to perform real tasks across your messaging apps, files, and operating systems.
For data teams, OpenClaw is compelling because:
It connects to Slack natively — via Socket Mode or HTTP Events API, with full channel history access
It runs on your machine — your data stays yours; no messages leave your infrastructure unless you configure it to
It's extensible via skills — self-contained plugins (a `SKILL.md` + optional scripts) that teach the agent new capabilities
It has built-in cron — the Gateway's scheduler can trigger tasks on any standard cron expression
OpenClaw skills follow a simple three-layer structure:
Manifest — YAML frontmatter declaring the skill's name, description, and required tools
Instructions — Markdown directives that tell the agent how to execute the skill
Supporting resources — Optional scripts (Python, Bash, Node.js) the skill invokes
📖 Full documentation: OpenClaw Skills | Creating Skills
Setup: openclaw-sdk + Slack SDK
Before writing the digest script, you need three things installed and configured: the OpenClaw CLI/package, the Slack SDK, and the necessary API tokens.
Step 1: Install OpenClaw
OpenClaw requires Node.js ≥ 22. Install it globally:
Run the onboarding wizard to configure your LLM provider (Anthropic, OpenAI, or a local model):
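Assuming OpenClaw is distributed as an npm package named `openclaw` and ships an onboarding subcommand (both are assumptions; check the project's install docs for your version), the two steps would look like:

```shell
# Requires Node.js >= 22
npm install -g openclaw

# Interactive onboarding: choose Anthropic, OpenAI, or a local model
openclaw onboard
```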
Step 2: Install the Slack SDK (Python)
The digest script will use the official Slack SDK for Python to call the conversations.history API:
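The package is published on PyPI as `slack_sdk`:

```shell
pip install slack_sdk
```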
Step 3: Create a Slack App
Go to api.slack.com/apps → Create New App → From scratch
Enable Socket Mode → create an App-Level Token (`xapp-...`) with the `connections:write` scope
Under OAuth & Permissions, add these Bot Token Scopes: `channels:history` (to read message history), `chat:write` (to post the digest), and `groups:history` (only if you digest private channels)
Install the app to your workspace → copy the Bot User OAuth Token (`xoxb-...`)
Invite the bot to each channel you want to digest:
/invite @YourBotName
Step 4: Configure OpenClaw's Slack Channel
Add Slack to your OpenClaw configuration at ~/.openclaw/openclaw.json:
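A sketch of the relevant section of `~/.openclaw/openclaw.json`. The key names here are assumptions; confirm them against the OpenClaw Slack channel docs referenced at the end of this section:

```json
{
  "channels": {
    "slack": {
      "enabled": true,
      "mode": "socket",
      "botToken": "xoxb-your-bot-token",
      "appToken": "xapp-your-app-token"
    }
  }
}
```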
Or use environment variables as fallback:
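The equivalent environment-variable fallback. `SLACK_BOT_TOKEN` matches the variable used by the digest script later in this guide; `SLACK_APP_TOKEN` as the name for the Socket Mode token is an assumption:

```shell
# Bot User OAuth Token (xoxb-...) from OAuth & Permissions
export SLACK_BOT_TOKEN="xoxb-your-bot-token"
# App-Level Token (xapp-...) for Socket Mode
export SLACK_APP_TOKEN="xapp-your-app-token"
```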
📖 Full Slack setup reference: OpenClaw Slack Channel Docs
Figure 2: Setup sequence — installing OpenClaw, connecting to your LLM provider, and authenticating with Slack.
The Script: Read, Summarize, and Deliver
Now for the core of the automation: a Python script that reads the last 24 hours of messages from your configured channels, uses an LLM to extract key decisions and action items, and posts a structured digest.
Environment Variables
The script expects four environment variables:

| Variable | Description | Example |
|---|---|---|
| `SLACK_BOT_TOKEN` | Bot User OAuth Token from your Slack App | `xoxb-...` |
| `ANTHROPIC_API_KEY` | Your LLM provider API key (used by OpenClaw for summarization) | `sk-ant-...` |
| `SLACK_CHANNELS` | Comma-separated list of Slack channel IDs to digest | `C0123ABCD,C0456EFGH` |
| `SLACK_DIGEST_CHANNEL` | Channel ID where the finished digest is posted | `C0789JKLM` |
💡 Tip: Find channel IDs by right-clicking a channel name in Slack → View channel details → the ID is at the bottom of the dialog.
The Digest Script
Figure 3: The daily digest pipeline — from cron trigger to posted summary.
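Here is a sketch of the script, under stated assumptions: the Anthropic Messages API handles summarization (swap in your own provider), and the environment variables described above are set. Treat the model name and prompt as starting points, not fixed choices:

```python
"""Daily Slack digest: read 24h of messages, summarize, post. A sketch, not a drop-in."""
import os
import time
from datetime import datetime, timedelta, timezone


def lookback_oldest(hours: int = 24) -> str:
    """Unix timestamp (as a string, per the Slack API) for `hours` ago, in UTC."""
    oldest = datetime.now(timezone.utc) - timedelta(hours=hours)
    return f"{oldest.timestamp():.6f}"


def render_transcript(messages: list[dict]) -> str:
    """Flatten raw Slack messages into 'user: text' lines for the LLM."""
    lines = []
    for m in messages:
        if m.get("subtype"):  # skip joins, bot housekeeping, etc.
            continue
        lines.append(f"{m.get('user', 'unknown')}: {m.get('text', '')}")
    return "\n".join(lines)


def fetch_channel_messages(client, channel_id: str) -> list[dict]:
    """Page through conversations.history for the last 24 hours."""
    oldest = lookback_oldest()
    messages, cursor = [], None
    while True:
        resp = client.conversations_history(
            channel=channel_id, oldest=oldest, cursor=cursor, limit=200
        )
        messages.extend(resp["messages"])
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            return messages


def summarize(transcript: str) -> str:
    """Ask the LLM for decisions, action items, and open questions."""
    import anthropic  # assumed provider; reads ANTHROPIC_API_KEY

    llm = anthropic.Anthropic()
    prompt = (
        "Summarize this Slack transcript as three sections: "
        "Decisions, Action Items, Open Questions.\n\n" + transcript
    )
    msg = llm.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


def main() -> None:
    from slack_sdk import WebClient

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    sections = []
    for channel_id in os.environ["SLACK_CHANNELS"].split(","):
        transcript = render_transcript(fetch_channel_messages(client, channel_id))
        if transcript:
            sections.append(f"*<#{channel_id}>*\n{summarize(transcript)}")
        time.sleep(1)  # stay under Slack rate limits
    if sections:
        client.chat_postMessage(
            channel=os.environ["SLACK_DIGEST_CHANNEL"],
            text=":newspaper: *Daily Digest*\n\n" + "\n\n".join(sections),
        )


if __name__ == "__main__":
    main()
```

The network-dependent imports (`slack_sdk`, `anthropic`) are deliberately deferred into the functions that use them, so the pure helpers can be imported and tested without either package installed.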
Bolt Schedule: Cron at 8 AM Daily
With the script ready, you need to schedule it to run every weekday morning. You have two options: OpenClaw's built-in cron, or Paradime Bolt. We'll set up both so you can choose the approach that fits your stack.
Option A: OpenClaw Built-in Cron
OpenClaw's Gateway has a built-in scheduler that persists jobs in ~/.openclaw/cron/jobs.json. You can add a cron job via the CLI or as a tool call:
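A CLI invocation might look like the following. The flag names are assumptions; check `openclaw cron --help` on your install:

```shell
openclaw cron add \
  --name slack-daily-digest \
  --cron "0 8 * * 1-5" \
  --tz "America/New_York" \
  --command "python ~/skills/slack-digest/digest.py"
```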
Or as a JSON tool call for programmatic setup:
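A sketch of the equivalent JSON payload. The field names are assumptions apart from `maxAttempts`, which this guide's troubleshooting section mentions as the built-in retry setting:

```json
{
  "tool": "cron",
  "action": "add",
  "job": {
    "name": "slack-daily-digest",
    "schedule": { "cron": "0 8 * * 1-5", "tz": "America/New_York" },
    "payload": { "kind": "command", "command": "python ~/skills/slack-digest/digest.py" },
    "retry": { "maxAttempts": 3 }
  }
}
```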
📖 Full cron reference: OpenClaw Cron Jobs
Option B: Paradime Bolt Schedule
If you're already running dbt™ pipelines through Paradime, you can add the digest as a Bolt schedule. This approach is especially powerful because you can chain it after your morning data pipeline — ensuring the digest runs only after fresh data is available.
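A sketch of the digest as a Bolt schedule entry in `paradime_schedules.yml` (key names illustrative; verify against the Paradime docs):

```yaml
  - name: slack_daily_digest
    schedule: "0 8 * * 1-5"
    timezone: America/New_York
    environment: production
    commands:
      - python scripts/slack_digest.py
    slack_on: [failed]
    slack_notify: ["#data-engineering"]
```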
💡 Pro tip: Use Bolt's On Run Completion trigger type to chain the digest after your morning `dbt run`. This way, if the pipeline runs late, the digest waits instead of firing on stale context. See Trigger Types.
Figure 4: Chaining the Slack digest after the morning dbt™ pipeline using Bolt's On Run Completion trigger.
Monitoring and Debugging
Once the digest is running, you need visibility into whether it's working — and fast diagnostics when it isn't.
OpenClaw Diagnostics
OpenClaw ships with a built-in doctor command that checks your entire configuration:
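The command takes no arguments in the simplest case:

```shell
openclaw doctor
```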
This validates:
Gateway connectivity and port availability
LLM provider API key validity
Channel configurations (Slack token scopes, Socket Mode status)
Skill loading and dependency resolution
For live debugging, run the gateway in verbose mode:
Check cron job history and run logs:
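These two steps might look like the following. Only `openclaw cron runs --id` is referenced elsewhere in this guide; the `--verbose` flag and `cron list` subcommand are assumptions to verify against your version's help output:

```shell
# Run the gateway with verbose logging
openclaw gateway --verbose

# List scheduled jobs, then inspect a specific job's run history
openclaw cron list
openclaw cron runs --id <job-id>
```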
Paradime Bolt Monitoring
Bolt provides run history and analytics directly in the UI:
Navigate to Bolt in your Paradime workspace
Click the schedule name → Run History to see past executions
Check run logs for stdout/stderr output from the Python script
Set up SLA alerts so you're notified if the digest takes longer than expected
If a run fails, Bolt's DinoAI can analyze the error logs and suggest fixes directly in the UI.
📖 Reference: Viewing Run History and Analytics
Adding dbt™ Tests for Data Quality
If you're storing digest outputs in your warehouse (e.g., for trend analysis), you can add dbt™ tests to validate the data:
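For example, assuming a hypothetical `slack_digests` model with one row per channel per day, a `schema.yml` with basic data tests might look like:

```yaml
version: 2
models:
  - name: slack_digests
    columns:
      - name: digest_date
        data_tests:
          - not_null
      - name: channel_id
        data_tests:
          - not_null
      - name: summary_text
        data_tests:
          - not_null
```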
📖 Reference: dbt™ Data Tests
Troubleshooting Common Issues
1. not_in_channel Error When Fetching History
Symptom: SlackApiError: The request to the Slack API failed. (error: 'not_in_channel')
Fix: The bot must be a member of each channel it reads. Invite it manually with `/invite @YourBotName` in each affected channel.
For private channels, the bot also needs the groups:history scope.
2. OpenClaw Gateway Won't Start
Symptom: Gateway start blocked: set gateway.mode=local
Fix: Set `gateway.mode` to `local` explicitly in your OpenClaw config (or via the CLI, if your build exposes one):
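A sketch of the relevant `~/.openclaw/openclaw.json` fragment. The `gateway.mode=local` key path comes straight from the error message; the JSON nesting is an assumption:

```json
{
  "gateway": {
    "mode": "local"
  }
}
```

Restart the gateway after changing the config so the new mode takes effect.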
📖 Reference: OpenClaw Gateway Troubleshooting
3. Cron Job Runs But No Digest Appears
Checklist:
Verify `SLACK_CHANNELS` env var contains valid channel IDs (not channel names)
Check that messages exist in the channels within the last 24 hours
Confirm `SLACK_DIGEST_CHANNEL` is set and the bot has `chat:write` permission there
Run `openclaw cron runs --id <job-id>` to check the run log for errors
4. LLM Summarization Returns Empty or Garbled Output
Possible causes:
API rate limit hit — OpenClaw's cron has built-in retry with backoff (`maxAttempts: 3`)
Token limit exceeded — if channels are very active, the combined message text may exceed the model's context window
Fix: Add a truncation step before summarization:
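One simple approach is a character budget, on the rough rule of thumb of about four characters per token (an approximation, not an exact tokenizer). The budget below is an illustrative value; tune it to your model's context window:

```python
MAX_CHARS = 60_000  # leave headroom below the model's context window


def truncate_transcript(text: str, max_chars: int = MAX_CHARS) -> str:
    """Keep the most recent messages: trim from the front, not the back."""
    if len(text) <= max_chars:
        return text
    # Keep the tail of the transcript, then cut at a line boundary
    # so we don't hand the model half a message.
    tail = text[-max_chars:]
    newline = tail.find("\n")
    return tail[newline + 1:] if newline != -1 else tail
```

Call it on the combined transcript just before the summarization step, so the most recent conversation always survives the cut.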
5. Bolt Schedule Shows "No Previous State"
Symptom: Deferred schedule fails with No previous state comparison
Fix: Ensure the schedule has run at least once successfully before enabling deferred mode. Run it manually first via the Bolt UI or the Paradime API.
6. Timezone Mismatches
Symptom: The digest runs at the wrong time or includes messages outside the expected window.
Fix: Ensure consistent timezone usage:
Bolt schedule: set `timezone: America/New_York` (or your timezone)
OpenClaw cron: use `"tz": "America/New_York"` in the schedule config
The Python script uses `datetime.now(timezone.utc)` for the 24-hour lookback — this is timezone-agnostic by design
Figure 5: Decision tree for diagnosing a missing daily digest.
Wrapping Up
Let's step back and look at what you've built:
| Before | After |
|---|---|
| Decisions buried in Slack threads | Decisions extracted and posted daily |
| "Ask Sarah" for context | Context captured automatically, even when Sarah is on PTO |
| Stale Confluence pages | Living digests that reflect what actually happened |
| New hires spend weeks building context | New hires read the digest archive from day one |
The architecture is simple and robust:
Slack SDK reads the last 24 hours of messages via
conversations.historyOpenClaw summarizes them using your LLM of choice — locally, with no data egress
Paradime Bolt (or OpenClaw's built-in cron) triggers the script every weekday at 8 AM
The structured digest — decisions, action items, open questions — lands in your team's channel before standup
This captures the bulk of the decisions flowing through your team's Slack channels. Far less tribal knowledge. Far fewer "I think we decided that in a thread somewhere" moments. The context is captured, structured, and searchable.
Next Steps
Extend the skill to also scan Slack threads (using `conversations.replies`) for deeper context extraction
Store digests in your warehouse — load them into a dbt™ model for trend analysis (which channels are most active? Which decisions get revisited?)
Add dbt™-llm-evals — use Paradime's dbt-llm-evals package to evaluate summary quality directly in your warehouse
Chain with data freshness checks — use `dbt source freshness` in your Bolt pipeline to ensure the digest only fires when upstream data is current
The best documentation isn't written — it's captured. And with Paradime, OpenClaw, and the Slack SDK, capturing it is now a 30-minute setup that runs forever.
Ready to automate your data pipelines? Get started with Paradime — the AI-native dbt Cloud™ replacement. Explore OpenClaw for local AI agent workflows, and check the Slack SDK documentation for advanced API usage.

