How to Curate Industry News Digests with OpenClaw in Paradime
Feb 26, 2026
Every data team has felt this pain. Your dbt™ project has 400 models, but half lack descriptions. A new analyst joins, asks "What does fct_user_engagement mean?", and three people give three different answers. Industry news about breaking changes in Snowflake or new dbt™ features arrives buried in someone's Twitter feed—never surfacing where the team can act on it. Stale docs, missing context, and tribal knowledge aren't just annoyances; they're compounding risks that slow down every decision your team makes.
What if you could wire up a system that automatically aggregates the news your data team cares about, filters it for relevance, summarizes the top stories, and drops a polished digest into Slack every morning at 8 AM—all orchestrated through Paradime Bolt?
This guide walks you through exactly that. You'll combine Paradime, OpenClaw, and the Slack SDK into a fully automated, daily news-digest pipeline that covers the sources your team needs to stay current.
What is Paradime?
Paradime is an all-in-one, AI-native platform that replaces dbt Cloud™. It gives data teams a single workspace to code, ship, fix, and scale data pipelines for analytics and AI. Think of it as "Cursor for Data."
Key capabilities relevant to this workflow:
| Capability | What It Does |
|---|---|
| Code IDE | AI-native IDE with DinoAI that cuts dbt™ and Python development time by 83%+ |
| Bolt | Production scheduler for dbt™ and Python pipelines with cron, event-driven, merge, and API triggers |
| Radar | FinOps tooling to cut Snowflake and BigQuery costs |
| Paradime Docs | AI-powered documentation with one-click autogeneration and bi-directional YAML sync |
Bolt is the orchestration engine you'll use to schedule the news-digest script on a daily cron. It supports Schedules as Code via a paradime_schedules.yml file, environment variable overrides at the schedule level, and Slack notifications baked right in.
What is OpenClaw?
OpenClaw is a self-hosted, local-first AI assistant platform. You run an always-on process called the Gateway on hardware you control, and the Gateway connects to messaging apps, runs agent turns, and optionally invokes tools. Its architecture is modular—capabilities ship as composable Skills that you install individually.
For news aggregation, OpenClaw's toolkit has three pillars:
Figure 1: OpenClaw's three-pillar news pipeline — from raw feeds to human-ready briefings.
Aggregation — Pulls from RSS/Atom feeds, custom JSON endpoints, and web-scraping targets into a unified chronological stream. Feeds can be grouped into named collections (e.g., "dbt-ecosystem") and queried conversationally.

Monitoring — Defines watch rules combining keywords, sentiment thresholds, and source scopes. Triggers real-time alerts when matching articles appear.

Writing Assistance — Generates daily/weekly briefings, draft blog posts, and social media snippets from aggregated content. Customizable via natural-language prompts or YAML config.
The openclaw-feeds skill (GitHub) is an open-source RSS aggregator that fetches headlines with concurrent fetching, streamed JSON output, and built-in deduplication. It requires only Python 3, feedparser, and network access—no API keys needed for the feed-fetching layer itself.
Setup: openclaw-sdk + RSS Feeds + Slack SDK
Prerequisites
Before you begin, ensure you have:
Node.js ≥ 22 (for OpenClaw Gateway)
Python 3.10+ (for the aggregation/summarization script)
A Paradime account with Bolt access
A Slack workspace where you can create an app
Step 1: Install OpenClaw
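A minimal install sketch follows. The package name and the onboarding subcommand are assumptions, so substitute the project's documented install command:

```
# Hypothetical install commands; requires Node.js ≥ 22
npm install -g openclaw
openclaw onboard   # interactive setup; writes ~/.openclaw/openclaw.json
```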
This sets up the Gateway process at ~/.openclaw/ and creates your configuration file at ~/.openclaw/openclaw.json.
Step 2: Install the RSS Feeds Skill
The openclaw-feeds skill handles all RSS parsing. Install it from the official skills repository:
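One plausible shape for this step. Both the repository URL and the subcommand are assumptions; use the path given in the skill's README:

```
# Hypothetical repo URL and subcommand; verify against the skill's README
git clone https://github.com/openclaw/skills ~/.openclaw/skills
openclaw skills install feeds
```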
Then install the Python dependencies the skill requires:
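Per the requirements noted above, the feed-fetching layer needs only feedparser:

```
python3 -m pip install feedparser
```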
Step 3: Configure Your RSS Feed Sources
Create a feeds_config.yaml file in your project root to define the sources your team cares about:
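An illustrative layout is shown below. The key names are assumptions (the exact schema the openclaw-feeds skill expects may differ), and the feed URLs are examples to replace with your own:

```yaml
# feeds_config.yaml: illustrative schema; key names are assumptions
collections:
  dbt-ecosystem:
    - https://www.getdbt.com/blog/rss.xml       # example URL, verify before use
    - https://discourse.getdbt.com/latest.rss   # example URL, verify before use
  warehouses:
    - https://www.snowflake.com/feed/           # example URL, verify before use
keywords:
  - dbt
  - snowflake
  - bigquery
refresh_hours: 24
```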
Step 4: Create a Slack Incoming Webhook
1. Go to https://api.slack.com/apps?new_app=1 and create a new app.
2. Navigate to Incoming Webhooks → toggle Activate Incoming Webhooks to On.
3. Click Add New Webhook to Workspace, select the channel (e.g., #data-team-news), and authorize.
4. Copy the webhook URL — you'll store this as an environment variable.
Figure 2: Slack incoming webhook setup flow — from app creation to webhook URL.
Script: Aggregate, Filter, and Summarize Top 5 Stories
Here's the complete Python script that ties everything together. It reads from your configured RSS feeds, filters for relevance using keyword matching, uses the OpenClaw SDK to summarize the top stories, and posts the digest to Slack.
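The sketch below is stdlib-only so it runs anywhere. Two loud assumptions: the OpenClaw Gateway summarization route (/v1/summarize) and its payload shape are hypothetical, and the minimal XML parsing stands in for the openclaw-feeds skill's more robust concurrent fetcher.

```python
"""news_digest.py: aggregate, filter, summarize, and post to Slack."""
import json
import os
import urllib.request
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone
from email.utils import parsedate_to_datetime

KEYWORDS = ["dbt", "snowflake", "bigquery", "paradime", "data"]  # tune per team


def fetch_entries(feed_url: str) -> list[dict]:
    """Fetch one RSS feed and return its items as plain dicts."""
    with urllib.request.urlopen(feed_url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return [
        {
            "title": item.findtext("title", ""),
            "link": item.findtext("link", ""),
            "published": item.findtext("pubDate", ""),
        }
        for item in root.iter("item")
    ]


def is_recent(pub_date: str, hours: int = 24) -> bool:
    """True if the RFC 2822 pubDate falls inside the digest window."""
    try:
        published = parsedate_to_datetime(pub_date)
    except (TypeError, ValueError):
        return False  # missing or malformed date: exclude
    if published.tzinfo is None:
        published = published.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) - published < timedelta(hours=hours)


def score_relevance(title: str, keywords=KEYWORDS) -> int:
    """Count keyword hits in a headline (case-insensitive)."""
    lowered = title.lower()
    return sum(1 for kw in keywords if kw in lowered)


def dedupe(entries: list[dict]) -> list[dict]:
    """Drop entries whose link was already seen (syndicated duplicates)."""
    seen, unique = set(), []
    for entry in entries:
        if entry["link"] not in seen:
            seen.add(entry["link"])
            unique.append(entry)
    return unique


def summarize(stories: list[dict]) -> str:
    """Ask the OpenClaw Gateway for a briefing (endpoint is hypothetical)."""
    req = urllib.request.Request(
        "http://localhost:18789/v1/summarize",  # assumed Gateway route
        data=json.dumps({"task": "briefing", "stories": stories}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENCLAW_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.loads(resp.read())["summary"]


def post_to_slack(text: str) -> None:
    """Post the digest as a single Block Kit section block."""
    payload = {
        "blocks": [{"type": "section", "text": {"type": "mrkdwn", "text": text}}]
    }
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"],
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)


def main() -> None:
    # In practice, load feed URLs from the file at $RSS_FEEDS with PyYAML;
    # hardcoded here to keep the sketch stdlib-only.
    feed_urls = ["https://example.com/rss.xml"]
    entries = dedupe([e for url in feed_urls for e in fetch_entries(url)])
    fresh = [e for e in entries if is_recent(e["published"])]
    top5 = sorted(fresh, key=lambda e: score_relevance(e["title"]),
                  reverse=True)[:5]
    post_to_slack(summarize(top5))


if __name__ == "__main__":
    main()
```

In production you'd swap the hardcoded feed list for the feeds_config.yaml sources and reuse the openclaw-feeds skill's concurrent fetcher instead of the sequential loop here.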
How the Script Works
Figure 3: End-to-end flow of the news digest script — from config loading to Slack delivery.
Environment Variables: OPENCLAW_API_KEY, SLACK_WEBHOOK_URL, RSS_FEEDS
The script depends on three environment variables. Here's what each one does and where to set it:
| Variable | Purpose | Where to Get It |
|---|---|---|
| OPENCLAW_API_KEY | Authenticates requests to the OpenClaw Gateway API. This is the token you set in your Gateway config. | Set during Gateway setup; stored in ~/.openclaw/openclaw.json |
| SLACK_WEBHOOK_URL | The incoming webhook URL that Slack generates for your app. Posts go to the channel you authorized. | Slack API Portal → Your App → Incoming Webhooks |
| RSS_FEEDS | Path to your feeds_config.yaml file. | Your project repo |
Setting Variables in Paradime Bolt
In Paradime, navigate to Settings → Workspaces → Environment Variables → Bolt Schedules and add each variable:
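The entries would look like this (the first two values are placeholders, not real credentials):

```
OPENCLAW_API_KEY   = <your Gateway token>
SLACK_WEBHOOK_URL  = https://hooks.slack.com/services/T000/B000/XXXX
RSS_FEEDS          = feeds_config.yaml
```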
You can also override these at the schedule level. For example, if you want a different Slack channel for a weekend digest, create a separate schedule with an overridden SLACK_WEBHOOK_URL pointing to #data-team-weekend. Only Admin roles can override environment variable values in Bolt Schedules. See Paradime docs on environment variable overrides.
Setting Variables in OpenClaw
OpenClaw supports environment variables via a .env file at ~/.openclaw/.env or inline in the config:
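A sketch of both options. The .env format is standard; the "env" key name in the JSON config is an assumption to verify against your openclaw.json schema:

```
# ~/.openclaw/.env (values are placeholders)
OPENCLAW_API_KEY=oc_xxxxxxxxxxxx
SLACK_WEBHOOK_URL=https://hooks.slack.com/services/T000/B000/XXXX
```

```json
{
  "env": {
    "OPENCLAW_API_KEY": "oc_xxxxxxxxxxxx"
  }
}
```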
For secure secret management, OpenClaw also supports SecretRef objects that pull from environment variables, files, or shell commands. See OpenClaw Secrets Management.
Bolt Schedule: Cron Daily at 8 AM
Paradime Bolt supports four trigger types: Scheduled Run (cron), On Run Completion, On Merge, and Bolt API. For a daily news digest, you'll use a cron-based Scheduled Run.
Option A: Schedules as Code (Recommended)
Create a paradime_schedules.yml file in the root of your dbt™ project (alongside dbt_project.yml):
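A sketch of the schedule definition. The field names follow Paradime's Schedules-as-Code convention, but verify the exact schema against the Paradime docs before committing:

```yaml
# paradime_schedules.yml (field names: verify against Paradime docs)
version: 1
schedules:
  - name: daily_news_digest
    schedule: "0 8 * * *"            # 8 AM daily, in the workspace timezone
    environment: production
    commands:
      - python3 news_digest.py
    sla_minutes: 15                  # alert if the run exceeds 15 minutes
    slack_on: ["failed"]             # notification triggers (verify allowed values)
    slack_notify: ["#data-team-news"]
```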
How it works: Paradime auto-checks paradime_schedules.yml on your default branch every 10 minutes. For immediate pickup, use the "Parse Schedules" button in the Bolt UI. Cron syntax reference: crontab.guru.
Option B: UI-Based Schedule
1. Navigate to Bolt in Paradime.
2. Click Create Schedule.
3. Set the trigger type to Scheduled Run.
4. Enter the cron expression: 0 8 * * *.
5. Select your timezone.
6. Add the command: python3 news_digest.py.
7. Configure notifications and deploy.
Complementary: OpenClaw Cron Job
If you want OpenClaw to also run its own summarization independently (e.g., for a Telegram or Discord channel), you can configure a cron job directly in the Gateway:
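A hedged CLI sketch: the subcommand names and flags below are assumptions, not documented commands, so check your Gateway's help output for the real interface:

```
# Hypothetical CLI; verify against `openclaw --help`
openclaw cron add daily-digest \
  --schedule "0 8 * * *" \
  --prompt "Summarize today's top data-engineering stories" \
  --deliver telegram
```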
Or via the config file:
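An equivalent config-file sketch. The key names are assumptions; check the schema of your ~/.openclaw/openclaw.json:

```json
{
  "cron": [
    {
      "name": "daily-digest",
      "schedule": "0 8 * * *",
      "prompt": "Summarize today's top data-engineering stories",
      "deliver": "telegram"
    }
  ]
}
```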
Figure 4: Architecture overview — Bolt triggers the script, which fetches RSS feeds, summarizes via OpenClaw, and posts to Slack.
Monitoring and Debugging
A daily digest is only useful if it actually runs. Here's how to monitor the pipeline across both platforms.
Paradime Bolt Monitoring
Bolt provides built-in schedule monitoring out of the box:
Schedule Dashboard — View run status, duration, and history for every schedule. Filter by owner, cron configuration, or run status.

SLA Alerts — The sla_minutes: 15 setting in your YAML config triggers a notification if the run exceeds 15 minutes.

Failure Notifications — Email and Slack alerts fire automatically on failed runs.

Run Logs — Click into any run to see stdout/stderr from your python3 news_digest.py execution.
OpenClaw Gateway Monitoring
OpenClaw provides several diagnostic tools:
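The exact commands depend on your OpenClaw version; the subcommand names below are assumptions:

```
openclaw status        # is the Gateway process up, and on which port?
openclaw logs --tail   # stream recent Gateway log output
openclaw doctor        # config validation and connectivity checks
```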
For cron jobs specifically:
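Again assuming hypothetical subcommand names, you'd inspect and dry-run jobs like so:

```
openclaw cron list               # show configured jobs and next run times
openclaw cron run daily-digest   # trigger a job manually to test it
```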
Adding Observability to the Script
Add structured logging to make debugging easier in Bolt's run logs:
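A minimal sketch: timestamped, leveled log lines, plus a wrapper (fetch_all is an illustrative name, with a stand-in for the real fetch call) so one bad feed never kills the whole run:

```python
import logging
import sys

# Timestamps and level prefixes make Bolt's run logs easy to scan.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
log = logging.getLogger("news_digest")


def fetch_all(feed_urls):
    """Wrap the per-feed fetch so one bad source never aborts the run."""
    entries = []
    for url in feed_urls:
        try:
            fetched = []  # stand-in for the real fetch call
            entries.extend(fetched)
            log.info("fetched %d entries from %s", len(fetched), url)
        except Exception:
            log.exception("feed failed: %s", url)  # full traceback in run logs
    return entries
```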
Troubleshooting Common Issues
RSS Feed Failures
| Symptom | Cause | Fix |
|---|---|---|
| Feed parses but returns zero entries | Feeds use non-standard format or require auth | Verify feed URL in a browser; some feeds require cookies or user-agent headers |
| Timeout on specific feeds | Source server is slow or blocking | Set a per-feed timeout and skip or drop sources that consistently exceed it |
| Duplicate stories | Same story syndicated across multiple feeds | Add deduplication by URL in the aggregation step |
OpenClaw Gateway Issues
| Symptom | Cause | Fix |
|---|---|---|
| Connection refused on API calls | Gateway not running | Start the Gateway process and confirm it is listening |
| 401 Unauthorized responses | Wrong or expired OPENCLAW_API_KEY | Verify the token matches your Gateway config |
| 429 rate-limit errors | Too many requests to the underlying LLM provider | Adjust concurrency; add backoff in your script; or configure rate-limiting in OpenClaw config |
| Port conflict on 18789 | Another process using the port | Check with lsof -i :18789 and stop the conflicting process |
| Stale PID lock file | Gateway crashed without cleanup | Check for a leftover lock file under ~/.openclaw/, remove it, and restart |
Slack Webhook Failures
| Symptom | Cause | Fix |
|---|---|---|
| invalid_payload error | Malformed JSON payload | Validate your payload structure matches the Block Kit reference |
| 404 no_service error | Webhook URL revoked or app uninstalled | Regenerate the webhook in the Slack API portal |
| Message posts but no formatting | Using plain text instead of Block Kit blocks | Switch to Block Kit format as shown in the script above |
Paradime Bolt Schedule Issues
| Symptom | Cause | Fix |
|---|---|---|
| Schedule not appearing | YAML syntax error in paradime_schedules.yml | Validate YAML locally; check the Bolt UI for parse errors; click "Parse Schedules" to force reload |
| Schedule runs but fails | Missing environment variables | Verify all three env vars are set in Settings → Environment Variables → Bolt Schedules |
| SLA breach alerts | Script taking too long | Profile which feeds are slow; reduce feed count or parallelize fetching |
Quick Diagnostic Checklist
Figure 5: Troubleshooting decision tree — quickly diagnose why your morning digest didn't arrive.
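As a first pass before digging into logs, a small pre-flight script can confirm the three required variables are set. This is a sketch; check_env is an illustrative helper, not part of any of the tools above:

```shell
# Pre-flight check: confirm required variables are set before a run.
check_env() {
  for var in OPENCLAW_API_KEY SLACK_WEBHOOK_URL RSS_FEEDS; do
    eval "val=\${$var:-}"
    if [ -z "$val" ]; then
      echo "MISSING: $var"
    else
      echo "OK: $var"
    fi
  done
}
check_env
```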
Wrapping Up
Let's recap what you've built:
A curated feed configuration — YAML-driven RSS sources grouped by collection, version-controlled alongside your dbt™ project.

An aggregation + summarization script — Python fetches entries from the last 24 hours, scores them by relevance, and uses OpenClaw's Gateway API to generate a human-readable briefing.

Automated Slack delivery — Every morning at 8 AM, your team gets a polished digest in #data-team-news without anyone lifting a finger.

Production-grade orchestration — Paradime Bolt handles scheduling, retries, SLA monitoring, and failure notifications via Schedules as Code.

Full observability — Between Bolt's run logs, OpenClaw's diagnostic commands, and structured logging in the script, you can pinpoint failures in seconds.
Figure 6: The transformation — from scattered tribal knowledge to a single source of truth delivered daily.
Next Steps
Expand your feed sources — Add feeds for tools your team uses: Fivetran, Airbyte, Looker, Preset, etc.
Add keyword groups — Create separate digests for different teams (analytics, platform, ML) by customizing relevance keywords per schedule.
Evaluate with dbt™-llm-evals — If you're generating AI summaries at scale, use Paradime's dbt™-llm-evals package to evaluate summary quality directly in your warehouse.
Enforce documentation coverage — While you're at it, use dbt_project_evaluator to audit your documentation coverage. The fct_documentation_coverage model raises a warning if coverage drops below your target threshold:
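For example, in dbt_project.yml. The variable name follows the dbt_project_evaluator docs, but verify it against your installed package version:

```yaml
# dbt_project.yml
vars:
  documentation_coverage_target: 90   # fct_documentation_coverage warns below 90%
```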
The pain of stale docs, missing context, and tribal knowledge doesn't fix itself. But with the right automation—Paradime for orchestration, OpenClaw for AI-powered aggregation, and Slack for delivery—you can build a system that keeps your entire team informed, every single day.

