How to Auto-Answer FAQs in Slack with OpenClaw in Paradime
Feb 26, 2026
How to Build a Slack FAQ Bot with Paradime, OpenClaw, and Bolt
Every data team has a #help channel that silently fills up with unanswered questions. Someone posts at 3 PM, and by 5 PM the question is buried under unrelated threads. The person who could answer is in a different timezone. The answer already exists — sitting in a FAQ doc nobody can find fast enough.
This guide walks you through building an automated Slack FAQ bot that scans your help channel for unanswered questions, matches them against your knowledge base, and posts AI-generated answers as thread replies — all orchestrated on a cron schedule through Paradime Bolt.
The approach is incident-friendly: structured steps, a clear decision tree for matching logic, and a "time to first clue" mindset. Every component is designed for reproducibility and minimal fixes when something breaks.
What Is Paradime
Paradime is an AI-native platform for data engineering, often described as "Cursor for Data." It replaces dbt Cloud™ with a unified workspace where teams can code, ship, fix, and scale data pipelines for analytics and AI.
Three core pillars define the platform:
Code IDE — An AI-native IDE that eliminates rote work and cuts dbt™ and Python development time by 83% or more, using full context of your data, docs, and tickets.
Bolt — A scheduler purpose-built for dbt™ and Python pipelines that transforms data with dbt™ and enriches with AI. Bolt includes cron scheduling, CI/CD, SLA monitoring, and Slack/email notifications.
Radar — FinOps tooling to reduce Snowflake and BigQuery cost and credit consumption.
For this tutorial, Bolt is the orchestration engine. It runs your FAQ bot script on a recurring cron schedule, manages environment variables (like API keys), and sends you Slack alerts when something fails.
Paradime also maintains dbt-llm-evals, an open-source package for evaluating LLM-generated content directly inside your data warehouse. If you later want to measure the quality of your FAQ bot's answers, you can wire in dbt™-llm-evals to score responses against criteria like accuracy, relevance, and completeness.
What Is OpenClaw
OpenClaw is an open-source, self-hosted AI agent platform that runs on your own machine — laptop, homelab, or VPS. Unlike SaaS AI assistants, OpenClaw keeps your data on your infrastructure, using your own API keys and LLM provider accounts.
OpenClaw connects to the chat apps you already use: Slack, Discord, WhatsApp, Telegram, Microsoft Teams, and more. Its Gateway acts as the control plane, managing sessions, channels, tools, and scheduled tasks (cron jobs).
Key capabilities relevant to this build:
Multi-channel inbox — including native Slack integration via Socket Mode or HTTP Events API.
Custom skills — add domain-specific knowledge and behaviors through `SKILL.md` files.
Built-in cron scheduler — persist jobs, wake the agent on schedule, and deliver output back to a chat channel.
Local-first architecture — your keys, your data, your infrastructure.
For Slack specifically, OpenClaw supports bot token authentication (xoxb-), app-level tokens (xapp-), event subscriptions, and threaded replies — exactly the primitives our FAQ bot needs.
Architecture Overview
Before jumping into code, here's how every component connects:
Figure 1: End-to-end flow — Bolt triggers the script on a 30-minute cron, the script scans Slack for unanswered questions, matches against the FAQ, generates an answer via OpenClaw, and posts it as a thread reply.
Setup: openclaw-sdk + Slack SDK + Knowledge Base File
Prerequisites
A Paradime account with Bolt enabled (sign up)
An OpenClaw instance running on your machine or VPS (install guide)
A Slack workspace where you can install custom apps
Python 3.10+
An LLM provider API key (Anthropic, OpenAI, etc.)
Step 1: Install OpenClaw
During onboarding, select your LLM provider and model. For FAQ-quality answers, anthropic/claude-sonnet-4-20250514 is a good balance of cost and quality.
Alternatively, install via npm:
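A sketch of the npm route (the package name and onboarding subcommand are taken from OpenClaw's install guide; verify against the current docs):

```shell
npm install -g openclaw
openclaw onboard
```

The onboarding wizard is where you pick your LLM provider and model.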
Step 2: Create Your Slack App
Go to Slack API — Create App and create a new app from scratch.
Under OAuth & Permissions, add these Bot Token Scopes:
| Scope | Purpose |
|---|---|
| `channels:history` | Read messages from public channels |
| `channels:read` | List channels |
| `chat:write` | Post replies |
| `reactions:read` | Detect emoji reactions |
| `users:read` | Resolve user names |
| `app_mentions:read` | Detect @mentions |
Enable Socket Mode and generate an App-Level Token (`xapp-...`) with the `connections:write` scope.
Under Event Subscriptions, subscribe to `message.channels`.
Install the app to your workspace and copy the Bot User OAuth Token (`xoxb-...`).
Step 3: Install Python Dependencies
In your dbt™ project root, create a `pyproject.toml` (Paradime's Bolt runner uses Poetry for dependency management):
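A minimal sketch (version pins are illustrative; if your script uses the `openclaw-sdk` client mentioned in this section's title, add it here too):

```toml
[tool.poetry]
name = "faq-bot"
version = "0.1.0"
description = "Slack FAQ bot orchestrated by Paradime Bolt"
authors = ["Data Team <data@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
slack-sdk = "^3.27"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```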
Then install:
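```shell
poetry install
```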
Step 4: Create the Knowledge Base File
Create knowledge_base/faq.md in your project root:
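The entries below are illustrative; replace them with your team's real questions and answers:

```markdown
# Data Team FAQ

## How do I get access to the warehouse?
Request the analyst role in #data-access; approvals usually take one business day.

## Why is the daily dashboard stale in the morning?
The pipeline finishes around 7 AM UTC; before that, yesterday's data is shown.

## How do I run dbt locally?
Clone the repo, run `poetry install`, then `dbt run --select <model>`.
```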
This file is your FAQ source of truth. The bot will match incoming questions against these entries and use them as context for generating answers.
Step 5: Configure OpenClaw for Slack
Add Slack as a channel in your OpenClaw config (~/.openclaw/openclaw.json):
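A sketch of the relevant block (key names follow OpenClaw's Slack channel docs; verify against your version):

```json
{
  "channels": {
    "slack": {
      "enabled": true,
      "botToken": "xoxb-your-bot-token",
      "appToken": "xapp-your-app-token"
    }
  }
}
```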
Verify the connection:
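One way to check (the subcommand name is an assumption; run `openclaw --help` for the real syntax), followed by posting a test @mention to the bot in Slack:

```shell
openclaw status
```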
Script: Scan, Match, and Reply
This is the core of the FAQ bot — a Python script that:
Pulls recent messages from the `#help` channel
Filters for unanswered questions (no thread replies)
Matches each question against the FAQ document
Generates an AI-powered answer via OpenClaw
Posts the answer as a thread reply
Figure 2: Decision tree — the script distinguishes between high-confidence FAQ matches, low-confidence best-effort answers, and non-question messages.
The Script: scripts/faq_bot.py
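A self-contained sketch of the script. The OpenClaw gateway endpoint (`/v1/respond`), its payload shape, and the `HELP_CHANNEL_ID`/`OPENCLAW_GATEWAY_URL` variable names are assumptions — adapt them to your gateway's actual API:

```python
"""scripts/faq_bot.py — scan #help, match against the FAQ, reply in-thread."""
import json
import os
import time
import urllib.request

QUESTION_HINTS = ("how ", "why ", "what ", "when ", "where ", "can ", "does ", "is ")


def looks_like_question(text: str) -> bool:
    """Cheap pre-filter; the LLM does the real semantic matching."""
    t = text.strip().lower()
    return t.endswith("?") or t.startswith(QUESTION_HINTS)


def is_unanswered(msg: dict) -> bool:
    """True for human-authored top-level questions with no thread replies."""
    if msg.get("bot_id") or msg.get("subtype"):
        return False  # skip bot posts, joins, edits
    if msg.get("reply_count", 0) > 0:
        return False  # already has a thread
    return looks_like_question(msg.get("text", ""))


def build_prompt(faq: str, question: str) -> str:
    """Full FAQ as context; the model tags its own confidence."""
    return (
        "You answer questions for a data team. Use the FAQ below as your "
        "source of truth. Start your reply with '📚 Matched from FAQ' when "
        "the FAQ covers the question, otherwise '🤖 Best-effort answer'.\n\n"
        f"FAQ:\n{faq}\n\nQuestion: {question}"
    )


def ask_openclaw(prompt: str) -> str:
    # Placeholder endpoint and payload — adapt to your gateway's real API.
    url = os.environ.get("OPENCLAW_GATEWAY_URL", "http://localhost:8080")
    req = urllib.request.Request(
        url + "/v1/respond",
        data=json.dumps({"prompt": prompt}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['OPENCLAW_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)["text"]


def main() -> None:
    from slack_sdk import WebClient  # installed via poetry

    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    channel = os.environ["HELP_CHANNEL_ID"]
    with open(os.environ.get("FAQ_PATH", "knowledge_base/faq.md")) as f:
        faq = f.read()

    # Look back slightly more than one cron interval to avoid gaps.
    oldest = str(time.time() - 45 * 60)
    history = client.conversations_history(channel=channel, oldest=oldest)

    for msg in history["messages"]:
        if not is_unanswered(msg):
            continue
        answer = ask_openclaw(build_prompt(faq, msg["text"]))
        client.chat_postMessage(channel=channel, thread_ts=msg["ts"], text=answer)
        time.sleep(1.2)  # stay under Slack's ~1 msg/sec posting limit


if __name__ == "__main__" and os.environ.get("SLACK_BOT_TOKEN"):
    main()
```

The Slack calls (`conversations_history`, `chat_postMessage`) are standard `slack_sdk` methods; everything touching the OpenClaw gateway is a placeholder to swap for your instance's API.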
How the Matching Logic Works
The script takes a "time to first clue" approach — it doesn't try to be perfect, it tries to be fast and helpful:
Figure 3: Matching logic — the LLM handles semantic matching, not keyword search. The FAQ document is included as full context in every prompt.
The LLM acts as both the matcher and the answer generator. By including the entire FAQ document as context, the model can semantically match questions even when the phrasing differs significantly from the FAQ entries. The confidence tag (📚 Matched from FAQ vs. 🤖 Best-effort answer) makes it clear to the reader how trustworthy the response is.
Env Vars: SLACK_BOT_TOKEN, OPENCLAW_API_KEY
The bot requires these environment variables at runtime:
| Variable | Description | Example |
|---|---|---|
| `SLACK_BOT_TOKEN` | Bot User OAuth Token from Slack | `xoxb-...` |
| `OPENCLAW_API_KEY` | Gateway token for authenticating with your OpenClaw instance | `your-gateway-token` |
| `HELP_CHANNEL_ID` | Slack channel ID for the help channel | `C0123456789` |
| `FAQ_PATH` | Relative path to the FAQ markdown file | `knowledge_base/faq.md` |
| `OPENCLAW_GATEWAY_URL` | URL of your OpenClaw gateway | `http://localhost:8080` |
Setting Env Vars in Paradime Bolt
In Paradime, environment variables for production schedules are configured through the UI:
Go to Settings → Workspaces → Environment Variables
In the Bolt Schedules section, click Add New
Enter the key name (e.g., `SLACK_BOT_TOKEN`) and its value
Click Save
You can also bulk upload variables via a CSV file with Key,Value columns.
Never hardcode secrets. Always use environment variables. Paradime stores them encrypted and injects them at runtime.
For your OpenClaw instance, configure the API key in ~/.openclaw/.env:
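A sketch of the file (key names are illustrative; use whatever keys your gateway and LLM provider expect):

```shell
# ~/.openclaw/.env
OPENCLAW_API_KEY=your-gateway-token
ANTHROPIC_API_KEY=sk-ant-...
```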
OpenClaw's environment variable precedence (highest to lowest):
Process environment (parent shell/daemon)
`.env` in the current working directory
Global `.env` at `~/.openclaw/.env`
Config `env` block in `~/.openclaw/openclaw.json`
Bolt Schedule: Cron Every 30 Minutes
Paradime Bolt supports schedules-as-code via paradime_schedules.yml in your dbt™ project root. Here's how to set up the FAQ bot to run every 30 minutes:
Schedule Configuration
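A sketch of the schedule entry (field names follow Paradime's schedules-as-code format as described in this section; verify against the Bolt schedule reference for your workspace):

```yaml
schedules:
  - name: slack_faq_bot
    schedule: "*/30 * * * *"
    environment: production
    commands:
      - poetry install
      - poetry run python scripts/faq_bot.py
    sla_minutes: 5
    notifications:
      emails:
        - address: data-team@example.com
          events: [failed, sla]
```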
Key Configuration Details
`poetry install` as the first command: Paradime uses Poetry for dependency management. The first command must install dependencies and create the virtual environment.
`schedule: "*/30 * * * *"`: Standard 5-field cron expression — runs at minute 0 and minute 30 of every hour. Validate your cron expressions at crontab.guru.
`sla_minutes: 5`: If the script takes longer than 5 minutes, fire an SLA alert. This is your canary for OpenClaw gateway timeouts or Slack API rate limits.
Notifications on failure and SLA breach: You want to know immediately if the bot stops working.
File Structure
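With everything in place, the relevant files in the project root look like:

```
.
├── pyproject.toml
├── paradime_schedules.yml
├── knowledge_base/
│   └── faq.md
└── scripts/
    └── faq_bot.py
```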
Alternative: OpenClaw Native Cron
If you want to run the scan directly through OpenClaw's built-in cron instead of Paradime Bolt, you can add a cron job via CLI:
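For example (flag names are assumptions — run `openclaw cron --help` for the real syntax):

```shell
openclaw cron add \
  --name "faq-scan" \
  --cron "*/30 * * * *" \
  --message "Scan #help for unanswered questions and reply from the FAQ."
```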
Or as a JSON tool call:
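A sketch of the payload (field names are assumptions — check OpenClaw's cron tool schema):

```json
{
  "name": "faq-scan",
  "schedule": { "kind": "cron", "expr": "*/30 * * * *" },
  "message": "Scan #help for unanswered questions and reply from the FAQ."
}
```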
OpenClaw cron jobs persist under ~/.openclaw/cron/ so restarts don't lose schedules.
Figure 4: Two scheduling options — Paradime Bolt gives you SLA monitoring, failure alerts, and environment variable management; OpenClaw cron gives you a lighter, agent-native approach.
Monitoring and Debugging
Paradime Bolt Monitoring
Bolt provides built-in monitoring for every scheduled run:
Run History — View pass/fail status, duration, and logs for every execution in the Bolt UI.
SLA Alerts — If the FAQ bot exceeds the 5-minute SLA, Bolt sends alerts to your configured Slack channel and email.
Failure Notifications — Immediate alerts on script errors, missing dependencies, or timeout.
Configure notification destinations in paradime_schedules.yml:
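A sketch of the block (field names are assumptions consistent with the schedule example; verify against Paradime's notification reference):

```yaml
notifications:
  emails:
    - address: data-team@example.com
      events: [failed, sla]
  slack_channels:
    - channel_id: C0123456789
      events: [failed, sla]
```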
Paradime also integrates with custom Alert Templates so you can customize the Slack notification format for your team.
OpenClaw Gateway Logs
For debugging the AI generation side, check OpenClaw's gateway logs:
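One way to tail them (the subcommand is an assumption; check `openclaw --help` or tail the log files under `~/.openclaw/` directly):

```shell
openclaw logs --follow
```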
Set the log level to debug for more detail:
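For example, in `~/.openclaw/openclaw.json` (the key path is an assumption; check OpenClaw's logging docs):

```json
{
  "logging": {
    "level": "debug"
  }
}
```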
Slack API Debugging
Common Slack debugging techniques:
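Two quick checks against the real Slack Web API, useful for confirming the token and channel access before blaming the script:

```shell
# Confirm the bot token is valid and identify the bot user:
curl -s -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
  https://slack.com/api/auth.test

# Confirm the bot can read the help channel (requires channels:history):
curl -s -H "Authorization: Bearer $SLACK_BOT_TOKEN" \
  "https://slack.com/api/conversations.history?channel=$HELP_CHANNEL_ID&limit=5"
```

Both return `"ok": false` with an `error` field that names the exact problem (e.g., `invalid_auth`, `not_in_channel`).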
Decision Tree: When Something Breaks
Figure 5: Incident decision tree — start from the symptom "bot not replying" and follow the branches to the root cause.
Troubleshooting Common Issues
1. not_in_channel Error from Slack
Symptom: SlackApiError: The request to the Slack API failed. (error: 'not_in_channel')
Fix: The bot must be invited to the help channel. In Slack, type /invite @your-bot-name in the #help channel.
2. OpenClaw Gateway Not Responding
Symptom: ConnectionError: Connection refused when the script tries to reach the gateway.
Fix:
Verify gateway.mode is set to "local" in your config if running locally:
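For example, in `~/.openclaw/openclaw.json` (surrounding structure abbreviated):

```json
{
  "gateway": {
    "mode": "local"
  }
}
```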
3. Bot Replies to Its Own Messages (Infinite Loop)
Symptom: The bot detects its own replies as "unanswered" and replies again.
Fix: The script already filters out bot_id messages. Verify the filter is working:
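A quick sanity check with synthetic messages — the bot's own reply carries a `bot_id` and should drop out of the filter:

```python
# Two synthetic messages: a human question and the bot's own reply.
messages = [
    {"text": "How do I refresh the dashboard?", "reply_count": 0},
    {"text": "📚 Matched from FAQ: see the runbook.", "bot_id": "B0123ABCD"},
]

# Same filter shape the script uses: drop anything with a bot_id.
human_messages = [m for m in messages if not m.get("bot_id")]
print(len(human_messages))  # the bot's reply is excluded
```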
If your bot posts without a bot_id (rare), add a check against the bot's own user ID:
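A minimal helper for that belt-and-braces check (fetching the bot's user ID once at startup via `auth_test` is a real `slack_sdk` call):

```python
def is_own_message(msg: dict, bot_user_id: str) -> bool:
    """True if the message was authored by the bot itself."""
    return bool(msg.get("bot_id")) or msg.get("user") == bot_user_id

# Fetch the bot's user ID once at startup, e.g.:
#   bot_user_id = client.auth_test()["user_id"]
```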
4. Duplicate Replies on Overlapping Runs
Symptom: Two Bolt runs overlap, and the same question gets answered twice.
Fix: Add idempotency — track which messages have already been processed. A simple approach is to add a reaction emoji after replying:
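A sketch of the reaction-based guard (`reactions_add` is a real `slack_sdk` method; note it needs the `reactions:write` scope in addition to the read scope above):

```python
DONE_EMOJI = "white_check_mark"  # ✅ marks a message as handled

def already_processed(msg: dict) -> bool:
    """True if the bot has already reacted to this message."""
    return any(r.get("name") == DONE_EMOJI for r in msg.get("reactions", []))

# In the main loop: skip messages where already_processed(msg) is True,
# and after posting a reply, mark the source message:
#   client.reactions_add(channel=channel, name=DONE_EMOJI, timestamp=msg["ts"])
```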
5. Poetry Install Fails in Bolt
Symptom: poetry: command not found or dependency resolution errors in Bolt run logs.
Fix: Ensure poetry install is the first command in your schedule and that your pyproject.toml is committed to the repository on the main branch. See the Paradime Poetry setup guide.
6. FAQ Answers Are Generic or Off-Topic
Symptom: The bot replies but the answers don't match the FAQ content.
Fix:
Ensure `FAQ_PATH` points to the correct file and it's accessible at runtime.
Add more specific entries to your `faq.md` — the more context the LLM has, the better it matches.
faq.md— the more context the LLM has, the better it matches.Consider chunking your FAQ if it exceeds the model's context window (unlikely with most modern models, but worth checking for very large knowledge bases).
7. Rate Limiting from Slack or LLM Provider
Symptom: 429 Too Many Requests or rate_limited errors.
Fix:
For Slack: The Slack API rate limits allow ~1 request per second for posting messages. Add a small delay between replies:
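One way to sketch that throttle as a small helper (the `send` callable is whatever posts your reply):

```python
import time

def send_throttled(replies, send, delay: float = 1.2):
    """Invoke `send` for each reply, sleeping between calls to stay
    under Slack's ~1 message/second posting limit."""
    for reply in replies:
        send(reply)
        time.sleep(delay)

# Usage in the bot's main loop (client/channel as in your script):
#   send_throttled(answers, lambda a: client.chat_postMessage(
#       channel=channel, thread_ts=a["ts"], text=a["text"]))
```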
For LLM providers: Reduce the number of concurrent calls or increase the cron interval from 30 minutes to 60 minutes.
Wrapping Up
You now have a fully automated Slack FAQ bot that:
Scans your `#help` channel every 30 minutes for unanswered questions
Matches questions against your FAQ knowledge base using semantic understanding via OpenClaw
Replies with AI-generated, context-grounded answers as thread replies
Runs on a schedule via Paradime Bolt with cron, SLA monitoring, and failure alerts
Stays debuggable with clear logging, a decision tree for incidents, and idempotency guards
Figure 6: Before and after — questions go from hours-to-answer to minutes-to-first-clue.
Next Steps
Expand the knowledge base — Add more FAQ entries as patterns emerge from unanswered questions.
Add quality monitoring — Wire in dbt™-llm-evals to evaluate the accuracy and helpfulness of bot-generated answers over time.
Enable feedback loops — Let users react with 👍/👎 to bot answers, and log that signal back to your warehouse for continuous improvement.
Customize the OpenClaw skill — Create a dedicated `SKILL.md` in `~/.openclaw/workspace/skills/faq-bot/` with instructions specific to your team's domain language and tone.
The goal isn't to replace your team's expertise — it's to surface the right answer fast, reduce "time to first clue," and free up experts for the questions that actually require human judgment.

