How to Automate Incident Post-Mortems with OpenClaw in Paradime
Feb 26, 2026
Automate Incident Post-Mortems with Paradime, OpenClaw, and Slack — No Config Pain Required
Still copying Slack threads into a Google Doc at 2 a.m. after an incident? That's not a post-mortem process — that's punishment for being on-call.
In this guide, you'll build an automated incident post-mortem pipeline that reads your Slack incident channel, extracts the timeline of actions and decisions using OpenClaw's AI agent, generates a structured Google Docs report, and orchestrates the entire workflow through Paradime's Bolt scheduler — all without touching a single YAML config file on your local machine.
Here's the best part: every secret, every credential, every environment variable lives in Paradime's UI. No .env files floating around dev laptops. No credentials committed to Git. Just clean, secure, auditable configuration.
What Is Paradime?
Paradime is an AI-native platform for data engineering that replaces dbt Cloud™. It's built for fast-moving teams that need to code, ship, fix, and scale data pipelines for analytics and AI — all from a single workspace.
But the feature that matters for this guide is Bolt — Paradime's production orchestration engine. Bolt handles:
Scheduling with cron, event triggers, on-merge triggers, and API-based triggers
Environment variables managed entirely through the UI (no local config files)
Notifications via Slack, email, and Microsoft Teams
DinoAI-powered debugging that generates plain-English summaries of failed runs
CI/CD with Turbo CI for pull-request validation
Bolt is where your post-mortem automation script will run. You configure the schedule, set your API keys as environment variables in the Paradime Settings page, and trigger the workflow via a single API call when an incident is resolved.
Figure 1: High-level flow — from incident resolution to a published post-mortem document.
Why Paradime over a raw cron job? Because Bolt gives you run history, debug logs, notifications on failure, and UI-managed secrets. Your SRE team shouldn't need to SSH into a box to check if the post-mortem script ran.
What Is OpenClaw?
OpenClaw (formerly Clawdbot/Moltbot) is a free, open-source autonomous AI agent developed by Peter Steinberger. It runs locally on your hardware and orchestrates tasks across chat apps, files, the web, and your operating system using large language models (Claude, GPT, DeepSeek, and others).
What makes OpenClaw relevant here:
Skills system: Task-specific capabilities defined in SKILL.md files. OpenClaw ships with a Slack skill that can read messages, react, pin items, and fetch member info.
Local-first architecture: Your data never leaves your infrastructure. Credentials and interaction history stay on your machine (or your Paradime workspace).
Extensible integrations: Connects to LLM providers via a gateway, with env-var-based secret management and config substitution.
For this pipeline, OpenClaw serves as the AI brain that reads raw Slack messages and transforms them into a structured incident timeline — extracting key decisions, action items, owners, and root-cause signals.
Figure 2: OpenClaw's role — AI-powered extraction from raw Slack threads.
Setup: OpenClaw + Slack SDK + Google Docs API
Let's wire up the three SDKs. The beauty of this approach is that all sensitive configuration is handled through Paradime's environment variables — no local .env files needed.
Install Dependencies
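You'll need the Slack and Google Python SDKs plus the OpenClaw CLI. The PyPI package names below are the standard ones; the npm install channel for OpenClaw is an assumption — check the OpenClaw docs for the official install method.

```shell
# Python SDKs for Slack and Google Docs
pip install slack_sdk google-api-python-client google-auth

# OpenClaw CLI, installed globally via npm (install channel assumed)
npm install -g openclaw
```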
Verify OpenClaw Installation
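A quick sanity check that the CLI is reachable (the `--version` flag is an assumption; most CLIs expose it, but consult `openclaw help` if yours doesn't):

```shell
openclaw --version
```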
Tip: If openclaw isn't found after install, add the npm global bin directory to your PATH:
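For example (this prepends npm's global bin directory for the current shell session; add it to your shell profile to make it permanent):

```shell
export PATH="$(npm config get prefix)/bin:$PATH"
```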
Configure OpenClaw for LLM Access
OpenClaw needs access to an LLM provider for the AI analysis step. Configure it in ~/.openclaw/openclaw.json:
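A minimal sketch of the provider configuration — the key names and model string here are assumptions, so check the OpenClaw docs for the exact schema your version expects:

```json
{
  "gateway": {
    "providers": {
      "anthropic": {
        "apiKey": "${ANTHROPIC_API_KEY}",
        "model": "claude-sonnet-4"
      }
    }
  }
}
```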
Notice the ${ANTHROPIC_API_KEY} syntax — OpenClaw resolves environment variables at runtime, so the actual key lives in your Paradime workspace, not in this config file. This is the env var substitution pattern from the OpenClaw docs.
Set Up Slack App
Create a Slack App at api.slack.com/apps with these Bot Token Scopes:
| Scope | Why |
|---|---|
| channels:history | Read messages from incident channels |
| channels:read | List channels to find the incident channel |
| groups:history | Read private incident channels |
| groups:read | List private channels |
| users:read | Resolve user IDs to names in the timeline |
Install the app to your workspace and grab the Bot User OAuth Token (starts with xoxb-).
Set Up Google Docs API
Go to the Google Cloud Console
Create a new project (or use an existing one)
Enable the Google Docs API and Google Drive API
Create a Service Account under IAM & Admin
Generate a JSON key file for the service account
Share the target Google Drive folder with the service account email
The JSON key file contents will be stored as the GOOGLE_CREDENTIALS_JSON environment variable in Paradime.
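Because the key must be stored as a single-line string (see the troubleshooting section), it helps to flatten it before pasting. A small sketch — the filename is a placeholder for wherever you saved the key:

```python
import json

def to_single_line(raw: str) -> str:
    """Re-serialize pretty-printed JSON as one compact line with no newlines."""
    return json.dumps(json.loads(raw), separators=(",", ":"))

if __name__ == "__main__":
    # "service-account.json" is a placeholder path
    with open("service-account.json") as f:
        print(to_single_line(f.read()))
```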
Figure 3: Google credentials flow — from GCP console to Paradime's secure variable store.
Script: Read Incident Channel → Extract Timeline → Generate Post-Mortem Doc
Here's the complete automation script. It ties together all three services — Slack for reading incident data, OpenClaw for AI-powered analysis, and Google Docs for report generation.
The Full Script
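Below is a sketch of the pipeline. Two things are assumptions rather than documented interfaces: the OpenClaw invocation (`openclaw agent --message ...` is a placeholder — adapt it to however your install exposes the agent) and the env var names `INCIDENT_CHANNEL_ID` and `GDRIVE_FOLDER_ID`. The Slack pagination and Google Docs calls use the standard slack_sdk and google-api-python-client APIs; those imports are deferred so the pure helpers stay importable without the SDKs installed.

```python
import json
import os
import subprocess

ANALYSIS_PROMPT = (
    "You are an SRE analyst. From the Slack transcript below, extract a "
    "post-mortem with: timeline, key decisions, action items with owners, "
    "and root-cause signals.\n\nTranscript:\n{transcript}"
)

def format_transcript(messages, user_names):
    """Render Slack message dicts as '[ts] user: text' lines."""
    lines = []
    for m in messages:
        if m.get("subtype"):  # skip joins, pins, and other system messages
            continue
        user = user_names.get(m.get("user", ""), m.get("user", "unknown"))
        lines.append(f"[{m.get('ts', '?')}] {user}: {m.get('text', '')}")
    return "\n".join(lines)

def fetch_channel_messages(client, channel_id, limit=500):
    """Page through conversations.history; return messages oldest-first."""
    messages, cursor = [], None
    while True:
        resp = client.conversations_history(channel=channel_id, cursor=cursor, limit=200)
        messages.extend(resp["messages"])
        cursor = (resp.get("response_metadata") or {}).get("next_cursor")
        if not cursor or len(messages) >= limit:
            break
    return list(reversed(messages))  # Slack returns newest-first

def analyze_with_openclaw(transcript):
    """Hand the transcript to the local OpenClaw agent (CLI invocation assumed)."""
    result = subprocess.run(
        ["openclaw", "agent", "--message", ANALYSIS_PROMPT.format(transcript=transcript)],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def publish_google_doc(title, body, folder_id):
    """Create a Google Doc, insert the report text, and file it in the Drive folder."""
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_info(
        json.loads(os.environ["GOOGLE_CREDENTIALS_JSON"]),
        scopes=["https://www.googleapis.com/auth/documents",
                "https://www.googleapis.com/auth/drive"],
    )
    docs = build("docs", "v1", credentials=creds)
    drive = build("drive", "v3", credentials=creds)
    doc = docs.documents().create(body={"title": title}).execute()
    docs.documents().batchUpdate(
        documentId=doc["documentId"],
        body={"requests": [{"insertText": {"location": {"index": 1}, "text": body}}]},
    ).execute()
    drive.files().update(fileId=doc["documentId"], addParents=folder_id).execute()
    return doc["documentId"]

def main():
    from slack_sdk import WebClient

    slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    messages = fetch_channel_messages(slack, os.environ["INCIDENT_CHANNEL_ID"])
    names = {u["id"]: u.get("real_name", u["name"]) for u in slack.users_list()["members"]}
    report = analyze_with_openclaw(format_transcript(messages, names))
    doc_id = publish_google_doc("Incident Post-Mortem", report, os.environ["GDRIVE_FOLDER_ID"])
    print(f"https://docs.google.com/document/d/{doc_id}")

if __name__ == "__main__":
    main()
```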
How the Script Flows
Figure 4: The complete execution sequence — from Bolt trigger to published post-mortem.
Environment Variables: Secure, UI-Managed, Zero Local Config
This is where Paradime fundamentally changes the game. Instead of juggling .env files, passing secrets through CI pipelines, or (worst case) hardcoding tokens, every credential is managed through the Paradime Environment Variables settings page.
Required Variables
| Variable | Description | Where to Get It |
|---|---|---|
| SLACK_BOT_TOKEN | Slack Bot User OAuth Token (starts with xoxb-) | Slack App Settings → OAuth & Permissions |
| GOOGLE_CREDENTIALS_JSON | Full JSON key for Google service account | GCP Console → IAM → Service Accounts |
| ANTHROPIC_API_KEY | API key for OpenClaw's configured LLM provider | Your LLM provider dashboard (Anthropic, OpenAI, etc.) |
| INCIDENT_CHANNEL_ID | Slack channel ID for the incident | Slack channel details → Copy Channel ID |
| GDRIVE_FOLDER_ID | Google Drive folder ID for post-mortem docs | Drive folder URL (the ID after /folders/) |
Adding Variables in Paradime
Navigate to Settings → Workspaces → Environment Variables
In the Bolt Schedules section, click Add New
Enter the Key (e.g., SLACK_BOT_TOKEN) and Value
Click the Save icon (💾)
Repeat for each variable
Pro tip: You can also bulk upload via CSV with Key,Value columns. Great for setting up multiple environments at once.
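For reference, a minimal CSV in that shape (all values below are placeholders):

```csv
Key,Value
SLACK_BOT_TOKEN,xoxb-your-token
ANTHROPIC_API_KEY,sk-ant-your-key
INCIDENT_CHANNEL_ID,C0123456789
```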
Per-Schedule Overrides
Need different credentials for staging vs. production? Paradime lets you override environment variables at the schedule level:
Navigate to the Bolt UI and select the schedule
Click Edit → scroll to Environment Variables Override
Enter override values for specific variables
Click Deploy
Schedule-level overrides take precedence over workspace defaults. If no override is set, the global value is used.
Figure 5: Environment variable inheritance — workspace defaults with per-schedule overrides.
Why This Matters for Security
Opinionated take: If your incident-response tooling requires engineers to store API tokens locally, you've already lost. Here's what the Paradime approach gives you:
No secrets on laptops — credentials live in Paradime's SOC 2 Type II-certified infrastructure
Audit trail — changes to environment variables are tracked
Role-based access — only Admin roles can add, edit, or remove variables
No Git exposure — impossible to accidentally commit a .env file
Bolt Schedule: API Trigger After Incident
The post-mortem script shouldn't run on a cron schedule — it should fire when an incident is resolved. Paradime Bolt's API trigger is perfect for this.
Create the Bolt Schedule
In the Paradime UI, navigate to Bolt → Create Schedule
Set the schedule type and configure Command Settings to run your post-mortem script
Under Trigger Type, select Scheduled Run and click the OFF toggle — this ensures the schedule is only triggered via API, not on a timer
Configure notification settings to alert your team on Slack when the post-mortem is generated (or if the run fails)
Click Deploy
Trigger via API
When your incident management tool (PagerDuty, Opsgenie, FireHydrant, etc.) marks an incident as resolved, call the Paradime Bolt API:
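A sketch of that call in Python, using only the standard library. The endpoint path and header names here are assumptions — check the Paradime API reference for the exact URL, auth headers, and payload your workspace expects:

```python
import json
import os
import urllib.request

# Base URL and endpoint path are assumptions; consult Paradime's API docs.
PARADIME_API = "https://api.paradime.io/api/v1"

def build_trigger_request(schedule_name: str, api_key: str, api_secret: str):
    """Assemble the POST request that starts a Bolt schedule run."""
    payload = json.dumps({"schedule_name": schedule_name}).encode()
    return urllib.request.Request(
        f"{PARADIME_API}/bolt/schedules/trigger",
        data=payload,
        headers={
            "Content-Type": "application/json",
            "X-API-KEY": api_key,
            "X-API-SECRET": api_secret,
        },
        method="POST",
    )

if __name__ == "__main__":
    req = build_trigger_request(
        "incident-postmortem",
        os.environ["PARADIME_API_KEY"],
        os.environ["PARADIME_API_SECRET"],
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.read().decode())
```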
Or with cURL:
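The URL path and header names below are assumptions — substitute the real endpoint and auth scheme from the Paradime API reference:

```shell
curl -X POST "https://api.paradime.io/api/v1/bolt/schedules/trigger" \
  -H "Content-Type: application/json" \
  -H "X-API-KEY: $PARADIME_API_KEY" \
  -H "X-API-SECRET: $PARADIME_API_SECRET" \
  -d '{"schedule_name": "incident-postmortem"}'
```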
Integration with Incident Management Tools
The most robust setup uses Paradime webhooks bidirectionally:
Figure 6: End-to-end integration — incident management tool triggers Bolt, which runs the script and notifies the team.
Monitoring and Debugging
Once your post-mortem pipeline is live, you need visibility into every run. Paradime Bolt gives you three levels of logging and a full analytics dashboard.
Run History and Logs
Navigate to the Bolt UI → select your incident-postmortem schedule → Run History. Each run shows:
| Field | Description |
|---|---|
| Status | Passed, Failed, Running, or Cancelled |
| Trigger | API, Scheduled, On Merge, or On Completion |
| Branch and Commit | Git context for the run |
| Last Run | When the run started |
| Duration | Total execution time |
| Run ID | Unique identifier for API reference |
Click into any run to access detailed logs. Paradime offers three log levels:
Summary Logs: DinoAI-generated overview — the quick "what happened" for when you need a 10-second answer
Console Logs: Chronological record of all operations — the standard debugging view
Debug Logs: System-level detail — for when you need to trace exactly why the Google Docs API returned a 403
Artifacts
Every Bolt run generates downloadable artifacts (like run_results.json) that you can use for auditing, retrospective analysis, or feeding into downstream analytics.
OpenClaw Diagnostics
For issues on the OpenClaw side, run these commands in order:
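For example (subcommand names are assumptions based on common CLI conventions — run `openclaw --help` to see what your version actually exposes):

```shell
openclaw --version   # confirm the install and version
openclaw doctor      # built-in environment checks, if available
openclaw logs        # recent agent/gateway output, if available
```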
Bolt Notifications for Pipeline Health
Set up Slack notifications on the Bolt schedule to get alerted when:
The post-mortem script fails (broken Slack token, Google API quota exceeded, etc.)
The run exceeds its SLA threshold (AI analysis taking too long)
The run succeeds (team gets a link to the post-mortem doc)
Navigate to the schedule → Edit → Notification Settings → Add destination → choose Slack, email, or Microsoft Teams.
Troubleshooting Common Issues
Slack API Errors
| Error | Cause | Fix |
|---|---|---|
| not_in_channel | Bot isn't a member of the incident channel | Invite the bot to the channel: /invite @your-bot |
| missing_scope | Bot token lacks required OAuth scopes | Add scopes in Slack App Settings → OAuth & Permissions → Reinstall |
| ratelimited | Too many API calls | Add retries with exponential backoff between paginated requests |
| invalid_auth | Expired or incorrect bot token | Regenerate the token and update SLACK_BOT_TOKEN in Paradime |
Google Docs API Errors
| Error | Cause | Fix |
|---|---|---|
| 403: The caller does not have permission | Service account lacks access to the Drive folder | Share the folder with the service account email address |
| JSON parse error on credentials | Newlines or special characters mangled | Ensure the entire JSON key is stored as a single-line string in Paradime |
| 429: Rate limit exceeded | Too many API calls | Batch your document edits into a single batchUpdate request |
OpenClaw Issues
| Symptom | Diagnostic Command | Common Fix |
|---|---|---|
| openclaw: command not found | Check PATH | Add the npm global bin directory to your PATH |
| Gateway won't start | Check the gateway logs | Set the missing configuration value the logs complain about |
| No LLM response | Check the provider config | Verify ANTHROPIC_API_KEY is set and valid in Paradime |
| Timeout on large channels | N/A | Reduce the Slack message fetch limit or narrow the time window |
Paradime Bolt Issues
Symptom | Fix |
|---|---|
Environment variable not available in run | Ensure variable is created in Settings → Workspaces → Environment Variables → Bolt Schedules (not IDE variables) |
API trigger returns auth error | Verify API key has Bolt Schedules Admin capability — check API key settings |
Schedule won't trigger via API | Confirm the schedule is deployed with cron toggle set to OFF |
Bonus: Tying It Into dbt™ Data Quality with dbt™-llm-evals
If you're already using Paradime for dbt™ pipelines, consider adding dbt™-llm-evals to evaluate the quality of your AI-generated post-mortems over time. This open-source package brings LLM evaluation directly into your data warehouse.
Add it to your packages.yml:
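A sketch in the standard dbt packages.yml shape — the repository URL and revision below are assumptions, so use the package's published coordinates:

```yaml
packages:
  - git: "https://github.com/paradime-io/dbt-llm-evals.git"
    revision: main
```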
Configure evaluation criteria in dbt_project.yml:
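For example — the variable names and criteria keys below are illustrative, not the package's documented schema; see its README for the real configuration shape:

```yaml
vars:
  dbt_llm_evals:
    evaluations:
      - name: postmortem_quality
        criteria:
          - accuracy
          - completeness
          - actionability
```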
Then track whether your post-mortem AI agent is generating consistently useful, accurate reports — or starting to hallucinate root causes. That's the kind of operational rigor that turns a cool automation into a trustworthy system.
Wrapping Up
Let's recap what you've built:
Figure 7: The complete architecture — trigger, execution, configuration, and observability layers.
What this gives your team:
Speed: Post-mortems generated in minutes, not hours
Consistency: Every post-mortem follows the same structure
Security: Zero credentials on developer machines
Auditability: Full run history with three levels of logging
Reliability: Slack/email notifications when things go wrong
The opinionated takeaway: Incident response tooling should be as reliable as the systems it monitors. Running a post-mortem script from a cron job on someone's laptop — with secrets in a .env file — is tech debt waiting to bite you at 3 a.m. Paradime's Bolt gives you production-grade orchestration with UI-managed secrets. OpenClaw gives you AI analysis without sending your incident data to a third-party SaaS. And the whole thing triggers automatically when your incident management tool says "resolved."
Stop copying Slack threads into Google Docs manually. Your on-call engineers have better things to do.

