How to Automate Incident Post-Mortems with OpenClaw in Paradime

Feb 26, 2026


Automate Incident Post-Mortems with Paradime, OpenClaw, and Slack — No Config Pain Required

Still copying Slack threads into a Google Doc at 2 a.m. after an incident? That's not a post-mortem process — that's punishment for being on-call.

In this guide, you'll build an automated incident post-mortem pipeline that reads your Slack incident channel, extracts the timeline of actions and decisions using OpenClaw's AI agent, generates a structured Google Docs report, and orchestrates the entire workflow through Paradime's Bolt scheduler — all without touching a single YAML config file on your local machine.

Here's the best part: every secret, every credential, every environment variable lives in Paradime's UI. No .env files floating around dev laptops. No credentials committed to Git. Just clean, secure, auditable configuration.

What Is Paradime?

Paradime is an AI-native platform for data engineering that replaces dbt Cloud™. It's built for fast-moving teams that need to code, ship, fix, and scale data pipelines for analytics and AI — all from a single workspace.

But the feature that matters for this guide is Bolt — Paradime's production orchestration engine. Bolt handles:

  • Scheduling with cron, event triggers, on-merge triggers, and API-based triggers

  • Environment variables managed entirely through the UI (no local config files)

  • Notifications via Slack, email, and Microsoft Teams

  • DinoAI-powered debugging that generates plain-English summaries of failed runs

  • CI/CD with Turbo CI for pull-request validation

Bolt is where your post-mortem automation script will run. You configure the schedule, set your API keys as environment variables in the Paradime Settings page, and trigger the workflow via a single API call when an incident is resolved.

Figure 1: High-level flow — from incident resolution to a published post-mortem document.

Why Paradime over a raw cron job? Because Bolt gives you run history, debug logs, notifications on failure, and UI-managed secrets. Your SRE team shouldn't need to SSH into a box to check if the post-mortem script ran.

What Is OpenClaw?

OpenClaw (formerly Clawdbot/Moltbot) is a free, open-source autonomous AI agent developed by Peter Steinberger. It runs locally on your hardware and orchestrates tasks across chat apps, files, the web, and your operating system using large language models (Claude, GPT, DeepSeek, and others).

What makes OpenClaw relevant here:

  • Skills system: Task-specific capabilities defined in SKILL.md files. OpenClaw ships with a Slack skill that can read messages, react, pin items, and fetch member info.

  • Local-first architecture: Your data never leaves your infrastructure. Credentials and interaction history stay on your machine (or your Paradime workspace).

  • Extensible integrations: Connects to LLM providers via a gateway, with env-var-based secret management and config substitution.

For this pipeline, OpenClaw serves as the AI brain that reads raw Slack messages and transforms them into a structured incident timeline — extracting key decisions, action items, owners, and root-cause signals.

Figure 2: OpenClaw's role — AI-powered extraction from raw Slack threads.

Setup: OpenClaw + Slack SDK + Google Docs API

Let's wire up the three SDKs. The beauty of this approach is that all sensitive configuration is handled through Paradime's environment variables — no local .env files needed.

Install Dependencies
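A minimal install, assuming the official Python clients for Slack and Google plus a global npm install for the OpenClaw CLI (the npm package name is an assumption — check OpenClaw's install docs):

```shell
# Python clients for Slack and the Google Docs/Drive APIs
pip install slack-sdk google-api-python-client google-auth

# OpenClaw CLI (package name assumed; see the OpenClaw docs)
npm install -g openclaw
```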

Verify OpenClaw Installation
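A quick smoke test that the CLI resolves (the exact flag is an assumption; `--help` works on most CLIs):

```shell
openclaw --version
```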

Tip: If openclaw isn't found after install, add the npm global bin directory to your PATH:
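For example, in your shell profile:

```shell
# Put npm's global bin directory ahead of the rest of PATH
export PATH="$(npm prefix -g)/bin:$PATH"
```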

Configure OpenClaw for LLM Access

OpenClaw needs access to an LLM provider for the AI analysis step. Configure it in ~/.openclaw/openclaw.json:
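A sketch of the config — `gateway.mode: "local"` comes up again in the troubleshooting section below, but the exact key names for the provider block are assumptions, so check your OpenClaw version's config reference:

```json
{
  "gateway": {
    "mode": "local"
  },
  "providers": {
    "anthropic": {
      "apiKey": "${ANTHROPIC_API_KEY}"
    }
  }
}
```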

Notice the ${ANTHROPIC_API_KEY} syntax — OpenClaw resolves environment variables at runtime, so the actual key lives in your Paradime workspace, not in this config file. This is the env var substitution pattern from the OpenClaw docs.

Set Up Slack App

Create a Slack App at api.slack.com/apps with these Bot Token Scopes:

| Scope | Why |
|---|---|
| channels:history | Read messages from incident channels |
| channels:read | List channels to find the incident channel |
| groups:history | Read private incident channels |
| groups:read | List private channels |
| users:read | Resolve user IDs to names in the timeline |

Install the app to your workspace and grab the Bot User OAuth Token (starts with xoxb-).

Set Up Google Docs API

  1. Go to the Google Cloud Console

  2. Create a new project (or use an existing one)

  3. Enable the Google Docs API and Google Drive API

  4. Create a Service Account under IAM & Admin

  5. Generate a JSON key file for the service account

  6. Share the target Google Drive folder with the service account email

The JSON key file contents will be stored as the GOOGLE_CREDENTIALS_JSON environment variable in Paradime.

Figure 3: Google credentials flow — from GCP console to Paradime's secure variable store.

Script: Read Incident Channel → Extract Timeline → Generate Post-Mortem Doc

Here's the complete automation script. It ties together all three services — Slack for reading incident data, OpenClaw for AI-powered analysis, and Google Docs for report generation.

The Full Script
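A sketch of that script. The Slack and Google Docs calls use the official Python SDKs (`slack_sdk`, `google-api-python-client`); the `openclaw agent --message` invocation is an assumption, so swap in your OpenClaw version's real agent entry point. Third-party clients are imported lazily so the pure helpers stay importable and testable on their own:

```python
"""Incident post-mortem generator: Slack -> OpenClaw -> Google Docs.

All configuration comes from environment variables set in Paradime's UI.
"""
import json
import os
import subprocess
from datetime import datetime, timezone


def fetch_incident_messages(token: str, channel_id: str) -> list[dict]:
    """Page through the incident channel history, oldest message first."""
    from slack_sdk import WebClient  # lazy import
    client = WebClient(token=token)
    messages, cursor = [], None
    while True:
        resp = client.conversations_history(
            channel=channel_id, cursor=cursor, limit=200
        )
        messages.extend(resp["messages"])
        cursor = resp.get("response_metadata", {}).get("next_cursor")
        if not cursor:
            break
    return list(reversed(messages))  # Slack returns newest first


def format_transcript(messages: list[dict]) -> str:
    """Render raw Slack messages as a timestamped transcript for the LLM."""
    lines = []
    for m in messages:
        ts = datetime.fromtimestamp(float(m["ts"]), tz=timezone.utc)
        user = m.get("user", "unknown")
        lines.append(f"[{ts:%Y-%m-%d %H:%M:%S} UTC] <{user}> {m.get('text', '')}")
    return "\n".join(lines)


def analyze_with_openclaw(transcript: str) -> str:
    """Hand the transcript to OpenClaw for timeline extraction.

    NOTE: the CLI flags below are hypothetical -- check your OpenClaw
    version's docs for the real agent invocation.
    """
    prompt = (
        "Extract an incident timeline from this Slack transcript: key "
        "decisions, action items with owners, and root-cause signals.\n\n"
        + transcript
    )
    result = subprocess.run(
        ["openclaw", "agent", "--message", prompt],  # hypothetical flags
        capture_output=True, text=True, check=True,
    )
    return result.stdout


def publish_post_mortem(title: str, body: str) -> str:
    """Create a Google Doc in the target Drive folder; return the doc ID."""
    from google.oauth2 import service_account  # lazy imports
    from googleapiclient.discovery import build
    creds = service_account.Credentials.from_service_account_info(
        json.loads(os.environ["GOOGLE_CREDENTIALS_JSON"]),
        scopes=["https://www.googleapis.com/auth/documents",
                "https://www.googleapis.com/auth/drive"],
    )
    docs = build("docs", "v1", credentials=creds)
    drive = build("drive", "v3", credentials=creds)
    doc_id = docs.documents().create(body={"title": title}).execute()["documentId"]
    # One batched insert keeps us well under the Docs API quota.
    docs.documents().batchUpdate(
        documentId=doc_id,
        body={"requests": [
            {"insertText": {"location": {"index": 1}, "text": body}}
        ]},
    ).execute()
    # Move the doc into the shared post-mortem folder.
    drive.files().update(
        fileId=doc_id, addParents=os.environ["GOOGLE_FOLDER_ID"], fields="id"
    ).execute()
    return doc_id


def main() -> None:
    messages = fetch_incident_messages(
        os.environ["SLACK_BOT_TOKEN"], os.environ["INCIDENT_CHANNEL_ID"]
    )
    report = analyze_with_openclaw(format_transcript(messages))
    title = f"Post-Mortem {datetime.now(timezone.utc):%Y-%m-%d %H:%M} UTC"
    print(f"Published doc: {publish_post_mortem(title, report)}")


if __name__ == "__main__":
    main()
```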

How the Script Flows

Figure 4: The complete execution sequence — from Bolt trigger to published post-mortem.

Environment Variables: Secure, UI-Managed, Zero Local Config

This is where Paradime fundamentally changes the game. Instead of juggling .env files, passing secrets through CI pipelines, or (worst case) hardcoding tokens, every credential is managed through the Paradime Environment Variables settings page.

Required Variables

| Variable | Description | Where to Get It |
|---|---|---|
| SLACK_BOT_TOKEN | Slack Bot User OAuth Token (xoxb-...) | Slack App Settings → OAuth & Permissions |
| GOOGLE_CREDENTIALS_JSON | Full JSON key for Google service account | GCP Console → IAM → Service Accounts |
| OPENCLAW_API_KEY | API key for OpenClaw's configured LLM provider | Your LLM provider dashboard (Anthropic, OpenAI, etc.) |
| INCIDENT_CHANNEL_ID | Slack channel ID for the incident | Slack channel details → Copy Channel ID |
| GOOGLE_FOLDER_ID | Google Drive folder ID for post-mortem docs | Drive folder URL (the ID after /folders/) |

Adding Variables in Paradime

  1. Navigate to Settings → Workspaces → Environment Variables

  2. In the Bolt Schedules section, click Add New

  3. Enter the Key (e.g., SLACK_BOT_TOKEN) and Value

  4. Click the Save icon (💾)

  5. Repeat for each variable

Pro tip: You can also bulk upload via CSV with Key,Value columns. Great for setting up multiple environments at once.
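For example, a bulk-upload file is just those two columns (values below are placeholders):

```csv
Key,Value
SLACK_BOT_TOKEN,xoxb-replace-me
INCIDENT_CHANNEL_ID,replace-with-channel-id
GOOGLE_FOLDER_ID,replace-with-folder-id
```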

Per-Schedule Overrides

Need different credentials for staging vs. production? Paradime lets you override environment variables at the schedule level:

  1. Navigate to the Bolt UI and select the schedule

  2. Click Edit → scroll to Environment Variables Override

  3. Enter override values for specific variables

  4. Click Deploy

Schedule-level overrides take precedence over workspace defaults. If no override is set, the global value is used.

Figure 5: Environment variable inheritance — workspace defaults with per-schedule overrides.

Why This Matters for Security

Opinionated take: If your incident-response tooling requires engineers to store API tokens locally, you've already lost. Here's what the Paradime approach gives you:

  • No secrets on laptops — credentials live in Paradime's SOC 2 Type II-certified infrastructure

  • Audit trail — changes to environment variables are tracked

  • Role-based access — only Admin roles can add, edit, or remove variables

  • No Git exposure — impossible to accidentally commit a .env file

Bolt Schedule: API Trigger After Incident

The post-mortem script shouldn't run on a cron schedule — it should fire when an incident is resolved. Paradime Bolt's API trigger is perfect for this.

Create the Bolt Schedule

  1. In the Paradime UI, navigate to Bolt → Create Schedule

  2. Set the schedule type and configure Command Settings to run your post-mortem script

  3. Under Trigger Type, select Scheduled Run and click the OFF toggle — this ensures the schedule is only triggered via API, not on a timer

  4. Configure notification settings to alert your team on Slack when the post-mortem is generated (or if the run fails)

  5. Click Deploy

Trigger via API

When your incident management tool (PagerDuty, Opsgenie, FireHydrant, etc.) marks an incident as resolved, call the Paradime Bolt API:
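A standard-library sketch of that call — the endpoint path and header names here are assumptions, so check Paradime's Bolt API documentation for the real ones:

```python
import json
import urllib.request

PARADIME_API = "https://api.paradime.io"  # hypothetical base URL


def build_trigger_request(
    schedule_name: str, api_key: str, api_secret: str
) -> urllib.request.Request:
    """Build (but don't send) the Bolt schedule-trigger request."""
    return urllib.request.Request(
        f"{PARADIME_API}/bolt/schedules/trigger",  # hypothetical path
        data=json.dumps({"schedule_name": schedule_name}).encode(),
        headers={
            "X-API-KEY": api_key,        # header names assumed
            "X-API-SECRET": api_secret,
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Fire it when your incident tool reports "resolved":
# urllib.request.urlopen(build_trigger_request("incident-postmortem", key, secret))
```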

Or with cURL:
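The same call from the command line (again, endpoint path and headers are assumptions — see Paradime's API docs):

```shell
curl -X POST "https://api.paradime.io/bolt/schedules/trigger" \
  -H "X-API-KEY: $PARADIME_API_KEY" \
  -H "X-API-SECRET: $PARADIME_API_SECRET" \
  -H "Content-Type: application/json" \
  -d '{"schedule_name": "incident-postmortem"}'
```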

Integration with Incident Management Tools

The most robust setup uses Paradime webhooks bidirectionally:

Figure 6: End-to-end integration — incident management tool triggers Bolt, which runs the script and notifies the team.

Monitoring and Debugging

Once your post-mortem pipeline is live, you need visibility into every run. Paradime Bolt gives you three levels of logging and a full analytics dashboard.

Run History and Logs

Navigate to the Bolt UI → select your incident-postmortem schedule → Run History. Each run shows:

| Field | Description |
|---|---|
| Status | Passed, Failed, Running, or Cancelled |
| Trigger | API, Scheduled, On Merge, or On Completion |
| Branch and Commit | Git context for the run |
| Last Run | When the run started |
| Duration | Total execution time |
| Run ID | Unique identifier for API reference |

Click into any run to access detailed logs. Paradime offers three log levels:

  • Summary Logs: DinoAI-generated overview — the quick "what happened" for when you need a 10-second answer

  • Console Logs: Chronological record of all operations — the standard debugging view

  • Debug Logs: System-level detail — for when you need to trace exactly why the Google Docs API returned a 403

Artifacts

Every Bolt run generates downloadable artifacts (like run_results.json) that you can use for auditing, retrospective analysis, or feeding into downstream analytics.

OpenClaw Diagnostics

For issues on the OpenClaw side, run these commands in order:
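A diagnostic sequence built from the commands referenced in the troubleshooting table below:

```shell
# 1. Confirm the CLI resolves (fix PATH if it doesn't)
which openclaw || export PATH="$(npm prefix -g)/bin:$PATH"

# 2. Check the gateway is up
openclaw gateway status

# 3. Tail the logs while reproducing the failure
openclaw logs --follow
```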

Bolt Notifications for Pipeline Health

Set up Slack notifications on the Bolt schedule to get alerted when:

  • The post-mortem script fails (broken Slack token, Google API quota exceeded, etc.)

  • The run exceeds its SLA threshold (AI analysis taking too long)

  • The run succeeds (team gets a link to the post-mortem doc)

Navigate to the schedule → Edit → Notification Settings → Add destination → choose Slack, email, or Microsoft Teams.

Troubleshooting Common Issues

Slack API Errors

| Error | Cause | Fix |
|---|---|---|
| channel_not_found | Bot isn't a member of the incident channel | Invite the bot to the channel: /invite @your-bot-name |
| missing_scope | Bot token lacks required OAuth scopes | Add scopes in Slack App Settings → OAuth & Permissions → Reinstall |
| ratelimited | Too many API calls | Add time.sleep(1) between pagination calls; reduce limit parameter |
| invalid_auth | Expired or incorrect bot token | Regenerate token and update SLACK_BOT_TOKEN in Paradime Settings |

Google Docs API Errors

| Error | Cause | Fix |
|---|---|---|
| 403 Forbidden | Service account lacks access to the Drive folder | Share the folder with the service account email address |
| Invalid JSON in GOOGLE_CREDENTIALS_JSON | Newlines or special characters mangled | Ensure the entire JSON key is stored as a single-line string in Paradime |
| Quota exceeded | Too many API calls | Batch your batchUpdate requests (the script already does this) |
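A tiny helper for that single-line requirement — re-serialize the downloaded key file before pasting it into Paradime (file name in the usage comment is a placeholder):

```python
import json


def single_line_key(path: str) -> str:
    """Re-serialize a service-account JSON key file as one compact line."""
    with open(path) as f:
        return json.dumps(json.load(f), separators=(",", ":"))


# Usage: print(single_line_key("service-account.json")), then paste the
# output into the GOOGLE_CREDENTIALS_JSON variable in Paradime Settings.
```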

OpenClaw Issues

| Symptom | Diagnostic Command | Common Fix |
|---|---|---|
| openclaw: command not found | Check PATH | export PATH="$(npm prefix -g)/bin:$PATH" |
| Gateway won't start | openclaw gateway status | Set gateway.mode="local" in config or run openclaw configure |
| No LLM response | openclaw logs --follow | Verify ANTHROPIC_API_KEY (or your provider key) is set correctly |
| Timeout on large channels | N/A | Reduce Slack message limit or filter by timestamp range |

Paradime Bolt Issues

| Symptom | Fix |
|---|---|
| Environment variable not available in run | Ensure variable is created in Settings → Workspaces → Environment Variables → Bolt Schedules (not IDE variables) |
| API trigger returns auth error | Verify API key has Bolt Schedules Admin capability — check API key settings |
| Schedule won't trigger via API | Confirm the schedule is deployed with cron toggle set to OFF |

Bonus: Tying It Into dbt™ Data Quality with dbt™-llm-evals

If you're already using Paradime for dbt™ pipelines, consider adding dbt™-llm-evals to evaluate the quality of your AI-generated post-mortems over time. This open-source package brings LLM evaluation directly into your data warehouse.

Add it to your packages.yml:
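Assuming it's distributed as a dbt git package (the URL below is a placeholder — use the repository's documented install snippet):

```yaml
packages:
  - git: "https://github.com/your-org/dbt-llm-evals.git"  # placeholder URL
    revision: main
```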

Configure evaluation criteria in dbt_project.yml:

Then track whether your post-mortem AI agent is generating consistently useful, accurate reports — or starting to hallucinate root causes. That's the kind of operational rigor that turns a cool automation into a trustworthy system.

Wrapping Up

Let's recap what you've built:

Figure 7: The complete architecture — trigger, execution, configuration, and observability layers.

What this gives your team:

  1. Speed: Post-mortems generated in minutes, not hours

  2. Consistency: Every post-mortem follows the same structure

  3. Security: Zero credentials on developer machines

  4. Auditability: Full run history with three levels of logging

  5. Reliability: Slack/email notifications when things go wrong

The opinionated takeaway: Incident response tooling should be as reliable as the systems it monitors. Running a post-mortem script from a cron job on someone's laptop — with secrets in a .env file — is tech debt waiting to bite you at 3 a.m. Paradime's Bolt gives you production-grade orchestration with UI-managed secrets. OpenClaw gives you AI analysis without sending your incident data to a third-party SaaS. And the whole thing triggers automatically when your incident management tool says "resolved."

Stop copying Slack threads into Google Docs manually. Your on-call engineers have better things to do.


Interested to Learn More?
Try Out the Free 14-Day Trial

Stop Managing Pipelines. Start Shipping Them.

Join the teams that replaced manual dbt™ workflows with agentic AI. Free to start, no credit card required.


Copyright © 2026 Paradime Labs, Inc. Made with ❤️ in San Francisco ・ London

*dbt® and dbt Core® are federally registered trademarks of dbt Labs, Inc. in the United States and various jurisdictions around the world. Paradime is not a partner of dbt Labs. All rights therein are reserved to dbt Labs. Paradime is not a product or service of or endorsed by dbt Labs, Inc.
