Feb 26, 2026
How to Build an Automated Google Docs Summary Pipeline with Paradime, OpenClaw, and Slack
When an incident hits at 2 a.m., nobody wants to dig through five Google Docs to find the one paragraph that matters. What if every critical document already had an executive summary—generated automatically and dropped into Slack with a link—before anyone even asked?
This guide walks you through building that pipeline: a Python script that reads specified Google Docs, uses OpenClaw to generate executive summaries, and posts them to Slack. We'll wire it all together inside Paradime Bolt so it runs on demand or on a weekly schedule—and we'll make sure you can debug it when things go sideways.
Figure 1: End-to-end data flow — Google Docs content is read, summarized by OpenClaw, and posted to Slack, all orchestrated by Paradime Bolt.
What Is Paradime?
Paradime is an all-in-one, AI-native platform that replaces dbt Cloud™ for analytics and data engineering teams. It ships three core products:
| Product | What It Does |
|---|---|
| Code IDE | An AI-powered IDE (often called "Cursor for Data") that cuts dbt™ and Python development time by 83%+. Includes DinoAI for context-aware code generation. |
| Bolt | A production scheduler for dbt™ and Python pipelines. Supports cron, event-driven, on-merge, and API triggers. Built-in CI/CD, notifications, and DinoAI-powered debugging. |
| Radar | FinOps tooling to reduce Snowflake and BigQuery costs. |
For this guide, Bolt is the key piece. It lets us schedule our summarization script—either on a weekly cron or triggered on demand via the Bolt API—without managing a separate orchestrator.
Why Paradime instead of a standalone cron job? Bolt gives you run history, log-level debugging with DinoAI summaries, Slack/email notifications on failure, and environment-variable management all in one place. When a summary fails at 3 a.m., you want your time to first clue measured in seconds, not minutes.
What Is OpenClaw?
OpenClaw is a self-hosted AI gateway that connects messaging platforms—WhatsApp, Telegram, Discord, Slack, and more—to AI coding agents. It's MIT-licensed, open source, and designed for developers who want an always-available AI assistant running on their own hardware.
For our use case, we care about OpenClaw's agent runtime: multi-provider LLM streaming (OpenAI, Anthropic, Google Gemini, Ollama, and 20+ others), built-in tools (file I/O, web fetch, memory), and the Python SDK (openclaw-py) that lets us invoke summarization programmatically.
Key OpenClaw capabilities relevant here:
Multi-provider LLM support — Choose Claude, GPT, Gemini, or a local model. Swap providers without code changes.
SKILL.md-based skill injection — Define custom skills (like "summarize a document") declaratively.
Built-in tools — File read/write, web fetch, exec, cron, and more—20+ tools out of the box.
OpenAI-compatible HTTP API — a `/v1/chat/completions` endpoint for easy integration.
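That last capability means any OpenAI-style client can talk to OpenClaw. As a quick illustration, here is a minimal call using only the standard library — the base URL (`http://localhost:8080`) and model name are assumptions for this sketch; substitute whatever your gateway actually listens on and exposes:

```python
import json
import urllib.request

OPENCLAW_BASE_URL = "http://localhost:8080"  # assumption: adjust to your gateway


def build_chat_request(prompt: str, model: str = "anthropic/claude-sonnet-4") -> dict:
    """Build a standard OpenAI-style chat-completions payload."""
    return {
        "model": model,  # illustrative; use a model your gateway exposes
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str) -> str:
    """POST to the OpenAI-compatible endpoint and return the reply text."""
    req = urllib.request.Request(
        f"{OPENCLAW_BASE_URL}/v1/chat/completions",
        data=json.dumps(build_chat_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the text under choices[0].message.content
    return body["choices"][0]["message"]["content"]
```

Because the payload shape is the standard one, swapping OpenClaw for any other OpenAI-compatible backend requires changing only the base URL.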
Setup: openclaw-py + Google Docs API + Slack SDK
Prerequisites
| Requirement | Version / Details |
|---|---|
| Python | >= 3.10 |
| Paradime account | With Bolt plan enabled |
| Google Cloud project | With Google Docs API enabled |
| Slack workspace | With an incoming webhook configured |
| OpenClaw | Installed locally or on a server |
Step 1: Install Python Dependencies
openclaw-py — The Python SDK for OpenClaw. Provides the agent runtime, multi-provider LLM streaming, and 20+ built-in tools. (PyPI)
google-api-python-client + google-auth — Official Google client libraries for the Docs API.
slack-sdk — Slack's official Python SDK, including the WebhookClient for posting messages.
Step 2: Configure Google Service Account Credentials
Create a service account in your Google Cloud project, enable the Google Docs API, and download the JSON credentials file. Share the target Google Docs with the service account email address.
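Since the credentials will arrive as a stringified JSON env var later (the guide uses the name GOOGLE_CREDENTIALS_JSON), it's worth parsing and sanity-checking them with the standard library before handing them to the Google client — a truncated paste is the most common failure mode:

```python
import json
import os

# Keys every service-account JSON file contains; used as a sanity check.
REQUIRED_KEYS = {"type", "client_email", "private_key"}


def load_service_account_info() -> dict:
    """Parse and sanity-check the service-account JSON from the environment."""
    raw = os.environ.get("GOOGLE_CREDENTIALS_JSON")
    if not raw:
        raise RuntimeError("GOOGLE_CREDENTIALS_JSON is not set")
    info = json.loads(raw)  # raises ValueError on truncated/invalid JSON
    missing = REQUIRED_KEYS - info.keys()
    if missing:
        raise RuntimeError(f"credentials JSON missing keys: {sorted(missing)}")
    return info
```

The returned dict can then be passed to `google.oauth2.service_account.Credentials.from_service_account_info(info, scopes=["https://www.googleapis.com/auth/documents.readonly"])`.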
Step 3: Configure OpenClaw
Set your preferred LLM provider in the OpenClaw config (~/.openclaw/openclaw.json):
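The exact schema is defined by your OpenClaw version — the keys below are purely illustrative of the shape (provider, model, and where the API key comes from), so check the OpenClaw docs before copying:

```json
{
  "provider": "anthropic",
  "model": "claude-sonnet-4",
  "apiKeyEnv": "ANTHROPIC_API_KEY"
}
```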
Or initialize via the wizard:
Step 4: Set Up Slack Incoming Webhook
In your Slack workspace, create an incoming webhook (Slack docs) and note the webhook URL.
Environment Variables
All secrets live in environment variables—never hardcode them.
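A fail-fast check at startup turns a missing secret into an immediate, obvious error instead of a failure halfway through the run. The variable names below match the ones used in this guide; the LLM key name depends on your provider (ANTHROPIC_API_KEY, OPENAI_API_KEY, ...):

```python
import os

REQUIRED_VARS = ("GOOGLE_CREDENTIALS_JSON", "ANTHROPIC_API_KEY", "SLACK_WEBHOOK_URL")


def missing_vars(names=REQUIRED_VARS):
    """Return the subset of names not present (or empty) in the environment."""
    return [name for name in names if not os.environ.get(name)]


def check_env() -> None:
    """Call once at script start so a bad deploy fails in the first second."""
    missing = missing_vars()
    if missing:
        raise SystemExit(f"Missing environment variables: {', '.join(missing)}")
```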
| Variable | Purpose | Where to Set |
|---|---|---|
| GOOGLE_CREDENTIALS_JSON | Google service account JSON (stringified) | Paradime Bolt env vars |
| ANTHROPIC_API_KEY (or your provider's equivalent, e.g. OPENAI_API_KEY) | API key for your chosen LLM provider | Paradime Bolt env vars |
| SLACK_WEBHOOK_URL | Slack incoming webhook URL | Paradime Bolt env vars |
Setting Environment Variables in Paradime Bolt
Navigate to Settings → Workspaces → Environment Variables in the Paradime UI.
In the Bolt Schedules section, click Add New.
Enter the key name (e.g., GOOGLE_CREDENTIALS_JSON) and paste the value.
Click the Save icon.
For schedule-specific overrides (e.g., a different Slack channel per schedule), use the Environment Variables Override in the schedule editor.
Figure 2: Environment variable configuration flow in Paradime Bolt.
The Script: Read Google Docs → Generate Summary → Post to Slack
Here's the complete Python script. It reads specified Google Docs, generates an executive summary for each using OpenClaw, and posts the result to Slack with links back to the original documents.
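A minimal sketch of that script, with its assumptions called out: document IDs arrive via a DOC_IDS env var (comma-separated — a naming choice made for this sketch), OpenClaw is reached through its OpenAI-compatible endpoint on localhost:8080 (adjust to your gateway), the model name is illustrative, and Slack receives a plain-text payload rather than full Block Kit. The extraction helper walks the Docs API's documented `body.content` structure, including table cells:

```python
import json
import os
import urllib.request

OPENCLAW_URL = "http://localhost:8080/v1/chat/completions"  # assumption


def extract_text(element) -> str:
    """Recursively pull plain text out of a Docs API structural element."""
    text = []
    for item in element.get("content", []):
        if "paragraph" in item:
            for run in item["paragraph"].get("elements", []):
                text.append(run.get("textRun", {}).get("content", ""))
        elif "table" in item:
            for row in item["table"].get("tableRows", []):
                for cell in row.get("tableCells", []):
                    text.append(extract_text(cell))  # cells nest structural elements
    return "".join(text)


def summarize(doc_text: str) -> str:
    """Ask OpenClaw (via its OpenAI-compatible endpoint) for an executive summary."""
    payload = {
        "model": "anthropic/claude-sonnet-4",  # illustrative model name
        "messages": [{
            "role": "user",
            "content": ("Write a one-paragraph executive summary, three key "
                        "takeaways, and any action items for this document:\n\n"
                        + doc_text),
        }],
    }
    req = urllib.request.Request(
        OPENCLAW_URL, data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


def post_to_slack(title: str, summary: str, doc_id: str) -> None:
    """POST a plain-text summary with a link back to the source doc."""
    text = f"*{title}*\n{summary}\n<https://docs.google.com/document/d/{doc_id}|Open doc>"
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK_URL"], data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


def main() -> None:
    # Third-party imports stay inside main so the pure helpers import cleanly.
    from google.oauth2 import service_account
    from googleapiclient.discovery import build

    creds = service_account.Credentials.from_service_account_info(
        json.loads(os.environ["GOOGLE_CREDENTIALS_JSON"]),
        scopes=["https://www.googleapis.com/auth/documents.readonly"])
    docs = build("docs", "v1", credentials=creds)
    for doc_id in os.environ["DOC_IDS"].split(","):
        doc = docs.documents().get(documentId=doc_id.strip()).execute()
        text = extract_text(doc.get("body", {}))
        post_to_slack(doc.get("title", doc_id), summarize(text), doc_id.strip())

# Add an entry point when running under Bolt:
# if __name__ == "__main__":
#     main()
```

The helpers are kept free of third-party imports so the extraction logic can be unit-tested without Google credentials.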
Figure 3: Sequence diagram showing how the script processes each document end-to-end.
Bolt Schedule: On-Demand or Weekly
Option A: Weekly Cron Schedule
Create a Bolt schedule in Paradime to run the summarization script every Monday at 8:00 AM:
Navigate to Bolt in Paradime.
Click Create Schedule.
Configure the schedule: name it, set the cron expression (every Monday at 8:00 AM is 0 8 * * 1), and add the command that runs the script.
Set up Notifications: Slack channel #doc-summaries, notify on failure.
Click Publish.
Bolt supports Python scripts natively in schedules. If you use Poetry, prepend poetry install as the first command.
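Bolt schedules can also be defined in code. A sketch of what this schedule might look like in a schedules YAML file — the field names follow the general shape of Bolt's YAML schedules and should be checked against the Paradime docs before use:

```yaml
schedules:
  - name: weekly-doc-summaries
    schedule: "0 8 * * 1"           # every Monday at 8:00 AM
    environment: production
    commands:
      - python summarize_docs.py    # script name is illustrative
    slack_on: [failed]
    slack_notify: ["#doc-summaries"]
```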
Option B: On-Demand via Bolt API
Trigger the schedule programmatically when you need it—during an incident, after a doc is updated, or from an internal tool.
Using the Paradime Python SDK:
Or via GraphQL:
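A sketch of the GraphQL call using only the standard library. The mutation name, response shape, and auth header names here are assumptions to verify against the Bolt API docs; the endpoint, key, and secret come from your Paradime workspace settings:

```python
import json
import os
import urllib.request


def build_trigger_mutation(schedule_name: str) -> dict:
    """GraphQL payload to trigger a Bolt run; mutation name is an assumption."""
    return {
        "query": """
            mutation Trigger($name: String!) {
              triggerBoltRun(scheduleName: $name) { runId }
            }
        """,
        "variables": {"name": schedule_name},
    }


def trigger_run(schedule_name: str) -> int:
    """POST the mutation to Paradime's GraphQL API and return the run ID."""
    req = urllib.request.Request(
        os.environ["PARADIME_API_ENDPOINT"],
        data=json.dumps(build_trigger_mutation(schedule_name)).encode(),
        headers={
            "Content-Type": "application/json",
            # Header names are assumptions; check the Bolt API docs.
            "X-API-KEY": os.environ["PARADIME_API_KEY"],
            "X-API-SECRET": os.environ["PARADIME_API_SECRET"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["triggerBoltRun"]["runId"]
```

During an incident you'd call `trigger_run("weekly-doc-summaries")` from a runbook script or internal tool.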
Figure 4: Two scheduling approaches—automated weekly via cron, or on-demand via the Bolt API during incidents.
Monitoring and Debugging
The incident-friendly mindset demands fast time to first clue. Here's how Paradime Bolt gives you that.
Run History & Analytics
Navigate to Bolt → Select Schedule → Run History to see every execution with:
✅/❌ Status indicators
Start time, duration, and git branch
One-click access to logs and artifacts
Three Log Levels for Fast Triage
| Log Type | What It Shows | When to Use |
|---|---|---|
| Summary Logs | DinoAI-generated overview of failures with suggested fixes | First look — get oriented in 10 seconds |
| Console Logs | Full execution output with jump-to-error navigation | Find the specific line that broke |
| Debug Logs | System-level operations, package installs, env resolution | Deep troubleshooting (auth issues, missing deps) |
DinoAI-Powered Debugging
Bolt's built-in AI (DinoAI) automatically analyzes failed runs and produces a human-readable summary:
This is your time to first clue—you know exactly what broke without reading 200 lines of stack trace.
Slack Notifications on Failure
Configure Bolt to notify #on-call on every failure:
Edit the schedule → Notification Settings.
Set Slack notify on: Failed.
Set Slack channels: #on-call.
Click Publish.
The notification includes a direct link to the failed run in Paradime—one click from Slack to the error logs.
Figure 5: The debugging escalation path — from Slack notification to root cause in three clicks.
Evaluating Summary Quality with dbt™-llm-evals
Once your summaries are flowing, how do you know they're actually good? This is where Paradime's open-source dbt-llm-evals package comes in.
dbt-llm-evals lets you evaluate LLM-generated content directly inside your data warehouse—no data egress, no external APIs. It uses a judge-based evaluation framework to score outputs on criteria like accuracy, relevance, tone, and completeness.
Quick Setup
Add to your packages.yml:
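A sketch of the git-pinned install in standard dbt packages.yml form — the repository URL below is a guess at the package location, so copy the real one from the dbt-llm-evals README:

```yaml
packages:
  - git: "https://github.com/paradime-io/dbt-llm-evals.git"  # verify against the README
    revision: main  # pin a tagged release in production
```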
Configure in dbt_project.yml:
Example: Evaluate Document Summaries
Then monitor quality over time:
This closes the loop: generate summaries → evaluate quality → alert on regressions → fix the prompt.
Troubleshooting Common Issues
Structured for the incident mindset: symptom → cause → fix → verification.
1. Google Docs API: DefaultCredentialsError
| Detail | |
|---|---|
| Symptom | google.auth.exceptions.DefaultCredentialsError when building the Docs client |
| Cause | GOOGLE_CREDENTIALS_JSON is missing, empty, or contains invalid JSON |
| Fix | Verify the env var is set in Bolt: Settings → Workspaces → Environment Variables → Bolt Schedules. Ensure the JSON is valid (no unescaped quotes, no truncation). |
| Verify | Re-run the schedule and confirm the Docs client builds without raising |
2. Google Docs API: HttpError 403 — Forbidden
| Detail | |
|---|---|
| Symptom | googleapiclient HttpError 403 (Forbidden) when fetching a document |
| Cause | The service account doesn't have access to the document |
| Fix | Share the Google Doc with the service account email (the client_email value in the credentials JSON) |
| Verify | Re-run the script for that single doc ID |
3. OpenClaw: pyclaw: command not found
| Detail | |
|---|---|
| Symptom | pyclaw: command not found when the script invokes the OpenClaw CLI |
| Cause | The OpenClaw CLI isn't installed in the Bolt run environment, or its install directory isn't on PATH |
| Fix | Add the CLI's install directory to PATH, or install openclaw-py as part of the schedule's first command |
| Verify | Run which pyclaw as the schedule's first command and confirm it resolves |
4. OpenClaw: LLM Provider Returns 429 (Rate Limit)
| Detail | |
|---|---|
| Symptom | The LLM provider returns HTTP 429 during summarization |
| Cause | Too many concurrent requests or context window exceeded |
| Fix | Add a delay between documents; truncate very long document text before sending it to the model |
| Verify | Run with a single document first |
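For the 429 case, a small exponential-backoff wrapper is usually enough. The helper below is generic: it takes the call plus an injectable sleep function so it can be tested without actually waiting, and the caught exception should be narrowed to your client's rate-limit error:

```python
import time


def with_backoff(call, retries=4, base_delay=2.0, sleep=time.sleep):
    """Retry `call` with exponential backoff; re-raise after the last attempt."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:  # narrow this to your client's 429 exception type
            if attempt == retries - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 2s, 4s, 8s, ...
```

Usage: `summary = with_backoff(lambda: summarize(text))`.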
5. Slack: WebhookClient Returns Non-200
| Detail | |
|---|---|
| Symptom | WebhookClient.send() returns a non-200 status code |
| Cause | Malformed blocks JSON, expired webhook URL, or the webhook was deleted |
| Fix | Validate the blocks JSON locally; if the webhook was deleted or expired, create a new one and update SLACK_WEBHOOK_URL |
| Verify | Post a minimal text-only payload to the webhook and confirm a 200 response |
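A local pre-flight check catches malformed blocks before they ever reach Slack. This sketch only validates the structural minimum — a JSON-serializable list of objects, each with a type — while Slack's own validation is stricter:

```python
import json


def validate_blocks(blocks) -> list:
    """Return a list of problems with a Block Kit `blocks` array (empty = OK)."""
    if not isinstance(blocks, list):
        return ["`blocks` must be a list"]
    problems = []
    for i, block in enumerate(blocks):
        if not isinstance(block, dict):
            problems.append(f"block {i} is not an object")
        elif "type" not in block:
            problems.append(f"block {i} is missing 'type'")
    try:
        json.dumps(blocks)  # must serialize cleanly for the webhook POST
    except TypeError as exc:
        problems.append(f"not JSON-serializable: {exc}")
    return problems
```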
6. Bolt Schedule: Runs but Produces No Output
| Detail | |
|---|---|
| Symptom | Schedule shows ✅ but no Slack messages appear |
| Cause | The script exits 0 without doing work — e.g., an empty document list or an exception that is caught and silently swallowed |
| Fix | Check Console Logs in Bolt; verify the document ID list and SLACK_WEBHOOK_URL resolve to the values you expect |
| Verify | Look for the script's log output in the Console Logs — silence there means the work never ran |
OpenClaw Gateway Diagnostics
If you're running OpenClaw as a persistent gateway (rather than one-shot pyclaw agent calls), use the built-in diagnostic ladder:
Figure 6: Decision tree for troubleshooting — follow the branch matching the failing component.
Wrapping Up
Here's what you've built:
A Python script that reads Google Docs via the Docs API, extracts full text (including tables), and feeds it to OpenClaw for summarization.
An OpenClaw-powered summarizer that generates structured executive summaries—one paragraph, three takeaways, and action items—using any LLM provider you choose.
Slack integration that posts formatted summaries with buttons linking back to the original documents.
Paradime Bolt orchestration with two trigger modes: weekly cron for routine summaries, and on-demand API triggers for incident response.
A monitoring and debugging stack that gives you DinoAI-powered failure summaries, three levels of log detail, and Slack notifications—all tuned for minimal time to first clue.
Optional quality evaluation via dbt-llm-evals to catch prompt regressions before your team notices degraded summaries.
Quick Reference: Key Links
| Resource | URL |
|---|---|
| Paradime Docs | |
| Bolt Schedules | |
| Bolt Python SDK | |
| Bolt Environment Variables | |
| OpenClaw Docs | |
| OpenClaw Python SDK | |
| OpenClaw Troubleshooting | |
| Google Docs API — Extract Text | |
| Slack Incoming Webhooks | |
| dbt™-llm-evals | |
The whole pipeline prioritizes reproducibility (every run is logged, every env var is versioned) and minimal fixes (DinoAI tells you exactly what broke and how to fix it). That's the incident-friendly way to ship document automation.