dbt™ CI/CD Best Practices for Data Teams
Feb 26, 2026
Every data team reaches a point where manual testing and hand-deployed dbt™ models stop working. Pull requests pile up, errors slip into production, and the time between writing a transformation and trusting it in a dashboard stretches from minutes to days. dbt™ CI/CD solves this by automating the testing and deployment of your data models, giving your team a repeatable, reliable path from code change to production.
This guide covers everything you need to implement dbt™ CI/CD the right way—from understanding the fundamentals of Slim CI and dbt™ defer, to setting up pre-commit hooks, validating data quality in pipelines, and building a complete end-to-end workflow that scales with your team.
What Is dbt™ CI/CD and Why Data Teams Need It
dbt™ CI/CD brings the same automated testing and deployment workflows that software engineers have relied on for years to the world of analytics engineering. Instead of manually running dbt™ commands and hoping nothing breaks, CI/CD automates the entire process—catching errors before they reach production and deploying validated changes with consistency.
This matters because manual processes simply do not scale. When your dbt™ project has 20 models, you might get away with running dbt build locally and eyeballing the results. When it has 200 or 2,000 models, maintained by a growing team of analytics engineers, manual testing becomes a bottleneck that slows everyone down and introduces risk.
Continuous Integration (CI): Automatically runs dbt™ builds and tests on every pull request to validate that code changes compile correctly, pass all tests, and produce expected results—before anything is merged into the main branch.
Continuous Delivery/Deployment (CD): Takes validated changes and pushes them to production, either automatically or with a manual approval step, ensuring that the path from code to production is standardized and governed.
The result is increased confidence in every change, faster iteration cycles, and a standardized process that ensures code quality without sacrificing speed.
Continuous Delivery vs Continuous Deployment for dbt™
Many teams use "continuous delivery" and "continuous deployment" interchangeably, but the distinction is important when designing your dbt™ CI/CD workflow. The right choice depends on your team's culture, risk tolerance, and the maturity of your test suite.
Continuous Delivery
With continuous delivery, every code change is automatically tested and staged for production, but a human must approve the final deployment. This creates a deliberate checkpoint in the process.
This approach works well for teams that need governance controls—for example, organizations with strict compliance requirements, shared production environments where a bad deployment could impact multiple teams, or projects where the test suite doesn't yet cover every critical path. A release manager or senior engineer reviews the staged changes, confirms that everything looks correct, and triggers the production deployment.
In practice, many teams implement this with a release branch workflow. Feature branches merge into a release branch (e.g., release/0.1.0), where integration testing happens. At the end of a release cycle, the release branch is merged into main with manual approval, and production jobs pick up the changes.
Continuous Deployment
Continuous deployment removes the manual step entirely. When a pull request passes all CI tests and is merged, the changes are automatically deployed to production without human intervention.
This is the faster path, but it demands a higher bar. Your test suite needs to be comprehensive enough that you trust it to catch issues that a human reviewer would. Teams with mature test coverage, isolated development environments, and a culture of fast iteration tend to thrive with continuous deployment.
Which Approach Fits Your Team
Choose delivery if: you have compliance or regulatory requirements, work in a shared production environment where errors have wide blast radius, or your test coverage is still maturing.
Choose deployment if: you have a strong, comprehensive test suite, are a fast-moving team that values velocity, and use isolated development or staging environments where changes can be validated independently.
There is no wrong answer here. Many teams start with continuous delivery and graduate to continuous deployment as their test coverage and confidence grow.
What Is dbt™ Slim CI and How It Works
Slim CI is one of the most impactful optimizations you can make to your dbt™ CI/CD pipeline. Instead of building and testing your entire dbt™ project on every pull request, Slim CI (also called TurboCI in Paradime Bolt) only builds and tests models that have been modified, along with their downstream dependencies.
For large dbt™ projects, this is transformative. A full CI run on a project with hundreds of models might take 30 minutes or more and consume significant warehouse compute. Slim CI can reduce that to just a few minutes, testing only what actually changed.
The Role of the dbt™ Manifest
The manifest.json file is the backbone of Slim CI. Think of it as the complete map of your dbt™ project—it stores metadata about every model, test, source, snapshot, and their relationships to each other. When dbt™ compiles or builds your project, it generates this artifact, capturing the full state of the project at that point in time.
This manifest is what makes intelligent state comparison possible. Without it, dbt™ would have no way to know which models changed between your feature branch and production.
How Slim CI Identifies Modified Models
Slim CI works through state comparison. The process is straightforward:
1. Your production environment has a manifest.json that represents the current state of all models in production.
2. When a CI job runs on a pull request, dbt™ compiles the project from the feature branch, generating a new manifest.
3. dbt™ compares the two manifests to identify exactly which models have changed—whether it's a modification to the SQL logic, a change in configuration, or an update to a schema definition.
4. Using the state:modified+ selector, dbt™ selects those modified models plus all of their downstream dependencies (the + suffix), ensuring that any cascading impact is also tested.
This targeted approach means that if you modify a single staging model, only that model and the marts or reports that depend on it get built and tested—not the hundreds of unrelated models elsewhere in your project.
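Conceptually, the comparison boils down to diffing node checksums between the two manifests. Here is a simplified Python sketch of that idea (the real state:modified selector also inspects configs, macros, and schema files; the manifest dictionaries below are toy stand-ins for each environment's target/manifest.json):

```python
def modified_models(prod_manifest: dict, pr_manifest: dict) -> set[str]:
    """Return node IDs whose content differs between two dbt manifests.

    Simplified illustration of the comparison behind `state:modified`;
    dbt's real implementation also considers configs, macros, and schemas.
    """
    prod_nodes = prod_manifest.get("nodes", {})
    changed = set()
    for node_id, node in pr_manifest.get("nodes", {}).items():
        prod_node = prod_nodes.get(node_id)
        # New nodes, or nodes whose content checksum changed, count as modified.
        if prod_node is None or node["checksum"] != prod_node["checksum"]:
            changed.add(node_id)
    return changed


# Toy manifests: stg_orders is unchanged, fct_orders was edited on the branch.
prod = {"nodes": {"model.proj.stg_orders": {"checksum": "a1"},
                  "model.proj.fct_orders": {"checksum": "b2"}}}
pr = {"nodes": {"model.proj.stg_orders": {"checksum": "a1"},
                "model.proj.fct_orders": {"checksum": "b9"}}}

print(sorted(modified_models(prod, pr)))  # → ['model.proj.fct_orders']
```

Only the edited model is selected; an unmodified model never enters the build, which is exactly what keeps Slim CI runs small.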
Why Slim CI Cuts Build Times and Costs
The benefits of Slim CI compound as your project grows:
Faster feedback loops: Developers get CI results in minutes, not hours, which keeps them in flow and unblocks code reviews.
Lower warehouse compute costs: By only running what's necessary, you avoid rebuilding tables and views that haven't changed, which can translate to significant savings on platforms like Snowflake, BigQuery, or Databricks.
More frequent CI runs: When CI is fast and cheap, teams are more likely to run it on every commit rather than batching changes together, catching issues earlier in the development process.
How dbt™ Defer and State Processing Work
dbt™ defer is the mechanism that makes Slim CI practical. Without it, building modified models that depend on upstream models would require rebuilding those upstream models too—defeating the purpose of running a slim build.
What Is dbt™ State Processing
State processing is dbt™'s ability to compare the state of a project across different branches or environments by analyzing their respective manifest.json files. It answers the question: "What's different between what I have here and what's running in production?"
dbt™ supports granular state comparison, detecting changes to model bodies, configurations, macros, and more. You can even use separate state files for comparison and deferral by passing different paths to --state and --defer-state, giving you precise control over which environment you compare against versus which environment you defer to.
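For example, a CI job could flag changes relative to a staging manifest while still reading unmodified upstream models from production. The directory paths below are illustrative; point them at wherever you store each environment's manifest:

```bash
# Compare against staging for change detection, but defer unbuilt
# refs to the production environment's artifacts.
dbt build \
  --select state:modified+ \
  --state ./staging-manifest/ \
  --defer --defer-state ./prod-manifest/
```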
Using dbt™ Defer to Reference Production
The --defer flag tells dbt™ that for any model not included in the current selection (i.e., unmodified upstream models), it should resolve ref() calls by pointing to the corresponding table or view in the production environment instead of trying to build it from scratch.
Here's what this means in practice: if you modify fct_orders and it depends on stg_orders and stg_customers, dbt™ will build the new version of fct_orders but read stg_orders and stg_customers directly from production. No unnecessary rebuilds.
| Approach | What Gets Built | Use Case |
|---|---|---|
| Full CI | All models in the project | Initial setup, major refactors, full regression testing |
| Slim CI with defer | Modified models + downstream dependencies only | Standard pull request validation |
Comparing Feature Branch Models to Staging or Production
Teams can configure dbt™ defer to reference either a staging or production environment, depending on their workflow. Some teams maintain a staging environment as an intermediate step, where models are validated before being promoted to production. Others defer directly to production for simplicity.
The choice depends on your environment strategy. If you have a well-maintained staging environment that closely mirrors production, deferring to staging can add an extra layer of safety. If your staging environment drifts from production or doesn't exist, deferring directly to production is the more reliable option.
How to Implement dbt™ Slim CI Step by Step
If you're using dbt Core™ and managing your own CI/CD pipeline (rather than using a managed service), here are the concrete steps to implement Slim CI.
1. Generate a Production Manifest
First, you need a manifest.json that represents your production environment. Run a full dbt build or dbt compile against your production target. The compile command is lighter-weight if you only need the manifest for state comparison without actually executing models.
The resulting manifest.json will be in your project's target/ directory. This file is the reference point that all future CI runs will compare against.
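In practice this is a single command against your production target (the target name "prod" here is illustrative and comes from your profiles.yml):

```bash
# Compile against production: writes target/manifest.json
# without executing any models in the warehouse.
dbt compile --target prod
```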
2. Store the Manifest in Your CI Environment
The production manifest needs to be accessible to your CI/CD runner. Common approaches include:
Cloud storage (S3, GCS, Azure Blob): Upload the manifest after each production run and download it at the start of each CI run. This is the most common pattern for teams using GitHub Actions or GitLab CI.
CI/CD artifact storage: Many CI platforms (GitHub Actions, GitLab CI) support artifact storage that can persist files between jobs and pipelines.
Orchestration tool artifacts: If you use Airflow, Dagster, or similar tools to run production dbt™ jobs, store the manifest as an artifact of that job.
The key requirement is that the manifest is always up to date—it should reflect the latest successful production run.
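With cloud storage, the upload/download pair is straightforward. A sketch using the AWS CLI (the bucket name and paths are placeholders for your own setup):

```bash
# After each successful production run: publish the fresh manifest.
aws s3 cp target/manifest.json s3://my-dbt-artifacts/prod/manifest.json

# At the start of each CI run: fetch it for state comparison.
aws s3 cp s3://my-dbt-artifacts/prod/manifest.json ./prod-manifest/manifest.json
```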
3. Run dbt™ with State and Defer Flags
With the production manifest available, your CI job uses this command pattern:
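```bash
dbt build --select state:modified+ --defer --state ./prod-manifest/
```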
Breaking this down:
--select state:modified+ selects all models that have been modified compared to the production manifest, plus their downstream dependencies.
--defer tells dbt™ to resolve ref() calls for unmodified upstream models by pointing to production.
--state ./prod-manifest/ specifies the path to the directory containing the production manifest.json.
4. Add Unit Tests to Your Slim CI Pipeline
Model builds alone don't guarantee correctness. Include dbt test in your CI pipeline—or better yet, use dbt build, which runs both models and tests in dependency order:
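```bash
dbt build --select state:modified+ --defer --state ./prod-manifest/
```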
This ensures that schema tests (not null, unique, accepted values, relationships), data tests, and any custom singular tests all run against your modified models as part of every pull request.
Best Practices for dbt™ Pre-Commit Hooks and Linting
The best CI/CD pipeline catches issues as early as possible. Pre-commit hooks shift quality checks left—running them on your local machine before code ever reaches the CI pipeline. This saves time, reduces CI failures, and keeps the feedback loop tight.
SQLFluff for SQL Linting
SQLFluff is the standard linter for SQL in dbt™ projects. It enforces consistent formatting and style rules across your entire team, eliminating debates about indentation, capitalization, aliasing conventions, and more.
Configure SQLFluff with a .sqlfluff file in your project root to define your team's rules. Common rules include enforcing consistent keyword capitalization, requiring trailing commas, and standardizing join syntax. When run as a pre-commit hook, SQLFluff checks modified SQL files before they're committed, ensuring that only clean, consistently formatted code enters your repository.
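A minimal .sqlfluff for a dbt™ project might look like the following; the dialect and rule choices are illustrative, so tune them to your warehouse and your team's conventions:

```ini
[sqlfluff]
dialect = snowflake
templater = dbt

[sqlfluff:rules:capitalisation.keywords]
capitalisation_policy = lower

[sqlfluff:rules:convention.terminator]
require_final_semicolon = false
```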
dbt-checkpoint for Model Validation
While SQLFluff handles SQL formatting, dbt-checkpoint validates dbt™-specific patterns and best practices. It provides hooks that can enforce rules such as:
Every model must have at least one test defined.
All models must have a description in their YAML properties file.
Model names must follow a defined naming convention (e.g., staging models prefixed with stg_, marts with fct_ or dim_).
Sources must have freshness checks configured.
These checks catch structural issues that would otherwise surface later in code review or, worse, in production.
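Both tools plug into the standard pre-commit framework. A sketch of a .pre-commit-config.yaml wiring them together (the rev pins are illustrative; pin to the versions your team actually uses):

```yaml
repos:
  - repo: https://github.com/sqlfluff/sqlfluff
    rev: 3.0.7  # illustrative: pin to your team's version
    hooks:
      - id: sqlfluff-lint
  - repo: https://github.com/dbt-checkpoint/dbt-checkpoint
    rev: v2.0.1  # illustrative: pin to your team's version
    hooks:
      - id: check-model-has-tests
      - id: check-model-has-description
      - id: check-source-has-freshness
```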
Enforcing Git Commit Standards
Clean Git history makes debugging and collaboration easier. Use pre-commit hooks to enforce:
Commit message formats: Require structured commit messages (e.g., conventional commits like feat: add new orders model or fix: correct revenue calculation).
Branch naming conventions: Enforce patterns like feature/, fix/, or refactor/ prefixes so that the purpose of each branch is immediately clear.
How to Validate Data Quality in dbt™ CI/CD Pipelines
Linting and formatting checks catch code quality issues, but they don't tell you whether your data is correct. True data validation in CI goes a step further—testing the actual output of your models.
Running dbt™ Tests in CI
dbt™ tests should be a non-negotiable part of every CI run. This includes:
Schema tests: not_null, unique, accepted_values, and relationships tests defined in your YAML files.
Data tests: Custom SQL queries in your tests/ directory that validate business logic, such as ensuring that revenue is never negative or that every order has a corresponding customer.
Unit tests: dbt™ unit tests that validate transformation logic with controlled inputs, ensuring that your SQL produces expected outputs regardless of the underlying data.
When using Slim CI, tests associated with your modified models and their downstream dependencies are automatically included when using the state:modified+ selector.
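For reference, schema tests live alongside the model definitions in a YAML properties file. The model and column names below are illustrative:

```yaml
models:
  - name: fct_orders
    columns:
      - name: order_id
        tests:
          - not_null
          - unique
      - name: customer_id
        tests:
          - relationships:
              to: ref('dim_customers')
              field: customer_id
```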
Using Data Diffs to Compare Model Outputs
Data diffs take validation further by comparing the actual output of a model between your feature branch and production. Instead of just checking that tests pass, a data diff reveals:
Row count changes (did your modification add or remove rows unexpectedly?).
Value changes in specific columns (did a calculation change produce different results?).
Schema changes (were columns added, removed, or retyped?).
This makes it significantly easier for reviewers to understand the real-world impact of a code change during the pull request review process.
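If you are not using a tool with built-in data diffs, a rough row-level diff can be approximated in SQL with set operations. The schema names below are illustrative (a CI schema holding the branch build versus the production schema):

```sql
-- Rows added or changed by the PR, then rows removed or changed by it.
(select * from ci_schema.fct_orders
 except
 select * from prod_schema.fct_orders)
union all
(select * from prod_schema.fct_orders
 except
 select * from ci_schema.fct_orders);
```

An empty result means the change was a pure refactor with no data impact, which is often exactly what a reviewer wants to confirm.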
Column-Level Lineage Diff for Impact Analysis
Understanding which downstream assets are affected by a change is critical for responsible deployment. A column-level lineage diff shows you not just which models are downstream, but which specific columns in those models—and in connected BI tools—are impacted.
Paradime Bolt offers a native column-level lineage diff feature that surfaces this information directly within your pull request. This means reviewers can see, at a glance, that modifying the revenue column in fct_orders will affect three downstream models and two Looker dashboards, enabling informed merge decisions.
Alerting and Notification Best Practices for dbt™ CI/CD
A CI/CD pipeline is only as effective as the feedback loop it creates. If a CI job fails and nobody notices for hours, you've lost the speed advantage that automation is supposed to provide.
Slack and Microsoft Teams Notifications
Configure your CI/CD pipeline to send real-time notifications to designated channels when jobs pass or fail. The most effective notification strategies include:
Sending failure alerts to a shared engineering channel so the team has visibility.
Tagging the pull request author directly so they know immediate action is needed.
Including key details in the notification—job name, failure reason, and a link to the logs—so engineers can triage without context-switching.
Keep success notifications minimal to avoid alert fatigue. A simple "CI passed" message or a green checkmark in the PR is usually sufficient.
Auto-Creating Tickets in JIRA or Linear
For teams that need formal tracking of CI failures, automatically generating tickets in your project management tool ensures that nothing falls through the cracks. When a CI job fails:
A ticket is created with the failure details, linked to the pull request.
Ownership is assigned to the PR author or the on-call engineer.
The team can track Mean Time to Resolution (MTTR) across CI failures, identifying patterns and systemic issues.
This is especially valuable for larger teams where multiple pull requests are in flight simultaneously.
Integrating with DataDog and Monte Carlo
For organizations with data observability platforms, connecting your dbt™ CI/CD pipeline to tools like DataDog or Monte Carlo provides centralized monitoring and advanced alerting. This allows you to:
Correlate dbt™ CI/CD job failures with upstream data quality incidents.
Monitor CI job duration trends to catch performance regressions.
Create unified dashboards that give leadership visibility into the health of the entire data pipeline, from ingestion through transformation to BI.
What an Ideal dbt™ CI/CD Pipeline Looks Like
Pulling all of these best practices together, here is the reference architecture for a complete, production-grade dbt™ CI/CD pipeline:
Pre-commit hooks run locally to check for SQL linting errors (SQLFluff), validate dbt™ model conventions (dbt-checkpoint), and enforce commit message standards—before code is ever pushed.
A pull request triggers a Slim CI job. The CI runner downloads the latest production manifest.json and begins the build.
The job runs dbt build --defer on modified models and their downstream dependencies, referencing production for all unmodified upstream models.
Unit tests and data tests execute to validate transformation logic and data quality, ensuring nothing is broken by the change.
A column-level lineage diff is generated showing the full impact of the changes on downstream models, dashboards, and BI reports.
Notifications are sent to Slack or Teams on success or failure, with tickets auto-created in JIRA or Linear for any failures.
A successful merge triggers automated deployment to production, either immediately (continuous deployment) or after manual approval (continuous delivery).
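For teams rolling their own pipeline, the CI half of this architecture can be sketched as a GitHub Actions workflow. This is a simplified illustration: the adapter (dbt-snowflake), bucket name, and profile setup are placeholders for your own configuration, and AWS credentials would need to be provided via repository secrets:

```yaml
# .github/workflows/dbt-ci.yml — illustrative Slim CI job
name: dbt Slim CI
on: pull_request

jobs:
  slim-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: pip install dbt-snowflake
      - name: Fetch production manifest
        run: aws s3 cp s3://my-dbt-artifacts/prod/manifest.json ./prod-manifest/manifest.json
      - name: Build and test modified models
        run: dbt build --select state:modified+ --defer --state ./prod-manifest/
```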
Paradime Bolt provides this full pipeline out of the box—including TurboCI for optimized Slim CI, native column-level lineage diffs, and integrations with Slack, JIRA, and DataDog—without requiring you to stitch together multiple tools and custom scripts.
How to Migrate dbt™ CI/CD from dbt Cloud™
For teams currently using dbt Cloud™'s built-in CI/CD and considering alternatives, migration doesn't have to be disruptive. A structured approach ensures continuity.
Exporting Jobs and Schedules
Before migrating, create a comprehensive inventory of your current dbt Cloud™ setup:
All job configurations, including the dbt™ commands each job runs, the target environment, and any custom flags.
Schedules and triggers (cron expressions, webhook-based CI triggers, API-triggered jobs).
Environment variables and secrets used across jobs.
Any custom notification configurations or integrations.
This inventory becomes your migration checklist, ensuring nothing is missed in the transition.
Mapping dbt Cloud™ CI Features to Alternatives
| dbt Cloud™ Feature | Alternative Approach |
|---|---|
| Slim CI jobs | State-based selection (--select state:modified+ with --defer and --state) |
| Environment variables | CI/CD secrets management (e.g., GitHub Secrets, HashiCorp Vault) |
| Job scheduling | An orchestration tool (e.g., Airflow, Dagster) or Paradime Bolt |
| CI check status in PR | GitHub Actions status checks or Paradime Bolt's native PR integration |
Zero-Downtime Migration with a dbt Cloud™ Importer
Paradime Bolt's dbt Cloud™ importer enables a near-instant migration of jobs, schedules, and environments. Rather than manually recreating every job and schedule, the importer reads your existing dbt Cloud™ configuration and replicates it in Paradime Bolt, allowing teams to switch platforms without disrupting production pipelines or missing scheduled runs.
Build Faster and Cut MTTR with AI-Native dbt™ CI/CD
Paradime Bolt is the modern, AI-native alternative for dbt™ CI/CD. It brings together everything covered in this guide into a single platform:
TurboCI for optimized dbt™ Slim CI that builds and tests only what changed.
Column-level lineage diffs for instant impact analysis on every pull request.
Native integrations with JIRA, Slack, DataDog, and Monte Carlo for closed-loop alerting and observability.
A seamless dbt Cloud™ importer that migrates your existing jobs, schedules, and environments without downtime.
If you're ready to stop stitching together scripts and start shipping dbt™ models with confidence, start for free.
Frequently Asked Questions About dbt™ CI/CD
How long should a dbt™ CI pipeline take to run?
A well-optimized dbt™ CI pipeline using Slim CI typically completes in minutes rather than hours. Since it only builds modified models and their downstream dependencies—rather than the entire project—run times stay short even as your project grows. If your CI runs regularly exceed 10–15 minutes, it's worth investigating whether your Slim CI configuration is correctly deferring to production and selecting only modified models.
Can I run dbt™ CI/CD without dbt Cloud™?
Yes. dbt Core™ users can implement full CI/CD pipelines using tools like GitHub Actions, GitLab CI, or platforms like Paradime Bolt that provide built-in Slim CI and deployment automation. The core dbt™ flags (--state, --defer, state:modified+) work the same way regardless of how you orchestrate them.
What is the difference between Slim CI and full CI runs?
Slim CI only builds and tests modified models plus their downstream dependencies, using state comparison against a production manifest and deferring to production for unmodified upstream models. A full CI run rebuilds the entire dbt™ project from scratch, regardless of what changed. Slim CI is dramatically faster and cheaper, making it the standard for day-to-day pull request validation—while full CI runs are reserved for major refactors or initial project setup.
How do I handle dbt™ CI failures in shared development branches?
Use branch protection rules in your Git provider (GitHub, GitLab, Azure DevOps) to block merges when CI checks fail. This ensures that broken code cannot enter the main branch and affect other team members. Pair this with notification configurations that alert the pull request author immediately, so they can diagnose and fix the issue before it blocks teammates.
Does dbt™ Slim CI work with dbt Cloud™ alternatives?
Yes. Slim CI is a pattern built on dbt™'s native state comparison and defer capabilities, not a feature locked to any specific platform. It works with any dbt Core™-compatible tool, including Paradime Bolt, GitHub Actions, GitLab CI, and orchestration platforms like Airflow or Dagster. The key requirements are the same everywhere: a production manifest.json for state comparison and the --defer and --state flags to enable targeted builds.