dbt™ and Microsoft Fabric on Paradime: End-to-End Setup Guide
Feb 26, 2026
How to Set Up dbt™ Fabric Integration with Paradime: Warehouse, Lakehouse & Target Configuration
Microsoft Fabric unifies data engineering and analytics under one roof—but the moment you wire dbt™ into the picture, a wave of questions hits: Do I point my dbt™ target at a Warehouse or a Lakehouse? Which adapter do I install? How do dev and prod stay isolated?
This guide cuts through the ambiguity. You will learn exactly how Fabric's Warehouse and Lakehouse map to dbt™ targets, how to configure connections in Paradime for both development (Code IDE) and production (Bolt Scheduler), and how to validate everything end-to-end.
What You Need to Know About dbt™ + Fabric
Warehouse vs Lakehouse Considerations
Microsoft Fabric offers two first-class storage workloads—Warehouse and Lakehouse—and each maps to a different dbt™ adapter with distinct capabilities.
| Dimension | Fabric Warehouse | Fabric Lakehouse |
|---|---|---|
| dbt™ adapter | `dbt-fabric` | `dbt-fabricspark` |
| Connection protocol | ODBC (TDS / SQL endpoint) | Livy API (Spark sessions) |
| SQL dialect | T-SQL | Spark SQL |
| Write support | Full DML + DDL | Via Spark; SQL analytics endpoint is read-only |
| Multi-table transactions | ✅ ACID-compliant | ❌ Not supported |
| Data types | Structured | Structured, semi-structured, unstructured |
| Materializations | `table`, `view`, `incremental` (merge, append, delete+insert, microbatch), `snapshot`, `table_clone` | `table`, `incremental` |
| Auth methods | Service Principal, Entra ID password, CLI, auto, environment | Azure CLI only (Service Principal not yet supported via Livy) |
| Best for dbt™ | SQL-first transformation layer (silver → gold) | Spark-heavy ingestion and Python models |

Rule of thumb: If your team writes SQL transformations and needs reliable incremental loads, Warehouse + `dbt-fabric` is the production-grade choice. Use the Lakehouse when you need Spark/Python models or are landing raw files in OneLake first.
Figure 1 — How your dbt™ project's type setting routes to the correct Fabric workload and underlying OneLake storage.
How Paradime Manages Connections and Environments
Paradime separates connections by purpose:
| Connection slot | Purpose | Target name | Owner |
|---|---|---|---|
| Code IDE | Individual developer work | `dev` | Each developer |
| Bolt Scheduler | Automated production runs | `prod` | Service account / schedule owner |
| TurboCI | Pull-request validation | CI-specific target | CI bot |
Each slot stores its own `profiles.yml` fragment—server, database, schema, and credentials—so there is no risk of a developer accidentally writing to a production schema. Paradime injects the correct profile at runtime based on which environment triggers the `dbt run`.
Figure 2 — Paradime's three connection slots each target a different schema inside the same (or separate) Fabric Warehouse.
Prerequisites in Microsoft Fabric
Workspace + Capacity Requirements
Before Paradime (or any external tool) can reach your Fabric environment, you need:
An active Fabric capacity — A Fabric trial provides an F64 capacity. For persistent workloads, provision at least an F2 or higher SKU through Azure. If the capacity is paused, all ODBC/Livy connections will fail.
A Fabric workspace assigned to that capacity — Workspaces are the security and billing boundary. Create separate workspaces for dev and prod if your governance model requires it.
Capacity not throttled — Fabric throttles compute when cumulative CU consumption exceeds the SKU allowance. Monitor via the Microsoft Fabric Capacity Metrics app to avoid silent failures during dbt™ runs.
Create / Select Warehouse or Lakehouse
For Warehouse (recommended for dbt™):
Open your Fabric workspace → + New → Warehouse.
Name it (e.g., `analytics_warehouse`).
Note the Server hostname and Database name from the warehouse settings—you will need them for `profiles.yml`.
For Lakehouse:
Open your Fabric workspace → + New → Lakehouse.
Name it (e.g., `raw_lakehouse`).
Note the Workspace ID and Lakehouse ID (GUIDs) from the URL or settings pane.
⚠️ The Lakehouse SQL analytics endpoint is read-only. If you try to run `dbt run` against it via the `dbt-fabric` (ODBC) adapter, materializations that create or replace tables will fail. Use the `dbt-fabricspark` adapter for write operations against a Lakehouse.
Identity / Auth Setup and Permissions
dbt™ connects to Fabric through Microsoft Entra ID (formerly Azure AD). Two authentication paths dominate:
| Method | Best for | Setup |
|---|---|---|
| Service Principal | Production / CI (headless) | Register an App in Entra ID → generate client secret → add the SPN to the Fabric workspace with Member or Admin role |
| Entra ID Password / CLI | Development (interactive) | Developer signs in with their own Entra ID credentials (password auth or `az login`) |
Minimum permissions checklist:
The identity (user or SPN) must be added to the Fabric workspace with at least Member role (Microsoft docs).
The identity needs CONNECT privileges on the warehouse database.
For `CREATE SCHEMA` operations (common in dbt™ dev workflows), grant `schema_authorization` or ensure the identity is a workspace Admin.
For the Lakehouse / Spark adapter, Service Principal auth is not yet supported by the Livy API—use Azure CLI auth.
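As a T-SQL sketch, the warehouse-level grants might look like the following (the principal name `dbt_prod_spn` is a placeholder, and the exact grants available can vary by Fabric release):

```sql
-- Run inside the target warehouse database.
-- [dbt_prod_spn] is a hypothetical Service Principal name.
GRANT CONNECT TO [dbt_prod_spn];

-- Needed if dbt will create schemas on the fly (common in dev workflows):
GRANT CREATE SCHEMA TO [dbt_prod_spn];
```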
Create the Fabric Connection in Paradime
Settings → Connections
Click Settings in the Paradime top menu bar.
In the left sidebar, click Connections.
Click Add New next to the Code IDE section.
Select Microsoft Fabric from the provider list.
Fill in the profile configuration (see next section).
Provide a dbt Profile Name that matches the `profile:` key in your `dbt_project.yml`.
Set the Target field to `dev`.
Click Test Connection to verify.
Repeat the process for the Scheduler (Bolt) section, changing the target to `prod` and the schema to your production schema.
Connection Parameters Explained
Below is a complete reference for the `dbt-fabric` adapter parameters you will enter in Paradime:

| Parameter | Required | Example | Notes |
|---|---|---|---|
| `driver` | ✅ | `ODBC Driver 18 for SQL Server` | Paradime's runtime includes this driver |
| `server` | ✅ | `<endpoint>.datawarehouse.fabric.microsoft.com` | Fabric SQL connection string hostname |
| `port` | ✅ | `1433` | Always 1433 for Fabric |
| `database` | ✅ | `analytics_warehouse` | The Warehouse name in Fabric |
| `schema` | ✅ | `dbt_jane` | Target schema for model output |
| `authentication` | ✅ | `ServicePrincipal` | See auth table above |
| `tenant_id` | SPN only | `<directory GUID>` | Azure Entra directory/tenant ID |
| `client_id` | SPN only | `<app GUID>` | App registration client ID |
| `client_secret` | SPN only | `<secret value>` | App registration secret value |
| `user` | Password only | `jane@yourcompany.com` | Entra ID email |
| `password` | Password only | `<password>` | Entra ID password |
| `retries` | ❌ | `1` | Auto-retry on transient failures |
| `login_timeout` | ❌ | `0` | Seconds; 0 = default |
| `query_timeout` | ❌ | `0` | Seconds; 0 = no timeout |
| `encrypt` | ❌ | `true` | Always `true` for Fabric |
| `trust_cert` | ❌ | `false` | Keep `false` in production |
| `threads` | ❌ | `4` | Parallel model execution |

Example values above are placeholders; substitute your own endpoint, GUIDs, and schema names.
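Put together, a development-oriented profile fragment might look like this (the project name, server, and schema are placeholders; `CLI` auth assumes the developer has already run `az login`):

```yaml
my_dbt_project:            # must match the profile: key in dbt_project.yml
  target: dev
  outputs:
    dev:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: "<endpoint>.datawarehouse.fabric.microsoft.com"
      port: 1433
      database: analytics_warehouse
      schema: dbt_jane     # per-developer schema
      authentication: CLI  # interactive dev auth; use ServicePrincipal in prod
      threads: 4
```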
Tip: Validate your YAML before pasting it into Paradime. A single indentation error will cause a silent connection failure. Use yamlformatter.org for a quick check.
Credential Storage and Rotation
Paradime never stores credentials in its application database. All secrets—warehouse passwords, Service Principal client secrets, environment variable values—are encrypted at rest and in transit inside HashiCorp Vault running on Paradime's own AWS infrastructure.
Key security properties:
Each company gets an isolated Vault path; no cross-tenant access is possible.
Developer `profiles.yml` credentials are sandboxed inside each developer's own Kubernetes pod—no developer can read another's secrets.
Queried data is held in memory only and erased on page refresh (Paradime is a data processor, not a data store).
SOC 2 audited—report available at trust.paradime.io.
Rotation workflow: When your Entra ID Service Principal secret expires, update the credential in Paradime under Settings → Connections, edit the relevant connection, and replace the `client_secret` value. The change takes effect immediately for the next dbt™ run—no redeploy needed.
Firewall allowlisting: If your Fabric workspace enforces IP restrictions, add the Paradime egress IP for your data region:
Paradime publishes one egress IP per data region: 🇪🇺 EU – Frankfurt (`eu-central-1`), 🇪🇺 EU – Ireland (`eu-west-1`), 🇪🇺 EU – London (`eu-west-2`), and 🇺🇸 US East – N. Virginia (`us-east-1`). The exact IP address for your region is listed in Paradime's documentation and connection settings.
Configure the Code IDE (Development)
Dev Target Strategy (Schemas / Namespaces)
The goal in development is isolation: every developer writes to their own schema so nobody overwrites another's work—or worse, production data.
Recommended pattern for Fabric: give every developer a personal schema prefix (for example, `dbt_jane`) via the Code IDE connection's `schema` field, and let prod builds write to clean, unprefixed schema names.
Enforce this with a `generate_schema_name` macro override in `macros/get_custom_schema.sql`:
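A sketch of the override, using the standard dbt custom-schema pattern (it assumes your dev target's `schema` is set to a per-developer value such as `dbt_jane`, and that the production target is named `prod`):

```sql
-- macros/get_custom_schema.sql
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {%- if target.name == 'prod' and custom_schema_name is not none -%}
        {# prod: use the clean custom schema name, e.g. staging #}
        {{ custom_schema_name | trim }}
    {%- elif custom_schema_name is none -%}
        {# no custom schema configured: fall back to the target schema #}
        {{ default_schema }}
    {%- else -%}
        {# dev: prefix with the developer's schema, e.g. dbt_jane_staging #}
        {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```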
With this macro:
| Target | Model's `schema` config | Resulting schema |
|---|---|---|
| `dev` | *(none)* | `dbt_<user>` |
| `dev` | `staging` | `dbt_<user>_staging` |
| `prod` | `staging` | `staging` |
Figure 3 — The generate_schema_name macro routes dev builds into prefixed schemas and prod builds into clean schema names.
Model Build Patterns to Avoid Conflicts
Limit data in dev — Add a filter to expensive models so dev runs finish fast and consume fewer Fabric CUs:
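For example (the source name and date column below are illustrative):

```sql
-- models/staging/stg_events.sql
select *
from {{ source('raw', 'events') }}
{% if target.name == 'dev' %}
  -- dev builds only scan the last 3 days to save Fabric CUs
  where event_date >= dateadd(day, -3, getdate())
{% endif %}
```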
Use `dbt build` instead of separate `dbt run` + `dbt test` — This runs each model and its tests in DAG order, catching failures earlier.
Avoid ephemeral materializations — The `dbt-fabric` adapter does not support nested CTEs (a T-SQL limitation). Ephemeral models that are referenced by other ephemeral models will error. Use `view` as the lightweight alternative.
Use `tsql-utils` instead of `dbt-utils` — Fabric's T-SQL dialect requires the tsql-utils package for cross-database macros like `surrogate_key` and `date_spine`.
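A `packages.yml` sketch (version ranges are illustrative; check the dbt Package Hub for current releases):

```yaml
# packages.yml
packages:
  - package: dbt-labs/dbt_utils
    version: [">=1.0.0", "<2.0.0"]
  - package: dbt-msft/tsql_utils
    version: [">=0.9.0", "<1.0.0"]
```

Depending on the versions in use, you may also need a `dispatch` block in `dbt_project.yml` so that `dbt_utils` macro calls resolve to the T-SQL implementations; see the tsql-utils README for the exact configuration.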
Configure Bolt Scheduler (Production)
Prod Target Configuration
Navigate to Settings → Connections in Paradime.
Click Add New next to the Scheduler section.
Select Microsoft Fabric.
Enter the production profile:
Set Target to `prod`.
Click Test Connection.
Important: Always use a Service Principal for the Bolt Scheduler connection. Interactive authentication (CLI / browser) cannot run in headless mode. Make sure the SPN is a Member of the Fabric workspace and has `CONNECT` on the warehouse.
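Assembled as YAML, the production profile might look like this (the project name, server, and environment-variable names are placeholders):

```yaml
my_dbt_project:
  target: prod
  outputs:
    prod:
      type: fabric
      driver: "ODBC Driver 18 for SQL Server"
      server: "<endpoint>.datawarehouse.fabric.microsoft.com"
      port: 1433
      database: analytics_warehouse
      schema: analytics          # clean production schema
      authentication: ServicePrincipal
      tenant_id: "{{ env_var('FABRIC_TENANT_ID') }}"
      client_id: "{{ env_var('FABRIC_CLIENT_ID') }}"
      client_secret: "{{ env_var('FABRIC_CLIENT_SECRET') }}"
      threads: 4
```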
Schedule + Job Ownership
Create your first Bolt schedule:
Open Bolt from the Paradime home screen.
Click + New Schedule → + Create New Schedule.
Configure:
| Field | Example |
|---|---|
| Type | Standard |
| Name | `daily_prod_run` |
| Commands | `dbt build` |
| Git Branch | `main` |
| Owner Email | `analytics@yourcompany.com` |
| Trigger | Cron: `0 6 * * *` (daily at 06:00 UTC) |
| Slack Notify On | `failed` |
| Slack Channel | `#data-alerts` |

The example values are illustrative; substitute your own names, branch, and channels.
Click Save.
Figure 4 — End-to-end Bolt schedule execution flow, from cron trigger through Fabric materialization to Slack notification.
Bolt also supports YAML-based scheduling (configuration-as-code) so your schedule definitions can live alongside your dbt™ project in version control.
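A sketch of what that configuration-as-code might look like. The file name `paradime_schedules.yml` and the field names below follow Paradime's documented format at the time of writing, but verify them against the current docs before relying on this:

```yaml
# paradime_schedules.yml (field names may differ in current Paradime versions)
version: 1
schedules:
  - name: daily_prod_run
    schedule: "0 6 * * *"        # cron, UTC
    git_branch: main
    commands:
      - dbt build
    owner_email: analytics@yourcompany.com
    slack_on: [failed]
    slack_notify: ["#data-alerts"]
```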
Validate with SQL Scratchpad
Once both connections are saved, validate them before you write a single model.
Run a Simple Query to Confirm Context
Open the Code IDE in Paradime.
Create a new Scratchpad tab (files land in the gitignored `paradime_scratch/` folder).
Run a context-verification query:
Click Preview Data. Verify the values match your Code IDE connection settings.
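The context-verification query from the steps above might look like this T-SQL (the column aliases are illustrative):

```sql
select
    db_name()      as current_database,  -- should match the connection's database
    schema_name()  as default_schema,    -- the session's default schema
    current_user   as connected_as,      -- the identity dbt authenticates with
    @@version      as engine_version;
```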
Verify Create-Table Permissions
Next, confirm you can actually write objects:
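A minimal write test, with `dbt_jane` standing in for your dev schema (create the schema first if it does not exist):

```sql
create table dbt_jane.smoke_test (id int);
insert into dbt_jane.smoke_test values (1);
select * from dbt_jane.smoke_test;   -- expect one row
drop table dbt_jane.smoke_test;
```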
If this succeeds, your identity has the required DDL permissions. If it fails with a permissions error, revisit the Identity / Auth Setup section above.
Bonus: Run `dbt debug` from the Code IDE terminal. It performs a comprehensive connection check—driver, authentication, database reachability, and schema permissions—in one shot.
Troubleshooting
Auth / Permission Issues
| Symptom | Likely cause | Fix |
|---|---|---|
| Connection test fails or times out | Wrong server hostname, capacity paused, or SPN not added to workspace | Double-check the `server` value, resume the capacity, and confirm workspace membership |
| Authentication error (invalid client secret) | Incorrect or expired `client_secret` | Regenerate client secret in Entra ID; update in Paradime connection settings |
| `CREATE SCHEMA` permission denied | Identity lacks DDL rights | Grant the identity Admin workspace role, or set `schema_authorization` in the profile |
| SPN authenticates but requests are rejected | SPN not enabled in Fabric admin portal | Admin portal → Tenant settings → Enable "Service principals can use Fabric APIs" (Microsoft docs) |
| ODBC driver not found | Driver missing from runtime | Not an issue in Paradime (pre-installed); for local dev, install via Microsoft downloads |
Workspace / Capacity Limitations
| Symptom | Likely cause | Fix |
|---|---|---|
| Queries hang or time out | Capacity throttled (CU overage) | Check the Fabric Capacity Metrics app; scale up SKU or reduce concurrent workloads |
| Queries rejected outright | Capacity in rejection phase | Wait for CU replenishment or increase capacity SKU |
| Connection works in IDE but not in Bolt | Firewall blocks Paradime's egress IP | Allowlist the correct Paradime IP for your region |
| Lakehouse writes fail via `dbt-fabric` | SQL analytics endpoint is read-only | Switch to the `dbt-fabricspark` adapter |
Target / Schema Naming Mismatches
| Symptom | Likely cause | Fix |
|---|---|---|
| Prod models land in a concatenated schema (e.g. `analytics_staging`) instead of `staging` | dbt's default `generate_schema_name` prefixes the custom schema with the target schema | Add the custom macro shown in the Dev Target Strategy section |
| Models land in an unexpected schema in dev | The connection's `schema` value doesn't match the intended dev schema | Update the `schema` field in the Code IDE connection settings |
| Tests create views in wrong schema | Known dbt-fabric issue (#168) | Override the test schema macro or pin to a dbt-fabric version with the fix |
| TurboCI runs fail to connect | Paradime connection for TurboCI not configured | Add a separate connection under the TurboCI section in Settings → Connections |
Summary
Getting dbt™ Fabric integration right comes down to three decisions:
Warehouse vs Lakehouse — Choose `dbt-fabric` (`type: fabric`) for SQL-first transformations with full write support; choose `dbt-fabricspark` (`type: fabricspark`) only when you need Spark/Python models against a Lakehouse.
Auth method — Use Service Principal for production and CI (headless); use Entra ID password or CLI for interactive development.
Schema isolation — Override `generate_schema_name` so dev schemas are prefixed (`dbt_<user>_`) and prod schemas stay clean.
With Paradime managing the connection plumbing—separate Code IDE, Bolt, and TurboCI slots backed by HashiCorp Vault—your team can focus on building models instead of debugging profiles.yml.