dbt™ and Microsoft Fabric on Paradime: End-to-End Setup Guide

Feb 26, 2026

How to Set Up dbt™ Fabric Integration with Paradime: Warehouse, Lakehouse & Target Configuration

Microsoft Fabric unifies data engineering and analytics under one roof—but the moment you wire dbt™ into the picture, a wave of questions hits: Do I point my dbt™ target at a Warehouse or a Lakehouse? Which adapter do I install? How do dev and prod stay isolated?

This guide cuts through the ambiguity. You will learn exactly how Fabric's Warehouse and Lakehouse map to dbt™ targets, how to configure connections in Paradime for both development (Code IDE) and production (Bolt Scheduler), and how to validate everything end-to-end.

What You Need to Know About dbt™ + Fabric

Warehouse vs Lakehouse Considerations

Microsoft Fabric offers two first-class storage workloads—Warehouse and Lakehouse—and each maps to a different dbt™ adapter with distinct capabilities.

| Dimension | Fabric Warehouse | Fabric Lakehouse |
|---|---|---|
| dbt™ adapter | dbt-fabric | dbt-fabricspark |
| type in profiles.yml | fabric | fabricspark |
| Connection protocol | ODBC (TDS / SQL endpoint) | Livy API (Spark sessions) |
| SQL dialect | T-SQL | Spark SQL |
| Write support | Full DML + DDL | Via Spark; SQL analytics endpoint is read-only |
| Multi-table transactions | ✅ ACID-compliant | ❌ Not supported |
| Data types | Structured | Structured, semi-structured, unstructured |
| Materializations | table, view, incremental (merge, append, delete+insert, microbatch), snapshot, table_clone | table, incremental |
| Auth methods | Service Principal, Entra ID password, CLI, auto, environment | Azure CLI only (Service Principal not yet supported via Livy) |
| Best for dbt™ | SQL-first transformation layer (silver → gold) | Spark-heavy ingestion and Python models |

Rule of thumb: If your team writes SQL transformations and needs reliable incremental loads, Warehouse + dbt-fabric is the production-grade choice. Use the Lakehouse when you need Spark/Python models or are landing raw files in OneLake first.

# Warehouse target (dbt-fabric)
my_project:
  target: dev
  outputs:
    dev:
      type: fabric
      driver: 'ODBC Driver 18 for SQL Server'
      server: your-workspace.fabric.microsoft.com
      port: 1433
      database: my_warehouse
      schema: dbt_dev
      authentication: ServicePrincipal
      tenant_id: "00000000-0000-0000-0000-000000001234"
      client_id: "00000000-0000-0000-0000-000000001234"
      client_secret: "S3cret!"
# Lakehouse target (dbt-fabricspark)
my_project:
  target: dev
  outputs:
    dev:
      type: fabricspark
      method: livy
      authentication: CLI
      endpoint: https://api.fabric.microsoft.com/v1
      workspaceid: "your-workspace-guid"
      lakehouseid: "your-lakehouse-guid"
      lakehouse: my_lakehouse
      schema: my_lakehouse # for fabricspark, the schema matches the lakehouse name

flowchart LR
    A[dbt™ Project] -->|type: fabric| B[dbt-fabric adapter]
    A -->|type: fabricspark| C[dbt-fabricspark adapter]
    B -->|ODBC / TDS| D[Fabric Warehouse]
    C -->|Livy API / Spark| E[Fabric Lakehouse]
    D --> F[OneLake - Delta Tables]


Figure 1 — How your dbt™ project's type setting routes to the correct Fabric workload and underlying OneLake storage.

How Paradime Manages Connections and Environments

Paradime separates connections by purpose:

| Connection slot | Purpose | Target name | Owner |
|---|---|---|---|
| Code IDE | Individual developer work | dev | Each developer |
| Bolt Scheduler | Automated production runs | prod | Service account / schedule owner |
| TurboCI | Pull-request validation | ci | CI bot |

Each slot stores its own profiles.yml fragment—server, database, schema, and credentials—so there is no risk of a developer accidentally writing to a production schema. Paradime injects the correct profile at runtime based on which environment triggers the dbt run.

flowchart TD
    subgraph Paradime
        IDE[Code IDE connection<br>target = dev]
        BOLT[Bolt Scheduler connection<br>target = prod]
        CI[TurboCI connection<br>target = ci]
    end
    IDE -->|developer runs| WH_DEV[Warehouse schema: dbt_jsmith]
    BOLT -->|scheduled runs| WH_PROD[Warehouse schema: analytics]
    CI -->|PR checks| WH_CI[Warehouse schema: dbt_ci]

Figure 2 — Paradime's three connection slots each target a different schema inside the same (or separate) Fabric Warehouse.

Prerequisites in Microsoft Fabric

Workspace + Capacity Requirements

Before Paradime (or any external tool) can reach your Fabric environment, you need:

  1. An active Fabric capacity — A Fabric trial provides an F64 capacity. For persistent workloads, provision at least an F2 or higher SKU through Azure. If the capacity is paused, all ODBC/Livy connections will fail.

  2. A Fabric workspace assigned to that capacity — Workspaces are the security and billing boundary. Create separate workspaces for dev and prod if your governance model requires it.

  3. Capacity not throttled — Fabric throttles compute when cumulative CU consumption exceeds the SKU allowance. Monitor via the Microsoft Fabric Capacity Metrics app to avoid silent failures during dbt™ runs.

Create / Select Warehouse or Lakehouse

For Warehouse (recommended for dbt™):

  1. Open your Fabric workspace → + New → Warehouse.

  2. Name it (e.g., analytics_warehouse).

  3. Note the Server hostname and Database name from the warehouse settings—you will need them for profiles.yml.

For Lakehouse:

  1. Open your Fabric workspace → + New → Lakehouse.

  2. Name it (e.g., raw_lakehouse).

  3. Note the Workspace ID and Lakehouse ID (GUIDs) from the URL or settings pane.

⚠️ The Lakehouse SQL analytics endpoint is read-only. If you try to run dbt run against it via the dbt-fabric (ODBC) adapter, materializations that create or replace tables will fail. Use the dbt-fabricspark adapter for write operations against a Lakehouse.

Identity / Auth Setup and Permissions

dbt™ connects to Fabric through Microsoft Entra ID (formerly Azure AD). Two authentication paths dominate:

| Method | Best for | Setup |
|---|---|---|
| Service Principal | Production / CI (headless) | Register an app in Entra ID → generate a client secret → add the SPN to the Fabric workspace with Member or Admin role |
| Entra ID Password / CLI | Development (interactive) | Developer signs in with az login or provides username + password |

Minimum permissions checklist:

  • The identity (user or SPN) must be added to the Fabric workspace with at least Member role (Microsoft docs).

  • The identity needs CONNECT privileges on the warehouse database.

  • For CREATE SCHEMA operations (common in dbt™ dev workflows), set schema_authorization in profiles.yml or ensure the identity is a workspace Admin.

  • For the Lakehouse / Spark adapter, Service Principal auth is not yet supported by the Livy API—use Azure CLI auth.
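Where schema_authorization is needed, it slots into the Warehouse profile alongside the other connection settings. A minimal sketch — the dbt_executor principal name is a placeholder, not a value from this guide:

```yaml
# profiles.yml fragment (dbt-fabric) — illustrative only
my_project:
  outputs:
    dev:
      type: fabric
      # ...server, database, and auth settings as shown earlier...
      schema: dbt_jsmith
      # Principal that will own schemas dbt creates; placeholder name
      schema_authorization: dbt_executor
```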

Create the Fabric Connection in Paradime

Settings → Connections

  1. Click Settings in the Paradime top menu bar.

  2. In the left sidebar, click Connections.

  3. Click Add New next to the Code IDE section.

  4. Select Microsoft Fabric from the provider list.

  5. Fill in the profile configuration (see next section).

  6. Provide a dbt Profile Name that matches the profile: key in your dbt_project.yml.

  7. Set the Target field to dev.

  8. Click Test Connection to verify.

Repeat the process for the Scheduler (Bolt) section, changing the target to prod and the schema to your production schema.

Connection Parameters Explained

Below is a complete reference for the dbt-fabric adapter parameters you will enter in Paradime:

| Parameter | Required | Example | Notes |
|---|---|---|---|
| driver | ✅ | ODBC Driver 18 for SQL Server | Paradime's runtime includes this driver |
| server | ✅ | your-workspace.fabric.microsoft.com | Fabric SQL connection string hostname |
| port | ✅ | 1433 | Always 1433 for Fabric |
| database | ✅ | analytics_warehouse | The Warehouse name in Fabric |
| schema | ✅ | dbt_jsmith (dev) / analytics (prod) | Target schema for model output |
| authentication | ✅ | ServicePrincipal or ActiveDirectoryPassword | See auth table above |
| tenant_id | SPN only | 00000000-… | Azure Entra directory/tenant ID |
| client_id | SPN only | 00000000-… | App registration client ID |
| client_secret | SPN only | S3cret! | App registration secret value |
| user | Password only | user@company.com | Entra ID email |
| password | Password only | •••••••• | Entra ID password |
| retries | Optional | 1 | Auto-retry on transient failures |
| login_timeout | Optional | 0 | Seconds; 0 = default |
| query_timeout | Optional | 0 | Seconds; 0 = no timeout |
| encrypt | Optional | true | Always true for Fabric |
| trust_cert | Optional | false | Keep false in production |
| threads | Optional | 4 | Parallel model execution |

Tip: Validate your YAML before pasting it into Paradime. A single indentation error will cause a silent connection failure. Use yamlformatter.org for a quick check.

Credential Storage and Rotation

Paradime never stores credentials in its application database. All secrets—warehouse passwords, Service Principal client secrets, environment variable values—are encrypted at rest and in transit inside HashiCorp Vault running on Paradime's own AWS infrastructure.

Key security properties:

  • Each company gets an isolated Vault path; no cross-tenant access is possible.

  • Developer profiles.yml credentials are sandboxed inside each developer's own Kubernetes pod—no developer can read another's secrets.

  • Queried data is held in memory only and erased on page refresh (Paradime is a data processor, not a data store).

  • SOC 2 audited—report available at trust.paradime.io.

Rotation workflow: When your Entra ID Service Principal secret expires, update the credential in Paradime under Settings → Connections, edit the relevant connection, and replace the client_secret value. The change takes effect immediately for the next dbt™ run—no redeploy needed.

Firewall allowlisting: If your Fabric workspace enforces IP restrictions, add the Paradime egress IP for your data region:

| Region | IP Address |
|---|---|
| 🇪🇺 EU – Frankfurt (eu-central-1) | 18.198.76.50 |
| 🇪🇺 EU – Ireland (eu-west-1) | 3.248.153.24 |
| 🇪🇺 EU – London (eu-west-2) | 3.8.231.109 |
| 🇺🇸 US East – N. Virginia (us-east-1) | 52.4.225.182 |

Configure the Code IDE (Development)

Dev Target Strategy (Schemas / Namespaces)

The goal in development is isolation: every developer writes to their own schema so nobody overwrites another's work—or worse, production data.

Recommended pattern for Fabric:

Dev schema:   dbt_<username>         → dbt_jsmith
Prod schema:  <clean custom name>    → analytics, staging, marketing
CI schema:    dbt_ci_pr_<number>     → dbt_ci_pr_142

Enforce this with a generate_schema_name macro override in macros/get_custom_schema.sql:

{% macro generate_schema_name(custom_schema_name, node) -%}
  {%- set default_schema = target.schema -%}
  {%- if target.name == 'prod' -%}
    {%- if custom_schema_name is none -%}
      {{ default_schema }}
    {%- else -%}
      {{ custom_schema_name | trim }}
    {%- endif -%}
  {%- else -%}
    {%- if custom_schema_name is none -%}
      {{ default_schema }}
    {%- else -%}
      {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
  {%- endif -%}
{%- endmacro %}

With this macro:

| target.name | Model's config(schema='staging') | Resulting schema |
|---|---|---|
| dev (dbt_jsmith) | staging | dbt_jsmith_staging |
| prod (analytics) | staging | staging |

flowchart TD
    A["dbt run --target dev"] --> B{target.name == prod?}
    B -- No --> C["default_schema + '_' + custom_schema_name"]
    B -- Yes --> D["custom_schema_name only"]
    C --> E["dbt_jsmith_staging"]
    D --> F["staging"]

Figure 3 — The generate_schema_name macro routes dev builds into prefixed schemas and prod builds into clean schema names.

Model Build Patterns to Avoid Conflicts

  1. Limit data in dev — Add a filter to expensive models so dev runs finish fast and consume fewer Fabric CUs.

  2. Use dbt build instead of separate dbt run + dbt test — This runs each model and its tests in DAG order, catching failures earlier.

  3. Avoid ephemeral materializations — The dbt-fabric adapter does not support nested CTEs (a T-SQL limitation). Ephemeral models that are referenced by other ephemeral models will error. Use view as the lightweight alternative.

  4. Use tsql-utils instead of dbt-utils — Fabric's T-SQL dialect requires the tsql-utils package for cross-database macros like surrogate_key and date_spine.
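The first pattern can be sketched as a target-aware filter inside a model. The events source and event_date column below are placeholders — adapt them to your own project:

```sql
-- models/staging/stg_events.sql — dev-only row limit (illustrative)
select *
from {{ source('raw', 'events') }}
{% if target.name != 'prod' %}
-- Keep dev/CI builds small and cheap on Fabric CUs
where event_date >= dateadd(day, -7, getdate())
{% endif %}
```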

Configure Bolt Scheduler (Production)

Prod Target Configuration

  1. Navigate to Settings → Connections in Paradime.

  2. Click Add New next to the Scheduler section.

  3. Select Microsoft Fabric.

  4. Enter the production profile details — production server, database, schema, and Service Principal credentials.

  5. Set Target to prod.

  6. Click Test Connection.
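The production profile mirrors the dev example from earlier, swapping in the production schema and Service Principal credentials. A sketch with placeholder values — the env_var() indirection is a common dbt pattern for keeping secrets out of files, not a Paradime requirement:

```yaml
# Warehouse target (dbt-fabric) — production
my_project:
  target: prod
  outputs:
    prod:
      type: fabric
      driver: 'ODBC Driver 18 for SQL Server'
      server: your-workspace.fabric.microsoft.com
      port: 1433
      database: analytics_warehouse
      schema: analytics
      authentication: ServicePrincipal
      tenant_id: "{{ env_var('FABRIC_TENANT_ID') }}"
      client_id: "{{ env_var('FABRIC_CLIENT_ID') }}"
      client_secret: "{{ env_var('FABRIC_CLIENT_SECRET') }}"
      threads: 4
```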

Important: Always use a Service Principal for the Bolt Scheduler connection. Interactive authentication (CLI / browser) cannot run in headless mode. Make sure the SPN is a Member of the Fabric workspace and has CONNECT on the warehouse.

Schedule + Job Ownership

Create your first Bolt schedule:

  1. Open Bolt from the Paradime home screen.

  2. Click + New Schedule → + Create New Schedule.

  3. Configure:

| Field | Example |
|---|---|
| Type | Standard |
| Name | daily_full_build |
| Commands | dbt build --target prod |
| Git Branch | main |
| Owner Email | data-team@company.com |
| Trigger | Cron: 0 6 * * * (daily at 06:00 UTC) |
| Slack Notify On | failed |
| Slack Channel | #data-alerts |
  4. Click Save.




Figure 4 — End-to-end Bolt schedule execution flow, from cron trigger through Fabric materialization to Slack notification.

Bolt also supports YAML-based scheduling (configuration-as-code) so your schedule definitions can live alongside your dbt™ project in version control.
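As an illustration only — the file name and key names below are assumptions, so verify them against Paradime's Bolt documentation before use — a configuration-as-code version of the schedule above might look like:

```yaml
# paradime_schedules.yml (hypothetical sketch — verify keys against Paradime docs)
version: 1
schedules:
  - name: daily_full_build
    schedule: '0 6 * * *'        # daily at 06:00 UTC
    commands:
      - dbt build --target prod
    git_branch: main
    owner_email: data-team@company.com
    slack_on: [failed]
    slack_notify: ['#data-alerts']
```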

Validate with SQL Scratchpad

Once both connections are saved, validate them before you write a single model.

Run a Simple Query to Confirm Context

  1. Open the Code IDE in Paradime.

  2. Create a new Scratchpad tab (files land in the gitignored paradime_scratch/ folder).

  3. Run a context-verification query that returns the current database, default schema, and login.

  4. Click Preview Data. Verify the values match your Code IDE connection settings.
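A minimal T-SQL version of that check, using standard SQL Server metadata functions that Fabric Warehouse supports:

```sql
-- Confirm which database, default schema, and identity this connection uses
SELECT
    DB_NAME()     AS current_database,
    SCHEMA_NAME() AS default_schema,
    SUSER_SNAME() AS login_name;
```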

Verify Create-Table Permissions

Next, confirm you can actually write objects:

-- Attempt to create and drop a validation table
CREATE TABLE dbt_validation_test (id INT);
DROP TABLE dbt_validation_test;

If this succeeds, your identity has the required DDL permissions. If it fails with a permissions error, revisit the Identity / Auth Setup section above.

Bonus: Run dbt debug from the Code IDE terminal. It performs a comprehensive connection check—driver, authentication, database reachability, and schema permissions—in one shot.

Troubleshooting

Auth / Permission Issues

| Symptom | Likely cause | Fix |
|---|---|---|
| dbt was unable to connect to the specified database | Wrong server hostname, capacity paused, or SPN not added to workspace | Double-check the server value; confirm the capacity is active; add the SPN as a workspace Member |
| Login failed for user | Incorrect tenant_id, client_id, or client_secret | Regenerate the client secret in Entra ID; update it in Paradime connection settings |
| CREATE SCHEMA permission denied | Identity lacks DDL rights | Grant the identity the Admin workspace role, or set schema_authorization in profiles.yml |
| The service principal is not authorized | SPN not enabled in Fabric admin portal | Admin portal → Tenant settings → enable "Service principals can use Fabric APIs" (Microsoft docs) |
| ODBC Driver 18 for SQL Server not found | Driver missing from runtime | Not an issue in Paradime (pre-installed); for local dev, install via Microsoft downloads |

Workspace / Capacity Limitations

| Symptom | Likely cause | Fix |
|---|---|---|
| Queries hang or time out | Capacity throttled (CU overage) | Check the Fabric Capacity Metrics app; scale up the SKU or reduce concurrent workloads |
| Unable to complete the action because your organization's Fabric capacity has been exceeded | Capacity in rejection phase | Wait for CU replenishment or increase the capacity SKU |
| Connection works in IDE but not in Bolt | Firewall blocks Paradime's egress IP | Allowlist the correct Paradime IP for your region |
| Lakehouse writes fail via dbt-fabric adapter | SQL analytics endpoint is read-only | Switch to the dbt-fabricspark adapter (type: fabricspark) for write operations against a Lakehouse |

Target / Schema Naming Mismatches

| Symptom | Likely cause | Fix |
|---|---|---|
| Models land in dbt_jsmith_staging in prod | generate_schema_name macro not overridden | Add the custom macro shown in the Dev Target Strategy section |
| Models land in dbo instead of custom schema | schema field missing or set to dbo in profiles.yml | Update the schema value in your Paradime connection config |
| Tests create views in wrong schema | Known dbt-fabric issue (#168) | Override the test schema macro or pin to a dbt-fabric version with the fix |
| dbt run --target ci uses dev credentials | Paradime connection for TurboCI not configured | Add a separate connection under the TurboCI section in Settings → Connections |

Summary

Getting dbt™ Fabric integration right comes down to three decisions:

  1. Warehouse vs Lakehouse — Choose dbt-fabric (type: fabric) for SQL-first transformations with full write support; choose dbt-fabricspark (type: fabricspark) only when you need Spark/Python models against a Lakehouse.

  2. Auth method — Use Service Principal for production and CI (headless); use Entra ID password or CLI for interactive development.

  3. Schema isolation — Override generate_schema_name so dev schemas are prefixed (dbt_<username>_<custom>) and prod schemas stay clean (analytics, staging).

With Paradime managing the connection plumbing—separate Code IDE, Bolt, and TurboCI slots backed by HashiCorp Vault—your team can focus on building models instead of debugging profiles.yml.


Copyright © 2026 Paradime Labs, Inc.

Made with ❤️ in San Francisco ・ London

*dbt® and dbt Core® are federally registered trademarks of dbt Labs, Inc. in the United States and various jurisdictions around the world. Paradime is not a partner of dbt Labs. All rights therein are reserved to dbt Labs. Paradime is not a product or service of or endorsed by dbt Labs, Inc.
