dbt™ and Starburst and Trino on Paradime: Fast Connection and Faster Validation

Feb 26, 2026

Table of Contents

How to Connect dbt™ to Starburst and Trino Using Paradime

Setting up dbt™ with Starburst or Trino shouldn't feel like deciphering a flight manual. Yet the combination of catalogs, schemas, session properties, and authentication methods trips up even experienced analytics engineers.

This guide walks you through every step of connecting dbt™ to Starburst or Trino inside Paradime — from prerequisites to production scheduling — with validation queries you can run in under a minute.

What This Guide Covers

How Paradime Simplifies dbt™ + Trino/Starburst Setup

With dbt Core™, connecting to Starburst or Trino means manually configuring profiles.yml, managing credentials in local files, and context-switching between your terminal and query editor. Paradime collapses that workflow into a single platform: a browser-based Code IDE for development, a Bolt Scheduler for production runs, and a SQL Scratchpad for instant validation — all sharing the same secure credential store.

flowchart LR
    A[Starburst / Trino Cluster] -->|Connector| B[Paradime]
    B --> C[Code IDE\ndev target]
    B --> D[Bolt Scheduler\nprod target]
    B --> E[SQL Scratchpad\nvalidation]

Figure 1: Paradime connects your Starburst/Trino cluster to three work surfaces — development, production, and ad-hoc validation — through a single managed connection.

How to Validate Your Connection Immediately

Before you build a single dbt™ model, run a quick sanity check in Paradime's SQL Scratchpad:

-- Confirm your cluster is reachable and list available catalogs
SHOW CATALOGS

If you see a list of catalogs come back, your connection is live. We'll expand on this in the validation section below.

Prerequisites for Starburst/Trino

Before you touch Paradime's settings page, gather the following from your Starburst or Trino administrator.

Host/Port and HTTP Path (If Relevant)

| Parameter | Description | Example |
| --- | --- | --- |
| Host | The hostname of your cluster. Do not include the http:// or https:// prefix. | analytics.galaxy.starburst.io |
| Port | The port your cluster listens on. Defaults to 443 for TLS-enabled clusters. | 443 |

For Starburst Galaxy, your host typically follows the pattern <account>-<cluster>.trino.galaxy.starburst.io. For Starburst Enterprise or open-source Trino, use the hostname your admin provides.

Auth Method (Basic/LDAP/OAuth/JWT) and TLS Certs

Paradime's Trino connector supports multiple authentication methods:

  • LDAP — Username and password authentication via LDAP. Most common for Starburst Enterprise.

  • JWT — Token-based authentication. Common when clusters are behind a gateway or service mesh.

  • OAuth Console — Browser-based OAuth flow. Preferred in Starburst Galaxy.

If your cluster enforces TLS (it should), confirm that your port is correct (typically 443) and that the http_scheme is set to https. If your organization uses self-signed certificates, work with your admin to ensure the certificate chain is trusted by Paradime's infrastructure. See Paradime IP Restrictions to whitelist the correct egress IPs.

Catalog and Schema Selection

This is where Starburst/Trino terminology diverges from traditional databases. Let's clarify:

flowchart TD
    CAT[Catalog\ne.g. analytics_lakehouse] --> SCH1[Schema\ne.g. raw]
    CAT --> SCH2[Schema\ne.g. staging]
    CAT --> SCH3[Schema\ne.g. marts]
    SCH1 --> T1[Table: raw.orders]
    SCH2 --> T2[Table: staging.stg_orders]
    SCH3 --> T3[Table: marts.fct_orders]

Figure 2: In Trino, a catalog is a named connection to a data source (like a Hive metastore or Iceberg lake). A schema is a logical grouping of tables within that catalog. The fully qualified address of any table is catalog.schema.table.

Think of catalogs like databases in Snowflake or BigQuery projects — except each catalog can connect to an entirely different storage backend (S3 via Hive, PostgreSQL, MySQL, etc.). When you select a catalog and schema in Paradime, you're telling dbt™ where to read source data from and where to materialize models.

Key rule: The user account you authenticate with must have read and write access to the target catalog and schema. dbt™ needs to create tables and views there.
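To make the rule concrete, here is a quick write-access check you can run in the SQL Scratchpad. The catalog and schema names are this guide's examples, and the table name is a made-up throwaway; substitute your own:

```sql
-- Fully qualified address: catalog.schema.table
-- Verify write access by creating and dropping a throwaway table
CREATE TABLE analytics_lakehouse.staging.paradime_write_check AS
SELECT 1 AS ok;

DROP TABLE analytics_lakehouse.staging.paradime_write_check;
```

If the CREATE TABLE step fails with a permission error, resolve it with your admin before running any dbt™ commands.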

Create a Starburst Connection in Paradime

Settings → Connections

  1. Click Settings in the Paradime top menu bar.

  2. In the left sidebar, click Connections.

  3. Click Add New next to the Code IDE section.

  4. Select Trino from the warehouse list.

Connection Fields Explained (Catalog/Schema/Role/Session Properties)

You'll see a profile configuration field accepting YAML. Here's a complete example for LDAP authentication:

type: trino
method: ldap
user: analyst@company.com
password: "{{ env_var('TRINO_PASSWORD') }}"
host: analytics.galaxy.starburst.io
catalog: analytics_lakehouse
schema: staging
port: 443
threads: 4

A breakdown of every field:

| Field | Required | Description |
| --- | --- | --- |
| type | ✅ | Always trino. |
| method | ✅ | Authentication method: ldap, jwt, oauth_console, kerberos, or certificate. |
| user | ✅ | Your username. For Starburst Galaxy, append the role: analyst@company.com/accountadmin. |
| password | ✅ (LDAP) | Password for LDAP auth. Use environment variables for safety. |
| host | ✅ | Cluster hostname (no protocol prefix). |
| catalog | ✅ | The Trino catalog your models target (maps to database in dbt™ terminology). |
| schema | ✅ | The default schema for materialized models. |
| port | ✅ | Typically 443. |
| threads | Optional | Number of concurrent model runs. Default: 1. |
| roles | Optional | Catalog-specific role assignments, e.g., system: analyst. |
| session_properties | Optional | Custom Trino session settings (e.g., query_max_run_time: '10m'). |
| http_scheme | Optional | http or https. Defaults to http; set to https for TLS clusters. |

For JWT authentication, replace user/password with:

method: jwt
jwt_token: "{{ env_var('TRINO_JWT_TOKEN') }}"
  1. Enter a dbt™ Profile Name that matches the profile: key in your dbt_project.yml.

  2. Set the Target field to dev.

  3. Adjust the Schema and Threads fields as needed.

Secure Credential Storage

Never paste raw passwords into configuration fields. Paradime stores all credentials in HashiCorp Vault, encrypted at rest and in transit. Each developer's profiles.yml is isolated in a dedicated file-system folder that no other user can access.

For an extra layer of safety, reference secrets via environment variables:

password: "{{ env_var('TRINO_PASSWORD') }}"

Set TRINO_PASSWORD under Settings → Workspaces → Environment Variables → Code IDE.

Configure the Code IDE Target (Development)

Dev Schema Strategy

In development, you want every engineer to write to their own isolated schema so models don't collide. The standard dbt™ pattern is to use a custom generate_schema_name macro. Create macros/get_custom_schema.sql:

{% macro generate_schema_name(custom_schema_name, node) -%}
  {%- set default_schema = target.schema -%}
  {%- if target.name == 'prod' -%}
    {%- if custom_schema_name is none -%}
      {{ default_schema }}
    {%- else -%}
      {{ custom_schema_name | trim }}
    {%- endif -%}
  {%- else -%}
    {%- if custom_schema_name is none -%}
      {{ default_schema }}
    {%- else -%}
      {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
  {%- endif -%}
{%- endmacro %}

With this macro in place and your Code IDE target set to dev with schema: dbt_john:

| Environment | Custom Schema Config | Resulting Schema |
| --- | --- | --- |
| Dev (John) | staging | dbt_john_staging |
| Dev (Jane) | staging | dbt_jane_staging |
| Prod | staging | staging |

This isolation means John and Jane can dbt run simultaneously without overwriting each other's tables.
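After a dev run, you can confirm the isolation worked by listing the prefixed schemas. SHOW SCHEMAS accepts a LIKE pattern; the catalog and prefix below follow this guide's examples:

```sql
-- John's dev schemas should appear, e.g. dbt_john_staging
SHOW SCHEMAS FROM analytics_lakehouse LIKE 'dbt_john%'
```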

Working with Multiple Catalogs

Trino's superpower is query federation — querying across catalogs backed by entirely different storage systems. In dbt™, the catalog field in your connection maps to target.database. To materialize models in a different catalog, use the database config:

-- models/marts/fct_orders.sql
{{ config(
    database='gold_lakehouse',
    schema='marts'
) }}

SELECT * FROM {{ ref('stg_orders') }}

For more advanced routing, override the generate_database_name macro in your project:

{% macro generate_database_name(custom_database_name=none, node=none) -%}
  {%- set default_database = target.database -%}
  {%- if custom_database_name is none -%}
    {{ default_database }}
  {%- else -%}
    {{ custom_database_name | trim }}
  {%- endif -%}
{%- endmacro %}

This lets you read from a raw_postgres catalog and write to an analytics_lakehouse catalog — all within one dbt™ project and one Paradime connection.
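Federation also works directly in SQL. Here is a sketch of a cross-catalog join using the catalog and table names from this guide's examples; the order_id join column is illustrative, not a known column of these tables:

```sql
-- Cross-catalog join: a Postgres source against lakehouse staging
SELECT o.order_id, s.order_status
FROM raw_postgres.public.orders AS o
JOIN analytics_lakehouse.staging.stg_orders AS s
  ON o.order_id = s.order_id
LIMIT 10;
```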

flowchart LR
    subgraph Trino Cluster
        C1[raw_postgres catalog]
        C2[analytics_lakehouse catalog]
    end
    subgraph dbt Project
        SRC[sources\nraw_postgres.public.orders] --> STG[stg_orders\nanalytics_lakehouse.staging]
        STG --> FCT[fct_orders\nanalytics_lakehouse.marts]
    end

Figure 3: A single dbt™ project reading from one catalog and materializing into another via Trino federation.

Configure Bolt Scheduler (Production)

Prod Schema + Permissions

Bolt is Paradime's production orchestrator. It uses a separate connection from the Code IDE, ensuring production runs are never affected by development configuration changes.

  1. Navigate to Settings → Connections.

  2. Click Add New next to the Bolt Schedules section.

  3. Select Trino and provide production credentials.

Your production profile should target clean schema names (no developer prefix):

type: trino
method: ldap
user: svc_dbt_prod@company.com
password: "{{ env_var('TRINO_PROD_PASSWORD') }}"
host: analytics.galaxy.starburst.io
catalog: analytics_lakehouse
schema: production
port: 443
threads: 8

Set the Target field to prod. This works in tandem with the generate_schema_name macro shown earlier — when target.name == 'prod', models materialize to their clean, intended schema names.

Permissions tip: The production service account (svc_dbt_prod) should have CREATE, INSERT, SELECT, and DROP privileges on the target catalog and schema. In Starburst Enterprise, assign these via built-in access control (BIAC). In Starburst Galaxy, use the role system.
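What those privileges look like depends on your access-control system. As one sketch, Trino's built-in SQL grants operate at the table level; the table and user names below are this guide's examples, and BIAC or Galaxy roles are managed through their own interfaces instead:

```sql
-- Table-level grants for the production service account
GRANT SELECT, INSERT, DELETE
ON analytics_lakehouse.production.fct_orders
TO USER "svc_dbt_prod@company.com";
```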

Schedule Safety: Separate Credentials and Targets

flowchart TD
    subgraph Paradime
        DEV[Code IDE Connection\nuser: john@company.com\ntarget: dev\nschema: dbt_john]
        PROD[Bolt Scheduler Connection\nuser: svc_dbt_prod@company.com\ntarget: prod\nschema: production]
        CI[TurboCI Connection\nuser: svc_dbt_ci@company.com\ntarget: ci\nschema: dbt_ci]
    end
    DEV -->|dev runs| WH[Starburst / Trino Cluster]
    PROD -->|prod runs| WH
    CI -->|CI runs| WH


Figure 4: Paradime enforces environment isolation by maintaining separate connection configurations for dev, prod, and CI.

This separation ensures:

  • A developer can't accidentally run a production build — the Code IDE physically cannot use production credentials.

  • Production credentials are never exposed to individual developers — only the Bolt Scheduler accesses them.

  • CI runs (TurboCI) use a third, dedicated connection writing to a disposable schema like dbt_ci.
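Because the CI schema is disposable, it can be reset between runs. A sketch in Trino SQL, using this guide's example names (note that CASCADE support varies by Trino version and connector):

```sql
-- Reset the disposable CI schema between runs
DROP SCHEMA IF EXISTS analytics_lakehouse.dbt_ci CASCADE;
CREATE SCHEMA IF NOT EXISTS analytics_lakehouse.dbt_ci;
```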

To create a Bolt schedule:

  1. Open the Bolt application from the Paradime Home Screen.

  2. Click + New Schedule → + Create New Schedule.

  3. Configure the schedule name, commands (e.g., dbt build), git branch (main), trigger type, and cron expression.

  4. Save and publish.

See Creating Bolt Schedules for the full walkthrough.

Paradime Exclusive: Validate in SQL Scratchpad

Paradime's SQL Scratchpad is a gitignored, persistent workspace inside the Code IDE. Use it to verify your connection is healthy before running any dbt™ commands.

SHOW CATALOGS / SHOW SCHEMAS Sanity Checks

Open a Scratchpad tab and run:

-- Step 1: List all catalogs your account can see
SHOW CATALOGS

Expected output: a one-column list of catalog names. The list varies by cluster, but it should include your target catalog (analytics_lakehouse in this guide's examples) alongside Trino's built-in system catalog.
Then confirm your target schema exists:

-- Step 2: List schemas in your target catalog
SHOW SCHEMAS FROM analytics_lakehouse

Expected output: the schemas in that catalog. Your target schema (staging here) should be present, along with information_schema, which Trino exposes in every catalog.
If your catalog or schema is missing from these results, stop here — dbt™ will fail with the same error downstream. Fix permissions or names before proceeding.

Run a Lightweight Query Against a Known Table

Pick a table you know exists and run a bounded query:

-- Step 3: Verify you can read data
SELECT * FROM analytics_lakehouse.staging.stg_orders LIMIT 5


If this returns rows, your authentication, catalog, schema, and read permissions are all confirmed. You're ready to dbt run.

flowchart LR
    S1[SHOW CATALOGS] -->|catalogs listed?| S2[SHOW SCHEMAS FROM catalog]
    S2 -->|schema found?| S3[SELECT * FROM catalog.schema.table LIMIT 5]
    S3 -->|rows returned?| S4[✅ Connection validated]
    S1 -->|error?| FIX[Check host / port / auth]
    S2 -->|missing?| FIX2[Check catalog name or permissions]
    S3 -->|denied?| FIX3[Check schema permissions]

Figure 5: A three-step validation flow you can run in SQL Scratchpad before ever touching dbt™.

Common Issues and Fixes

Catalog Not Found / Schema Not Found

Symptom: a query or dbt™ run fails with an error along the lines of Catalog 'analytics_lakehouse' does not exist or Schema 'staging' does not exist (exact wording varies by Trino version).

Causes and fixes:

| Cause | Fix |
| --- | --- |
| Typo in catalog or schema field | Run SHOW CATALOGS and SHOW SCHEMAS FROM <catalog> in SQL Scratchpad to get exact names. Names must match exactly; copy them from the SHOW output rather than retyping. |
| Catalog not mounted on your cluster | Ask your Starburst admin to verify the catalog is configured in the cluster's catalog properties. |
| Insufficient permissions | Your user may be able to authenticate but lack SHOW SCHEMAS access. Request the appropriate role or grants from your admin. |
| Schema doesn't exist yet | dbt™ can auto-create schemas it materializes into, but your user needs CREATE SCHEMA privilege on the catalog. Run CREATE SCHEMA IF NOT EXISTS analytics_lakehouse.staging manually if needed. |
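If the SHOW output is long, the standard information_schema views give you a filterable alternative; the catalog name and pattern below follow this guide's examples:

```sql
-- Same data as SHOW SCHEMAS, but filterable with WHERE
SELECT schema_name
FROM analytics_lakehouse.information_schema.schemata
WHERE schema_name LIKE 'dbt_%';
```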

Auth Failures

Symptom: authentication is rejected (often as a generic 401 Unauthorized), or the connection fails at the TLS layer with an error such as:

HTTPSConnectionPool: Max retries exceeded ... [SSL: CERTIFICATE_VERIFY_FAILED]

Causes and fixes:

| Cause | Fix |
| --- | --- |
| Wrong method value | Double-check that method matches your cluster's auth config. Using ldap against a cluster that expects jwt will fail silently or with a generic 401. |
| Incorrect username format | For Starburst Galaxy, the username must include the role suffix: analyst@company.com/accountadmin. For Starburst Enterprise, use the LDAP username. |
| Password or token expired | Rotate credentials and update the environment variable in Settings → Environment Variables. |
| SSL certificate not trusted | Ensure http_scheme: https is set and your cluster's TLS certificate is signed by a trusted CA. Self-signed certificates require additional configuration at the infrastructure level. |
| Paradime IP not whitelisted | Your firewall must allow inbound connections from Paradime's egress IPs. |

Session Property Mismatches

Symptom: queries fail during session setup with an error such as Session property 'query_max_run_time' does not exist, or a complaint about an invalid property value (exact wording varies by Trino version).

Causes and fixes:

| Cause | Fix |
| --- | --- |
| Property name doesn't exist in your Trino version | Session property names change between Trino versions. Verify available properties by running SHOW SESSION in SQL Scratchpad. |
| Property scoped to a specific catalog connector | Some properties are catalog-scoped (e.g., hive.insert_existing_partitions_behavior). Prefix them correctly in session_properties. |
| Invalid value type | A property expecting a duration (e.g., '10m') will reject an integer like 10. Check the Trino session property documentation for expected types. |
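Before adding a property to session_properties, confirm its name and expected type against your cluster. The property names below are common Trino ones, but verify them against your version; the catalog prefix is your catalog's mounted name, assumed here to be hive:

```sql
-- List every session property with its current value and type
SHOW SESSION;

-- Global property: durations are strings, not integers
SET SESSION query_max_run_time = '10m';

-- Catalog-scoped property: prefix with the catalog's mounted name
SET SESSION hive.insert_existing_partitions_behavior = 'OVERWRITE';
```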

You can also set session properties on a per-model basis using dbt™ hooks instead of globally in profiles.yml:

{{ config(
    pre_hook="SET SESSION query_max_run_time = '15m'"
) }}

This is safer because a misconfigured global session property will break every model, while a per-model hook limits the blast radius.

Wrapping Up

Connecting dbt™ to Starburst or Trino in Paradime comes down to three things: getting the right credentials from your admin, entering them in the right place, and validating before you build. The SHOW CATALOGS → SHOW SCHEMAS → SELECT LIMIT 5 pattern gives you confidence in under a minute.

If something goes wrong, the fix is almost always in the catalog name, the auth method, or the permissions — and you can diagnose all three from SQL Scratchpad without leaving Paradime.


Interested to Learn More?
Try Out the Free 14-Day Trial


Copyright © 2026 Paradime Labs, Inc.

Made with ❤️ in San Francisco ・ London

*dbt® and dbt Core® are federally registered trademarks of dbt Labs, Inc. in the United States and various jurisdictions around the world. Paradime is not a partner of dbt Labs. All rights therein are reserved to dbt Labs. Paradime is not a product or service of or endorsed by dbt Labs, Inc.
