dbt™ and BigQuery on Paradime: Service Account Setup in Minutes

Feb 26, 2026

dbt™ BigQuery Setup: The Complete Guide to a Secure, Production-Ready Connection with Paradime

Setting up dbt™ with BigQuery shouldn't feel like threading a needle in the dark. Between wrangling gcloud auth locally, configuring profiles.yml across machines, and praying your service account has the right IAM roles — teams lose hours before writing a single model.

This guide walks you through a safe, repeatable dbt™ BigQuery setup using Paradime. You'll learn how to apply the principle of least privilege from day one, cleanly separate development from production, and validate everything before your first dbt run.

Whether you're an analytics engineer connecting BigQuery for the first time, or a platform team standardizing your org's dbt™ setup, this guide has you covered.

Why Use Paradime for BigQuery dbt™ Setup?

Most dbt™ BigQuery setup guides start with local CLI installation, gcloud auth, and hand-editing profiles.yml. That approach works for solo developers, but it introduces friction and risk as teams grow.

Paradime eliminates these pain points by providing a managed, browser-based dbt™ development environment with native BigQuery integration.

Skip Local gcloud Auth + profiles.yml

With dbt Core™ installed locally, every developer must:

  1. Install the gcloud CLI

  2. Run gcloud auth application-default login

  3. Manually configure ~/.dbt/profiles.yml with the correct project, dataset, and credentials

Here's what a typical local profiles.yml looks like for BigQuery:
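A representative example, assuming the standard dbt-bigquery service-account method (project, dataset, and keyfile path are placeholders):

```yaml
# ~/.dbt/profiles.yml
my_bigquery_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      method: service-account
      project: my-gcp-project          # GCP project ID
      dataset: dbt_john                # developer's personal dataset
      keyfile: /Users/john/.dbt/keys/dbt-paradime-key.json
      location: US                     # must match dataset region
      threads: 4
```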

Every developer needs access to that keyfile, must keep the path consistent, and has to update the file when anything changes. This does not scale.

With Paradime, the connection is configured once at the workspace level through the UI. Developers log in and start working — no local setup, no credential files floating around on laptops.

Separate Dev and Prod Targets Cleanly

One of the most critical (and commonly botched) aspects of any dbt™ BigQuery setup is the separation between development and production environments.

Paradime enforces this separation architecturally:

  • Code IDE connections target your dev dataset (e.g., dbt_john, dbt_jane)

  • Bolt Scheduler connections target your prod dataset (e.g., analytics, dbt_prod)

Figure 1: Paradime enforces environment separation between Code IDE (dev) and Bolt Scheduler (prod) connections.

Because these are separate connection configurations with separate credentials, there's zero risk of a developer accidentally writing to production during ad-hoc development.

Prerequisites (BigQuery + IAM)

Before touching Paradime, you need a properly configured Google Cloud project with the right IAM guardrails. This section follows the principle of least privilege — granting only the permissions dbt™ actually needs.

Create/Select a GCP Project and Datasets

  1. Open the BigQuery Console and select or create a GCP project.

  2. Create your BigQuery datasets. In BigQuery, datasets are the equivalent of schemas. You'll need at least one development dataset per developer (e.g., dbt_john) and one production dataset (e.g., analytics).

⚠️ Important: All datasets that dbt™ will reference in a single run must be in the same location (e.g., US multi-region). Mixing US and EU datasets in one query causes a notFound / location mismatch error.

Create a Service Account

Create a dedicated service account for dbt™ — don't reuse a personal Google account or a shared admin account.
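A minimal sketch using the gcloud CLI (account name, display name, and key filename are placeholders):

```shell
# Create a dedicated service account for dbt
gcloud iam service-accounts create dbt-paradime \
  --display-name="dbt Paradime"

# Generate a JSON key to upload to Paradime
gcloud iam service-accounts keys create dbt-paradime-key.json \
  --iam-account=dbt-paradime@my-gcp-project.iam.gserviceaccount.com
```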

🔒 Security tip: Treat this JSON key like a password. Never commit it to Git. Upload it directly to Paradime's encrypted connection settings.

Minimum IAM Roles to Run dbt™ (Dataset-Level Permissions)

dbt™ needs to run jobs and read/write data. The minimum IAM roles are:

| IAM Role | Purpose | Scope |
|---|---|---|
| BigQuery Job User (roles/bigquery.jobUser) | Run queries and jobs | Project-level |
| BigQuery Data Editor (roles/bigquery.dataEditor) | Create/update/delete tables in datasets | Dataset-level or project-level |

For dataset-level grants (recommended for least privilege), use the BigQuery Console or a SQL DCL statement:
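Using BigQuery's SQL DCL syntax, a dataset-level grant might look like this (project, dataset, and service account names are placeholders):

```sql
-- Grant Data Editor on a single dataset only
GRANT `roles/bigquery.dataEditor`
ON SCHEMA `my-gcp-project.dbt_john`
TO "serviceAccount:dbt-paradime@my-gcp-project.iam.gserviceaccount.com";
```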

Figure 2: Least-privilege IAM strategy — Job User at the project level and Data Editor scoped to individual datasets.

Why not just grant BigQuery Admin? Because that violates least privilege. BigQuery Admin can delete datasets, manage access controls, and modify project-level settings — none of which dbt™ needs.

Create the BigQuery Connection in Paradime

With your GCP project and service account ready, it's time to wire everything together in Paradime.

Settings → Connections

  1. Click Settings in the top menu bar of the Paradime interface

  2. Click "Connections" in the left sidebar

  3. Click "Add New" next to Code IDE Environment

  4. Select "BigQuery"

Upload/Enter Service Account JSON Securely

Paradime offers two authentication methods:

  • Service Account JSON (recommended for teams) — Upload your .json key file directly. Paradime stores it encrypted and never exposes it to individual developers.

  • BigQuery OAuth — Each developer authorizes via their Google account. Better for personal dev sandboxes, but requires each user to manage their own scopes.

For most teams, Service Account JSON is the right choice because:

  • A single credential is managed centrally

  • Key rotation happens in one place

  • Developers never see or handle credentials

Set Project, Default Dataset, and Location

Fill in the workspace-level fields:

| Field | Description | Example |
|---|---|---|
| Profile Name | Must match the profile in dbt_project.yml | my_bigquery_project |
| Target | Environment name | dev |
| Dataset Location | Must match your BigQuery dataset region | US |
| Project ID | Your GCP project identifier | my-gcp-project |
| Service Account JSON | The uploaded key file | dbt-paradime-key.json |

And the user-level fields (each developer sets their own):

| Field | Description | Example |
|---|---|---|
| Dataset | Developer's personal dev dataset | dbt_john |
| Threads | Parallel execution threads | 4 |

💡 Pro tip: The Dataset Location field in Paradime must match your BigQuery dataset's actual region. If your datasets are in US, set location to US. Mismatches cause cryptic notFound errors at runtime.

Configure Development (Code IDE)

The Code IDE is where developers write, test, and iterate on dbt™ models. Getting the development configuration right prevents collisions and keeps production safe.

Per-Developer Schema/Dataset Strategy

The gold standard for multi-developer dbt™ projects is per-developer datasets. Each developer gets their own BigQuery dataset (configured as their dataset in the Paradime user-level connection settings):

  • John → dbt_john

  • Jane → dbt_jane

  • CI → dbt_pr_123

When dbt™'s default generate_schema_name macro runs, it concatenates the target schema with any custom schema you define in your models. For a model configured with schema: staging running under John's connection:
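The default behavior, sketched with the example values above:

```
target.schema     = dbt_john
custom schema     = staging
resulting dataset = dbt_john_staging
```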

This means every developer's materializations land in isolated datasets with zero risk of overwriting each other.

To get clean schema names in production while keeping developer isolation in dev, add this macro to your project:
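A common version of this override, adapted from the standard pattern in the dbt™ documentation (the target name prod is an assumption; match it to your own prod target):

```sql
-- macros/generate_schema_name.sql
{% macro generate_schema_name(custom_schema_name, node) -%}
    {%- set default_schema = target.schema -%}
    {%- if custom_schema_name is none -%}
        {{ default_schema }}
    {%- elif target.name == 'prod' -%}
        {# In prod, use the clean custom name (e.g., staging, marts) #}
        {{ custom_schema_name | trim }}
    {%- else -%}
        {# In dev, keep the developer prefix (e.g., dbt_john_staging) #}
        {{ default_schema }}_{{ custom_schema_name | trim }}
    {%- endif -%}
{%- endmacro %}
```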

Default Target and Naming Conventions

Establish team conventions early:

| Convention | Recommendation |
|---|---|
| Dev target name | dev |
| Dev dataset prefix | dbt_ (e.g., dbt_john) |
| Threads (dev) | 4 (lower to conserve BigQuery slots) |
| Threads (prod) | 8 or higher |
| CI dataset prefix | dbt_ci_ |

In your dbt_project.yml, set the profile name to match what you configured in Paradime:
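For example (the project name is a placeholder; the profile value must match the Profile Name from your Paradime connection):

```yaml
# dbt_project.yml
name: my_dbt_project
profile: my_bigquery_project
```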

Configure Production (Bolt Scheduler)

Production runs are handled by Paradime's Bolt Scheduler — a dedicated execution environment with its own connection, credentials, and dataset targets.

Prod Dataset Strategy

Your production connection should target clean, user-facing dataset names:

| Field | Value |
|---|---|
| Profile Name | my_bigquery_project |
| Target | prod |
| Dataset | analytics or dbt_prod |
| Dataset Location | US (must match dev) |
| Service Account | A separate prod service account JSON |
| Threads | 8 |

To set this up in Paradime:

  1. Go to Settings → Connections

  2. Click "Add New" next to Bolt Schedules

  3. Select "BigQuery"

  4. Upload your production service account JSON and fill in the fields

🔒 Best practice: Use a different service account for production than for development. The prod service account should have Data Editor on prod datasets only, and the dev service account should have Data Editor on dev datasets only. Neither should have access to the other's datasets.

Job-Level Variables and Protection Against Writing to Prod from Dev

Bolt supports environment variables that you can set globally or override per-schedule:

  1. Go to Settings → Workspaces → Environment Variables

  2. In the Bolt Schedules section, click Add New

  3. Add variables like:

| Key | Value |
|---|---|
| DBT_TARGET | prod |
| PROD_DATASET | analytics |

These variables can be referenced in your Bolt schedule commands and dbt_project.yml via {{ env_var('PROD_DATASET') }}.
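One hedged sketch of how PROD_DATASET could be consumed in dbt_project.yml (the project and folder names are placeholders):

```yaml
# dbt_project.yml (sketch)
models:
  my_dbt_project:
    marts:
      +schema: "{{ env_var('PROD_DATASET', 'analytics') }}"
```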

Figure 3: Environment variables in Bolt ensure prod commands only write to production datasets. Dev connections have no access to prod datasets.

Additional safeguards:

  • The generate_schema_name macro (shown earlier) strips the developer prefix in prod, so models land in clean dataset names like staging and marts

  • Bolt's "Defer to Production" feature lets dev runs reference production tables for upstream dependencies without materializing into prod

  • Schedule-level environment variable overrides let you run the same dbt™ project with different dataset targets per schedule

Paradime Exclusive: Validate with SQL Scratchpad

Before you run dbt run for the first time, you should verify that your connection, permissions, and location settings are all correct. Paradime's SQL Scratchpad makes this trivially easy.

Run a Test Query in the Correct Location

  1. In the Code IDE, click the "New File" button (top-right)

  2. Paradime creates a file in the paradime_scratch folder (e.g., scratch-1)

  3. Write a simple validation query:

  4. Click Run (or use the Data Explorer preview) to see instant results
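The validation query in step 3 can be as simple as this (assumes a US multi-region and the dataset name dbt_john; adjust both to your setup):

```sql
-- Confirm the connection resolves your dev dataset in the expected region
SELECT schema_name, location
FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA
WHERE schema_name = 'dbt_john';
```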

If this query returns your dataset with the correct location, your connection is working.

Confirm Dataset Write Permissions

Next, verify that your service account can actually create objects in the target dataset:
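A simple round-trip test: create a throwaway table, then drop it (project, dataset, and table names are placeholders):

```sql
-- Prove write access to the target dataset
CREATE TABLE `my-gcp-project.dbt_john.paradime_write_check` (id INT64);
DROP TABLE `my-gcp-project.dbt_john.paradime_write_check`;
```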

If either statement fails with accessDenied, your service account is missing the BigQuery Data Editor role on that dataset.

💡 Why this matters: Scratchpad files are automatically gitignored, so these validation queries never pollute your dbt™ project repository. They persist across sessions for reference but stay out of version control.

First Run Checklist

You've configured connections, set up IAM, and validated permissions. Time to run dbt™.

dbt deps and Package Resolution

If your project uses packages (from dbt Hub or Git), run dbt deps first to install them:
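From the Code IDE terminal:

```shell
dbt deps
```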

Your packages.yml might look like:
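For example, with dbt_utils as an illustrative package (pin whatever version your project actually uses):

```yaml
packages:
  - package: dbt-labs/dbt_utils
    version: 1.1.1
```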

In Paradime's Code IDE, open the Terminal panel and run dbt deps. You should see output confirming each package was installed successfully.

dbt run + dbt test Expectations in BigQuery

Now, execute your first build:
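From the terminal:

```shell
dbt run
dbt test
```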

What to expect on a successful first run:

| Command | Expected Output |
|---|---|
| dbt run | Models materialize in your dev dataset (e.g., dbt_john.my_first_model) |
| dbt test | Schema tests and data tests pass (or fail with clear error messages) |

Verify in the BigQuery Console that tables/views were created in the correct dataset:
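A quick way to list what was created (project and dataset names are placeholders):

```sql
SELECT table_name, table_type
FROM `my-gcp-project.dbt_john`.INFORMATION_SCHEMA.TABLES
ORDER BY table_name;
```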

Figure 4: Sequence of a successful first dbt™ run in Paradime — from package installation through model materialization and testing.

Troubleshooting

Even with a clean setup, you may hit issues. Here are the three most common problems and their fixes.

Access Denied on Dataset

Error: Access Denied: Dataset my-gcp-project:analytics: User does not have permission to query table/dataset.

Cause: The service account is missing the BigQuery Data Editor role on the target dataset, or BigQuery Job User at the project level.

Fix:

  1. Verify the service account email in your Paradime connection settings

  2. Check IAM permissions:

  3. Grant the missing role:

  4. For dataset-level grants, use the BigQuery Console: BigQuery → Explorer → Select dataset → Sharing → Permissions → Add Principal
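The gcloud commands for steps 2 and 3 might look like this (the service account email and project ID are placeholders):

```shell
# Step 2: list the roles currently bound to the service account
gcloud projects get-iam-policy my-gcp-project \
  --flatten="bindings[].members" \
  --filter="bindings.members:dbt-paradime@my-gcp-project.iam.gserviceaccount.com" \
  --format="table(bindings.role)"

# Step 3: grant the missing project-level role
gcloud projects add-iam-policy-binding my-gcp-project \
  --member="serviceAccount:dbt-paradime@my-gcp-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
```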

Location Mismatch Errors

Error: Not found: Dataset my-gcp-project:dbt_john was not found in location EU (when your dataset is actually in US).

Cause: The Dataset Location field in your Paradime connection doesn't match the actual BigQuery dataset location. Or you're trying to query across regions (e.g., joining a US dataset with an EU dataset).

Fix:

  1. Check your dataset's actual location:

  2. Update the Dataset Location in Paradime's connection settings to match (e.g., change from EU to US)

  3. Ensure all datasets referenced in your dbt™ project (sources, seeds, models) are in the same location
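For step 1, the dataset's location can be checked with the bq CLI (project and dataset names are placeholders):

```shell
bq show --format=prettyjson my-gcp-project:dbt_john | grep location
```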

⚠️ Common gotcha: BigQuery dataset location is immutable — you can't change it after creation. If there's a mismatch, you'll need to recreate the dataset in the correct region.

Service Account Key Rotation Best Practices

Service account JSON keys should be rotated regularly; Google recommends rotating them at least every 90 days.

Rotation process:

Figure 5: Safe service account key rotation workflow — create, deploy, test, disable, monitor, delete.

Step-by-step:

  1. Create a new key for the existing service account:

  2. Upload the new key in Paradime: Settings → Connections → Edit → Upload new Service Account JSON

  3. Validate using the SQL Scratchpad (run a simple query to confirm the new key works)

  4. Disable the old key in the GCP Console IAM (don't delete it immediately)

  5. Monitor for 24–48 hours to confirm nothing breaks

  6. Delete the old key once you're confident the new one is fully operational
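The commands for steps 1 and 4 might look like this (the service account email, key filename, and KEY_ID are placeholders; find the key ID with gcloud iam service-accounts keys list):

```shell
# Step 1: create a new key for the existing service account
gcloud iam service-accounts keys create new-dbt-paradime-key.json \
  --iam-account=dbt-paradime@my-gcp-project.iam.gserviceaccount.com

# Step 4: disable (don't delete) the old key
gcloud iam service-accounts keys disable KEY_ID \
  --iam-account=dbt-paradime@my-gcp-project.iam.gserviceaccount.com
```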

Additional best practices from Google Cloud documentation:

  • Never commit service account keys to source code repositories

  • Don't pass keys between users — upload directly to Paradime

  • Consider using BigQuery OAuth for development connections to eliminate dev-side keys entirely

  • Use gcloud asset search-all-resources to audit keys older than 90 days:
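A sketch of that audit, adapted from Google's key-rotation guidance (the cutoff date is a placeholder; set it to 90 days before today):

```shell
gcloud asset search-all-resources \
  --asset-types=iam.googleapis.com/ServiceAccountKey \
  --query='createTime<2025-11-28' \
  --order-by='createTime'
```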

Wrapping Up

A secure, production-ready dbt™ BigQuery setup isn't just about getting a dbt run to succeed — it's about building a foundation that scales safely. Here's what you've accomplished by following this guide:

✅ Least-privilege IAM: Service accounts with only BigQuery Job User + Data Editor, scoped to specific datasets
✅ Dev/prod separation: Isolated connections via Paradime's Code IDE and Bolt Scheduler
✅ Per-developer datasets: No collisions, no accidental production writes
✅ Quick validation: SQL Scratchpad to test permissions and location before your first run
✅ Key rotation plan: A repeatable process for rotating service account credentials

With Paradime handling the connection management, credential storage, and environment separation, your team can focus on what actually matters: building reliable data models.

Ready to get started? Sign up for a free Paradime trial and connect your BigQuery project in under 10 minutes.


Copyright © 2026 Paradime Labs, Inc.

Made with ❤️ in San Francisco ・ London

*dbt® and dbt Core® are federally registered trademarks of dbt Labs, Inc. in the United States and various jurisdictions around the world. Paradime is not a partner of dbt Labs. All rights therein are reserved to dbt Labs. Paradime is not a product or service of or endorsed by dbt Labs, Inc.
